Journal articles on the topic "Cloud compute services"

To explore other types of publications on this topic, follow the link: Cloud compute services.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Below are the top 50 journal articles for research on the topic "Cloud compute services".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically format the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Choudhary, Anurag. "A walkthrough of Amazon Elastic Compute Cloud (Amazon EC2): A Review." International Journal for Research in Applied Science and Engineering Technology 9, no. 11 (November 30, 2021): 93–97. http://dx.doi.org/10.22214/ijraset.2021.38764.

Full text of the source
Abstract:
Cloud services are provided by several giant corporations, notably Amazon Web Services, Microsoft Azure, Google Cloud Platform, and others. Here we address the most prominent provider, Amazon Web Services, which comprises the Elastic Compute Cloud functionality. Amazon offers a comprehensive package of computing solutions that lets businesses establish dedicated virtual clouds while maintaining complete configuration control over their working environment. An organization often needs to interact with several other technologies; instead of installing them, the company may simply buy the technology available online as a service. Amazon's Elastic Compute Cloud web service delivers highly customizable computing capacity in the cloud, allowing developers to build applications with high scalability. Simply put, an Elastic Compute Cloud instance is a virtual platform that replicates a physical server on which you may host your applications. Instead of acquiring your own hardware and connecting it to a network, Amazon provides you with almost endless virtual machines on which to deploy your applications while it manages the hardware. This review gives a quick overview of the Amazon Web Services Elastic Compute Cloud, covering its features, pricing, and challenges. Finally, open obstacles and future research directions in Amazon Web Services Elastic Compute Cloud are addressed. Keywords: Cloud Computing, Cloud Service Provider, Amazon Web Services, Amazon Elastic Compute Cloud, AWS EC2
APA, Harvard, Vancouver, ISO, and other styles
2

Irfan, Muhammad, Zhu Hong, Nueraimaiti Aimaier, and Zhu Guo Li. "SLA (Service Level Agreement) Driven Orchestration Based New Methodology for Cloud Computing Services." Advanced Materials Research 660 (February 2013): 196–201. http://dx.doi.org/10.4028/www.scientific.net/amr.660.196.

Full text of the source
Abstract:
Cloud computing is not a revolution; it is an evolution of computer science and technology, growing by leaps and bounds in order to merge all computer science tools and technologies. Cloud computing is among the hottest technologies for research and for exploring new horizons in the next generations of computer science. There are a number of cloud service providers (Amazon EC2, Rackspace Cloud, Terremark, and Google Compute Engine), but enterprises and common users still have a number of concerns about them. Many weaknesses, challenges, and issues remain barriers for cloud service providers in delivering cloud services according to SLAs (Service Level Agreements). In particular, service provisioning according to SLAs, with maximum performance as per the SLA, is a core objective of each cloud service provider. We have identified these challenges and issues, and we propose a new methodology, "SLA (Service Level Agreement) Driven Orchestration Based New Methodology for Cloud Computing Services". Currently, cloud service providers use "orchestrations" fully or partially to automate service provisioning, whereas we integrate orchestration flows with SLAs and drive them from SLAs. This would be a new approach to provisioning and delivering cloud services as per the SLA, satisfying QoS standards.
APA, Harvard, Vancouver, ISO, and other styles
3

Himmat, Mubarak, Gada Algazoli, Nazar Hammam, and Ashraf Gasim Elsid Abdalla. "Review on the Current State of the Internet of Things and its Extension and its Challenges." European Journal of Information Technologies and Computer Science 2, no. 2 (April 15, 2022): 1–5. http://dx.doi.org/10.24018/compute.2022.2.2.58.

Full text of the source
Abstract:
This era has witnessed the rapid growth of internet technology: a huge number of people all over the world use the internet and its applications, in various ways and for various purposes, contributing to many fields. One of the techniques whose use will keep increasing in the future is the Internet of Things (IoT), which allows many embedded devices to be connected and controlled through the internet. IoT technology relies on cloud computing, which provides a baseline that supports IoT and is built on the idea of allowing individuals to perform computing activities via internet-based services. This article presents a comprehensive survey that addresses the concerns, challenges, and existing state-of-the-art standards, emphasizing the proposed technical solutions and the importance of IoT and Internet of Everything (IoE) technology. The paper covers the domain of the Internet of Things and several widely used technologies, as well as the architecture of the resulting cloud-based IoT paradigm and its new application scenarios. Finally, new research directions and challenges are offered.
APA, Harvard, Vancouver, ISO, and other styles
4

Kapse, Prashant Shankar, Onkar Dhananjaya Swami, Dadaso Sopan Keskar, Ruturaj Avinash Kadam, and Dr S. N. Gujar. "StoreSim: Optimizing Information Leakage in Multi-Cloud Storage Services." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 1048–51. http://dx.doi.org/10.22214/ijraset.2022.42427.

Full text of the source
Abstract:
Many schemes have recently been advanced for storing data on multiple clouds. Distributing data over different cloud storage providers (CSPs) automatically provides users with a certain degree of information leakage control, as no single point of attack can leak all of a user's information. However, unplanned distribution of data chunks can lead to high information disclosure even while using multiple clouds. In this paper, to address this problem, we present StoreSim, an information leakage-aware storage system for the multi-cloud. StoreSim aims to store syntactically similar data on the same cloud, thus minimizing the user's information leakage across multiple clouds. We design an approximate algorithm to efficiently generate similarity-preserving signatures for data chunks based on MinHash and Bloom filters, and design a function to compute the information leakage based on these signatures. Next, we present an effective storage plan generation algorithm based on clustering for distributing data chunks with minimal information leakage across multiple clouds. Finally, we evaluated our scheme using two real datasets from Wikipedia and GitHub. We show that our scheme can reduce information leakage by up to 60–70 percent. Keywords: MinHash, Bloom filter, Multi-Cloud, StoreSim, Data Leakage.
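The signature idea in the abstract can be illustrated with a small MinHash sketch. This is a hedged, self-contained stand-in, not StoreSim's actual algorithm (which also involves Bloom filters and a leakage function); the function names, shingle size, and hash count below are illustrative choices.

```python
import hashlib

def minhash_signature(chunk: bytes, num_hashes: int = 64, shingle_size: int = 4):
    """Compute a MinHash signature over byte shingles of a data chunk."""
    shingles = {chunk[i:i + shingle_size] for i in range(len(chunk) - shingle_size + 1)}
    signature = []
    for seed in range(num_hashes):
        # One salted hash family per seed; keep the minimum over all shingles.
        signature.append(min(
            int.from_bytes(
                hashlib.blake2b(s, digest_size=8, salt=seed.to_bytes(8, "big")).digest(),
                "big",
            )
            for s in shingles
        ))
    return signature

def estimated_similarity(sig_a, sig_b):
    """The fraction of matching minima approximates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Chunks with nearly identical signatures would be routed to the same cloud, so an attacker compromising one provider learns little beyond what any single chunk reveals.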
APA, Harvard, Vancouver, ISO, and other styles
5

Ahuja, Sanjay P., Emily Czarnecki, and Sean Willison. "Multi-Factor Performance Comparison of Amazon Web Services Elastic Compute Cluster and Google Cloud Platform Compute Engine." International Journal of Cloud Applications and Computing 10, no. 3 (July 2020): 1–16. http://dx.doi.org/10.4018/ijcac.2020070101.

Full text of the source
Abstract:
Cloud computing has rapidly become a viable competitor to on-premise infrastructure from both management and cost perspectives. This research provides insight into cluster computing performance and variability in cloud-provisioned infrastructure from two popular public cloud providers. A comparative examination of the two cloud platforms using synthetic benchmarks is provided. In this article, we compared the performance of Amazon Web Services Elastic Compute Cluster (EC2) to the Google Cloud Platform (GCP) Compute Engine using three benchmarks: STREAM, IOR, and NPB-EP. Experiments were conducted on clusters with increasing nodes from one to eight. We also performed experiments over the course of two weeks where benchmarks were run at similar times. The benchmarks provided performance metrics for bandwidth (STREAM), read and write performance (IOR), and operations per second (NPB-EP). We found that EC2 outperformed GCP for bandwidth. Both provided good scalability and reliability for bandwidth with GCP showing a slight deviation during the two-week trial. GCP outperformed EC2 in both the read and write tests (IOR) as well as the operations per second test. However, GCP was extremely variable during the read and write tests over the two-week trial. Overall, each platform excelled in different benchmarks and we found EC2 to be more reliable in general.
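A toy Python version of the STREAM triad kernel conveys what that benchmark measures. Real STREAM is tuned C code and reports far higher figures, so treat this only as an illustration of the metric, not a usable benchmark.

```python
import time

def stream_triad(n: int = 1_000_000, scalar: float = 3.0):
    """Toy STREAM-style triad: a[i] = b[i] + scalar * c[i].

    Returns the result array and an effective memory bandwidth in MB/s.
    """
    b = [1.0] * n
    c = [2.0] * n
    start = time.perf_counter()
    a = [bi + scalar * ci for bi, ci in zip(b, c)]
    elapsed = time.perf_counter() - start
    bytes_moved = 3 * 8 * n  # read b, read c, write a; 8 bytes per float
    return a, bytes_moved / elapsed / 1e6
```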
APA, Harvard, Vancouver, ISO, and other styles
6

TRUONG, HONG-LINH, SCHAHRAM DUSTDAR, and KAMAL BHATTACHARYA. "CONCEPTUALIZING AND PROGRAMMING HYBRID SERVICES IN THE CLOUD." International Journal of Cooperative Information Systems 22, no. 04 (December 2013): 1341003. http://dx.doi.org/10.1142/s0218843013410037.

Full text of the source
Abstract:
For solving complex problems, software alone is in many cases not sufficient, and we need hybrid systems of software and humans in which humans not only direct the software but also perform computing, and vice versa. Therefore, we advocate constructing "social computers" which combine software and human services. To date, however, human capabilities cannot be programmed into complex applications as easily as software capabilities; there is a lack of techniques to conceptualize and program human and software capabilities in a unified way. In this paper, we explore a new way to virtualize, provision, and program human capabilities using cloud computing concepts and service delivery models. We propose novel methods for conceptualizing and modeling clouds of human-based services (HBS) and combine HBS with software-based services (SBS) to establish clouds of hybrid services. In our model, we present common APIs, similar to well-developed APIs for software services, to access individual and team-based compute units in clouds of HBS. Based on that, we propose a framework for utilizing SBS and HBS to solve complex problems. We present several programming primitives for hybrid services, also covering the formation of hybrid solutions consisting of software and humans. We illustrate our concepts via examples using our cloud APIs and existing cloud APIs for software.
APA, Harvard, Vancouver, ISO, and other styles
7

Roobini, M. S., Selvasurya Sampathkumar, Shaik Khadar Basha, and Anitha Ponraj. "Serverless Computing Using Amazon Web Services." Journal of Computational and Theoretical Nanoscience 17, no. 8 (August 1, 2020): 3581–85. http://dx.doi.org/10.1166/jctn.2020.9235.

Full text of the source
Abstract:
In the last decade, cloud computing transformed the way in which we build applications. The boom in cloud computing helped develop new software designs and architectures, letting developers focus more on business logic than on infrastructure. The FaaS (function as a service) compute model lets developers concentrate only on the application code, with the remaining factors taken care of by the cloud provider. Here we present a serverless architecture for a web application built using AWS services and provide a detailed analysis of the Lambda function and microservice software design implemented using these AWS services.
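The FaaS model described above can be made concrete with a minimal handler in the shape AWS Lambda expects for Python functions. The event layout shown (an API Gateway proxy-style request with a JSON body) and all field names are illustrative assumptions, not taken from the paper.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler: echoes a greeting from the request body.

    `event` is assumed to follow an API Gateway proxy-style format;
    `context` is unused in this sketch.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The point of the model is everything that is absent here: no server process, no framework, no deployment infrastructure in the code itself; the provider invokes `handler` per request and bills per invocation.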
APA, Harvard, Vancouver, ISO, and other styles
8

Kupriyanov, Dmitriy O. "MATHEMATICAL MODELING OF REQUESTS FLOW TO CLOUD COMPUTE CLUSTER." T-Comm 14, no. 10 (2020): 39–44. http://dx.doi.org/10.36724/2072-8735-2020-14-10-39-44.

Full text of the source
Abstract:
Quality of service parameter estimation becomes even more valuable when using cloud compute services. This research proposes a mathematical model of a cloud-deployed web application in terms of the Processor Sharing (PS) policy for a mono-service traffic type. The model has a Poisson distribution of the incoming request flow with intensity λ. Requests wait in a queue whose length is considered unlimited, and all requests must be served. A request in this system is an HTTP request with a special payload in JSON format. The size of this payload differs for each request but lies in a narrow band of values (bytes or tens of bytes). A model of a cloud compute cluster was built. Using this model, the relative serving efficiency and relative bandwidth of a single request flow were calculated for different amounts of resource provided for processing a single request. The dependency of these characteristics on the cluster load coefficient is demonstrated in charts, and conclusions are drawn on the behavior of cloud cluster QoS parameters after a change in the input request flow size. The proposed model helps estimate quality of service parameters and adapt the infrastructure to an increased or decreased number of requests from customers, and it could be used for architecting, deploying, and administrating web services.
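For a single server under Processor Sharing with Poisson arrivals, the mean metrics have simple closed forms. The sketch below uses the standard M/M/1-PS formulas (mean sojourn time E[T] = 1/(μ − λ), valid for load ρ = λ/μ < 1), not the paper's own characteristics, which are more detailed.

```python
def ps_queue_metrics(arrival_rate: float, service_rate: float):
    """Mean metrics for an M/M/1 queue under Processor Sharing (PS).

    Under PS the mean sojourn time coincides with plain M/M/1:
    E[T] = 1 / (mu - lambda), valid only while rho = lambda/mu < 1.
    """
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("cluster is overloaded: rho must be < 1")
    mean_sojourn = 1.0 / (service_rate - arrival_rate)
    mean_in_system = rho / (1.0 - rho)  # Little's law: L = lambda * E[T]
    return {"rho": rho, "mean_sojourn": mean_sojourn, "mean_in_system": mean_in_system}
```

Such formulas let an operator predict how response time degrades as the load coefficient ρ approaches 1, which is exactly the kind of dependency the paper plots.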
APA, Harvard, Vancouver, ISO, and other styles
9

Biswas, Pijush, and Kailash Verma. "Compute and Services (CAS) optimized strategy for multi hybrid cloud deployment." International Journal of Computer & Organization Trends 10, no. 3 (June 25, 2020): 1–3. http://dx.doi.org/10.14445/22492593/ijcot-v10i3p301.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Raghu, B., V. Khanaa, and Niraja Jain. "Probabilistic Model for Resource Demand Prediction in Cloud." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (April 5, 2021): 1766–71. http://dx.doi.org/10.17762/turcomat.v12i6.3908.

Full text of the source
Abstract:
Dynamic cloud infrastructure provisioning is possible with virtualization technology. Cost, agility, and time to market are the key elements of cloud services. Virtualization is the software layer responsible for interaction with multiple servers; it brings entire IT resources together and provides standardized virtual compute centers that drive the entire infrastructure. The increased pooling of shared resources helps in improving self-provisioning and automation of service delivery. The probabilistic model proposed in this article is based on the hypothesis that accurate resource demand predictions can help improve the efficiency of the virtualization layer. The probabilistic method uses the laws of combinatorics; the probability space captures both the partial certainty and the randomness of a variable, and the method is popular in theoretical computer science. Probabilistic models provide predictions that account for the randomness of the variables. In the cloud environment, multiple factors dynamically affect resource demand: demand has a certain degree of certainty, but requirements remain random. Accurate prediction decreases the risk related to leveraging cloud services and accelerates the development and implementation of cloud services, which overall improves the services pertaining to the SLA.
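The abstract does not specify the model's internals, so as a hedged illustration of probabilistic demand prediction in general (not the authors' method), one can predict demand as an empirical quantile of observed history: provisioning at the 95th percentile accepts a small, quantified risk of under-provisioning.

```python
def demand_quantile(history, q: float = 0.95):
    """Predict resource demand as the q-quantile of the observed demand history.

    A deliberately simple probabilistic stand-in: the returned value is
    exceeded in roughly a (1 - q) fraction of past observations.
    """
    s = sorted(history)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]
```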
APA, Harvard, Vancouver, ISO, and other styles
11

Kumar, Rakesh Ranjan, and Chiranjeev Kumar. "A Multi Criteria Decision Making Method for Cloud Service Selection and Ranking." International Journal of Ambient Computing and Intelligence 9, no. 3 (July 2018): 1–14. http://dx.doi.org/10.4018/ijaci.2018070101.

Full text of the source
Abstract:
With the rapid growth of cloud services in recent years, it has become very difficult to choose the most suitable cloud service among those that provide similar functionality. Quality of service (QoS) is considered the most significant factor for appropriate service selection and user satisfaction in cloud computing. However, given the vast diversity of cloud services, selecting a suitable one is a very challenging task for a customer in an unpredictable environment. Due to the multidimensional attributes of QoS, cloud service selection is treated as a multiple criteria decision-making (MCDM) problem. This study introduces a methodology for determining the appropriate cloud service by integrating the AHP weighting method with the TOPSIS method. Using AHP, the authors define the architecture for the cloud service selection process and compute the criteria weights using pairwise comparison. Thereafter, with the TOPSIS method, they obtain the final ranking of the cloud services based on overall performance. A real-time cloud case study affirms the potential of the proposed methodology when compared to other MCDM methods. Finally, a sensitivity analysis testifies to the effectiveness and robustness of the proposed methodology.
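The AHP-plus-TOPSIS pipeline can be sketched in a few lines. The geometric-mean approximation of AHP's principal eigenvector and the sample matrices in the test are illustrative choices, not the paper's data or exact procedure.

```python
import math

def ahp_weights(pairwise):
    """Criteria weights from an AHP pairwise-comparison matrix.

    Uses the geometric-mean-of-rows approximation to the principal eigenvector.
    """
    gms = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

def topsis_rank(decision, weights, benefit):
    """TOPSIS closeness scores; `benefit[j]` is True for benefit criteria,
    False for cost criteria. Higher score = closer to the ideal solution."""
    m, n = len(decision), len(decision[0])
    # Vector-normalize each criterion column, then apply the AHP weights.
    norms = [math.sqrt(sum(decision[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * decision[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal point
        d_neg = math.dist(row, anti)    # distance to the anti-ideal point
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

With two criteria (performance as a benefit, price as a cost) and a strong preference for performance, a much cheaper service with slightly lower performance can still rank first, which is exactly the trade-off MCDM methods are meant to arbitrate.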
APA, Harvard, Vancouver, ISO, and other styles
12

Chałubińska-Jentkiewicz, Katarzyna. "Liability within the scope of Cloud Computing services." Opolskie Studia Administracyjno-Prawne 18, no. 4 (February 23, 2021): 9–22. http://dx.doi.org/10.25167/osap.3427.

Full text of the source
Abstract:
The issue of acquiring large amounts of data, creating large sets of digital data, and then processing and analyzing them (Big Data) for the needs of generating artificial intelligence (AI) solutions is one of the key challenges for the development of the economy and of national security. Data have become a resource that will determine the power and the geopolitical and geoeconomic position of countries and regions in the 21st century. The layout of data storage and processing in distributed databases has changed in recent years. Since the appearance of hosting services in the range of ICT services, we speak of a new type of service, ASP (Application Service Providers): provision of the ICT networks as part of an application. Cloud Computing is therefore one of the versions of ASP services. The ASP guarantees the customer access to a dedicated application running on a server. Cloud Computing, on the other hand, gives many users the opportunity to use the resources of a shared infrastructure simultaneously (Murphy n.d.). The use of the CC model is more effective in many respects. Cloud Computing offers three basic services: data storage in the cloud (cloud storage), applications in the cloud (cloud applications), and computing in the cloud (compute cloud). Website hosting and electronic mail are still the most frequently chosen services in Cloud Computing. The article attempts to explain the responsibility for content stored in Cloud Computing.
APA, Harvard, Vancouver, ISO, and other styles
13

R, Natarajan. "Telecom Operations in Cloud Computing." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 1323–26. http://dx.doi.org/10.22214/ijraset.2022.40051.

Full text of the source
Abstract:
Previous generations of wireless connectivity focused on voice and data capabilities; 5G is architected to better enable consumer business models. Edge compute (both on-prem/device edge and provider edge) and related services to support 5G use cases appear to be the leading driver behind recent announcements. These use cases will need to be managed by OSS/BSS for the telco operators and their customers. Keywords: AWS Telecom, Google Telecom, VMware Telecom, Red Hat Telecom
APA, Harvard, Vancouver, ISO, and other styles
14

Niranjanamurthy, M., M. P. Amulya, N. M. Niveditha, and P. Dayananda. "Creating a Custom Virtual Private Cloud and Launch an Elastic Compute Cloud (EC2) Instance in Your Virtual Private Cloud." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4509–14. http://dx.doi.org/10.1166/jctn.2020.9106.

Full text of the source
Abstract:
Cloud computing refers to storing and accessing data over the web: the hard disk of your PC does not hold this data; instead, you access information from a remote server. Amazon Web Services (AWS) offers flexible, reliable, scalable, easy-to-use, and cost-effective cloud computing solutions, and is a comprehensive, easy-to-use computing platform offered by Amazon. A virtual private cloud (VPC) is dedicated to an AWS account and is logically isolated from other virtual networks in the AWS cloud. Amazon EC2 is a secure web service that provides compute with resizable capacity in the cloud. In this work, we give step-by-step details of creating a custom VPC and launching an EC2 instance in that VPC.
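The walkthrough described above boils down to an ordered sequence of EC2 API calls. This sketch returns that sequence as plain data rather than executing it; the placeholder IDs (`<vpc-id>`, etc.) stand for values returned by the preceding calls, which a real client such as boto3 would wire together. The CIDR blocks and instance type are illustrative defaults.

```python
def vpc_walkthrough(cidr="10.0.0.0/16", subnet_cidr="10.0.1.0/24",
                    instance_type="t2.micro"):
    """Ordered EC2 API actions for creating a custom VPC and launching an
    instance into it. Pure data: execute each action with a real EC2 client,
    substituting the IDs returned by earlier calls for the placeholders."""
    return [
        ("CreateVpc", {"CidrBlock": cidr}),
        ("CreateSubnet", {"VpcId": "<vpc-id>", "CidrBlock": subnet_cidr}),
        ("CreateInternetGateway", {}),
        ("AttachInternetGateway", {"VpcId": "<vpc-id>",
                                   "InternetGatewayId": "<igw-id>"}),
        ("RunInstances", {"ImageId": "<ami-id>", "InstanceType": instance_type,
                          "SubnetId": "<subnet-id>", "MinCount": 1, "MaxCount": 1}),
    ]
```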
APA, Harvard, Vancouver, ISO, and other styles
15

Vieira, Cristiano Costa Argemon, Luiz Fernando Bittencourt, and Edmundo Roberto Mauro Madeira. "A Two-Dimensional SLA for Services Scheduling in Multiple IaaS Cloud Providers." International Journal of Distributed Systems and Technologies 6, no. 4 (October 2015): 45–64. http://dx.doi.org/10.4018/ijdst.2015100103.

Full text of the source
Abstract:
Customers of cloud services choose the VM profiles (SLAs) offered by the provider and pay according to how long these VMs are utilized. Many works deal with how to decrease the cost of scheduling VM requests, but consider solely the charging models in the SLA. However, other characteristics in the SLA must be taken into account when choosing a VM to execute users' applications (e.g., processing capacity). In order to fulfill user needs and allow proper utilization of the resources available in IaaS providers, this paper models a two-dimensional SLA comprising the charging model and the VM type. The problem is modeled as an integer linear program to compute the scheduling with regard to this SLA model. Simulations show that the proposed approach computes schedules that better fit user needs and allow better utilization of VMs, resulting in a higher number of fulfilled requests than alternative approaches.
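The paper formulates an integer linear program; as a hedged miniature of the same idea, a brute-force search over the two SLA dimensions (VM type and charging model) for a single request shows what the two dimensions buy. All VM names, capacities, and prices below are invented for illustration.

```python
from itertools import product

# Illustrative two-dimensional SLA: VM types (capacity, base $/hour) and
# charging models (price multiplier; "reserved" is discounted).
VM_TYPES = {"small": (1.0, 0.05), "medium": (2.0, 0.10), "large": (4.0, 0.20)}
CHARGING = {"on_demand": 1.0, "reserved": 0.7}

def cheapest_sla(demand_capacity: float, hours: float):
    """Pick the (VM type, charging model) pair meeting the demanded capacity
    at minimum total cost: a tiny brute-force stand-in for the paper's ILP."""
    best = None
    for (vm, (cap, price)), (model, mult) in product(VM_TYPES.items(), CHARGING.items()):
        if cap < demand_capacity:
            continue  # SLA dimension 1: VM type must satisfy processing capacity
        cost = price * mult * hours  # SLA dimension 2: charging model sets the price
        if best is None or cost < best[2]:
            best = (vm, model, cost)
    return best
```

Considering only the charging dimension would pick the cheapest price plan regardless of capacity; the second dimension rules out VM types that cannot run the application at all.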
APA, Harvard, Vancouver, ISO, and other styles
16

K L, Anitha, and T. R. Gopalakrishnan Nair. "Online cloud performance testing in social networks at peak demand scenarios." Indonesian Journal of Electrical Engineering and Computer Science 17, no. 1 (January 1, 2020): 372. http://dx.doi.org/10.11591/ijeecs.v17.i1.pp372-378.

Full text of the source
Abstract:
Cloud computing promises to deliver reliable services through advanced data centers built on virtualized compute and storage technologies. Users demanding more cloud services are able to access applications and data from a cloud anywhere in the world in a pay-as-you-go model. In this paper, we focus on cloud-based performance testing of applications. We use the LoadStorm testing tool to configure and run test plans that measure the performance of web applications in online social networks. The experimental observations, designed to assess the performance fluctuations of social networks on days of maximum consumer demand, have given specific data pointers which could be utilized for further studies of web service enhancements.
APA, Harvard, Vancouver, ISO, and other styles
17

Ahuja, Sanjay P., and Niharika Deval. "On the Performance Evaluation of IaaS Cloud Services With System-Level Benchmarks." International Journal of Cloud Applications and Computing 8, no. 1 (January 2018): 80–96. http://dx.doi.org/10.4018/ijcac.2018010104.

Full text of the source
Abstract:
Infrastructure as a Service (IaaS) is a cloud service model that allows customers to outsource computing resources such as servers and storage. This article evaluates four IaaS cloud services (Amazon EC2, Microsoft Azure, Google Compute Engine, and Rackspace Cloud) in a vendor-neutral approach with regard to system parameter usage, including server, file I/O, and network utilization. System-level benchmarking thus provides an objective comparison of cloud providers from a performance standpoint. UnixBench, Dbench, and Iperf are the system-level benchmarks chosen to test the performance of the server, file I/O, and network, respectively. In order to capture the variation in performance, the tests were performed at different times on weekdays and weekends. With each offering, the benchmarks are tested on different configurations to give cloud users insight into selecting a provider and then sizing VMs appropriately for the workload requirement. In addition to the performance evaluation, the price-per-performance value of all the providers is also examined and compared.
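The price-per-performance comparison mentioned at the end reduces to a score-per-dollar ratio. A minimal sketch; the provider names and figures in the usage are invented, not the article's measurements.

```python
def price_per_performance(results):
    """Rank providers by benchmark score per dollar (higher is better).

    `results` maps provider -> (benchmark_score, hourly_price_usd).
    """
    ratios = {p: score / price for p, (score, price) in results.items()}
    return sorted(ratios.items(), key=lambda kv: kv[1], reverse=True)
```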
APA, Harvard, Vancouver, ISO, and other styles
18

Fernandes, João, Bob Jones, Sergey Yakubov, and Andrea Chierici. "HNSciCloud, a Hybrid Cloud for Science." EPJ Web of Conferences 214 (2019): 09006. http://dx.doi.org/10.1051/epjconf/201921409006.

Full text of the source
Abstract:
Helix Nebula Science Cloud (HNSciCloud) has developed a hybrid cloud platform that links commercial cloud service providers and research organisations' in-house IT resources via the GEANT network. The platform offers data management capabilities with transparent data access, where applications can be deployed with no modifications on both sides of the hybrid cloud, and with compute services accessible via the eduGAIN [1] and ELIXIR [2] federated identity and access management systems. In addition, it provides support services, account management facilities, full documentation and training. The cloud services are being tested by a group of 10 research organisations from across Europe [3] against the needs of use cases from seven ESFRI infrastructures [4]. The capacity procured by the ten research organisations from the commercial cloud service providers to support these use cases during 2018 exceeds twenty thousand cores and two petabytes of storage, with a network bandwidth of 40 Gbps. All the services are based on open-source implementations that do not require licenses in order to be deployed on the in-house IT resources of research organisations connected to the hybrid platform. An early-adopter scheme has been put in place so that more research organisations can connect to the platform and procure additional capacity to support their research programmes.
APA, Harvard, Vancouver, ISO, and other styles
19

Hu, Qiang, and Jiaji Shen. "A Cluster and Process Collaboration-Aware Method to Achieve Service Substitution in Cloud Service Processes." Scientific Programming 2020 (August 1, 2020): 1–12. http://dx.doi.org/10.1155/2020/1298513.

Full text of the source
Abstract:
Some cloud services may become invalid because they are located in a dynamically changing network environment, and service substitution is necessary when a cloud service cannot be used. Existing work mainly considers service function and quality in service substitution; to select a more suitable substitutive service, process collaboration similarity also needs to be considered. This paper proposes a cluster and process collaboration-aware method to achieve service substitution. To compute the process collaboration similarity, we use logic Petri nets to model service processes. All the service processes are transformed into path strings, service vectors for the cloud services are generated by Word2Vec from these path strings, and the process collaboration similarity of two cloud services is obtained by computing the cosine of their service vectors. Meanwhile, similar cloud services are classified into a service cluster. By calculating function similarity and quality matching, a candidate set for service substitution is generated, and the service in the candidate set with the highest process collaboration similarity to the invalid one is chosen as the substitute. Simulation experiments show the proposed method is less time-consuming than traditional methods in finding a substitutive service, and the substitute has a high co-occurrence rate with the neighboring services of the invalid cloud service. Thus, the proposed method is efficient and integrates process collaboration well into service substitution.
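The final selection step described above reduces to a cosine over service vectors. A minimal sketch; the vectors in the usage are invented stand-ins, not Word2Vec output, and the candidate set is assumed to be pre-filtered by function and quality.

```python
import math

def cosine_similarity(u, v):
    """Process-collaboration similarity as the cosine of two service vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def best_substitute(invalid_vec, candidates):
    """From (name, vector) candidates, pick the service whose vector is
    closest in cosine similarity to the invalid service's vector."""
    return max(candidates, key=lambda nv: cosine_similarity(invalid_vec, nv[1]))[0]
```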
APA, Harvard, Vancouver, ISO, and other styles
20

Suriansyah, Mohamad Iqbal, Iyan Mulyana, Junaidy Budi Sanger, and Sandi Winata. "Compute functional analysis leveraging the IAAS private cloud computing service model in packstack development." ILKOM Jurnal Ilmiah 13, no. 1 (April 30, 2021): 9–17. http://dx.doi.org/10.33096/ilkom.v13i1.693.9-17.

Full text of the source
Abstract:
Analyzing compute functions by utilizing the IaaS model for private cloud computing services in Packstack development is one solution for large-scale data storage. Problems that often occur when implementing various applications are the increased need for server resources, the monitoring process, performance efficiency, time constraints in building servers, and upgrading hardware. These problems result in long server downtime. The development of private cloud computing technology can be a solution to this problem. This research employed OpenStack and Packstack, applying one controller-node server and two compute-node servers. Server administration with IaaS and self-service approaches made scalability testing simpler and time-efficient. Resizing a virtual server (instance) while it is running shows that the measurement of the overhead value in private cloud computing is more optimal, with a downtime of 16 seconds.
APA, Harvard, Vancouver, ISO, and other styles
21

Aral, Atakan, Ivona Brandic, Rafael Brundo Uriarte, Rocco De Nicola, and Vincenzo Scoca. "Addressing Application Latency Requirements through Edge Scheduling." Journal of Grid Computing 17, no. 4 (November 5, 2019): 677–98. http://dx.doi.org/10.1007/s10723-019-09493-z.

Full text of the source
Abstract:
Latency-sensitive and data-intensive applications, such as IoT or mobile services, are leveraged by Edge computing, which extends the cloud ecosystem with distributed computational resources in proximity to data providers and consumers. This brings significant benefits in terms of lower latency and higher bandwidth. However, by definition, edge computing has limited resources with respect to cloud counterparts; thus, there exists a trade-off between proximity to users and resource utilization. Moreover, service availability is a significant concern at the edge of the network, where extensive support systems as in cloud data centers are not usually present. To overcome these limitations, we propose a score-based edge service scheduling algorithm that evaluates network, compute, and reliability capabilities of edge nodes. The algorithm outputs the maximum scoring mapping between resources and services with regard to four critical aspects of service quality. Our simulation-based experiments on live video streaming services demonstrate significant improvements in both network delay and service time. Moreover, we compare edge computing with cloud computing and content delivery networks within the context of latency-sensitive and data-intensive applications. The results suggest that our edge-based scheduling algorithm is a viable solution for high service quality and responsiveness in deploying such applications.
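The scoring-and-mapping idea can be sketched as a greedy assignment. The four metric names, the weights, and the greedy strategy below are illustrative simplifications assumed for this sketch, not the paper's actual scoring model or optimization.

```python
# Illustrative weights over four quality aspects; each node metric is
# assumed pre-normalized to [0, 1].
WEIGHTS = {"network": 0.4, "compute": 0.3, "reliability": 0.2, "utilization": 0.1}

def score(node: dict) -> float:
    """Weighted score of one edge node."""
    return sum(WEIGHTS[k] * node[k] for k in WEIGHTS)

def schedule(services, nodes):
    """Greedy high-scoring mapping: each service goes to the best-scoring
    node that still has spare capacity (a sketch, not the paper's algorithm)."""
    mapping = {}
    capacity = {n["name"]: n["slots"] for n in nodes}
    ranked = sorted(nodes, key=score, reverse=True)
    for svc in services:
        for node in ranked:
            if capacity[node["name"]] > 0:
                mapping[svc] = node["name"]
                capacity[node["name"]] -= 1
                break
    return mapping
```

The capacity check is what encodes the proximity-versus-utilization trade-off mentioned above: once the best-placed node fills up, services spill over to lower-scoring ones.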
APA, Harvard, Vancouver, ISO, and other styles
22

Desmulyati, Desmulyati, and Muhammad Rizki Perdana Putra. "Load Balance Design of Google Cloud Compute Engine VPS with Round Robin Method in PT. Lintas Data Indonesia." SinkrOn 3, no. 2 (March 15, 2019): 147. http://dx.doi.org/10.33395/sinkron.v3i2.10064.

Abstract:
One provider of cloud computing services is the Google Cloud Platform, developed by Google LLC. PT Lintas Data Indonesia, a vendor and distributor of technology devices, needs a web server to publicize its products. At present, the company's web server still runs on a HostGator hosting package with limited resources; the current system cannot implement load balancing or failover, and access latency is quite high, with pings to the HostGator web server reaching up to 200 ms. To improve web server performance and enable load balancing and failover, migration to the Google Cloud Platform environment is the proposed solution. An advantage of the Google Cloud Platform is that the rented web servers take the form of Virtual Private Servers (VPS), which are easy to maintain and to upgrade. Adding three web servers in a cluster behind a HAProxy server makes PT Lintas Data Indonesia's web service more reliable in handling requests; with round-robin load balancing and web server failover, HAProxy is shown to improve latency handling by up to 150%, bringing latency down to around 30 ms.
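The round-robin method referenced in this abstract simply hands each incoming request to the next server in a fixed rotation. A minimal sketch of that dispatch order (the backend names `web1`–`web3` are hypothetical stand-ins for the three clustered web servers):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin dispatcher, mirroring how HAProxy's
    `balance roundrobin` rotates requests across equal-weight backends."""

    def __init__(self, backends):
        self._ring = cycle(backends)

    def next_backend(self):
        # Each call hands the next request to the next server in rotation.
        return next(self._ring)

# Hypothetical names for the three clustered web servers.
balancer = RoundRobinBalancer(["web1", "web2", "web3"])
```

HAProxy implements this rotation natively in its backend configuration; the sketch only illustrates the dispatch order.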
23

Sathya, V., A. Shiny, S. Sajid Hussain, and Ashutosh Gauda. "Secure Data Storage in Cloud System Using Modern Cryptography." Journal of Computational and Theoretical Nanoscience 17, no. 4 (April 1, 2020): 1590–94. http://dx.doi.org/10.1166/jctn.2020.8406.

Abstract:
Cloud computing is the delivery of computing services, such as servers, storage, databases, networking, and software analytics, over the Internet. Companies offering these computing services are known as cloud providers and typically charge for cloud computing services based on usage, similar to how you are billed for water or electricity at home. It enables companies to consume a compute resource, such as a Virtual Machine (VM), storage, or an application, as a utility, just like electricity, rather than building and maintaining large computing facilities in house. Though servers are strongly protected against unauthorized access, there are incidents where classified data stored on servers has been accessed by maintenance staff. Security therefore plays a major role in cloud storage: once the user stores data in the cloud, it stays there, and the user cannot know who accesses it. Hence, this paper deals with storing data securely in the cloud using symmetric and asymmetric cryptography algorithms, including AES, 3DES, Blowfish, and RSA, along with the modern steganography LSB algorithm for wireless communication, which hides the key inside a cover image.
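Two of the building blocks named above are easy to sketch: a message digest acts as a fingerprint of the stored file, and LSB steganography hides key bytes in the low-order bits of a cover image. The sketch below is illustrative only; it treats the cover image as a plain byte array and omits the AES/3DES/Blowfish/RSA encryption steps described in the paper:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # MD5 digest recorded before upload; any later change to the stored
    # object will no longer match this fingerprint.
    return hashlib.md5(data).hexdigest()

def lsb_embed(cover: bytes, secret: bytes) -> bytearray:
    # Toy LSB steganography: write each bit of the secret into the least
    # significant bit of successive cover bytes (standing in for pixels).
    stego = bytearray(cover)
    for i, byte in enumerate(secret):
        for j in range(8):
            bit = (byte >> (7 - j)) & 1
            idx = i * 8 + j
            stego[idx] = (stego[idx] & 0xFE) | bit
    return stego

def lsb_extract(stego: bytearray, length: int) -> bytes:
    # Read the low bit of each cover byte back into secret bytes.
    out = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (stego[i * 8 + j] & 1)
        out.append(byte)
    return bytes(out)
```

Each secret byte occupies eight cover bytes, so the cover must be at least eight times the key length; a real implementation would embed into image pixel data rather than a raw byte array.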
24

Tan, Xiao Long, Wen Bin Wang, and Yu Qin Yao. "Research of Network Virtualization in Data Center." Applied Mechanics and Materials 644-650 (September 2014): 2961–64. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.2961.

Abstract:
With the rapid growth in the volume of data and Internet applications, the data center has been widely deployed as an efficient and promising infrastructure. Data centers provide a platform for network services and applications such as video streaming, cloud computing, and so on. All these services and applications place demands on storage volume, compute, bandwidth, and latency. Existing data centers lack sufficient flexibility, so they provide poor support for QoS, deployability, manageability, and defense against attacks. Virtualized data centers are a good solution to these problems: compared to existing data centers, virtualized data centers do better in resource utilization, scalability, and flexibility.
25

Baldev Singh, Dr, Dr S.N. Panda, and Dr Gurpinder Singh Samra. "Slow flooding attack detection in cloud using change point detection approach." International Journal of Engineering & Technology 7, no. 2.30 (May 29, 2018): 33. http://dx.doi.org/10.14419/ijet.v7i2.30.13459.

Abstract:
Cloud computing is one of the most in-demand services and is prone to numerous types of attacks due to its Internet-based backbone. A flooding-based attack is one such attack on the cloud: it exhausts the resources and services of an individual or an enterprise by sending useless, huge volumes of traffic, which may be slow or fast in nature. Flooding attacks are carried out by sending massive volumes of TCP, UDP, and ICMP traffic and HTTP POST requests, and legitimate traffic is suppressed and lost within the flood. Early detection of such attacks helps minimize the unauthorized utilization of resources on the target machine. Cloud service providers already use various built-in load balancing and scalability options to absorb flooding attacks to an ample extent, yet maintaining QoS at the same time remains a challenge. In the proposed technique, a change point detection approach is used to detect flooding DDoS attacks in the cloud, based on the continuously varying pattern of voluminous (flooding) traffic and calculated using various traffic metrics, both primary and computed in nature. The golden ratio is used to compute the threshold, and this threshold is then used, along with the computed metric values of normal and malicious traffic, for flooding attack detection. Website traffic is observed using remote JavaScript.
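How a golden-ratio-scaled threshold might separate normal from flooding traffic can be sketched as follows. The per-interval packet counts and the mean-plus-scaled-deviation formula are illustrative assumptions, not the paper's exact metrics:

```python
import statistics

GOLDEN_RATIO = (1 + 5 ** 0.5) / 2  # ≈ 1.618

def flooding_threshold(baseline_counts):
    # Threshold set a golden-ratio multiple of the deviation above the
    # normal-traffic mean (a stand-in for the paper's metric computation).
    mean = statistics.fmean(baseline_counts)
    spread = statistics.pstdev(baseline_counts)
    return mean + GOLDEN_RATIO * spread

def detect_change(counts, threshold):
    # Report the first interval whose traffic volume crosses the threshold.
    for i, count in enumerate(counts):
        if count > threshold:
            return i
    return None
```

With a baseline of roughly 100 packets per interval, the threshold lands near 111, so a sudden burst of hundreds of packets is flagged while normal fluctuation is not.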
26

Alqahtani, Khalid Mohammed, Saeed Yahya Alqahtani, and Yousef Ahmed Aleidi. "The Impacts of E-Commerce as a Service upon Fog Computing." International Research Journal of Electronics and Computer Engineering 2, no. 2 (June 15, 2016): 29. http://dx.doi.org/10.24178/irjece.2016.2.2.29.

Abstract:
Fog computing is a technology that extends cloud computing and its services to the edge of the network. Like the cloud, it provides data, compute, storage, and application services to users. Everything from kitchen equipment to aeroplanes is acquiring an IP address and becoming part of the Internet. In the past few years, cloud computing has been used to turn theoretical concepts in industries such as e-commerce into real applications. The features and characteristics of fog computing encourage small companies providing e-commerce products to move their development into the fog. To assist small e-commerce companies in starting with the basic requirements and upgrading their computing resources as their fog user base grows over time, the impact of e-commerce as a service upon fog computing is examined here.
27

Axelrod, Amittai. "Box: Natural Language Processing Research Using Amazon Web Services." Prague Bulletin of Mathematical Linguistics 104, no. 1 (October 1, 2015): 27–38. http://dx.doi.org/10.1515/pralin-2015-0011.

Abstract:
Abstract We present a publicly-available state-of-the-art research and development platform for Machine Translation and Natural Language Processing that runs on the Amazon Elastic Compute Cloud. This provides a standardized research environment for all users, and enables perfect reproducibility and compatibility. Box also enables users to use their hardware budget to avoid the management and logistical overhead of maintaining a research lab, yet still participate in the global research community with the same state-of-the-art tools.
28

Dewangan, Bhupesh Kumar, Amit Agarwal, Venkatadri M., and Ashutosh Pasricha. "Energy-Aware Autonomic Resource Scheduling Framework for Cloud." International Journal of Mathematical, Engineering and Management Sciences 4, no. 1 (February 1, 2019): 41–55. http://dx.doi.org/10.33889/ijmems.2019.4.1-004.

Abstract:
Cloud computing is a platform where services are provided through the Internet, either free of cost or on a rental basis. Many cloud service providers (CSPs) offer cloud services on a rental basis. Due to the increasing demand for cloud services, the existing infrastructure needs to be scaled. However, scaling comes at the cost of heavy energy consumption due to the inclusion of additional data centers and servers. The extraneous power consumption affects operating costs, which in turn affects users, and CO2 emissions affect the environment as well. Moreover, inadequate allocation of resources such as servers, data centers, and virtual machines increases operational costs, which may ultimately drive customers away from the cloud service. In all, optimal usage of resources is required. This paper proposes to calculate different multi-objective functions to find the optimal solution for resource utilization and allocation through an improved Antlion Optimization (ALO) algorithm. The proposed method is simulated in the CloudSim environment; it computes energy consumption for different workload quantities and improves the performance of different multi-objective functions to maximize resource utilization. Experimental results show that the proposed framework outperforms existing frameworks.
29

Anitha, H. M., and P. Jayarekha. "An Software Defined Network Based Secured Model for Malicious Virtual Machine Detection in Cloud Environment." Journal of Computational and Theoretical Nanoscience 17, no. 1 (January 1, 2020): 526–30. http://dx.doi.org/10.1166/jctn.2020.8481.

Abstract:
Cloud computing is an emerging technology that offers services to all users on demand. Services are leveraged according to the Service Level Agreement (SLA), which is monitored so that services are offered to users without disruption or deprivation. A Software Defined Network (SDN) is used to monitor the trust scores of the deployed Virtual Machines (VMs) and the Quality of Service (QoS) parameters offered. The SDN controller computes the trust score of each Virtual Machine and determines whether it is malicious or trusted. A genetic algorithm is used to find the trusted Virtual Machines and release the resources allocated to malicious ones. This monitored information is passed to the cloud provider for further action. Security is enhanced by avoiding attacks from malicious Virtual Machines in the cloud environment. The main objective of the paper is to enhance the security of the system using a Software Defined Network based secured model.
30

Spiga, Daniele, Enol Fernandez, Vincenzo Spinoso, Diego Ciangottini, Mirco Tracolli, Giacinto Donvito, Marica Antonacci, et al. "The DODAS Experience on the EGI Federated Cloud." EPJ Web of Conferences 245 (2020): 07033. http://dx.doi.org/10.1051/epjconf/202024507033.

Abstract:
The EGI Cloud Compute service offers a multi-cloud IaaS federation that brings together research clouds as a scalable computing platform for research, accessible with OpenID Connect Federated Identity. The federation is not limited to single sign-on; it also introduces features to facilitate the portability of applications across providers: i) a common VM image catalogue with VM image replication to ensure these images will be available at providers whenever needed; ii) a GraphQL information discovery API to understand the capacities and capabilities available at each provider; and iii) integration with orchestration tools (such as Infrastructure Manager) to abstract the federation and facilitate using heterogeneous providers. EGI also monitors the correct functioning of every provider and collects usage information across the whole infrastructure. DODAS (Dynamic On Demand Analysis Service) is an open-source Platform-as-a-Service tool which allows software applications to be deployed over heterogeneous and hybrid clouds. DODAS is one of the so-called Thematic Services of the EOSC-hub project; it instantiates on-demand container-based clusters, offering a high level of abstraction that allows users to exploit distributed cloud infrastructures with very limited knowledge of the underlying technologies. This work presents a comprehensive overview of the DODAS integration with the EGI Cloud Federation, reporting the experience of the integration with the CMS Experiment submission infrastructure system.
31

Aguiar, Rui L., Pedro Frosi Rosa, Rodrigo Moreira, and Flávio De Oliveira Silva. "A smart network and compute-aware Orchestrator to enhance QoS on cloud-based multimedia services." International Journal of Grid and Utility Computing 11, no. 1 (2020): 49. http://dx.doi.org/10.1504/ijguc.2020.10025642.

32

Moreira, Rodrigo, Flávio De Oliveira Silva, Pedro Frosi Rosa, and Rui L. Aguiar. "A smart network and compute-aware Orchestrator to enhance QoS on cloud-based multimedia services." International Journal of Grid and Utility Computing 11, no. 1 (2020): 49. http://dx.doi.org/10.1504/ijguc.2020.103969.

33

Linden, Mikael, Michal Prochazka, Ilkka Lappalainen, Dominik Bucik, Pavel Vyskocil, Martin Kuba, Sami Silén, et al. "Common ELIXIR Service for Researcher Authentication and Authorisation." F1000Research 7 (August 6, 2018): 1199. http://dx.doi.org/10.12688/f1000research.15161.1.

Abstract:
A common Authentication and Authorisation Infrastructure (AAI) that would allow single sign-on to services has been identified as a key enabler for European bioinformatics. ELIXIR AAI is an ELIXIR service portfolio for authenticating researchers to ELIXIR services and assisting these services on user privileges during research usage. It relieves the scientific service providers from managing the user identities and authorisation themselves, enables the researcher to have a single set of credentials to all ELIXIR services and supports meeting the requirements imposed by the data protection laws. ELIXIR AAI was launched in late 2016 and is part of the ELIXIR Compute platform portfolio. By the end of 2017 the number of users reached 1000, while the number of relying scientific services was 36. This paper presents the requirements and design of the ELIXIR AAI and the policies related to its use, and how it can be used for serving some example services, such as document management, social media, data discovery, human data access, cloud compute and training services.
34

Krishnan, Smitha, and Dr B. G. Prasanthi. "Prediction of CPU Utilization in Cloud Environment during Seasonal Trend." International Journal on Recent and Innovation Trends in Computing and Communication 9, no. 12 (December 7, 2021): 08–11. http://dx.doi.org/10.17762/ijritcc.v9i12.5493.

Abstract:
Today, the most recent paradigm to emerge is cloud computing, which promises reliable services delivered to the end user through next-generation data centres built on virtualized compute and storage technologies. Consumers will be able to access the desired service from a "Cloud" anytime, anywhere in the world, on demand. Computing services need to be highly reliable, scalable, easily accessible, and autonomic to support ever-present access, dynamic discovery, and computability; consumers indicate the required service level through Quality of Service (QoS) parameters, according to Service Level Agreements (SLAs). A suitable model for prediction is being developed here: a Genetic Algorithm is chosen in combination with a statistical model to perform the workload prediction. It is expected to give better results, producing a lower error rate and higher prediction accuracy than the previous algorithm.
35

Heidari, Arash, and Nima Jafari Navimipour. "A new SLA-aware method for discovering the cloud services using an improved nature-inspired optimization algorithm." PeerJ Computer Science 7 (May 10, 2021): e539. http://dx.doi.org/10.7717/peerj-cs.539.

Abstract:
Cloud computing is one of the most important computing patterns, using a pay-as-you-go manner to process data and execute applications; therefore, numerous enterprises are migrating their applications to cloud environments. Intensive applications not only deal with enormous quantities of data but also frequently demonstrate compute-intensive properties. The dynamicity, coupled with the ambiguity between marketed resources and users' resource requirement queries, remains an important issue that hampers efficient discovery in a cloud environment. Cloud service discovery becomes a complex problem because of the increase in network size and complexity. Complexity and network size keep increasing dynamically, making it a complex NP-hard problem that requires effective service discovery approaches. One of the most famous cloud service discovery methods is the Ant Colony Optimization (ACO) algorithm; however, it suffers from a load balancing problem among the discovered nodes, and an inefficient workload balance limits the use of resources. This paper solves this problem by applying an Inverted Ant Colony Optimization (IACO) algorithm for load-aware service discovery in cloud computing. IACO considers the pheromones' repulsion instead of their attraction. We design a model for service discovery in the cloud environment to overcome the traditional shortcomings. Numerical results demonstrate that the proposed mechanism yields an efficient service discovery method. The algorithm is simulated using the CloudSim simulator, and the results show better performance. Reduced energy consumption, lower response time, and fewer Service Level Agreement (SLA) violations in cloud environments are the advantages of the proposed method.
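The core inversion is that pheromone repels rather than attracts: a node's selection weight can be taken as inversely proportional to its pheromone (load) level, so heavily loaded nodes are discovered less often. A toy sketch under that assumption (not the paper's full algorithm):

```python
import random

def inverted_aco_choice(nodes, pheromone, rng=random):
    # In standard ACO, ants favour high-pheromone paths; the inverted
    # variant treats pheromone (deposited per assigned load) as repulsive,
    # so heavily loaded nodes are picked with *lower* probability.
    weights = [1.0 / pheromone[n] for n in nodes]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for node, w in zip(nodes, weights):
        acc += w
        if r <= acc:
            return node
    return nodes[-1]
```

Repeated draws therefore spread new service placements toward lightly loaded nodes, which is the load-balancing effect the abstract attributes to IACO.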
36

Prabha, S. Lavanya. "Virtual Consolidation of Resource Sharing in Green Cloud using an Efficient Meta Scheduling Algorithm." International Journal of Computer Science and Mobile Computing 11, no. 2 (February 28, 2022): 46–55. http://dx.doi.org/10.47760/ijcsmc.2022.v11i02.006.

Abstract:
In modern research, parallel data processing in the cloud has emerged as one of the problematic applications for Infrastructure-as-a-Service (IaaS) clouds. Major cloud processing companies have started to incorporate frameworks using VM models for parallel data processing in their resource portfolios, making it easy for a client to access these services and deploy their programs. The growing computing demand from multiple requests on the main server has led to excessive power utilization, which threatens the long-term sustainability of cloud-like infrastructures both in terms of energy cost and from an environmental perspective. The problem can be addressed with resource-sharing infrastructures in which resources are dynamically switched to new infrastructure, but switching alone is neither cost-efficient nor green; time sharing is also needed. The cloud consists of several virtual centers, such as VMs under different administrative domains, which makes the problem more difficult. Thus, to reduce energy consumption, this work addresses the challenge by effectively distributing compute-intensive parallel applications on the cloud. A meta-scheduling algorithm is proposed that exploits the heterogeneous nature of the cloud to achieve a reduction in energy consumption, yielding a green cloud. This work also proposes a virtual file system specifically optimized for virtual machine image storage, based on a lazy transfer scheme coupled with object versioning that handles snapshots transparently in a hypervisor-independent fashion, ensuring high portability across different configurations.
37

Heydari, Atefeh, Mohammad Ali Tavakoli, and Mohammad Riazi. "An Overview of Public Cloud Security Issues." International Journal of Management Excellence 3, no. 2 (June 30, 2014): 440–45. http://dx.doi.org/10.17722/ijme.v3i2.259.

Abstract:
Traditionally, the computational needs of organizations were met by purchasing, updating, and maintaining the required equipment. Besides expensive devices, physical space to hold them, technical staff to maintain them, and many other side costs were essential prerequisites. Nowadays, with the development of cloud computing services, a huge number of people and organizations have their computational needs served by large-scale computing platforms. Offering enormous amounts of economical compute resources on demand motivates organizations to outsource their computational needs incrementally. Public cloud computing vendors offer their infrastructure to customers via the Internet, which means that control of customers' data is no longer in their own hands. Unfortunately, various security issues emerge from this arrangement. In this paper, the security issues of public cloud computing are reviewed, and the more destructive security issues are highlighted so that organizations can make better decisions about moving to the cloud.
38

Ahuja, Sanjay P., Thomas F. Furman, Kerwin E. Roslie, and Jared T. Wheeler. "Empirical Performance Assessment of Public Clouds Using System Level Benchmarks." International Journal of Cloud Applications and Computing 3, no. 4 (October 2013): 81–91. http://dx.doi.org/10.4018/ijcac.2013100106.

Abstract:
Amazon's Elastic Compute Cloud (EC2) Service is one of the leading public cloud service providers and offers many different levels of service. This paper evaluates the memory, central processing unit (CPU), and input/output (I/O) performance of two different tiers of hardware offered through Amazon's EC2. Using three distinct types of system benchmarks, the performance of the micro spot instance and the M1 small instance are measured and compared. In order to examine the performance and scalability of the hardware, the virtual machines are set up in a cluster formation ranging from two to eight nodes. The results show that the scalability of the cloud is achieved by increasing resources when applicable. This paper also looks at the economic model and other cloud services offered by Amazon's EC2, Microsoft's Azure, and Google's App Engine.
39

Siuta, David, Gregory West, Henryk Modzelewski, Roland Schigas, and Roland Stull. "Viability of Cloud Computing for Real-Time Numerical Weather Prediction." Weather and Forecasting 31, no. 6 (December 1, 2016): 1985–96. http://dx.doi.org/10.1175/waf-d-16-0075.1.

Abstract:
Abstract As cloud-service providers like Google, Amazon, and Microsoft decrease costs and increase performance, numerical weather prediction (NWP) in the cloud will become a reality not only for research use but for real-time use as well. The performance of the Weather Research and Forecasting (WRF) Model on the Google Cloud Platform is tested and configurations and optimizations of virtual machines that meet two main requirements of real-time NWP are found: 1) fast forecast completion (timeliness) and 2) economic cost effectiveness when compared with traditional on-premise high-performance computing hardware. Optimum performance was found by using the Intel compiler collection with no more than eight virtual CPUs per virtual machine. Using these configurations, real-time NWP on the Google Cloud Platform is found to be economically competitive when compared with the purchase of local high-performance computing hardware for NWP needs. Cloud-computing services are becoming viable alternatives to on-premise compute clusters for some applications.
40

Liu, Muhua, Lin Wang, Qingtao Wu, and Jianqiang Song. "Distributed Functional Signature with Function Privacy and Its Application." Security and Communication Networks 2021 (March 12, 2021): 1–14. http://dx.doi.org/10.1155/2021/6699974.

Abstract:
We introduce a novel notion of distributed functional signature. In such a signature scheme, the signing key for a function f is split into n shares sk_f^i and distributed to different parties. Given a message m and a share sk_f^i, one can compute locally and obtain a signature pair (f_i(m), σ_i). Given all of the signature pairs, everyone can recover the actual value f(m) and the corresponding signature σ; when there are not enough signature pairs, nobody can recover the signature (f(m), σ). We formalize the notion of function privacy in this new model, which is not possible for standard functional signatures, and give a construction from a standard functional signature and function secret sharing, based on one-way functions and the learning with errors assumption. We then consider the problem of hosting services in multiple untrusted clouds, in which verifiability and program privacy are considered: verifiability requires that the results returned from the cloud can be checked, while program privacy requires that the evaluation procedure does not reveal the program to the untrusted cloud. We give a verifiable distributed secure cloud service scheme from distributed functional signatures and prove its security properties, which include untrusted cloud security (program privacy and verifiability) and untrusted client security.
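The all-shares-to-recover behaviour resembles n-out-of-n additive secret sharing, which can be sketched in a few lines. This toy shares a single value modulo a prime and is not the paper's actual construction:

```python
import secrets

P = 2**61 - 1  # prime modulus for the toy scheme

def split(value, n):
    # n-out-of-n additive sharing: the value is recoverable only when all
    # n shares are summed; any proper subset is uniformly random mod P.
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def recover(shares):
    # Summing every share cancels the randomness and reveals the value.
    return sum(shares) % P
```

In the signature scheme the recovered object is the pair (f(m), σ) rather than a bare number, but the threshold intuition is the same: n − 1 shares carry no information about the result.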
41

Bhavsar, Sejal Atit, and Kirit J. Modi. "Design and Development of Framework for Platform Level Issues in Fog Computing." International Journal of Electronics, Communications, and Measurement Engineering 8, no. 1 (January 2019): 1–20. http://dx.doi.org/10.4018/ijecme.2019010101.

Abstract:
Fog computing is a paradigm that extends cloud computing services to the edge of the network, providing data, storage, compute, and application services to end users. The distinguishing characteristic of fog computing is its proximity to end users: application services are hosted on network edges, such as on routers and switches. The goal of fog computing is to improve efficiency and reduce the amount of data that needs to be transported to the cloud for analysis, processing, and storage. Due to the heterogeneous characteristics of fog computing, there are several issues, i.e., security, fault tolerance, and resource scheduling and allocation. To better understand fault tolerance, we highlight its basic concepts through the different fault tolerance techniques, i.e., reactive, proactive, and hybrid. In addition to fault tolerance, how to balance resource utilization and security in fog computing is also discussed here. Furthermore, to overcome platform-level issues of fog computing, we present a hybrid fault tolerance model using resource management and security.
42

Cavicchioli, Roberto, Riccardo Martoglia, and Micaela Verucchi. "A Novel Real-Time Edge-Cloud Big Data Management and Analytics Framework for Smart Cities." JUCS - Journal of Universal Computer Science 28, no. 1 (January 28, 2022): 3–26. http://dx.doi.org/10.3897/jucs.71645.

Abstract:
Exposing city information to dynamic, distributed, powerful, scalable, and user-friendly big data systems is expected to enable the implementation of a wide range of new opportunities; however, the size, heterogeneity and geographical dispersion of data often makes it difficult to combine, analyze and consume them in a single system. In the context of the H2020 CLASS project, we describe an innovative framework aiming to facilitate the design of advanced big-data analytics workflows. The proposal covers the whole compute continuum, from edge to cloud, and relies on a well-organized distributed infrastructure exploiting: a) edge solutions with advanced computer vision technologies enabling the real-time generation of "rich" data from a vast array of sensor types; b) cloud data management techniques offering efficient storage, real-time querying and updating of the high-frequency incoming data at different granularity levels. We specifically focus on obstacle detection and tracking for edge processing, and consider a traffic density monitoring application, with hierarchical data aggregation features for cloud processing; the discussed techniques will constitute the groundwork enabling many further services. The tests are performed on the real use-case of the Modena Automotive Smart Area (MASA).
43

Chidambaram, Nithya, Pethuru Raj, K. Thenmozhi, and Rengarajan Amirtharajan. "Enhancing the Security of Customer Data in Cloud Environments Using a Novel Digital Fingerprinting Technique." International Journal of Digital Multimedia Broadcasting 2016 (2016): 1–6. http://dx.doi.org/10.1155/2016/8789397.

Abstract:
With the rapid rise of the Internet and electronics in people's lives, the data related to them has also undergone a mammoth increase in magnitude. The data stored in the cloud can be sensitive and at times needs a proper file storage system with a strong security algorithm; since the cloud is an open, shareable, elastic environment, it needs impenetrable and airtight security. This paper deals with furnishing a secure storage system for this purpose in the cloud. To become eligible to store data, a user has to register with the cloud database, which prevents unauthorized access. The files stored in the cloud are encrypted with the RSA algorithm, and a digital fingerprint for each is generated through the MD5 message digest before storage. RSA renders the data unreadable to anyone without the private key, and MD5 makes it impossible for any changes to the data to go unnoticed. After the application of RSA and MD5 before storage, the data becomes resistant to access or modification by any third party and by intruders into the cloud storage system. This application is tested on Amazon Elastic Compute Cloud Web Services.
44

Resende, João S., Luís Magalhães, André Brandão, Rolando Martins, and Luís Antunes. "Towards a Modular On-Premise Approach for Data Sharing." Sensors 21, no. 17 (August 28, 2021): 5805. http://dx.doi.org/10.3390/s21175805.

Abstract:
The growing demand for everyday data insights drives the pursuit of more sophisticated infrastructures and artificial intelligence algorithms. Combined with the growing number of interconnected devices, this raises concerns about scalability and privacy. The main problem is that devices can sense the environment and generate large volumes of possibly identifiable data. Public cloud-based technologies have been proposed as a solution, due to their high availability and low entry costs. However, there are growing concerns regarding data privacy, especially with the introduction of the new General Data Protection Regulation, due to the inherent lack of control that comes with the off-premise computational resources on which the public cloud is built. Users have no control over the data uploaded to such services as the cloud, which increases the uncontrolled distribution of information to third parties. This work aims to provide a modular approach that uses a cloud-of-clouds to store persistent data and reduce upfront costs while allowing information to remain private and under users' control. In addition to storage, this work also focuses on usability modules that enable data sharing: any user can securely share and analyze or compute over the uploaded data using private computation, for example training machine learning (ML) models, without revealing private data. To achieve this, we use a combination of state-of-the-art technologies, such as MultiParty Computation (MPC) and k-anonymization, to produce a complete system with intrinsic privacy properties.
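K-anonymization, one of the techniques mentioned, coarsens quasi-identifiers until every record is indistinguishable from at least k − 1 others. A minimal sketch using age generalization (the decade-range bucketing is an illustrative choice, not the paper's scheme):

```python
from collections import Counter

def generalize_age(age, width=10):
    # Coarsen a quasi-identifier: exact ages become decade ranges,
    # e.g. 23 -> "20-29".
    low = (age // width) * width
    return f"{low}-{low + width - 1}"

def is_k_anonymous(quasi_ids, k):
    # The dataset is k-anonymous when every quasi-identifier value
    # occurs at least k times.
    return all(count >= k for count in Counter(quasi_ids).values())
```

Exact ages typically appear once each and fail the check, while the generalized decade ranges group records together and satisfy it.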
45

Lin, Li, Huanzeng Yang, Jing Zhan, and Xuhui Lv. "VNGuarder: An Internal Threat Detection Approach for Virtual Network in Cloud Computing Environment." Security and Communication Networks 2022 (April 16, 2022): 1–15. http://dx.doi.org/10.1155/2022/1242576.

Abstract:
Edge-assisted Internet of Things applications often rely on cloud virtual network services to transmit data. However, internal threats such as illegal management and configuration of the cloud platform, whether intentional or unintentional, can lead to virtual network security problems such as malicious changes to user networks and hijacked data flows, ultimately affecting edge-assisted Internet of Things applications. We propose VNGuarder, a virtual network internal threat detection method for cloud computing environments, which can effectively monitor whether the virtual network configuration of legitimate users on an IaaS cloud platform has been maliciously changed or destroyed by insiders. First, based on the life cycle of cloud virtual network services, we identify two types of internal attacks: illegal use of virtualization management tools and illegal invocation of virtual network-related processes. Second, based on the normal behavior of tenants, we propose a hierarchical trusted call correlation scheme that provides a basis for discovering insiders who illegally invoke virtualization management tools and virtual network-related processes on the controller, network, or compute nodes of the cloud platform. Third, we introduce a trace-enable mechanism that combines real-time monitoring with log analysis: by collecting and recording the complete call process of virtual network management and configuration in the cloud platform and comparing it with the result of the hierarchical trusted call correlation, abnormal operations can be reported to tenants in time. Comprehensive simulation experiments on the OpenStack platform show that VNGuarder can effectively detect illegal management and configuration of virtual networks by insiders without significantly affecting tenant network creation time or CPU and memory utilization.
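The core of the trusted call correlation idea is comparing an observed call chain against a whitelist of expected chains per operation. A minimal sketch, assuming hypothetical process names (they are illustrative, not VNGuarder's or OpenStack's actual internals):

```python
# Hypothetical whitelist: for each virtual-network operation, the ordered
# processes expected across controller/network/compute nodes.
TRUSTED_CHAINS = {
    "create_network": ["neutron-server", "ovs-agent", "ovs-vsctl"],
    "attach_port": ["neutron-server", "ovs-agent", "ip"],
}

def detect_anomaly(operation, observed_calls):
    """Flag deviations from the trusted chain: extra, missing, or
    out-of-order invocations all indicate a potential insider action."""
    expected = TRUSTED_CHAINS.get(operation)
    if expected is None:
        return f"unknown operation: {operation}"
    if observed_calls != expected:
        return f"anomaly in {operation}: expected {expected}, observed {observed_calls}"
    return None  # observed chain matches the trusted correlation

# An insider injecting an extra call into the recorded trace is reported:
alert = detect_anomaly(
    "create_network",
    ["neutron-server", "iptables", "ovs-agent", "ovs-vsctl"],
)
```

Real trace analysis would also correlate timestamps, node identities, and tenant ownership, but the match-against-expected-chain step is the decision point.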
46

Kanwal, Samira, Zeshan Iqbal, Aun Irtaza, Muhammad Sajid, Sohaib Manzoor, and Nouman Ali. "Head Node Selection Algorithm in Cloud Computing Data Center." Mathematical Problems in Engineering 2021 (July 24, 2021): 1–12. http://dx.doi.org/10.1155/2021/3418483.

Abstract:
Cloud computing provides multiple services, such as computational services, data processing, and resource sharing, through multiple nodes. These nodes collaborate on all the aforementioned services in the data center through a head/leader node. The head node is responsible for reliability, performance, latency, and deadlock handling, and enables users to access cost-effective computational services. However, selecting the optimal head node is a challenging problem because resources such as memory, CPU MIPS, and bandwidth must be considered. Existing methods are monolithic, as they select head nodes without taking the nodes’ resources into account. There is also a need for a candidate node that can become the head node in case of head node failure. Therefore, in this paper we propose the Head Node Selection Algorithm (HNSA), a genetic algorithm (GA)-based technique for optimal head node selection in a data center. The proposed method has three modules: initial population generation, head node selection, and candidate node selection. In the first module, we generate the initial population by randomly mapping tasks onto different servers using a scheduling algorithm, and then compute the overall cost and the per-node cost based on resources. In the second module, the best nodes are selected by applying genetic operations such as crossover, mutation, and a fitness function over the available resources; of the selected nodes, one is chosen as the head node and another as the candidate node. In the third module, the candidate node becomes the head node in case of head node failure. HNSA is compared against state-of-the-art algorithms such as the Bees Life Algorithm (BLA) and Heterogeneous Earliest Finish Time (HEFT). The simulation analysis shows that HNSA performs better in terms of execution time, memory utilization, service level agreement (SLA) violations, and energy consumption.
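The resource-aware GA selection described above can be sketched as follows. The node table, fitness weights, and GA parameters are all illustrative assumptions, not HNSA's actual values; the point is the selection-plus-mutation loop that yields both a head and a standby candidate node.

```python
import random

random.seed(7)  # reproducible sketch

# Hypothetical node resource table: (memory GB, CPU MIPS, bandwidth Mbps)
nodes = {
    "n1": (32, 4000, 1000),
    "n2": (64, 8000, 1000),
    "n3": (16, 2000, 500),
    "n4": (64, 6000, 2000),
}

def fitness(name):
    """Weighted resource score (weights are illustrative)."""
    mem, mips, bw = nodes[name]
    return 0.4 * mem + 0.4 * mips / 100 + 0.2 * bw / 10

def evolve(generations=20, pop_size=6, mutation_rate=0.2):
    """Toy GA: individuals are node names; tournament selection keeps
    fitter nodes, mutation re-samples a random node."""
    population = [random.choice(list(nodes)) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(population, 2)       # tournament of two
            winner = a if fitness(a) >= fitness(b) else b
            if random.random() < mutation_rate:       # mutation
                winner = random.choice(list(nodes))
            nxt.append(winner)
        population = nxt
    ranked = sorted(set(population), key=fitness, reverse=True)
    head = ranked[0]
    candidate = ranked[1] if len(ranked) > 1 else head
    return head, candidate

head, candidate = evolve()
```

By construction the candidate is the next-fittest surviving node, so failover to it preserves as much of the head's resource profile as the population allows.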
47

Liu, Jianhua, and Zibo Wu. "PECSA: Practical Edge Computing Service Architecture Applicable to Adaptive IoT-Based Applications." Future Internet 13, no. 11 (November 19, 2021): 294. http://dx.doi.org/10.3390/fi13110294.

Abstract:
The cloud-based Internet of Things (IoT-Cloud) combines the advantages of the IoT and cloud computing, which not only expands the scope of cloud computing but also enhances the data processing capability of the IoT. Users always seek affordable and efficient services, which can be delivered through the cooperation of all available network resources, such as edge computing nodes. However, current solutions exhibit significant security and efficiency problems that must be solved. Insider attacks can degrade the performance of the IoT-Cloud due to its operating environment and inherently open construction, and traditional security approaches cannot defend against these attacks effectively. In this paper, a novel practical edge computing service architecture (PECSA), which integrates a trust management methodology with dynamic cost evaluation schemes, is proposed to address these problems. In this architecture, edge network devices and the edge platform cooperate to achieve shorter response times and/or lower economic costs, and to enhance the effectiveness of the trust management methodology. To achieve faster responses to IoT-based requirements, all the edge computing devices and cloud resources cooperate by evaluating computational cost and runtime resource capacity in the edge networks. Moreover, in cooperation with the edge platform, the edge networks compute trust values of linked nodes and find the best collaborative approach for each user to meet various service requirements. Experimental results demonstrate the efficiency and the security of the proposed architecture.
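The selection idea — pick a collaborator that satisfies the user's response-time and cost constraints with the highest trust value — can be sketched as below. The node fields and thresholds are illustrative assumptions, not PECSA's actual algorithm or data model.

```python
def best_collaborator(candidates, deadline_ms, budget):
    """Among nodes meeting the deadline and budget, pick the one with
    the highest trust value; return None when no edge node qualifies
    (the request would then fall back to the cloud)."""
    feasible = [n for n in candidates
                if n["latency_ms"] <= deadline_ms and n["cost"] <= budget]
    if not feasible:
        return None
    return max(feasible, key=lambda n: n["trust"])

edge_nodes = [
    {"id": "edge-a", "latency_ms": 20, "cost": 0.5, "trust": 0.90},
    {"id": "edge-b", "latency_ms": 10, "cost": 0.7, "trust": 0.60},
    {"id": "edge-c", "latency_ms": 80, "cost": 0.2, "trust": 0.95},
]
choice = best_collaborator(edge_nodes, deadline_ms=30, budget=1.0)
# edge-a and edge-b meet the deadline; edge-a has the higher trust value
```

Trust acts as the tie-breaker only within the feasible set, so a highly trusted but slow node (edge-c here) is never chosen over one that actually meets the service requirement.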
48

Mora-Márquez, Fernando, José Luis Vázquez-Poletti, Víctor Chano, Carmen Collada, Álvaro Soto, and Unai López de Heredia. "Hardware Performance Evaluation of De novo Transcriptome Assembly Software in Amazon Elastic Compute Cloud." Current Bioinformatics 15, no. 5 (October 14, 2020): 420–30. http://dx.doi.org/10.2174/1574893615666191219095817.

Abstract:
Background: Bioinformatics software for RNA-seq analysis has high computational requirements in terms of the number of CPUs, RAM size, and processor characteristics. In particular, de novo transcriptome assembly demands a large computational infrastructure due to the massive data size and the complexity of the algorithms employed. Comparative studies on the quality of the transcriptomes yielded by de novo assemblers have been published previously, but they lack a hardware efficiency-oriented approach to help select the assembly hardware platform cost-efficiently. Objective: We tested the performance of two popular de novo transcriptome assemblers, Trinity and SOAPdenovo-Trans (SDNT), in terms of cost-efficiency and quality, to assess their limitations and to provide troubleshooting and guidelines for running transcriptome assemblies efficiently. Methods: We built virtual machines with different hardware characteristics (CPU number, RAM size) in the Amazon Elastic Compute Cloud of Amazon Web Services. Using simulated and real data sets, we measured the elapsed time, cost, CPU percentage, and output size of small and large data set assemblies. Results: For small data sets, SDNT outperformed Trinity by an order of magnitude, significantly reducing the duration and cost of the assembly. For large data sets, Trinity performed better than SDNT. Both assemblers produce good-quality transcriptomes. Conclusion: The selection of the optimal transcriptome assembler and the provisioning of computational resources depend on the combined effect of the size and complexity of the RNA-seq experiment.
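The cost-efficiency comparison the paper performs reduces to multiplying each run's elapsed time by the instance's hourly rate and ranking the results. A minimal sketch with hypothetical rates and timings (not the paper's measurements):

```python
def assembly_cost(hourly_rate_usd, elapsed_hours):
    """Cloud cost of one assembly run: EC2 bills per instance-hour."""
    return hourly_rate_usd * elapsed_hours

# Hypothetical benchmark results: (hourly rate USD, elapsed hours)
runs = {
    ("SDNT",    "r5.2xlarge"): (0.504, 2.0),
    ("Trinity", "r5.2xlarge"): (0.504, 20.0),
    ("Trinity", "r5.4xlarge"): (1.008, 8.0),
}
costs = {k: assembly_cost(*v) for k, v in runs.items()}
cheapest = min(costs, key=costs.get)
```

Note how a bigger, pricier instance can still be the cheaper choice for the slower assembler if it cuts the elapsed time enough, which is the trade-off the study quantifies.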
49

Sokolovskyy, Yaroslav, Denys Manokhin, Yaroslav Kaplunsky, and Olha Mokrytska. "Development of software and algorithms of parallel learning of artificial neural networks using CUDA technologies." Technology audit and production reserves 5, no. 2(61) (September 23, 2021): 21–25. http://dx.doi.org/10.15587/2706-5448.2021.239784.

Abstract:
The object of this research is to parallelize the learning process of artificial neural networks to automate medical image analysis, using the Python programming language, the PyTorch framework, and Compute Unified Device Architecture (CUDA) technology. The framework's operation is based on the define-by-run model. Available cloud technologies for the task and learning algorithms for artificial neural networks were analyzed. A modified U-Net architecture from the MedicalTorch library was used because a network was needed that can learn effectively from small data sets: in medicine, large data sets are scarce owing to the confidentiality requirements for data of this nature. The resulting information system accomplishes the tasks set before it and provides a user-friendly interface with all the tools needed to simplify and automate data visualization and analysis. The efficiency of neural network training on the central processing unit (CPU) is compared with training on the graphics processing unit (GPU) using CUDA technology. Cloud technologies were used in the study: among cloud services, Google Colab and Microsoft Azure were considered. Colab was first used to build a prototype, and the Azure service was then used to train the finished artificial neural network architecture effectively. Measurements were performed with cloud technologies in both services. The Adam optimizer was used to train the model. CPU run times were also measured to estimate the acceleration obtained through GPU computing with CUDA and cloud technologies. The model developed during the research showed satisfactory results on the Jaccard and Dice metrics for the problem at hand. A key success factor in this study was cloud computing services.
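The CPU-versus-GPU comparison in this study boils down to timing the same training workload on both backends and taking the ratio. A self-contained sketch of that measurement harness (the timings and the stand-in step function are hypothetical; the actual study ran PyTorch training steps):

```python
import time

def time_run(step_fn, repeats=3):
    """Wall-clock the same training step a few times on one backend."""
    start = time.perf_counter()
    for _ in range(repeats):
        step_fn()
    return time.perf_counter() - start

def speedup(cpu_seconds, gpu_seconds):
    """CUDA acceleration factor: how many times faster the GPU run was."""
    return cpu_seconds / gpu_seconds

# With hypothetical epoch timings measured in the two cloud services:
factor = speedup(cpu_seconds=420.0, gpu_seconds=35.0)  # 12.0
```

In PyTorch itself, the only change between the two measurements is moving the model and batches to `torch.device("cuda")` versus `torch.device("cpu")`; the timing logic is identical.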
50

Bai, Jinbing, Ileen Jhaney, and Jessica Wells. "Developing a Reproducible Microbiome Data Analysis Pipeline Using the Amazon Web Services Cloud for a Cancer Research Group: Proof-of-Concept Study." JMIR Medical Informatics 7, no. 4 (November 11, 2019): e14667. http://dx.doi.org/10.2196/14667.

Abstract:
Background Cloud computing for microbiome data sets can significantly increase working efficiencies and expedite the translation of research findings into clinical practice. The Amazon Web Services (AWS) cloud provides an invaluable option for microbiome data storage, computation, and analysis. Objective The goals of this study were to develop a microbiome data analysis pipeline by using AWS cloud and to conduct a proof-of-concept test for microbiome data storage, processing, and analysis. Methods A multidisciplinary team was formed to develop and test a reproducible microbiome data analysis pipeline with multiple AWS cloud services that could be used for storage, computation, and data analysis. The microbiome data analysis pipeline developed in AWS was tested by using two data sets: 19 vaginal microbiome samples and 50 gut microbiome samples. Results Using AWS features, we developed a microbiome data analysis pipeline that included Amazon Simple Storage Service for microbiome sequence storage, Linux Elastic Compute Cloud (EC2) instances (ie, servers) for data computation and analysis, and security keys to create and manage the use of encryption for the pipeline. Bioinformatics and statistical tools (ie, Quantitative Insights Into Microbial Ecology 2 and RStudio) were installed within the Linux EC2 instances to run microbiome statistical analysis. The microbiome data analysis pipeline was performed through command-line interfaces within the Linux operating system or in the Mac operating system. Using this new pipeline, we were able to successfully process and analyze 50 gut microbiome samples within 4 hours at a very low cost (a c4.4xlarge EC2 instance costs $0.80 per hour). Gut microbiome findings regarding diversity, taxonomy, and abundance analyses were easily shared within our research team. Conclusions Building a microbiome data analysis pipeline with AWS cloud is feasible. This pipeline is highly reliable, computationally powerful, and cost effective. Our AWS-based microbiome analysis pipeline provides an efficient tool to conduct microbiome data analysis.
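The cost figure in this abstract follows directly from EC2's on-demand hourly billing; a minimal sketch of the arithmetic (the per-sample breakdown is an illustration derived from the abstract's numbers):

```python
def ec2_run_cost(hourly_rate_usd, hours, instances=1):
    """On-demand EC2 billing: hourly rate x elapsed time x instance count."""
    return round(hourly_rate_usd * hours * instances, 2)

# The abstract's example: one c4.4xlarge at $0.80/hour processing
# 50 gut microbiome samples in 4 hours.
total = ec2_run_cost(0.80, 4)      # $3.20 for the whole run
per_sample = total / 50            # well under a cent per sample
```

This is why the authors call the pipeline cost effective: the entire 50-sample analysis costs a few dollars, and the instance can be terminated as soon as the run finishes.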