Journal articles on the topic 'AWS – Amazon Web Service'

Consult the top 50 journal articles for your research on the topic 'AWS – Amazon Web Service.'


1

Roobini, M. S., Selvasurya Sampathkumar, Shaik Khadar Basha, and Anitha Ponraj. "Serverless Computing Using Amazon Web Services." Journal of Computational and Theoretical Nanoscience 17, no. 8 (August 1, 2020): 3581–85. http://dx.doi.org/10.1166/jctn.2020.9235.

Abstract:
In the last decade, cloud computing has transformed the way we build applications. The boom in cloud computing helped develop new software designs and architectures, allowing developers to focus more on business logic than on infrastructure. The FaaS (function as a service) compute model lets developers concentrate on application code alone, while the cloud provider takes care of the remaining concerns. Here we present the serverless architecture of a web application built using AWS services and provide a detailed analysis of the Lambda functions and the microservice software design implemented with these services.
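The FaaS model this abstract describes can be made concrete with a short sketch. The handler below mimics an AWS Lambda function behind an API Gateway proxy integration; the event shape and field names are illustrative, and locally we simply call the function with a sample event.

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler in the API Gateway proxy format.

    Only the business logic lives here; in the FaaS model the cloud
    provider provisions and scales the underlying compute.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoke locally with a sample event (Lambda would supply a real context):
print(handler({"queryStringParameters": {"name": "AWS"}}, None))
```

Deployed behind API Gateway, the same function would run on demand with no server to manage, which is the property the paper analyzes.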
2

Bankar, Suyog. "Cloud Computing Using Amazon Web Services AWS." International Journal of Trend in Scientific Research and Development 2, no. 4 (June 30, 2018): 2156–57. http://dx.doi.org/10.31142/ijtsrd14583.

3

Dirgantara, Dhimas, and Is Mardianto. "TEKNIK IDENTITY AND ACCESS MANAGEMENT PADA LAYANAN AMAZON WEB SERVICES." Computatio : Journal of Computer Science and Information Systems 3, no. 1 (June 18, 2019): 1. http://dx.doi.org/10.24912/computatio.v3i1.4270.

Abstract:
In this era, traditional technologies such as owning servers and assorted hardware have been abandoned by large companies, which are turning to cloud computing technology. This technology makes it easier for companies to run their business. Among the many cloud computing providers, Amazon Web Services (AWS) is one of the first and largest. A problem that arises in the use of cloud computing technology is the granting of access rights for data management. AWS has a service for managing access control to each of its services, namely Identity and Access Management (IAM), which seeks to prevent activity that leads to security breaches. The results take the form of groups that can access AWS services according to the roles they are given.
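As an illustration of the IAM mechanism the abstract describes, here is a minimal identity-based policy document of the kind attached to a group or role. The bucket name is a made-up example; with credentials configured, the document could be attached via boto3's `put_group_policy`, noted in a comment.

```python
import json

# A minimal IAM identity-based policy granting read-only access to one
# S3 bucket. The bucket name is illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
print(policy_json)
# With AWS credentials configured, this document could be attached with
# boto3: iam.put_group_policy(GroupName=..., PolicyName=...,
#                             PolicyDocument=policy_json)
```

Groups receive only the actions and resources the policy allows, which is the role-based access outcome the paper reports.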
4

Choudhary, Anurag. "A walkthrough of Amazon Elastic Compute Cloud (Amazon EC2): A Review." International Journal for Research in Applied Science and Engineering Technology 9, no. 11 (November 30, 2021): 93–97. http://dx.doi.org/10.22214/ijraset.2021.38764.

Abstract:
Cloud services are provided by several giant corporations, notably Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Here we address the most prominent provider, Amazon Web Services, and its Elastic Compute Cloud (EC2) offering. Amazon offers a comprehensive package of computing solutions that lets businesses establish dedicated virtual clouds while maintaining complete configuration control over their working environment. An organization often needs several supporting technologies; instead of installing them, it can simply buy them online as a service. Amazon's Elastic Compute Cloud web service delivers highly customizable computing capacity in the cloud, allowing developers to build applications with high scalability. Put explicitly, an EC2 instance is a virtual platform that replicates a physical server on which you may host your applications. Instead of acquiring your own hardware and connecting it to a network, Amazon provides you with almost endless virtual machines on which to deploy your applications while it manages the hardware. This review gives a quick overview of Amazon EC2, including its features, pricing, and challenges. Finally, open obstacles and future research directions for Amazon EC2 are addressed.
Keywords: Cloud Computing, Cloud Service Provider, Amazon Web Services, Amazon Elastic Compute Cloud, AWS EC2
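Since the review touches on EC2 pricing, a toy estimator can make the on-demand billing model concrete: cost is simply hourly rate times hours run. The hourly rates below are placeholder assumptions, not current AWS prices.

```python
# Toy estimator for EC2 on-demand cost. Rates are illustrative
# assumptions, not current AWS prices (check the EC2 pricing page).
ASSUMED_HOURLY_USD = {
    "t3.micro": 0.0104,
    "m5.large": 0.096,
    "c5.xlarge": 0.17,
}

def monthly_cost(instance_type: str, hours: float = 730.0) -> float:
    """Assumed on-demand cost in USD for `hours` of runtime
    (730 hours approximates one month of continuous use)."""
    return round(ASSUMED_HOURLY_USD[instance_type] * hours, 2)

print(monthly_cost("m5.large"))       # a full month at the assumed rate
print(monthly_cost("t3.micro", 100))  # 100 hours only
```

The pay-per-hour structure is what lets an organization trade capital hardware spend for a metered operating expense.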
5

Kewate, Neha. "A Review on AWS - Cloud Computing Technology." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 258–63. http://dx.doi.org/10.22214/ijraset.2022.39802.

Abstract:
Cloud computing can be defined simply as maintaining data centers and data servers while accessing technology services such as computing power, storage, and databases through a provider like AWS (Amazon Web Services). It is an emerging model that is already popular among almost all enterprises, built on the concept of on-demand services: cloud resources are used and scaled as demand requires. AWS cloud computing is a cost-effective model whose major concerns are security and storage in the cloud, which is one of the main reasons many enterprises choose AWS. This paper reviews security research in the field of cloud security and the storage services of the AWS cloud platform, and then presents the working of AWS cloud computing. AWS is among the most trusted providers of cloud computing, offering excellent cloud security as well as excellent cloud storage services. The main aim of this paper is to make storage and security a core operation of cloud computing rather than an add-on. As service providers and related companies multiply, the AWS cloud platform plays a vital role in the service industries through its web services, so choosing a cloud service provider wisely is a basic need of the industry; we therefore examine how AWS fulfills these specific needs.
Keywords: Trusted Computing, AWS, Information-Centric Security, Cloud Storage, S3, EC2, Cloud Computing
6

Al-Sayyed, Rizik M. H., Wadi’ A. Hijawi, Anwar M. Bashiti, Ibrahim AlJarah, Nadim Obeid, and Omar Y. A. Al-Adwan. "An Investigation of Microsoft Azure and Amazon Web Services from Users’ Perspectives." International Journal of Emerging Technologies in Learning (iJET) 14, no. 10 (May 30, 2019): 217. http://dx.doi.org/10.3991/ijet.v14i10.9902.

Abstract:
Cloud computing is one of the paradigms that have undertaken to deliver the utility computing concept, viewing computing as a utility similar to water and electricity. In this paper we investigate two highly effective cloud platforms from the users' perspective: Microsoft Azure (Azure) and Amazon Web Services (AWS). We highlight and compare in depth the features of Azure and AWS, focusing on (1) Pricing, (2) Availability, (3) Confidentiality, (4) Secrecy, (5) Tier Account, and (6) Service Level Agreement (SLA). The study shows that Azure is more appropriate when considering Pricing and Availability (error rate), while AWS is more appropriate when considering Tier accounts. Our user survey study and its statistical analysis agree with the arguments made for each of the six comparison factors.
7

Niranjanamurthy, M., M. P. Amulya, N. M. Niveditha, and P. Dayananda. "Creating a Custom Virtual Private Cloud and Launch an Elastic Compute Cloud (EC2) Instance in Your Virtual Private Cloud." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4509–14. http://dx.doi.org/10.1166/jctn.2020.9106.

Abstract:
Cloud computing refers to storing and accessing data over the web: the data is not held on your PC's hard disk but retrieved from a remote server. Amazon Web Services (AWS) enables adaptable, reliable, scalable, easy-to-use, and practical cloud computing solutions; it is an extensive, simple-to-use computing platform offered by Amazon. A virtual private cloud (VPC) is dedicated to an AWS account and is logically isolated from other virtual networks in the AWS cloud. Amazon EC2 is a secure web service that provides compute with resizable capacity in the cloud. In this work, we give step-by-step details of creating a custom VPC and launching an EC2 instance in that VPC.
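Before calling the VPC and EC2 APIs, one plans the VPC's address space. Here is a sketch using Python's stdlib `ipaddress` module; the CIDR blocks are illustrative, and the boto3 calls that would follow are noted in comments.

```python
import ipaddress

# Plan a custom VPC: carve a /16 VPC CIDR into /24 subnets
# (for example, one per availability zone). CIDRs are illustrative.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc_cidr.subnets(new_prefix=24))

print(f"VPC {vpc_cidr} yields {len(subnets)} /24 subnets")
print("first two:", subnets[0], subnets[1])
# With boto3 these blocks would map onto
#   ec2.create_vpc(CidrBlock=str(vpc_cidr))
#   ec2.create_subnet(VpcId=..., CidrBlock=str(subnets[0]))
# and finally ec2.run_instances(..., SubnetId=...) to launch the
# EC2 instance inside the custom VPC.
```

Planning non-overlapping CIDRs up front avoids the address conflicts that make later peering or subnet changes painful.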
8

Бабак, И. Н., and A. A. Микитенко. "АНАЛИЗ МЕХАНИЗМОВ ОБЕСПЕЧЕНИЯ БЕЗОПАСНОСТИ ИНФОРМАЦИИ В ОБЛАКЕ AMAZON WEB SERVICES (AWS)." Open Information and Computer Integrated Technologies, no. 81 (November 16, 2018): 110–15. http://dx.doi.org/10.32620/oikit.2018.81.12.

Abstract:
Advantages and disadvantages of using cloud technologies in companies are considered. The software mechanisms for protecting data and user resources from unauthorized access that are provided by the Amazon Web Services (AWS) cloud provider are analyzed, as are the basic vulnerabilities that can remain even when effective software methods for cloud resource safety exist. Problems such as obsolete access keys, open ports and untuned security rules, and the presence of unused resources in a cloud are highlighted. The article proposes approaches for strengthening data protection and information security in the AWS cloud. The development of a framework is proposed whose main functions are scanning for and removing outdated access keys, open ports, unused resources, and vulnerable configurations of protection mechanisms in a cloud. For ease of use, scripts automating the framework's deployment are also to be developed.
9

Ratkov, Aleksandar. "DIZAJN SERVERLESS WEB APLIKACIJA NA AMAZON PLATFORMI." Zbornik radova Fakulteta tehničkih nauka u Novom Sadu 34, no. 11 (November 3, 2019): 2009–11. http://dx.doi.org/10.24867/05be16ratkov.

10

Park, Se-Joon, Yong-Joon Lee, and Won-Hyung Park. "Configuration Method of AWS Security Architecture That Is Applicable to the Cloud Lifecycle for Sustainable Social Network." Security and Communication Networks 2022 (January 12, 2022): 1–12. http://dx.doi.org/10.1155/2022/3686423.

Abstract:
Recently, owing to the many features and advantages of cloud computing, cloud services are being introduced to countless industries around the world at a remarkably rapid pace. With this rapid adoption, however, security vulnerabilities are increasing, and the risk of technology leakage from cloud services is also expected to increase in social network services. This study therefore proposes an AWS-based (Amazon Web Services) security architecture configuration method that can be applied across the entire life cycle (planning, establishment, and operation) of cloud services for better security in AWS, the most widely used cloud service in the world. The proposed AWS security guide consists of five areas for a safe social network: a Security Solution Selection Guide, Personal Information Safeguard Guide, Security Architecture Design Guide, Security Configuration Guide, and Operational Security Checklist. The AWS security architecture has been designed with three reference models: a Standard Security Architecture, a Basic Security Architecture, and an Essential Security Architecture. The AWS Security Guide and AWS Security Architecture proposed in this paper are expected to help the many businesses and institutions hoping to establish and operate a safe and reliable AWS cloud system in the social network environment.
11

Yu, Ellen, Aparna Bhaskaran, Shang-Lin Chen, Zachary E. Ross, Egill Hauksson, and Robert W. Clayton. "Southern California Earthquake Data Now Available in the AWS Cloud." Seismological Research Letters 92, no. 5 (June 16, 2021): 3238–47. http://dx.doi.org/10.1785/0220210039.

Abstract:
The Southern California Earthquake Data Center is hosting its earthquake catalog and seismic waveform archive in the Amazon Web Services (AWS) Open Dataset Program (s3://scedc-pds; us-west-2 region). The cloud dataset's high data availability and scalability facilitate research that uses large volumes of data and computationally intensive processing. We describe the data archive and our rationale for the formats and data organization. We provide two simple examples to show how storing the data in AWS Simple Storage Service can benefit the analysis of large datasets. We share usage statistics of our data during the first year in the AWS Open Dataset Program. We also discuss the challenges and opportunities of a cloud-hosted archive.
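As a sketch of programmatic access to the archive, the helper below builds an S3 key prefix for one day of continuous waveforms in the s3://scedc-pds bucket. The year/day-of-year layout shown is an assumption about the archive's organization and should be verified against the SCEDC documentation; anonymous listing via boto3 is noted in a comment.

```python
from datetime import date

BUCKET = "scedc-pds"  # the SCEDC AWS Open Data bucket (us-west-2)

def continuous_prefix(d: date) -> str:
    """Build an S3 key prefix for one day of continuous waveforms.

    The year/day-of-year path layout here is an assumed convention;
    check the SCEDC open-dataset docs for the authoritative scheme.
    """
    doy = d.timetuple().tm_yday
    return f"continuous_waveforms/{d.year}/{d.year}_{doy:03d}/"

prefix = continuous_prefix(date(2016, 5, 2))
print(f"s3://{BUCKET}/{prefix}")
# Anonymous access with boto3 would look roughly like:
#   from botocore import UNSIGNED
#   from botocore.config import Config
#   s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
#   s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix)
```

Because the bucket is public, no AWS account is needed to read it, which is what makes large-scale cloud analysis of the archive practical.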
12

Kacamarga, Muhamad Fitra, Arif Budiarto, and Bens Pardamean. "A Platform for Electronic Health Record Sharing in Environments with Scarce Resource Using Cloud Computing." International Journal of Online and Biomedical Engineering (iJOE) 16, no. 09 (August 13, 2020): 63. http://dx.doi.org/10.3991/ijoe.v16i09.13187.

Abstract:
One of the main objectives of an Electronic Health Record (EHR) is the transferability of patient data from one location to another. Many locations with scarce resources, particularly unreliable internet connectivity, face difficulties in accessing and sharing EHR data. This article presents our proposed design, which utilizes Amazon Web Services (AWS) as a sharing-mechanism platform for distributed healthcare organizations operating in environments with scarce resources. We propose the use of a database replication mechanism and REST (Representational State Transfer) web services to exchange information among health organizations and public health information systems.
13

Manukyan, A., D. Korobkin, S. Fomenkov, and S. Kolesnikov. "Semantic patent analysis with Amazon Web Services." Journal of Physics: Conference Series 2060, no. 1 (October 1, 2021): 012025. http://dx.doi.org/10.1088/1742-6596/2060/1/012025.

Abstract:
Semantic analysis of a patent array allows us to solve several modern problems: (1) clustering of the patent array, which can be useful for identifying patent trends and key modern technologies and for predicting demand for technologies in a future period; and (2) automation of the work of the patent-office expert, where a search for analogous patents can be performed from a full-text query (the text of a patent application). This study describes software that provides clustering of the patent array (topic modeling), identifies groups of related patents (based not on patent classification but on key terms and phrases extracted from the texts), and searches for patents using AWS technologies.
14

Han, Yan. "Cloud Computing: Case Studies and Total Cost of Ownership." Information Technology and Libraries 30, no. 4 (December 1, 2011): 198. http://dx.doi.org/10.6017/ital.v30i4.1871.

Abstract:
This paper consists of four major sections. The first section is a literature review of cloud computing and a cost model. The next section focuses on detailed overviews of cloud computing and its levels of service: SaaS, PaaS, and IaaS. Major cloud computing providers are introduced, including Amazon Web Services (AWS), Microsoft Azure, and Google App Engine. Case studies of implementing web applications on IaaS and PaaS using AWS, Linode, and Google App Engine are then demonstrated, with justifications for running on an IaaS provider (AWS) and on a PaaS provider (Google App Engine). The last section discusses cost and technology analysis comparing cloud computing with locally managed storage and servers. The total cost of ownership (TCO) of an AWS small instance is significantly lower, but the TCO of a typical 10 TB space in Amazon S3 is significantly higher. Since Amazon offers lower storage pricing for huge amounts of data, the TCO might be lower; readers should do their own analysis of the TCOs.
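In the spirit of the paper's TCO comparison, a back-of-envelope S3 storage estimate can be sketched in a few lines. The per-GB monthly rate is an assumed placeholder, not a quoted AWS price, and a real TCO analysis would also count requests, egress, and staff time.

```python
# Back-of-envelope monthly S3 storage cost. The rate is an assumed
# placeholder, not a current AWS quote; request and egress charges
# are deliberately ignored.
ASSUMED_S3_USD_PER_GB_MONTH = 0.023

def s3_monthly_cost(tb: float) -> float:
    """Assumed monthly storage cost in USD for `tb` terabytes."""
    gb = tb * 1024
    return round(gb * ASSUMED_S3_USD_PER_GB_MONTH, 2)

print(s3_monthly_cost(10))  # the 10 TB case discussed in the paper
```

Running this style of calculation against current prices and against local storage quotes is exactly the "do your own analysis" the paper recommends.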
15

Campagna, Matthew, and Shay Gueron. "Key Management Systems at the Cloud Scale." Cryptography 3, no. 3 (September 5, 2019): 23. http://dx.doi.org/10.3390/cryptography3030023.

Abstract:
This paper describes a cloud-scale encryption system. It discusses the constraints that shaped the design of Amazon Web Services’ Key Management Service, and in particular, the challenges that arise from using a standard mode of operation such as AES-GCM while safely supporting huge amounts of encrypted data that is (simultaneously) generated and consumed by a huge number of users employing different keys. We describe a new derived-key mode that is designed for this multi-user-multi-key scenario typical at the cloud scale. Analyzing the resulting security bounds of this model illustrates its applicability for our setting. This mode is already deployed as the default mode of operation for the AWS key management service.
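The idea of a derived-key mode, deriving a fresh per-message key from a long-term key and a nonce so that no single AES-GCM key encrypts enough messages to strain its security bounds, can be sketched with stdlib primitives. This HMAC-based derivation is illustrative only; it is not the construction deployed in AWS KMS.

```python
import hashlib
import hmac
import os

def derive_key(master_key: bytes, nonce: bytes) -> bytes:
    """Derive a fresh 256-bit per-message key from a long-term master
    key and a random nonce. Illustrative sketch of the derived-key
    idea, not the actual KMS construction."""
    return hmac.new(master_key, b"derive" + nonce, hashlib.sha256).digest()

master = os.urandom(32)
k1 = derive_key(master, os.urandom(16))
k2 = derive_key(master, os.urandom(16))
print(len(k1), k1 != k2)
# Each derived key would then be used with AES-GCM to protect a
# single message, so the multi-user, multi-key bounds the paper
# analyzes apply per derived key rather than per master key.
```

The derivation is deterministic in (master key, nonce), so the decryptor can recompute the same per-message key from the stored nonce.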
16

Rojas-Albarracín, Gabriel, Jorge Páramo-Fonseca, and Cindy Hernández-Merchán. "Plataforma computacional sobre Amazon Web Services (Aws) de renderizado distribuido." Revista científica 3, no. 30 (September 1, 2017): 252. http://dx.doi.org/10.14483/23448350.12362.

Abstract:
Today, a dynamic has emerged in which people demand ever-higher image quality in different media (games, films, animations). Higher definition generally requires processing larger images, which in turn requires greater computing capacity. This article presents a case study showing the implementation of a low-cost platform on the Amazon cloud for processing (rendering) images and animations in parallel.
17

Bai, Jinbing, Ileen Jhaney, and Jessica Wells. "Developing a Reproducible Microbiome Data Analysis Pipeline Using the Amazon Web Services Cloud for a Cancer Research Group: Proof-of-Concept Study." JMIR Medical Informatics 7, no. 4 (November 11, 2019): e14667. http://dx.doi.org/10.2196/14667.

Abstract:
Background: Cloud computing for microbiome data sets can significantly increase working efficiencies and expedite the translation of research findings into clinical practice. The Amazon Web Services (AWS) cloud provides an invaluable option for microbiome data storage, computation, and analysis. Objective: The goals of this study were to develop a microbiome data analysis pipeline using the AWS cloud and to conduct a proof-of-concept test for microbiome data storage, processing, and analysis. Methods: A multidisciplinary team was formed to develop and test a reproducible microbiome data analysis pipeline built on multiple AWS cloud services for storage, computation, and data analysis. The pipeline was tested using two data sets: 19 vaginal microbiome samples and 50 gut microbiome samples. Results: Using AWS features, we developed a pipeline that included Amazon Simple Storage Service for microbiome sequence storage, Linux Elastic Compute Cloud (EC2) instances (ie, servers) for data computation and analysis, and security keys to create and manage the use of encryption for the pipeline. Bioinformatics and statistical tools (ie, Quantitative Insights Into Microbial Ecology 2 and RStudio) were installed within the Linux EC2 instances to run the microbiome statistical analysis. The pipeline was operated through command-line interfaces within the Linux or Mac operating systems. Using this new pipeline, we were able to successfully process and analyze 50 gut microbiome samples within 4 hours at a very low cost (a c4.4xlarge EC2 instance costs $0.80 per hour). Gut microbiome findings regarding diversity, taxonomy, and abundance analyses were easily shared within our research team. Conclusions: Building a microbiome data analysis pipeline with the AWS cloud is feasible. The pipeline is highly reliable, computationally powerful, and cost-effective, and it provides an efficient tool for microbiome data analysis.
18

S, Murali, Manimaran A, Selvakumar K, and Dinesh Kumar S. "Hashing based Hybrid Online Voting Using Amazon Web Services." International Journal of Engineering & Technology 7, no. 4.10 (October 2, 2018): 295. http://dx.doi.org/10.14419/ijet.v7i4.10.20915.

Full text
Abstract:
A secure web-based voting framework is a need of the present time. We propose a new secure authentication scheme for an online voting framework that utilizes face recognition and a hashing algorithm. A simple verification step is performed during initial registration via email and phone. At the time of main registration, the voter is asked to provide a unique identification number (UIN) issued by the election authority, together with a face image. The UIN is converted into a secret key using the SHA algorithm, and the face image, stored in Amazon Web Services (AWS), acts as an authentication mechanism that enables people to cast their vote secretly. Voters attempting to cast multiple votes during the voting process are blocked by means of the encrypted UIN. Election organizers can observe the election in parallel, as votes are saved in a real-time database, and voter privacy is maintained because the details are converted into the key. In this system, an individual can vote from outside of his or her allocated constituency.
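The UIN-to-key step can be sketched in a few lines. SHA-256 and the salting scheme below are illustrative choices on our part; the paper specifies only "the SHA algorithm."

```python
import hashlib

def uin_key(uin: str, salt: str) -> str:
    """Hash a voter's unique identification number into a fixed-length
    secret key. SHA-256 and the salt are illustrative choices; the
    paper names only 'the SHA algorithm'."""
    return hashlib.sha256((salt + uin).encode("utf-8")).hexdigest()

# Hypothetical UIN and election salt, for demonstration only.
key = uin_key("IN-2018-000123", salt="election-2018")
print(len(key), key[:12])
```

Because the same UIN always maps to the same key, a second ballot under an already-used key can be rejected, which is how the scheme prevents duplicate votes without storing the UIN itself.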
19

Kim, Heejin, Ki Young Huh, Meihua Piao, Hyeongju Ryu, Wooseok Yang, SeungHwan Lee, and Kyung Hwan Kim. "Self-Reporting Technique-Based Clinical-Trial Service Platform for Real-Time Arrhythmia Detection." Applied Sciences 12, no. 9 (April 30, 2022): 4558. http://dx.doi.org/10.3390/app12094558.

Abstract:
The analysis of the electrocardiogram (ECG) is critical for the diagnosis of arrhythmias. Recent advances in information and communications technology (ICT) have led to the development of wearable ECG devices and arrhythmia-detection algorithms. This study aimed to develop an ICT-based clinical trial service platform using a self-reporting technique for real-time arrhythmia detection. To establish a clinical-trial service platform, a mobile application (app), a demilitarized zone (DMZ), an internal network, and Amazon web services virtual private cloud (AWS-VPC) were developed. The ECG data acquired by a wearable device were transmitted to the mobile app, which collected the participants’ self-reported information. The mobile app transmitted raw ECG and self-reported data to the AWS-VPC and DMZ, respectively. In the AWS-VPC, the live-streaming and playback-reviewer services were operational to display the currently and previously acquired ECG data to clinicians through the web client. All the measured data were transmitted to the internal network, in which the arrhythmia-detection algorithm was executed and all the data were saved. The self-reporting technique and arrhythmia-detection algorithm are the key elements of this platform. In particular, subjective information of participants can be easily collected using a self-reporting technique. These features are particularly of critical importance for treating painless, sparsely occurring arrhythmias.
20

Mora-Márquez, Fernando, José Luis Vázquez-Poletti, and Unai López de Heredia. "NGScloud2: optimized bioinformatic analysis using Amazon Web Services." PeerJ 9 (April 16, 2021): e11237. http://dx.doi.org/10.7717/peerj.11237.

Abstract:
Background: NGScloud was a bioinformatic system developed to perform de novo RNAseq analysis of non-model species by exploiting the cloud computing capabilities of Amazon Web Services. The rapid changes in the way this cloud computing service operates, along with the continuous release of novel bioinformatic applications to analyze next generation sequencing data, have made the software obsolete. NGScloud2 is an enhanced and expanded version of NGScloud that permits access to ad hoc cloud computing infrastructure, scaled according to the complexity of each experiment. Methods: NGScloud2 presents major technical improvements, such as the possibility of running spot instances and the most up-to-date AWS instance types, which can lead to significant cost savings. Compared to the initial implementation, this improved version updates and includes common applications for de novo RNAseq analysis, and incorporates tools to operate workflows for the bioinformatic analysis of reference-based RNAseq, RADseq, and functional annotation. NGScloud2 optimizes access to Amazon's large computing infrastructures to easily run popular bioinformatic software applications otherwise inaccessible to non-specialized users lacking suitable hardware infrastructures. Results: The correct performance of the pipelines for de novo RNAseq, reference-based RNAseq, RADseq, and functional annotation was tested with real experimental data, providing workflow performance estimates and tips for making optimal use of NGScloud2. Further, we provide a qualitative comparison of NGScloud2 vs. the Galaxy framework. NGScloud2 code and instructions for software installation and use are available at https://github.com/GGFHF/NGScloud2. NGScloud2 includes a companion package, NGShelper, containing Python utilities to post-process the output of the pipelines for downstream analysis, at https://github.com/GGFHF/NGShelper.
21

Khlevna, Iuliia L., and Bohdan S. Koval. "DEVELOPMENT OF THE AUTOMATED FRAUD DETECTION SYSTEM CONCEPT IN PAYMENT SYSTEMS." Applied Aspects of Information Technology 4, no. 1 (April 10, 2021): 37–46. http://dx.doi.org/10.15276/aait.01.2021.3.

Abstract:
The paper presents the demand for the spread of payment systems, driven by the development of technology. An open issue in the application of payment systems, fraud, is singled out. It is established that there is no effective algorithm that serves as the standard for all financial institutions in detecting and preventing fraud, because approaches to fraud are dynamic and require constant revision of forecasts. Prospects for the development of scientific and practical approaches to prevent fraudulent transactions in payment systems are identified, and machine learning is shown to be appropriate for solving the problem of fraud detection in payment systems. At the same time, fraud detection in payment systems requires not only building the algorithmic core but also building a reliable automated system that, in real time and under high load, can control data flows and effectively operate that core. The paper describes the architecture, principles, operating models, and infrastructure of the automated fraud detection mechanism in payment systems. The expediency of using a cloud web service is determined, and deployment of the model as automated technology based on the Amazon Web Services platform is substantiated. The basis of the automated online fraud detection system is Amazon Fraud Detector, with payment fraud detection workflows configured using a customizable Amazon A2I task type to verify and confirm high-risk predictions. The paper gives an example of creating an anomaly detection system on Amazon DynamoDB streams using Amazon SageMaker, AWS Glue, and AWS Lambda. The automated system takes into account the dynamics of the data set, as the AWS Lambda function also works with many other AWS streaming services.
The software product solves three main tasks: prevention and detection of fraud in payment systems; rapid fraud detection (within minutes); and integration of the software product into businesses where payment systems and services are used (for example, payment integration services in financial institutions, online stores, logistics companies, insurance policies, and trading platforms). Implementation of the automated system should be considered as a project, and principles of project implementation are offered. For rational implementation, a specific methodology for deploying the fraud detection software in the payment systems of business institutions must be developed.
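The DynamoDB-streams-plus-Lambda pattern mentioned above can be sketched as a handler over stream records. The field names and the flat threshold rule here are illustrative; a real pipeline would score each record with the deployed anomaly-detection model (e.g., a SageMaker or Fraud Detector endpoint) instead of a fixed cutoff.

```python
# Lambda-style handler for DynamoDB stream records, flagging payments
# above a toy threshold. Field names and the rule are illustrative;
# a real deployment would call the anomaly-detection model here.
THRESHOLD = 10_000.0

def handler(event, context):
    flagged = []
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue  # only score newly inserted payments
        item = record["dynamodb"]["NewImage"]
        amount = float(item["amount"]["N"])  # DynamoDB number attribute
        if amount > THRESHOLD:
            flagged.append(item["txn_id"]["S"])
    return {"flagged": flagged}

# A sample stream event with one large and one small transaction:
sample = {"Records": [
    {"eventName": "INSERT",
     "dynamodb": {"NewImage": {"txn_id": {"S": "t-1"},
                               "amount": {"N": "25000"}}}},
    {"eventName": "INSERT",
     "dynamodb": {"NewImage": {"txn_id": {"S": "t-2"},
                               "amount": {"N": "40"}}}},
]}
print(handler(sample, None))
```

Attaching such a function to the table's stream gives the minutes-scale detection latency the paper emphasizes, since every write is evaluated as it arrives.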
22

Shapovalova, V. V., S. P. Radko, K. G. Ptitsyn, G. S. Krasnov, K. V. Nakhod, O. S. Konash, M. A. Vinogradina, E. A. Ponomarenko, D. S. Druzhilovskiy, and A. V. Lisitsa. "Processing Oxford Nanopore Long Reads Using Amazon Web Services." Biomedical Chemistry: Research and Methods 3, no. 4 (2020): e00131. http://dx.doi.org/10.18097/bmcrm00131.

Abstract:
Studies of genomes and transcriptomes are performed using sequencers that read the sequence of nucleotide residues of genomic DNA, RNA, or complementary DNA (cDNA). The analysis consists of an experimental part (obtaining primary data) and bioinformatic processing of the primary data, performed with different sets of input parameters; selecting optimal parameter values, as a rule, requires significant computing power. The article describes a protocol for processing transcriptome data on virtual computers provided by the Amazon Web Services (AWS) cloud platform, using as an example the recently emerged technology of long DNA and RNA reads (Oxford Nanopore Technology). As a result, a virtual machine and instructions for its use have been developed, allowing a wide range of molecular biologists to independently process results obtained with the Oxford Nanopore platform.
APA, Harvard, Vancouver, ISO, and other styles
23

Tiwari, Rajeev, Shuchi Upadhyay, Gunjan Lal, and Varun Tanwar. "Project Workflow Management: A Cloud based Solution-Scrum Console." International Journal of Engineering & Technology 7, no. 4 (September 20, 2018): 2457. http://dx.doi.org/10.14419/ijet.v7i4.15799.

Full text
Abstract:
Today, there is a data workload that needs to be managed efficiently. There are many approaches to managing and scheduling processes, each of which can affect the performance and quality of the product, and highly available, scalable web hosting can be a complex and expensive proposition; traditional web architectures do not offer this reliability. In this work, a Scrum Console is designed for managing such a process, hosted on Amazon Web Services (AWS) [2], which provides a reliable, scalable, highly available, and high-performance web application infrastructure. The Scrum Console Platform facilitates collaboration among the members of a team to manage projects together. It has been developed using JSP, Hibernate, and Oracle 12c Enterprise Edition Database, and is deployed as a web application on AWS Elastic Beanstalk, which automates the deployment, management, and monitoring of the application while relying on underlying AWS resources such as EC2, S3, RDS, CloudWatch, and Auto Scaling.
APA, Harvard, Vancouver, ISO, and other styles
24

Pandis, Ippokratis. "The evolution of Amazon redshift." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 3162–74. http://dx.doi.org/10.14778/3476311.3476391.

Full text
Abstract:
In 2013, Amazon Web Services revolutionized the data warehousing industry by launching Amazon Redshift [7], the first fully managed, petabyte-scale enterprise-grade cloud data warehouse. Amazon Redshift made it simple and cost-effective to efficiently analyze large volumes of data using existing business intelligence tools. This launch was a significant leap from the traditional on-premise data warehousing solutions, which were expensive, not elastic, and required significant expertise to tune and operate. Customers embraced Amazon Redshift and it became the fastest growing service in AWS. Today, tens of thousands of customers use Amazon Redshift in AWS's global infrastructure of 25 launched Regions and 81 Availability Zones (AZs), to process exabytes of data daily. The success of Amazon Redshift inspired a lot of innovation in the analytics segment, e.g. [1, 2, 4, 10], which in turn has benefited customers. In the last few years, the use cases for Amazon Redshift have evolved and in response, Amazon Redshift continues to deliver a series of innovations that delight customers. In this paper, we give an overview of Amazon Redshift's system architecture. Amazon Redshift is a columnar MPP data warehouse [7]. As shown in Figure 1, an Amazon Redshift compute cluster consists of a coordinator node, called the leader node, and multiple compute nodes. Data is stored on Redshift Managed Storage, backed by Amazon S3, and cached in compute nodes on locally-attached SSDs in compressed columnar fashion. Tables are either replicated on every compute node or partitioned into multiple buckets that are distributed among all compute nodes. AQUA is a query acceleration layer that leverages FPGAs to improve performance. CaaS is a caching microservice of optimized generated code for the various query fragments executed in the Amazon Redshift fleet. The innovation at Amazon Redshift continues at an accelerated pace. Its development is centered around four streams.
First, Amazon Redshift strives to provide industry-leading data warehousing performance. Amazon Redshift's query execution blends database operators in each query fragment via code generation. It combines prefetching and vectorized execution with code generation to achieve maximum efficiency. This allows Amazon Redshift to scale linearly when processing from a few terabytes to petabytes of data. Figure 2 depicts the total execution time of the Cloud Data Warehouse Benchmark Derived from TPC-DS 2.13 [6] while scaling dataset size and hardware simultaneously. Amazon Redshift's performance remains nearly flat for a given ratio of data to hardware, as data volume increases from 30TB to 1PB. This linear scaling to the petabyte scale makes it easy, predictable and cost-efficient for customers to on-board new datasets and workloads. Second, customers needed to process more data and wanted to support an increasing number of concurrent users or independent compute clusters that are operating over the Redshift-managed data and the data in Amazon S3. We present Redshift Managed Storage, Redshift's high-performance transactional storage layer, which is disaggregated from the Redshift compute layer and allows a single database to grow to tens of petabytes. We also describe Redshift's compute scaling capabilities. In particular, we present how Redshift can scale up by elastically resizing the size of each cluster, and how Redshift can scale out and increase its throughput via multi-cluster autoscaling, called Concurrency Scaling. With Concurrency Scaling, customers can have thousands of concurrent users executing queries on the same Amazon Redshift endpoint. We also talk about data sharing, which allows users to have multiple isolated compute clusters consume the same datasets in Redshift Managed Storage. Elastic resizing, concurrency scaling and data sharing can be combined giving multiple compute scaling options to the Amazon Redshift customers. 
Third, as Amazon Redshift became the most widely used cloud data warehouse, its users wanted it to be even easier to use. For that, Redshift introduced ML-based autonomics. We present how Redshift automated, among others, workload management, physical tuning, and the refresh of materialized views (MVs), along with automated MV-based optimization that rewrites queries to use MVs. We also present how we leverage ML to improve the operational health of the service and deal with gray failures [8]. Finally, as AWS offers a wide range of purpose-built services, Amazon Redshift provides seamless integration with the AWS ecosystem and novel abilities in ingesting and ELTing semistructured data (e.g., JSON) using the PartiQL extension of SQL [9]. AWS purpose-built services include the Amazon S3 object storage, transactional databases (e.g., DynamoDB [5] and Aurora [11]) and the ML services of Amazon Sagemaker. We present how AWS and Redshift make it easy for their customers to use the best service for each job and seamlessly take advantage of Redshift's best-in-class analytics capabilities. For example, we talk about Redshift Spectrum [3] that allows Redshift to query data in open-file formats in Amazon S3. We present how Redshift facilitates both the in-place querying of data in OLTP services, using Redshift's Federated Querying, as well as the copy of data to Redshift, using Glue Elastic Views. We also present how Redshift can leverage the capabilities of Amazon Sagemaker through SQL and without data movement.
APA, Harvard, Vancouver, ISO, and other styles
25

Gowan, Taylor A., John D. Horel, Alexander A. Jacques, and Adair Kovac. "Using Cloud Computing to Analyze Model Output Archived in Zarr Format." Journal of Atmospheric and Oceanic Technology 39, no. 4 (April 2022): 449–62. http://dx.doi.org/10.1175/jtech-d-21-0106.1.

Full text
Abstract:
Abstract Numerical weather prediction centers rely on the Gridded Binary Second Edition (GRIB2) file format to efficiently compress and disseminate model output as two-dimensional grids. User processing time and storage requirements are high if many GRIB2 files with size O(100 MB, where B = bytes) need to be accessed routinely. We illustrate one approach to overcome such bottlenecks by reformatting GRIB2 model output from the High-Resolution Rapid Refresh (HRRR) model of the National Centers for Environmental Prediction to a cloud-optimized storage type, Zarr. Archives of the original HRRR GRIB2 files and the resulting Zarr stores on Amazon Web Services (AWS) Simple Storage Service (S3) are available publicly through the Amazon Sustainability Data Initiative. Every hour, the HRRR model produces 18- or 48-hourly GRIB2 surface forecast files of size O(100 MB). To simplify access to the grids in the surface files, we reorganize the HRRR model output for each variable and vertical level into Zarr stores of size O(1 MB), with chunks O(10 kB) containing all forecast lead times for 150 × 150 gridpoint subdomains. Open-source libraries provide efficient access to the compressed Zarr stores using cloud or local computing resources. The HRRR-Zarr approach is illustrated for common applications of sensible weather parameters, including real-time alerts for high-impact situations and retrospective access to output from hundreds to thousands of model runs. For example, time series of surface pressure forecast grids can be accessed using AWS cloud computing resources approximately 40 times as fast from the HRRR-Zarr store as from the HRRR-GRIB2 archive. Significance Statement The rapid evolution of computing power and data storage have enabled numerical weather prediction forecasts to be generated faster and with more detail than ever before. 
The increased temporal and spatial resolution of forecast model output can force end users with finite memory and storage capabilities to make pragmatic decisions about which data to retrieve, archive, and process for their applications. We illustrate an approach to alleviate this access bottleneck for common weather analysis and forecasting applications by using the Amazon Web Services (AWS) Simple Storage Service (S3) to store output from the High-Resolution Rapid Refresh (HRRR) model in Zarr format. Zarr is a relatively new data storage format that is flexible, compressible, and designed to be accessed with open-source software either using cloud or local computing resources. The HRRR-Zarr dataset is publicly available as part of the AWS Sustainability Data Initiative.
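The 150 × 150 gridpoint chunking described above can be sketched with a small helper that maps a grid index to the chunk holding it. The chunk layout here is an illustration of the scheme described in the abstract, not the exact HRRR-Zarr store schema.

```python
# Sketch: locate the 150 x 150 chunk (and offset within it) that contains a
# given HRRR grid point, following the subdomain chunking described above.

CHUNK = 150  # gridpoints per chunk edge, per the HRRR-Zarr description

def chunk_for_gridpoint(x, y, chunk=CHUNK):
    """Return ((chunk_x, chunk_y), (offset_x, offset_y)) for grid indices x, y."""
    return (x // chunk, y // chunk), (x % chunk, y % chunk)

# The HRRR CONUS grid is roughly 1799 x 1059 points; a point near the middle:
print(chunk_for_gridpoint(900, 529))  # ((6, 3), (0, 79))
```

Reading only the one O(1 MB) chunk covering a point of interest, rather than an O(100 MB) GRIB2 file, is what yields the speedup the authors report.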
APA, Harvard, Vancouver, ISO, and other styles
26

Fajri, Muhammad, Hidra Amnur, and Aldo Erianda. "Alat Pengatur Suhu pada Mesin Penetas Telur Ayam menggunakan Mikrokontroler, Android dan Server AWS (Amazon Web Service)." JITSI : Jurnal Ilmiah Teknologi Sistem Informasi 1, no. 3 (December 26, 2020): 114–20. http://dx.doi.org/10.30630/jitsi.1.3.16.

Full text
Abstract:
IoT technology is currently developing rapidly. Normally, chicken eggs are incubated naturally by the hen, but nowadays many egg incubator machines are produced, and incubation in a man-made hatching machine can handle a very large number of eggs; both incubation techniques have their own advantages and disadvantages. In many of these machines the temperature control is still manual: the temperature must be checked periodically so that it is neither too hot nor too cool to hatch the eggs, which is very labour-intensive. The egg incubators widely available today also waste electricity, because the lamp in the device is not controlled, stays on continuously, and consumes a great deal of power. This research project builds a device that hatches eggs while keeping the temperature stable: a device that regulates the temperature automatically and sends its data to the user. Data collected from the sensors is sent to the user through a NodeMCU microcontroller, and the state of the incubator can be monitored through the user's Android application, which receives the data from the device after it is stored in a database and forwarded by the server.
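The automatic temperature regulation described above can be sketched as a simple on/off (hysteresis) controller for the lamp. This is an illustration in Python (the actual device runs on a NodeMCU microcontroller), and the setpoints are assumptions; chicken eggs are typically incubated near 37.5 °C.

```python
# Illustrative hysteresis control: switch the incubator lamp on below a low
# threshold, off above a high threshold, and otherwise keep its current state,
# so the lamp is not left running continuously.

def lamp_state(temp_c, lamp_on, low=37.2, high=37.8):
    """Return the new lamp state for the current temperature reading."""
    if temp_c < low:
        return True
    if temp_c > high:
        return False
    return lamp_on  # inside the band: keep current state

readings = [36.9, 37.4, 37.9, 37.5]
state = False
history = []
for t in readings:
    state = lamp_state(t, state)
    history.append(state)
print(history)  # [True, True, False, False]
```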
APA, Harvard, Vancouver, ISO, and other styles
27

Chrismanto, Antonius Rachmat, Willy Sudiarto Raharjo, and Yuan Lukito. "Firefox Extension untuk Klasifikasi Komentar Spam pada Instagram Berbasis REST Services." Jurnal Edukasi dan Penelitian Informatika (JEPIN) 5, no. 2 (August 6, 2019): 146. http://dx.doi.org/10.26418/jp.v5i2.33010.

Full text
Abstract:
Spam comment classification on Instagram (IG) can only be offered to users through a system running on the client side, because IG data cannot be manipulated from outside IG. A system that can manipulate data on the client side is therefore needed, in the form of a browser extension. This research focuses on developing a browser extension for Firefox that uses REST web services on the Amazon Web Services (AWS) cloud platform. The extension uses two classification algorithms, KNN and Distance-Weighted KNN (DW-KNN), and marks spam comments by changing the IG Document Object Model (DOM) so that they appear red and struck through (strikethrough). The extension was developed using the Rapid Application Development (RAD) method. Testing covered both the browser extension implementation and the accuracy of the web service (the KNN and DW-KNN algorithms). The extension was tested functionally, checking whether each implemented feature met the previously defined specification; web service accuracy was tested with the SOAPUI tool. The extension test results are: (1) testing on arbitrary web pages was 100% successful; (2) testing on the IG home (default) page was 100% successful; (3) testing on an IG account profile page was 100% successful; (4) testing on an IG post and its comments did not always succeed, since it depends on the capability of the web service algorithms; (5) testing on non-Indonesian text did not always succeed, since it depends on the language library; (6) testing of "load more comments" on IG did not always succeed, since it depends on the web service algorithms; and (7) testing of the algorithm selection in the extension options was 100% successful. The highest average accuracy was 80% for KNN with k=1 and 90% for DW-KNN with k=2.
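The distance-weighted KNN variant mentioned above can be sketched in a few lines: each of the k nearest neighbours votes with weight 1/distance, so closer examples count more. The feature vectors and labels below are toy values, not the paper's actual text features.

```python
# Minimal distance-weighted KNN (DW-KNN) sketch with toy 2-D features.
import math
from collections import defaultdict

def dw_knn(train, query, k=2):
    """train: list of (vector, label). Classify `query` by inverse-distance votes."""
    nearest = sorted(
        (math.dist(vec, query), label) for vec, label in train
    )[:k]
    votes = defaultdict(float)
    for d, label in nearest:
        votes[label] += 1.0 / (d + 1e-9)  # small epsilon avoids division by zero
    return max(votes, key=votes.get)

train = [
    ((0.0, 0.0), "ham"), ((0.1, 0.0), "ham"),
    ((1.0, 1.0), "spam"), ((0.9, 1.1), "spam"),
]
print(dw_knn(train, (0.95, 1.0), k=2))  # spam
```

Plain KNN is the unweighted special case: count the k nearest labels and take the majority.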
APA, Harvard, Vancouver, ISO, and other styles
28

Thakur, Amey, and Mega Satish. "Pizza Ordering Chatbot Using Amazon Lex." International Journal for Research in Applied Science and Engineering Technology 10, no. 3 (March 31, 2022): 1206–16. http://dx.doi.org/10.22214/ijraset.2022.40861.

Full text
Abstract:
Abstract: Breakthroughs in machine learning and deep learning are transforming every industry sector, handling various types of activities better than people can. The majority of monotonous jobs formerly performed by humans are now handled by AI, and many firms aim to replace the least skilled labour with AI systems that can do comparable tasks more efficiently, especially chatbots. A chatbot is computer software that mimics human interaction by using voice instructions, text dialogues, or both. Chatbots are employed to address consumer concerns or problems in food delivery app businesses such as Zomato and Swiggy, but are chatbots truly useful in that business model? This business model's target customers are those who don't have time to go out to obtain food, prefer convenience at home, or are unwilling to endure discomfort, so their concerns should be resolved in the most convenient way possible. A chatbot is employed to fulfil the user's request, and it is critical for the chatbot to plan how to carry out the task the user has asked for. New tools are now available to create and deploy chatbots; Amazon Lex by AWS is one of them. This project focuses on creating a pizza-ordering chatbot using Amazon Lex to help the user order pizza. Keywords: Amazon Lex, Amazon Web Services (AWS), Chatbot
APA, Harvard, Vancouver, ISO, and other styles
29

Park, Jong-Hyuk, Hwa-Young Jeong, Young-Sik Jeong, and Min Choi. "REST-MapReduce: An Integrated Interface but Differentiated Service." Journal of Applied Mathematics 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/170723.

Full text
Abstract:
With the fast deployment of cloud computing, MapReduce architectures are becoming the major technologies for mobile cloud computing. The concept of MapReduce was first introduced as a novel programming model and implementation for a large set of computing devices. In this research, we propose a novel concept, REST-MapReduce, enabling users to use only the REST interface without using the MapReduce architecture directly. This approach provides a higher level of abstraction by integrating the two types of access interface, REST API and MapReduce. The motivation for this research stems from the slower response time for accessing a simple RDBMS on Hadoop than for direct access to the RDBMS, because of the overhead of job scheduling, initiating, starting, tracking, and management during MapReduce-based parallel execution. Our framework therefore provides good performance both for REST Open API services and for MapReduce. This is very useful for constructing REST Open API services on Hadoop hosting services, for example, Amazon AWS (Macdonald, 2005) or IBM Smart Cloud. To evaluate the performance of our REST-MapReduce framework, we conducted experiments with a Jersey REST web server and Hadoop. Experimental results show that our approach outperforms conventional approaches.
APA, Harvard, Vancouver, ISO, and other styles
30

Pandit, A., S. A. Sawant, R. Agrawal, J. D. Mohite, and S. Pappula. "DEVELOPMENT OF AUTOMATED SATELLITE DATA DOWNLOADING AND PROCESSING PIPELINE ON AWS CLOUD FOR NEAR-REAL-TIME AGRICULTURE APPLICATIONS." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-4-2022 (May 18, 2022): 189–96. http://dx.doi.org/10.5194/isprs-annals-v-4-2022-189-2022.

Full text
Abstract:
Abstract. Remote sensing satellites allow users to acquire detailed information about the Earth's surface on a temporal basis. Time-series analysis at a large geographical scale involves downloading and processing a huge amount (terabytes) of satellite data. Such processing needs good computational power, large storage, and sophisticated tools, and maintaining such infrastructure can be costly for research and commercial enterprises. To overcome these issues, Amazon Web Services (AWS) offers a sophisticated cloud computing environment. We developed an in-house automated satellite data downloading and processing (ADDPro) pipeline on the AWS platform. The ADDPro pipeline employs Sentinel-2 satellite data to offer current and relative vegetation health information for agricultural regions on a temporal basis at the pan-India scale. An image compositing and multi-sensor data fusion technique has been incorporated into the pipeline to produce cloud-free raster (GeoTIFF) outputs. The pipeline also facilitates lossless raster data compression, which reduces AWS data transfer costs between regions and shortens raster publishing time on GeoServer. Operationally, AWS allows users to download only the bands required to generate a certain index (e.g., NDVI) rather than the entire Sentinel-2 data package. The entire ADDPro pipeline is extremely cost-effective, efficient, and scalable.
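The band-level download the abstract mentions pays off because an index like NDVI needs only two bands: for Sentinel-2 these are red (B04) and near-infrared (B08). A minimal sketch of the per-pixel computation, with made-up reflectance values:

```python
# Sketch: NDVI = (NIR - Red) / (NIR + Red), computed per pixel. Only the red
# (B04) and NIR (B08) Sentinel-2 bands are needed, not the full data package.

def ndvi(nir, red):
    """Element-wise NDVI for two equal-length lists of pixel reflectances."""
    return [
        (n - r) / (n + r) if (n + r) != 0 else 0.0
        for n, r in zip(nir, red)
    ]

# Made-up reflectances: dense vegetation, sparse vegetation, bare soil.
nir = [0.45, 0.30, 0.10]
red = [0.05, 0.10, 0.10]
print([round(v, 2) for v in ndvi(nir, red)])  # [0.8, 0.5, 0.0]
```

In the real pipeline the same formula would run over full raster arrays rather than short lists.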
APA, Harvard, Vancouver, ISO, and other styles
31

Fleury-Steiner, Benjamin. "Deportation Platforms: The AWS-ICE Alliance and the Fallacy of Explicit Agendas." Surveillance & Society 17, no. 1/2 (March 31, 2019): 105–10. http://dx.doi.org/10.24908/ss.v17i1/2.12951.

Full text
Abstract:
In this paper, I analyze elite discourse in the context of the increasing role played by large-scale corporate platforms in federal immigration enforcement in the US. Specifically, I focus on Amazon Web Services' (AWS) alliance with Immigration and Customs Enforcement (ICE). Incorporating Marx's (2016) "fallacy of explicit agendas" as a heuristic for contextualizing recent employee challenges to company CEO Jeff Bezos, I show how the fallacy serves to conceal far more about the AWS alliance with ICE, an organization with a long track record of deeply troubling practices. The secrecy fostered by such discourse also obscures the growing dependency of government entities on large-scale technologies of marginalizing surveillance that threaten the civil liberties and rights of refugees and immigrants.
APA, Harvard, Vancouver, ISO, and other styles
32

Ansari, Steve, Stephen Del Greco, Edward Kearns, Otis Brown, Scott Wilkins, Mohan Ramamurthy, Jeff Weber, et al. "Unlocking the Potential of NEXRAD Data through NOAA’s Big Data Partnership." Bulletin of the American Meteorological Society 99, no. 1 (January 1, 2018): 189–204. http://dx.doi.org/10.1175/bams-d-16-0021.1.

Full text
Abstract:
Abstract The National Oceanic and Atmospheric Administration’s (NOAA) Big Data Partnership (BDP) was established in April 2015 through cooperative research agreements between NOAA and selected commercial and academic partners. The BDP is investigating how the value inherent in NOAA’s data may be leveraged to broaden their utilization through modern cloud infrastructures and advanced “big data” techniques. NOAA’s Next Generation Weather Radar (NEXRAD) data were identified as an ideal candidate for such collaborative efforts. NEXRAD Level II data are valuable yet challenging to utilize in their entirety, and recent advances in weather radar science can be applied to both the archived and real-time data streams. NOAA’s National Centers for Environmental Information (NCEI) transferred the complete NEXRAD Level II historical archive, originating in 1991, through North Carolina State University’s Cooperative Institute for Climate and Satellites (CICS-NC) to interested BDP collaborators. Amazon Web Services (AWS) has received and made freely available the complete archived Level II data through its AWS platform. AWS then partnered with Unidata/University Corporation for Atmospheric Research (UCAR) to establish a real-time NEXRAD feed, thereby providing on-demand dissemination of both archived and current data seamlessly through the same access mechanism by October 2015. To organize, verify, and utilize the NEXRAD data on its platform, AWS further partnered with the Climate Corporation. This collective effort among federal government, private industry, and academia has already realized a number of new and novel applications that employ NOAA’s NEXRAD data, at no net cost to the U.S. taxpayer. The volume of accessed NEXRAD data, including this new AWS platform service, has increased by 130%, while the amount of data delivered by NOAA/NCEI has decreased by 50%.
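Access to the archive described above goes through a public S3 bucket whose keys are organized by date and radar site. The sketch below builds such a key prefix; the bucket name and year/month/day/site layout follow the public `noaa-nexrad-level2` bucket's documented convention, which should be verified against the current dataset documentation.

```python
# Sketch: build the S3 prefix under which archived NEXRAD Level II volume
# scans for one radar site and one day are listed on AWS.
from datetime import date

BUCKET = "noaa-nexrad-level2"  # public AWS open-data bucket

def nexrad_prefix(day, site):
    """Key prefix for all Level II volume scans from `site` on `day`."""
    return f"{day:%Y/%m/%d}/{site}/"

prefix = nexrad_prefix(date(2015, 10, 1), "KTLX")
print(f"s3://{BUCKET}/{prefix}")  # s3://noaa-nexrad-level2/2015/10/01/KTLX/
```

Listing objects under that prefix (e.g. with an S3 client) returns the individual volume scan files for the day.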
APA, Harvard, Vancouver, ISO, and other styles
33

Ferreira, K. R., G. R. Queiroz, G. Camara, R. C. M. Souza, L. Vinhas, R. F. B. Marujo, R. E. O. Simoes, et al. "USING REMOTE SENSING IMAGES AND CLOUD SERVICES ON AWS TO IMPROVE LAND USE AND COVER MONITORING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W12-2020 (November 6, 2020): 207–11. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w12-2020-207-2020.

Full text
Abstract:
Abstract. The Brazilian National Institute for Space Research (INPE) produces official information about deforestation as well as land use and cover in the country, based on remote sensing images. The current open data policy adopted by many space agencies and governments worldwide provided access to petabytes of remote sensing images. To properly deal with this vast amount of images, novel technologies have been proposed and developed based on cloud computing and big data systems. This paper describes the INPE’s initiatives in using remote sensing images and cloud services of the Amazon Web Services (AWS) infrastructure to improve land use and cover monitoring.
APA, Harvard, Vancouver, ISO, and other styles
34

Gupta, Sangeeta, and Narsimha Gugulothu. "Secure NoSQL for the Social Networking and E-Commerce Based Bigdata Applications Deployed in Cloud." International Journal of Cloud Applications and Computing 8, no. 2 (April 2018): 113–29. http://dx.doi.org/10.4018/ijcac.2018040106.

Full text
Abstract:
The work presented in this article brings to light security issues with the NoSQL databases MongoDB, HBase, and Cassandra. A literature survey is carried out to identify modern-world scenarios of applications using NoSQL databases, and their limitations are identified. A solution is proposed by designing a framework to achieve security for web crawler applications using Cassandra, a NoSQL data store. Experimental results show the effectiveness of the work through an appropriate algorithm that triggers security for a scalable web crawler architecture. Amazon Web Services (AWS), a familiar cloud platform, and Bitnami cloud hosting services are used to procure the required servers and virtual machines. Performance changes on the virtual machines are considered before and after encrypting and decrypting the voluminous data, and an improvement in efficiency is observed with the proposed model.
APA, Harvard, Vancouver, ISO, and other styles
35

Jung, Sooyoung, and Jun-Ho Huh. "An Efficient LMS Platform and Its Test Bed." Electronics 8, no. 2 (February 1, 2019): 154. http://dx.doi.org/10.3390/electronics8020154.

Full text
Abstract:
In order to develop an e-learning system as a method of education that frees both the teacher and learner from the constraints of time and space, it is necessary to develop software and to build the network equipment required to operate it. The most basic system consists of a web server, a database server, and a video server. However, these elements are vulnerable to both internal and external threats. For the web, database, and video servers, it is possible to respond to such threats by operating two or more devices of each, but this inevitably increases the cost of building the equipment. Therefore, this study proposed the use of a cloud service such as AWS (Amazon Web Services) to save on the costs of purchasing, installing, and operating the servers, as well as a service design that strengthens security against insider threats from employees or trainees who understand the internal situation of the training institute. In other words, this study proposed the development of an efficient Learning Management System (LMS) platform and proved its efficiency using a test bed over a period of three years. The major contribution of this study is that the design of the proposed LMS has been improved to provide more efficient performance than existing LMSs by surmounting the traffic overload problem often found in video services. This is achieved by utilizing fewer servers and maintaining load balance. Also, the interface used for the system is adaptable to most web servers, as it supports Java, Android, and HTML-based systems. As a cloud-based LMS, the system has been tested for efficiency and effectiveness over a period of three years, during which the results have been satisfactory.
APA, Harvard, Vancouver, ISO, and other styles
36

Hiden, Hugo, Simon Woodman, Paul Watson, and Jacek Cala. "Developing cloud applications using the e-Science Central platform." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 371, no. 1983 (January 28, 2013): 20120085. http://dx.doi.org/10.1098/rsta.2012.0085.

Full text
Abstract:
This paper describes the e-Science Central (e-SC) cloud data processing system and its application to a number of e-Science projects. e-SC provides both software as a service (SaaS) and platform as a service for scientific data management, analysis and collaboration. It is a portable system and can be deployed on both private (e.g. Eucalyptus) and public clouds (Amazon AWS and Microsoft Windows Azure). The SaaS application allows scientists to upload data, edit and run workflows and share results in the cloud, using only a Web browser. It is underpinned by a scalable cloud platform consisting of a set of components designed to support the needs of scientists. The platform is exposed to developers so that they can easily upload their own analysis services into the system and make these available to other users. A representational state transfer-based application programming interface (API) is also provided so that external applications can leverage the platform's functionality, making it easier to build scalable, secure cloud-based applications. This paper describes the design of e-SC, its API and its use in three different case studies: spectral data visualization, medical data capture and analysis, and chemical property prediction.
APA, Harvard, Vancouver, ISO, and other styles
37

Bahmani, Amir, Kyle Ferriter, Vandhana Krishnan, Arash Alavi, Amir Alavi, Philip S. Tsao, Michael P. Snyder, and Cuiping Pan. "Swarm: A federated cloud framework for large-scale variant analysis." PLOS Computational Biology 17, no. 5 (May 12, 2021): e1008977. http://dx.doi.org/10.1371/journal.pcbi.1008977.

Full text
Abstract:
Genomic data analysis across multiple cloud platforms is an ongoing challenge, especially when large amounts of data are involved. Here, we present Swarm, a framework for federated computation that promotes minimal data motion and facilitates crosstalk between genomic datasets stored on various cloud platforms. We demonstrate its utility via common inquiries of genomic variants across BigQuery in the Google Cloud Platform (GCP), Athena in the Amazon Web Services (AWS), Apache Presto and MySQL. Compared to single-cloud platforms, the Swarm framework significantly reduced computational costs, run-time delays and risks of security breach and privacy violation.
APA, Harvard, Vancouver, ISO, and other styles
38

Alfiandi, Tedi, T. M. Diansyah, and Risko Liza. "ANALISIS PERBANDINGAN MANAJEMEN KONFIGURASI MENGGUNAKAN ANSIBLE DAN SHELL SCRIPT PADA CLOUD SERVER DEPLOYMENT AWS." JiTEKH 8, no. 2 (September 30, 2020): 78–84. http://dx.doi.org/10.35447/jitekh.v8i2.308.

Full text
Abstract:
Utilization of cloud computing technology in website development brings significant benefits, such as disk storage, memory, and CPUs running in the cloud at a low cost compared to physical servers. When creating a website, deployment steps such as creating a database and installing the packages the website needs are all performed manually through human interaction, which takes a lot of time. An automation process is needed to solve this problem, using Ansible and shell scripts in the website deployment process. This final project compares Ansible and shell scripts as configuration management for Drupal deployments to an Amazon Web Services EC2 server, analyzing deployment time, CPU and memory usage on the server, throughput, and packet loss. Based on the tests performed, shell scripts outperformed Ansible in deployment time by a margin of 3 minutes, while the throughput in the Ansible tests was better, averaging 60,164 Kb/s against 22,009 Kb/s for shell scripts, and Ansible's CPU usage was much better because it did not overload the server.
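A minimal sketch of the kind of Ansible playbook such a comparison involves, using the real `ansible.builtin.apt` and `ansible.builtin.service` modules; the host group, package names, and paths are illustrative assumptions, not the authors' exact configuration.

```yaml
# Hypothetical playbook: install a basic web stack for a Drupal-style site
# on an EC2 host and make sure the web server is running.
- name: Deploy web stack on an EC2 host
  hosts: webservers
  become: true
  tasks:
    - name: Install Apache, PHP, and the MySQL client
      ansible.builtin.apt:
        name: [apache2, php, mysql-client]
        state: present
        update_cache: true

    - name: Ensure Apache is running and enabled at boot
      ansible.builtin.service:
        name: apache2
        state: started
        enabled: true
```

The equivalent shell script would run `apt-get` and `systemctl` commands directly; the playbook trades some startup overhead for idempotent, declarative configuration, which matches the time-versus-resource trade-off the study measured.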
APA, Harvard, Vancouver, ISO, and other styles
39

Ye, Byeong Jin, Ju Young Kim, Chunhui Suh, Seong Pil Choi, Maro Choi, Dong Hyun Kim, and Byung Chul Son. "Development of a Chatbot Program for Follow-Up Management of Workers’ General Health Examinations in Korea: A Pilot Study." International Journal of Environmental Research and Public Health 18, no. 4 (February 23, 2021): 2170. http://dx.doi.org/10.3390/ijerph18042170.

Full text
Abstract:
(1) Background: Follow-up management of workers' general health examinations (WGHE) is important but is not currently well performed. A chatbot, a type of digital healthcare tool, is used in various medical fields but had never been developed for follow-up management of WGHE in Korea. (2) Methods: A database containing results and explanations related to WGHE was constructed; then a channel connecting users with the database was created. A user survey regarding effectiveness was administered to 23 healthcare providers, and interviews on applicability for occupational health services were conducted with six nurses in an occupational health management agency. (3) Results: The chatbot was implemented on a small scale on Amazon Web Services (AWS) EC2, using KakaoTalk and Web Chat as user channels. Regarding effectiveness, 21 participants (91.30%) rated the need for chatbots as very high; however, 11 (47.83%) rated the usability as not high. Of the 23 participants, 14 (60.87%) expressed overall satisfaction. Nurses appreciated the chatbot program as a method for resolving accessibility issues and as an aid for explaining examination results and follow-up management. (4) Conclusions: The effectiveness of the chatbot program for follow-up management of WGHE and its applicability in occupational health services were confirmed.
APA, Harvard, Vancouver, ISO, and other styles
40

Jackson, Keith R., Krishna Muriki, Lavanya Ramakrishnan, Karl J. Runge, and Rollin C. Thomas. "Performance and Cost Analysis of the Supernova Factory on the Amazon AWS Cloud." Scientific Programming 19, no. 2-3 (2011): 107–19. http://dx.doi.org/10.1155/2011/498542.

Full text
Abstract:
Today, our picture of the Universe radically differs from that of just over a decade ago. We now know that the Universe is not only expanding as Hubble discovered in 1929, but that the rate of expansion is accelerating, propelled by mysterious new physics dubbed “Dark Energy”. This revolutionary discovery was made by comparing the brightness of nearby Type Ia supernovae (which exploded in the past billion years) to that of much more distant ones (from up to seven billion years ago). The reliability of this comparison hinges upon a very detailed understanding of the physics of the nearby events. To further this understanding, the Nearby Supernova Factory (SNfactory) relies upon a complex pipeline of serial processes that execute various image processing algorithms in parallel on ~10 TBs of data. This pipeline traditionally runs on a local cluster. Cloud computing [Above the clouds: a Berkeley view of cloud computing, Technical Report UCB/EECS-2009-28, University of California, 2009] offers many features that make it an attractive alternative. The ability to completely control the software environment in a cloud is appealing when dealing with a community developed science pipeline with many unique library and platform requirements. In this context we study the feasibility of porting the SNfactory pipeline to the Amazon Web Services environment. Specifically we: describe the tool set we developed to manage a virtual cluster on Amazon EC2, explore the various design options available for application data placement, and offer detailed performance results and lessons learned from each of the above design options.
APA, Harvard, Vancouver, ISO, and other styles
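The performance-and-cost analysis described in the abstract above rests on a simple relationship between data volume, per-instance processing rate, cluster size, and hourly price. A back-of-the-envelope sketch of that cost model is below; the function and all numeric inputs are placeholders for illustration, not figures from the paper (only the ~10 TB data volume echoes the abstract).

```python
# Hypothetical cost model for a data-parallel pipeline on EC2: total
# instance-hours follow from data volume and per-instance throughput;
# wall-clock time shrinks with the number of instances, cost does not.

def cloud_cost_usd(data_tb, tb_per_instance_hour, n_instances,
                   usd_per_instance_hour):
    """Return (wall-clock hours, total cost in USD) for the workload."""
    instance_hours = data_tb / tb_per_instance_hour   # total compute needed
    wall_hours = instance_hours / n_instances         # ideal parallel speedup
    return wall_hours, instance_hours * usd_per_instance_hour

# Illustrative run: 10 TB, 0.05 TB per instance-hour, 40 instances,
# $0.68 per instance-hour (an assumed on-demand rate).
hours, cost = cloud_cost_usd(10, 0.05, 40, 0.68)
print(hours, cost)  # 5.0 136.0
```

The point of the sketch is the trade-off the paper studies: adding instances cuts wall-clock time but leaves the dollar cost fixed, so the interesting variables are throughput per instance and data placement.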
41

Chaudhary, Hafiz Ahmad Awais, and Tiziana Margaria. "Integration of micro-services as components in modeling environments for low code development." Proceedings of the Institute for System Programming of the RAS 33, no. 4 (2021): 19–30. http://dx.doi.org/10.15514/ispras-2021-33(4)-2.

Full text
Abstract:
Low code development environments are gaining attention due to their potential as a development paradigm for very large scale adoption in the future IT. In this paper, we propose a method to extend the (application) Domain Specific Languages supported by two low code development environments based on formal models, namely DIME (native Java) and Pyro (native Python), to include functionalities hosted on heterogeneous technologies and platforms. For this we follow the analogy of micro-services. After this integration, both environments can leverage the communication with pre-existing remote RESTful and enterprise systems’ services, in our case Amazon Web Services (AWS) (but this can be easily generalized to other cloud platforms). Developers can in this way utilize, within DIME and Pyro, the potential of sophisticated services, potentially the entire Python and AWS ecosystems, as libraries of drag and drop components in their model driven, low-code style. The new DSLs are made available in DIME and Pyro as collections of implemented SIBs and blocks. Due to the specific capabilities and checks underlying the DIME and Pyro platforms, the individual DSL functionalities are automatically validated for semantic and syntactical errors in both environments.
APA, Harvard, Vancouver, ISO, and other styles
42

Yang, Kun, Giovanni Stracquadanio, Jingchuan Luo, Jef D. Boeke, and Joel S. Bader. "BioPartsBuilder: a synthetic biology tool for combinatorial assembly of biological parts." Bioinformatics 32, no. 6 (November 14, 2015): 937–39. http://dx.doi.org/10.1093/bioinformatics/btv664.

Full text
Abstract:
Summary: Combinatorial assembly of DNA elements is an efficient method for building large-scale synthetic pathways from standardized, reusable components. These methods are particularly useful because they enable assembly of multiple DNA fragments in one reaction, at the cost of requiring that each fragment satisfies design constraints. We developed BioPartsBuilder as a biologist-friendly web tool to design biological parts that are compatible with DNA combinatorial assembly methods, such as Golden Gate and related methods. It retrieves biological sequences, enforces compliance with assembly design standards and provides a fabrication plan for each fragment. Availability and implementation: BioPartsBuilder is accessible at http://public.biopartsbuilder.org and an Amazon Web Services image is available from the AWS Market Place (AMI ID: ami-508acf38). Source code is released under the MIT license, and available for download at https://github.com/baderzone/biopartsbuilder. Contact: joel.bader@jhu.edu Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
43

Grandinetti, Justin Joseph. "Welcome to a New Generation of Entertainment: Amazon Web Services and the Normalization of Big Data Analytics and RFID Tracking." Surveillance & Society 17, no. 1/2 (March 31, 2019): 169–75. http://dx.doi.org/10.24908/ss.v17i1/2.12919.

Full text
Abstract:
The 2017 partnership between the National Football League (NFL) and Amazon Web Services (AWS) promises novel forms of cutting-edge real-time statistical analysis through the use of both radio frequency identification (RFID) chips and Amazon’s cloud-based machine learning and data-analytics tools. This use of RFID is heralded for its possibilities: for broadcasters, who are now capable of providing more thorough analysis; for fans, who can experience the game on a deeper analytical level using the NFL’s Next Gen Stats; and for coaches, who can capitalize on data-driven pattern recognition to gain a statistical edge over their competitors in real-time. In this paper, we respond to calls for further examination of the discursive positionings of RFID and big data technologies (Frith 2015; Kitchin and Dodge 2011). Specifically, this synthesis of RFID and cloud computing infrastructure via corporate partnership provides an alternative discursive positioning of two technologies that are often part of asymmetrical relations of power (Andrejevic 2014). Consequently, it is critical to examine the efforts of Amazon and the NFL to normalize pervasive spatial data collection and analytics to a mass audience by presenting these surveillance technologies as helpful tools for accessing new forms of data-driven knowing and analysis.
APA, Harvard, Vancouver, ISO, and other styles
44

Moltó, Germán, Diana M. Naranjo, and J. Damian Segrelles. "Insights from Learning Analytics for Hands-On Cloud Computing Labs in AWS." Applied Sciences 10, no. 24 (December 21, 2020): 9148. http://dx.doi.org/10.3390/app10249148.

Full text
Abstract:
Cloud computing instruction requires hands-on experience with a myriad of distributed computing services from a public cloud provider. Tracking the progress of the students, especially for online courses, requires one to automatically gather evidence and produce learning analytics in order to further determine the behavior and performance of students. With this aim, this paper describes the experience from an online course in cloud computing with Amazon Web Services on the creation of an open-source data processing tool to systematically obtain learning analytics related to the hands-on activities carried out throughout the course. These data, combined with the data obtained from the learning management system, have allowed the better characterization of the behavior of students in the course. Insights from a population of more than 420 online students across three academic years have been assessed, and the dataset has been released for increased reproducibility. The results corroborate that course length has an impact on online student dropout. In addition, a gender analysis pointed out that there are no statistically significant differences in final marks between genders, but women show an increased degree of commitment to the activities planned in the course.
APA, Harvard, Vancouver, ISO, and other styles
45

Naranjo, Diana M., José R. Prieto, Germán Moltó, and Amanda Calatrava. "A Visual Dashboard to Track Learning Analytics for Educational Cloud Computing." Sensors 19, no. 13 (July 4, 2019): 2952. http://dx.doi.org/10.3390/s19132952.

Full text
Abstract:
Cloud providers such as Amazon Web Services (AWS) stand out as useful platforms to teach distributed computing concepts as well as the development of Cloud-native scalable application architectures on real-world infrastructures. Instructors can benefit from high-level tools to track the progress of students during their learning paths on the Cloud, and this information can be disclosed via educational dashboards for students to understand their progress through the practical activities. To this aim, this paper introduces CloudTrail-Tracker, an open-source platform to obtain enhanced usage analytics from a shared AWS account. The tool provides the instructor with a visual dashboard that depicts the aggregated usage of resources by all the students during a certain time frame and the specific use of AWS for a specific student. To facilitate self-regulation of students, the dashboard also depicts the percentage of progress for each lab session and the pending actions by the student. The dashboard has been integrated in four Cloud subjects that use different learning methodologies (from face-to-face to online learning) and the students positively highlight the usefulness of the tool for Cloud instruction in AWS. This automated procurement of evidences of student activity on the Cloud results in close to real-time learning analytics useful both for semi-automated assessment and student self-awareness of their own training progress.
APA, Harvard, Vancouver, ISO, and other styles
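At its core, the kind of dashboard the abstract above describes aggregates CloudTrail-style events from a shared account into per-student progress figures. The sketch below shows that aggregation step only; the field names follow CloudTrail's event-record conventions (`userIdentity.userName`, `eventName`), but the required-action list, user names, and the idea that progress equals the fraction of required API actions seen are assumptions for illustration, not CloudTrail-Tracker's actual implementation.

```python
# Hypothetical progress aggregation over CloudTrail-style event records:
# each student's progress is the percentage of a lab's required API
# actions that appear in their event history.
from collections import defaultdict

def lab_progress(events, required_actions):
    """Map each user name to percent of required actions performed."""
    seen = defaultdict(set)
    for e in events:
        seen[e["userIdentity"]["userName"]].add(e["eventName"])
    required = set(required_actions)
    return {user: round(100 * len(actions & required) / len(required))
            for user, actions in seen.items()}

events = [
    {"userIdentity": {"userName": "alice"}, "eventName": "RunInstances"},
    {"userIdentity": {"userName": "alice"}, "eventName": "CreateBucket"},
    {"userIdentity": {"userName": "bob"},   "eventName": "RunInstances"},
]
print(lab_progress(events, ["RunInstances", "CreateBucket"]))
# {'alice': 100, 'bob': 50}
```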
46

Tawalbeh, Lo’ai, Fadi Muheidat, Mais Tawalbeh, and Muhannad Quwaider. "IoT Privacy and Security: Challenges and Solutions." Applied Sciences 10, no. 12 (June 15, 2020): 4102. http://dx.doi.org/10.3390/app10124102.

Full text
Abstract:
Privacy and security are among the significant challenges of the Internet of Things (IoT). Improper device updates, the lack of efficient and robust security protocols, user unawareness, and famous active device monitoring are among the challenges that IoT is facing. In this work, we explore the background of IoT systems and security measures, and identify (a) different security and privacy issues, (b) approaches used to secure the components of IoT-based environments and systems, (c) existing security solutions, and (d) the best privacy models necessary and suitable for different layers of IoT-driven applications. We propose a new IoT layered model, generic and stretched, with the privacy and security components and layers identified. The proposed cloud/edge-supported IoT system is implemented and evaluated. The lower layer is represented by the IoT nodes, generated as virtual machines in Amazon Web Services (AWS). The middle layer (edge) is implemented as a Raspberry Pi 4 hardware kit with support of the Greengrass edge environment in AWS. We used the cloud-enabled IoT environment in AWS to implement the top layer (the cloud). Security protocols and key-management sessions were established between each of these layers to ensure the privacy of users’ information. We implemented security certificates to allow data transfer between the layers of the proposed cloud/edge-enabled IoT model. Not only does the proposed system model eliminate possible security vulnerabilities, it can also be used along with the best security techniques to counter the cybersecurity threats facing each of the layers: cloud, edge, and IoT.
APA, Harvard, Vancouver, ISO, and other styles
47

Goteng, Gokop L., M. Mahruf C. Shohel, and Faisal Tariq. "Enhancing Student Employability in Collaboration with the Industry: Case Study of a Partnership with Amazon Web Services Academy." Education Sciences 12, no. 6 (May 25, 2022): 366. http://dx.doi.org/10.3390/educsci12060366.

Full text
Abstract:
The continuous increase in tuition fees in higher education in many countries requires university authorities to justify what students receive in return. One of the key factors in student recruitment is value for money and quality learning experiences, including hands-on industry training that can guarantee immediate employment for graduates. This article describes redesigning the curriculum of a cloud computing undergraduate module in collaboration with the Amazon Web Services (AWS) Academy. Industry-based practical hands-on labs were incorporated into this module for engineering students to improve their practical knowledge and skills related to the Internet of Things. Through an innovative approach, this practitioner research introduces industry best practices and hands-on labs in cloud computing. In this approach, academic theories in cloud computing were combined with their applications through industry attachment. This enables students to gain both the theoretical and the practical knowledge and skills needed for careers in the field of cloud computing. The study finds that students tend to be more engaged and learn better when theoretical knowledge and understanding are combined with real-world applications through attachment with the industry.
APA, Harvard, Vancouver, ISO, and other styles
48

Khan, Asim, Umair Nawaz, Anwaar Ulhaq, and Randall W. Robinson. "Real-time plant health assessment via implementing cloud-based scalable transfer learning on AWS DeepLens." PLOS ONE 15, no. 12 (December 17, 2020): e0243243. http://dx.doi.org/10.1371/journal.pone.0243243.

Full text
Abstract:
The control of plant leaf diseases is crucial as it affects the quality and production of plant species with an effect on the economy of any country. Automated identification and classification of plant leaf diseases is, therefore, essential for the reduction of economic losses and the conservation of specific species. Various Machine Learning (ML) models have previously been proposed to detect and identify plant leaf disease; however, they lack usability due to hardware sophistication, limited scalability and realistic use inefficiency. By implementing automatic detection and classification of leaf diseases in fruit trees (apple, grape, peach and strawberry) and vegetable plants (potato and tomato) through scalable transfer learning on Amazon Web Services (AWS) SageMaker and importing it into AWS DeepLens for real-time functional usability, our proposed DeepLens Classification and Detection Model (DCDM) addresses such limitations. Scalability and ubiquitous access to our approach is provided by cloud integration. Our experiments on an extensive image data set of healthy and unhealthy fruit trees and vegetable plant leaves showed 98.78% accuracy with a real-time diagnosis of diseases of plant leaves. To train DCDM deep learning model, we used forty thousand images and then evaluated it on ten thousand images. It takes an average of 0.349s to test an image for disease diagnosis and classification using AWS DeepLens, providing the consumer with disease information in less than a second.
APA, Harvard, Vancouver, ISO, and other styles
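The headline numbers in the abstract above, 98.78% accuracy and 0.349 s average per-image diagnosis time, are the two standard quantities one computes when evaluating a deployed classifier. A minimal sketch of that evaluation step is below; the function, label names, and sample data are invented for illustration and are not the paper's DCDM pipeline or dataset.

```python
# Hypothetical evaluation of a leaf-disease classifier: accuracy over
# (predicted, true) label pairs plus mean per-image inference latency.

def evaluate(results):
    """results: list of (predicted_label, true_label, seconds) tuples.

    Returns (accuracy percent, mean latency in seconds)."""
    correct = sum(pred == true for pred, true, _ in results)
    accuracy = 100 * correct / len(results)
    mean_latency = sum(sec for _, _, sec in results) / len(results)
    return accuracy, mean_latency

# Four illustrative test images: three classified correctly.
results = [
    ("apple_scab", "apple_scab", 0.30),
    ("healthy",    "apple_scab", 0.40),
    ("grape_rot",  "grape_rot",  0.35),
    ("healthy",    "healthy",    0.35),
]
print(evaluate(results))  # accuracy 75.0%, mean latency 0.35 s
```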
49

Bhandari, Adarsh. "Analyzation and Comparison of Cloud Computing and Data Mining Techniques: Big Data and Impact of Blockchain." International Journal for Research in Applied Science and Engineering Technology 9, no. 11 (November 30, 2021): 712–21. http://dx.doi.org/10.22214/ijraset.2021.38888.

Full text
Abstract:
Abstract: With the rapid escalation of data-driven solutions, companies are integrating huge volumes of data from multiple sources in order to gain fruitful results. To handle this tremendous volume of data we need cloud-based architecture to store and manage it. Cloud computing has emerged as a significant infrastructure that promises to reduce the need for organizations to maintain costly computing facilities and to scale up their products. Even today, heavy applications are deployed on the cloud, managed especially on AWS, eliminating the need for error-prone manual operations. This paper describes certain cloud computing tools and techniques for handling big data, the processes involved from data extraction to model deployment, and the distinctions in their usage. It also demonstrates how big data analytics and cloud computing will change the methods that will later drive the industry. Additionally, a study is presented later in the paper on managing blockchain-generated big data on the cloud and making analytical decisions. Furthermore, the impact of blockchain on cloud computing and big data analytics is examined. Keywords: Cloud Computing, Big Data, Amazon Web Services (AWS), Google Cloud Platform (GCP), SaaS, PaaS, IaaS.
APA, Harvard, Vancouver, ISO, and other styles
50

Kelly, Christopher, Nikolaos Pitropakis, Alexios Mylonas, Sean McKeown, and William J. Buchanan. "A Comparative Analysis of Honeypots on Different Cloud Platforms." Sensors 21, no. 7 (April 1, 2021): 2433. http://dx.doi.org/10.3390/s21072433.

Full text
Abstract:
In 2019, the majority of companies used at least one cloud computing service and it is expected that by the end of 2021, cloud data centres will process 94% of workloads. The financial and operational advantages of moving IT infrastructure to specialised cloud providers are clearly compelling. However, with such volumes of private and personal data being stored in cloud computing infrastructures, security concerns have risen. Motivated to monitor and analyze adversarial activities, we deploy multiple honeypots on the popular cloud providers, namely Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure, and operate them in multiple regions. Logs were collected over a period of three weeks in May 2020 and then comparatively analysed, evaluated and visualised. Our work revealed heterogeneous attackers’ activity on each cloud provider, both when one considers the volume and origin of attacks, as well as the targeted services and vulnerabilities. Our results highlight the attempt of threat actors to abuse popular services, which were widely used during the COVID-19 pandemic for remote working, such as remote desktop sharing. Furthermore, the attacks seem to originate not only from countries that are commonly found to be the source of attacks, such as China, Russia and the United States, but also from uncommon ones such as Vietnam, India and Venezuela. Our results provide insights on the adversarial activity during our experiments, which can be used to inform the Situational Awareness operations of an organisation.
APA, Harvard, Vancouver, ISO, and other styles
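The comparative analysis in the abstract above (volume and origin of attacks per provider, per targeted service) is, mechanically, a grouping-and-counting exercise over honeypot log entries. The sketch below shows that step; the log schema, providers, and country codes are invented for illustration, not the paper's dataset.

```python
# Hypothetical aggregation of honeypot log entries: count attacks grouped
# by any field (cloud provider, source country, targeted service).
from collections import Counter

def attacks_by(logs, key):
    """Tally log entries by the given field, most common first."""
    return Counter(entry[key] for entry in logs)

logs = [
    {"provider": "AWS", "country": "CN", "service": "RDP"},
    {"provider": "GCP", "country": "VN", "service": "SSH"},
    {"provider": "AWS", "country": "CN", "service": "SSH"},
]
print(attacks_by(logs, "provider"))  # Counter({'AWS': 2, 'GCP': 1})
print(attacks_by(logs, "service"))   # Counter({'SSH': 2, 'RDP': 1})
```

Running the same tally over each provider's logs separately is what makes the cross-provider comparison in the paper possible.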