Follow this link to see other types of publications on the topic: Continuous and distributed machine learning.

Journal articles on the topic "Continuous and distributed machine learning"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

See the top 50 journal articles for research on the topic "Continuous and distributed machine learning".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online if it is included in the metadata.

Browse journal articles from many research areas and compile an accurate bibliography.

1

Stan, Ioan-Mihail, Siarhei Padolski and Christopher Jon Lee. "Exploring the self-service model to visualize the results of the ATLAS Machine Learning analysis jobs in BigPanDA with Openshift OKD3". EPJ Web of Conferences 251 (2021): 02009. http://dx.doi.org/10.1051/epjconf/202125102009.

Full text
Abstract:
A large scientific computing infrastructure must offer versatility to host any kind of experiment that can lead to innovative ideas. The ATLAS experiment offers wide access possibilities to run intelligent algorithms and analyze the massive amount of data produced in the Large Hadron Collider at CERN. The BigPanDA monitoring is a component of the PanDA (Production ANd Distributed Analysis) system, and its main role is to monitor the entire lifecycle of a job/task running in the ATLAS Distributed Computing infrastructure. Because many scientific experiments now rely upon Machine Learning algorithms, the BigPanDA community desires to expand the platform’s capabilities and fill the gap between Machine Learning processing and data visualization. In this regard, BigPanDA partially adopts the cloud-native paradigm and entrusts the data presentation to MLFlow services running on Openshift OKD. Thus, BigPanDA interacts with the OKD API and instructs the container orchestrator on how to locate and expose the results of the Machine Learning analysis. The proposed architecture also introduces various DevOps-specific patterns, including continuous integration for the MLFlow middleware configuration and continuous deployment pipelines that implement rolling upgrades. The Machine Learning data visualization services operate on demand and run for a limited time, thus optimizing resource consumption.
2

Yin, Zhongdong, Jingjing Tu and Yonghai Xu. "Development of a Kernel Extreme Learning Machine Model for Capacity Selection of Distributed Generation Considering the Characteristics of Electric Vehicles". Applied Sciences 9, no. 12 (June 13, 2019): 2401. http://dx.doi.org/10.3390/app9122401.

Full text
Abstract:
The large-scale access of distributed generation (DG) and the continuous increase in the demand of electric vehicle (EV) charging will result in fundamental changes in the planning and operating characteristics of the distribution network. Therefore, studying the capacity selection of distributed generation, such as wind and photovoltaic (PV), while considering the charging characteristics of electric vehicles, is of great significance to the stability and economic operation of the distribution network. By using the network node voltage, the distributed generation output and the electric vehicles’ charging power as training data, we propose a capacity selection model based on the kernel extreme learning machine (KELM). The model accuracy is evaluated by using the root mean square error (RMSE). The stability of the network is evaluated by the voltage stability evaluation index (Ivse). The IEEE 33-node distribution system is used as a simulation example, and the kernel extreme learning machine yields results that satisfy the minimum network loss and total investment cost. Finally, the results are compared with the support vector machine (SVM), particle swarm optimization (PSO) and genetic algorithm (GA) approaches, to verify the feasibility and effectiveness of the proposed model and method.
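Since the kernel extreme learning machine is the abstract's central tool, a brief worked sketch of its closed-form training may help: KELM solves a regularized kernel system rather than iterating with gradient descent. The RBF kernel, the regularization constant C, and the toy data below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=100.0, gamma=0.5):
    # Closed-form KELM solution: beta = (I/C + K)^-1 y
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, y)

def kelm_predict(X_train, beta, X_new, gamma=0.5):
    return rbf_kernel(X_new, X_train, gamma) @ beta

# Toy regression: learn a smooth 1-D mapping and report RMSE, the
# same accuracy measure the paper uses.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
beta = kelm_fit(X, y)
pred = kelm_predict(X, beta, X)
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```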
3

Brophy, Eoin, Maarten De Vos, Geraldine Boylan and Tomás Ward. "Estimation of Continuous Blood Pressure from PPG via a Federated Learning Approach". Sensors 21, no. 18 (September 21, 2021): 6311. http://dx.doi.org/10.3390/s21186311.

Full text
Abstract:
Ischemic heart disease is the highest cause of mortality globally each year. This puts a massive strain not only on the lives of those affected, but also on the public healthcare systems. To understand the dynamics of the healthy and unhealthy heart, doctors commonly use an electrocardiogram (ECG) and blood pressure (BP) readings. These methods are often quite invasive, particularly when continuous arterial blood pressure (ABP) readings are taken, not to mention very costly. Using machine learning methods, we develop a framework capable of inferring ABP from a single optical photoplethysmogram (PPG) sensor alone. We train our framework across distributed models and data sources to mimic a large-scale distributed collaborative learning experiment that could be implemented across low-cost wearables. Our time-series-to-time-series generative adversarial network (T2TGAN) is capable of high-quality continuous ABP generation from a PPG signal with a mean error of 2.95 mmHg and a standard deviation of 19.33 mmHg when estimating mean arterial pressure on a previously unseen, noisy, independent dataset. To our knowledge, this framework is the first example of a GAN capable of continuous ABP generation from an input PPG signal that also uses a federated learning methodology.
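The distributed collaborative setup described here follows the general federated averaging pattern: each site trains on its private data and only model parameters are shared. Below is a minimal NumPy sketch of federated averaging with linear models on synthetic data; it is not the authors' T2TGAN, and all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])

def local_sgd(w, X, y, lr=0.01, epochs=5):
    # Each client refines the shared weights on its private data only.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three clients with private (never shared) datasets.
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=100)))

w_global = np.zeros(3)
for rnd in range(20):
    # Server broadcasts w_global; clients return locally updated weights.
    updates = [local_sgd(w_global, X, y) for X, y in clients]
    w_global = np.mean(updates, axis=0)   # FedAvg aggregation step
print("recovered weights:", np.round(w_global, 2))
```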
4

Vrachimis, Andreas, Stella Gkegka and Kostas Kolomvatsos. "Resilient edge machine learning in smart city environments". Journal of Smart Cities and Society 2, no. 1 (July 7, 2023): 3–24. http://dx.doi.org/10.3233/scs-230005.

Full text
Abstract:
Distributed Machine Learning (DML) has emerged as a disruptive technology that enables the execution of Machine Learning (ML) and Deep Learning (DL) algorithms in proximity to data generation, facilitating predictive analytics services in Smart City environments. However, the real-time analysis of data generated by Smart City Edge Devices (EDs) poses significant challenges. Concept drift, where the statistical properties of data streams change over time, leads to degraded prediction performance. Moreover, the reliability of each computing node directly impacts the availability of DML systems, making them vulnerable to node failures. To address these challenges, we propose a resilience framework comprising computationally lightweight maintenance strategies that ensure continuous quality of service and availability in DML applications. We conducted a comprehensive experimental evaluation using real datasets, assessing the effectiveness and efficiency of our resilience maintenance strategies across three different scenarios. Our findings demonstrate the significance and practicality of our framework in sustaining predictive performance in smart city edge learning environments. Specifically, our enhanced model exhibited increased generalizability when confronted with concept drift. Furthermore, we achieved a substantial reduction in the amount of data transmitted over the network during the maintenance of the enhanced models, while balancing the trade-off between the quality of analytics and inter-node data communication cost.
5

Musa, M. O., and E. E. Odokuma. "A framework for the detection of distributed denial of service attacks on network logs using ML and DL classifiers". Scientia Africana 22, no. 3 (January 25, 2024): 153–64. http://dx.doi.org/10.4314/sa.v22i3.14.

Full text
Abstract:
Despite the promise of machine learning in DDoS mitigation, it is not without its challenges. Attackers can employ adversarial techniques to evade detection by machine learning models. Moreover, machine learning models require large amounts of high-quality data for training and continuous refinement. Security teams must also be vigilant in monitoring and fine-tuning these models to adapt to new attack vectors. Nonetheless, the integration of machine learning into cybersecurity strategies represents a powerful approach to countering the persistent threat of DDoS attacks in an increasingly interconnected world. This paper proposes Machine Learning (ML) models and a Deep Learning (DL) model for the detection of Distributed Denial of Service (DDoS) attacks on network systems. The DDoS dataset is highly imbalanced because the numbers of instances of its various classes differ. To solve the imbalance problem, we performed random under-sampling using the Python under-sampling utility called RandomUnderSampler. The down-sampled dataset was used to train the ML and DL classifiers: random forest, gradient boosting and recurrent neural network algorithms. The models were trained on the DDoS dataset by fine-tuning the hyperparameters and were then used to make predictions on an unseen dataset to detect the various types of DDoS attacks. The results of the models were evaluated in terms of accuracy: 79% for random forest, 82% for gradient boosting, and 99.47% for the recurrent neural network.
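The balancing step named in the abstract corresponds to the RandomUnderSampler class of the imbalanced-learn package. A minimal sketch, with a synthetic imbalanced dataset standing in for the DDoS data and a random forest as one of the named classifiers:

```python
from collections import Counter

from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Imbalanced stand-in for the DDoS dataset (95% benign, 5% attack).
X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
print("before:", Counter(y))

# Random under-sampling of the majority class, as in the paper.
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X, y)
print("after:", Counter(y_bal))

X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```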
6

Oliveri, Giorgio, Lucas C. van Laake, Cesare Carissimo, Clara Miette and Johannes T. B. Overvelde. "Continuous learning of emergent behavior in robotic matter". Proceedings of the National Academy of Sciences 118, no. 21 (May 10, 2021): e2017015118. http://dx.doi.org/10.1073/pnas.2017015118.

Full text
Abstract:
One of the main challenges in robotics is the development of systems that can adapt to their environment and achieve autonomous behavior. Current approaches typically aim to achieve this by increasing the complexity of the centralized controller by, e.g., direct modeling of their behavior, or implementing machine learning. In contrast, we simplify the controller using a decentralized and modular approach, with the aim of finding specific requirements needed for a robust and scalable learning strategy in robots. To achieve this, we conducted experiments and simulations on a specific robotic platform assembled from identical autonomous units that continuously sense their environment and react to it. By letting each unit adapt its behavior independently using a basic Monte Carlo scheme, the assembled system is able to learn and maintain optimal behavior in a dynamic environment as long as its memory is representative of the current environment, even when incurring damage. We show that the physical connection between the units is enough to achieve learning, and no additional communication or centralized information is required. As a result, such a distributed learning approach can be easily scaled to larger assemblies, blurring the boundaries between materials and robots, paving the way for a new class of modular “robotic matter” that can autonomously learn to thrive in dynamic or unfamiliar situations, for example, encountered by soft robots or self-assembled (micro)robots in various environments spanning from the medical realm to space explorations.
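A rough sketch of the basic Monte Carlo scheme described, each unit independently perturbing its own control parameter and keeping the change only when a locally sensed performance measure improves, is given below; the objective function and all parameters are invented stand-ins for the physical units.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_performance(phases):
    # Stand-in for locally sensed locomotion speed: highest when
    # neighbouring units actuate maximally out of phase.
    return np.mean(np.abs(np.sin(np.diff(phases))))

phases = rng.uniform(0, 2 * np.pi, size=6)   # one actuation phase per unit
for step in range(2000):
    unit = rng.integers(len(phases))         # units act independently
    trial = phases.copy()
    trial[unit] += rng.normal(scale=0.3)     # random local perturbation
    # Monte Carlo acceptance: keep the perturbation only if it helps.
    if system_performance(trial) >= system_performance(phases):
        phases = trial
print("final performance:", round(system_performance(phases), 3))
```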
7

Kodaira, Daisuke, Kazuki Tsukazaki, Taiki Kure and Junji Kondoh. "Improving Forecast Reliability for Geographically Distributed Photovoltaic Generations". Energies 14, no. 21 (November 4, 2021): 7340. http://dx.doi.org/10.3390/en14217340.

Full text
Abstract:
Photovoltaic (PV) generation is inherently uncertain. Probabilistic PV generation forecasting methods have been proposed with prediction intervals (PIs) to evaluate the uncertainty quantitatively. However, few studies have applied PIs to geographically distributed PVs in a specific area. In this study, a two-step probabilistic forecast scheme is proposed for geographically distributed PV generation forecasting. Each step of the proposed scheme adopts ensemble forecasting based on three different machine-learning methods. When individual PV generation is forecasted, the proposed scheme utilizes the surrounding PVs' past data to train the ensemble forecasting model. In a case study, the proposed scheme was compared with conventional non-multistep forecasting. The proposed scheme improved the reliability of the PIs and the deterministic PV forecasting results through 30 days of continuous operation with real data in Japan.
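One plausible reading of the ensemble prediction-interval step: average several regressors for the deterministic forecast and derive the PI from validation residual quantiles. The sketch below follows that assumption with synthetic data; it is not the paper's exact two-step scheme.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(600, 4))            # stand-in weather features
y = 3 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.2 * rng.normal(size=600)  # PV output

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
# Three different machine-learning methods, as in the paper's ensembles.
models = [RandomForestRegressor(random_state=0),
          GradientBoostingRegressor(random_state=0), Ridge()]
for m in models:
    m.fit(X_tr, y_tr)

def forecast(Xq):
    return np.mean([m.predict(Xq) for m in models], axis=0)

# 90% prediction interval from empirical validation residual quantiles.
res = y_val - forecast(X_val)
lo, hi = np.quantile(res, [0.05, 0.95])
point = forecast(X_val[:3])
print("point:", point.round(2))
print("PI:", list(zip((point + lo).round(2), (point + hi).round(2))))
```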
8

Hua, Xia, and Lei Han. "Design and Practical Application of Sports Visualization Platform Based on Tracking Algorithm". Computational Intelligence and Neuroscience 2022 (August 16, 2022): 1–9. http://dx.doi.org/10.1155/2022/4744939.

Full text
Abstract:
Machine learning methods use computers to imitate human learning activities, discovering new knowledge and enhancing learning effects through continuous improvement. The main process is to further classify or predict unknown data by learning from existing experience and creating a learning machine. In order to improve the real-time performance and accuracy of the distributed EM algorithm for online machine learning, a clustering analysis algorithm based on distance measurement is proposed in combination with related theories. Among these, the greedy EM algorithm is a practical and important algorithm. However, the existing methods cannot load a large amount of social information into memory at one time. Therefore, we created a Hadoop cluster to cluster the Gaussian mixture model and check the accuracy of the algorithm, then compared the running times of the distributed EM algorithm and the greedy algorithm to verify the efficiency of the algorithm, and finally checked the scalability of the algorithm by increasing the number of nodes. On this basis, this article studies the visualization of sports movements; visualized teaching of sports movements can stimulate students’ interest in physical education. The traditional physical education curriculum is based entirely on the teacher’s oral explanation and personal demonstration, and the emergence of visualized teaching of motor movements broke the teacher-centered teaching model and made teaching methods more interesting. This stimulated students’ interest in sports and improved classroom efficiency.
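For readers unfamiliar with the EM algorithm being distributed here, a compact single-machine version for a one-dimensional Gaussian mixture shows the two alternating steps; a Hadoop variant would parallelize the responsibility-weighted sums over data partitions. The data and component count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic two-component mixture standing in for the real dataset.
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])

mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each data point.
    dens = weights * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from responsibility-weighted sums
    # (these per-partition sums are what a distributed job would aggregate).
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    weights = nk / len(data)

print("means:", mu.round(2), "stds:", sigma.round(2), "weights:", weights.round(2))
```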
9

Rustam, Furqan, Muhammad Faheem Mushtaq, Ameer Hamza, Muhammad Shoaib Farooq, Anca Delia Jurcut and Imran Ashraf. "Denial of Service Attack Classification Using Machine Learning with Multi-Features". Electronics 11, no. 22 (November 20, 2022): 3817. http://dx.doi.org/10.3390/electronics11223817.

Full text
Abstract:
The exploitation of internet networks through denial of service (DoS) attacks has experienced a continuous surge over the past few years. Despite the development of advanced intrusion detection and protection systems, network security remains a challenging problem and necessitates the development of efficient and effective defense mechanisms to detect these threats. This research proposes a machine learning-based framework to detect distributed DoS (DDoS)/DoS attacks. For this purpose, a large dataset containing the network traffic of the application layer is utilized. A novel multi-feature approach is proposed where the principal component analysis (PCA) features and singular value decomposition (SVD) features are combined to obtain higher performance. The validation of the multi-feature approach is determined by extensive experiments using several machine learning models. The performance of the machine learning models is evaluated for each class of attack, and the results are discussed regarding accuracy, recall, F1 score, etc., in the context of recent state-of-the-art approaches. Experimental results confirm that using multi-features increases the performance, with RF obtaining 100% accuracy.
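The multi-feature idea, concatenating PCA and SVD projections into one feature matrix before classification, can be sketched with scikit-learn as below; the dimensions and the synthetic stand-in for the traffic dataset are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the application-layer traffic features.
X, y = make_classification(n_samples=2000, n_features=40, random_state=0)

# Two unsupervised views of the same features...
pca_feats = PCA(n_components=10, random_state=0).fit_transform(X)
svd_feats = TruncatedSVD(n_components=10, random_state=0).fit_transform(X)
# ...concatenated into the combined multi-feature representation.
X_multi = np.hstack([pca_feats, svd_feats])

rf = RandomForestClassifier(random_state=0)
print("multi-feature CV accuracy:", cross_val_score(rf, X_multi, y, cv=5).mean())
```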
10

Huang, Leqi. "Problems, solutions and improvements on federated learning model". Applied and Computational Engineering 22, no. 1 (October 23, 2023): 183–86. http://dx.doi.org/10.54254/2755-2721/22/20231215.

Full text
Abstract:
The field of machine learning has been stepping forward at a significant pace since the start of the 21st century due to continuous modifications and improvements of the major underlying algorithms, particularly the model named federated learning (FL). This paper focuses specifically on the Partially Distributed and Coordinated Model, one of the major models under federated learning, to provide an analysis of the model's working algorithms, existing problems and solutions, and improvements on the original model. The identification of the merits and drawbacks of each solution is founded on document analysis, data analysis and contrastive analysis. The research concluded that both the alternative solutions and the improvements to the original model possess unique advantages as well as newly emerged concerns or challenges.
11

Johnson, Paul A., and Chris W. Johnson. "Earthquake fault slip and nonlinear dynamics". Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A203. http://dx.doi.org/10.1121/10.0018661.

Full text
Abstract:
Earthquake fault slip under shear forcing can be envisioned as a nonlinear dynamical process dominated by a single slip plane. In contrast, nonlinear behavior in Earth materials (e.g., rock) is driven by a strain-induced ensemble activation and slip of a large number of distributed features—cracks and grain boundary slip across many scales in the volume. The bulk recovery of a fault post-failure and that of a rock sample post dynamic or static forcing (“aging” or the “slow dynamics”) is very similar with approximate log(time) dependence for much of the recovery. In our work, we analyze large amounts of continuous acoustic emission (AE) data from a laboratory “earthquake machine,” applying machine learning, with the task of determining what information regarding fault slip the AE signal may carry. Applying the continuous AE as input to machine learning models and using measured fault friction, displacement, etc., as model labels, we find that the AE are imprinted with information regarding the fault friction and displacement. We are currently developing approaches to probe stick-slip on Earth faults, those that are responsible for damaging earthquakes. A related goal is to quantitatively relate nonlinear elastic theory (e.g., PM space, Arrhenius) to frictional theory (e.g., rate-state).
12

T V, Bhuvana. "AI ENABLED WATER CONSERVATION FOR IRRIGATION USING IOT". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 14, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33920.

Full text
Abstract:
This project introduces a smart irrigation system that utilizes Internet of Things (IoT) technology and machine learning algorithms to enhance water management in agriculture. The system employs a series of IoT sensors distributed across the field to continuously monitor key environmental factors such as soil moisture levels, temperature and humidity. The collected data are analysed using the CatBoost machine learning algorithm, which determines the optimal irrigation schedule based on crop water requirements, soil conditions, weather forecasts, and historical data. The system controls irrigation equipment such as pumps, valves, and sprinklers to deliver precise amounts of water to crops at the right time. Continuous feedback from the system allows for refinement of the irrigation schedules, leading to improved water conservation, increased crop yield, and cost savings for farmers. This project discusses the benefits of such a system, including water conservation, increased crop yield, cost savings, environmental sustainability, and remote monitoring and control capabilities. Overall, the integration of IoT and machine learning technologies offers a powerful solution for sustainable agriculture, enabling data-driven decision-making to optimize water usage and maximize crop productivity. Keywords: smart irrigation system, Internet of Things (IoT) technology, water conservation, environmental parameters, soil moisture monitoring, increased crop yield, machine learning algorithms
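The decision step, feeding sensor readings to a CatBoost model to estimate irrigation need, might look like the following sketch; the feature names, target, and threshold are invented for illustration, and the catboost package is assumed to be installed.

```python
import numpy as np
from catboost import CatBoostRegressor

rng = np.random.default_rng(0)
# Hypothetical sensor log: soil moisture (%), temperature (C), humidity (%).
X = np.column_stack([rng.uniform(5, 45, 500),
                     rng.uniform(10, 40, 500),
                     rng.uniform(20, 90, 500)])
# Hypothetical target: litres of water required per plot.
y = np.clip(30 - X[:, 0] + 0.4 * X[:, 1] - 0.1 * X[:, 2], 0, None)

model = CatBoostRegressor(iterations=200, depth=4, verbose=0, random_seed=0)
model.fit(X, y)

reading = [[12.0, 33.0, 40.0]]          # current field conditions
needed = model.predict(reading)[0]
print("irrigate" if needed > 5 else "skip", f"({needed:.1f} L)")
```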
13

Xie, Anze, Anders Carlsson, Jason Mohoney, Roger Waleffe, Shanan Peters, Theodoros Rekatsinas and Shivaram Venkataraman. "Demo of marius". Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2759–62. http://dx.doi.org/10.14778/3476311.3476338.

Full text
Abstract:
Graph embeddings have emerged as the de facto representation for modern machine learning over graph data structures. The goal of graph embedding models is to convert high-dimensional sparse graphs into low-dimensional, dense and continuous vector spaces that preserve the graph structure properties. However, learning a graph embedding model is a resource intensive process, and existing solutions rely on expensive distributed computation to scale training to instances that do not fit in GPU memory. This demonstration showcases Marius: a new open-source engine for learning graph embedding models over billion-edge graphs on a single machine. Marius is built around a recently-introduced architecture for machine learning over graphs that utilizes pipelining and a novel data replacement policy to maximize GPU utilization and exploit the entire memory hierarchy (including disk, CPU, and GPU memory) to scale to large instances. The audience will experience how to develop, train, and deploy graph embedding models using Marius' configuration-driven programming model. Moreover, the audience will have the opportunity to explore Marius' deployments on applications including link-prediction on WikiKG90M and reasoning queries on a paleobiology knowledge graph. Marius is available as open source software at https://marius-project.org.
14

Chang, Wanjun, Yangbo Li and Qidong Du. "Microblog Emotion Analysis Using Improved DBN Under Spark Platform". International Journal of Information Technologies and Systems Approach 16, no. 2 (February 16, 2023): 1–16. http://dx.doi.org/10.4018/ijitsa.318141.

Full text
Abstract:
To address the difficulty that traditional single-machine methods have in completing the task of emotion classification quickly, as well as their limited time efficiency and scalability, a microblog emotion analysis method using an improved deep belief network (DBN) on the Spark platform is proposed. First, the Hadoop distributed file system is used to realize distributed storage of the text data, and the preprocessed data and emotion dictionary are converted into word vector representations based on the continuous bag-of-words model. Then, an improved DBN model is constructed by combining the adaptive learning method of the DBN with an active learning method, and it is applied to the learning analysis of the text word vectors. Finally, data-parallel optimization of the improved DBN model is realized on the Spark platform to accurately and quickly obtain the emotion types of microblog texts. Experimental analysis of the proposed method on a microblog text dataset shows a classification accuracy above 94%.
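The word-vector step, a continuous bag-of-words model over the preprocessed text, maps onto gensim's Word2Vec with sg=0; the tiny corpus below is a placeholder for the microblog data.

```python
from gensim.models import Word2Vec

# Placeholder for the tokenized microblog corpus.
sentences = [
    ["the", "concert", "was", "wonderful", "tonight"],
    ["terrible", "service", "never", "coming", "back"],
    ["wonderful", "food", "and", "great", "service"],
] * 50

# sg=0 selects the continuous bag-of-words (CBOW) architecture.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1,
                 sg=0, epochs=20, seed=0)
print(model.wv["wonderful"][:5])                 # dense vector for one token
print(model.wv.most_similar("wonderful", topn=2))
```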
15

Gómez, Jairo A., Jorge E. Patiño, Juan C. Duque and Santiago Passos. "Spatiotemporal Modeling of Urban Growth Using Machine Learning". Remote Sensing 12, no. 1 (December 28, 2019): 109. http://dx.doi.org/10.3390/rs12010109.

Full text
Abstract:
This paper presents a general framework for modeling the growth of three important variables for cities: population distribution, binary urban footprint, and urban footprint in color. The framework models the population distribution as a spatiotemporal regression problem using machine learning, and it obtains the binary urban footprint from the population distribution through a binary classifier plus a temporal correction for existing urban regions. The framework estimates the urban footprint in color from its previous value, as well as from past and current values of the binary urban footprint, using a semantic inpainting algorithm. By combining this framework with free data from the Landsat archive and the Global Human Settlement Layer framework, interested users can get approximate growth predictions of any city in the world. These predictions can be improved with the inclusion in the framework of additional spatially distributed input variables over time, subject to availability. Unlike widely used growth models based on cellular automata, there are two main advantages of using the proposed machine learning-based framework. Firstly, it does not require rules to be defined a priori, because the model learns the dynamics of growth directly from the historical data. Secondly, it is very easy to train new machine learning models using different explanatory input variables to assess their impact. As a proof of concept, we tested the framework in Valledupar and Rionegro, two Latin American cities located in Colombia with different geomorphological characteristics, and found that the model predictions were in close agreement with the ground-truth based on performance metrics, such as the root-mean-square error, zero-mean normalized cross-correlation, and Pearson’s correlation coefficient for continuous variables, and a few others for discrete variables, such as the intersection over union, accuracy, and the F1 metric. In summary, our framework for modeling urban growth is flexible, allows sensitivity analyses, and can help policymakers worldwide to assess different what-if scenarios during the planning cycle of sustainable and resilient cities.
16

Behera, Ranjan, Sushree Das, Santanu Rath, Sanjay Misra and Robertas Damasevicius. "Comparative Study of Real Time Machine Learning Models for Stock Prediction through Streaming Data". JUCS - Journal of Universal Computer Science 26, no. 9 (September 28, 2020): 1128–47. http://dx.doi.org/10.3897/jucs.2020.059.

Full text
Abstract:
Stock prediction is one of the emerging applications in the field of data science which helps companies make better decision strategies. Machine learning models play a vital role in the field of prediction. In this paper, we propose various machine learning models which predict the stock price from real-time streaming data. Streaming data has been a potential source for real-time prediction, dealing with a continuous flow of data carrying information from sources like social networking websites, server logs, mobile phone applications, trading floors, etc. We adopted the distributed platform Spark to analyze the streaming data collected from two different sources, as represented in two case studies in this paper. The first case study is based on stock prediction from the historical data collected from Google Finance websites through NodeJs, and the second one is based on sentiment analysis of tweets collected through the Twitter API available in the Stanford NLP package. Much research has gone into developing models for stock prediction based on static data. In this work, an effort has been made to develop scalable, fault-tolerant models for stock prediction from real-time streaming data. The proposed model is based on a distributed architecture known as the Lambda architecture. An extensive comparison is made between actual and predicted output for different machine learning models. Support vector regression is found to have better accuracy compared to the other models. The historical data is considered as ground-truth data for validation.
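The best-performing model reported, support vector regression, can be applied to a price stream by training on lagged windows. A sketch on a synthetic series, using a simple lag-feature construction rather than the paper's Lambda architecture:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.svm import SVR

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.1, 1.0, 600)) + 100   # synthetic stock series

def lag_features(series, n_lags=5):
    # Predict the next price from the previous n_lags prices.
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

X, y = lag_features(prices)
split = 500                          # past = training, rest = the "stream"
model = SVR(C=10.0, epsilon=0.1).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE on held-out stream:", round(mean_absolute_error(y[split:], pred), 3))
```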
17

Liu, Bowen, and Qiang Tang. "Secure Data Sharing in Federated Learning through Blockchain-Based Aggregation". Future Internet 16, no. 4 (April 15, 2024): 133. http://dx.doi.org/10.3390/fi16040133.

Full text
Abstract:
In this paper, we explore the realm of federated learning (FL), a distributed machine learning (ML) paradigm, and propose a novel approach that leverages the robustness of blockchain technology. FL, a concept introduced by Google in 2016, allows multiple entities to collaboratively train an ML model without the need to expose their raw data. However, it faces several challenges, such as privacy concerns and malicious attacks (e.g., data poisoning attacks). Our paper examines the existing EIFFeL framework, a protocol for decentralized real-time messaging in continuous integration and delivery pipelines, and introduces an enhanced scheme that leverages the trustworthy nature of blockchain technology. Our scheme eliminates the need for a central server and any other third party, such as a public bulletin board, thereby mitigating the risks associated with the compromise of such third parties.
18

Avcı, İsa, and Murat Koca. "Predicting DDoS Attacks Using Machine Learning Algorithms in Building Management Systems". Electronics 12, no. 19 (October 5, 2023): 4142. http://dx.doi.org/10.3390/electronics12194142.

Full text
Abstract:
The rapid growth of the Internet of Things (IoT) in smart buildings necessitates the continuous evaluation of potential threats and their implications. Conventional methods are increasingly inadequate in measuring risk and mitigating associated hazards, necessitating the development of innovative approaches. Cybersecurity systems for IoT are critical not only in Building Management System (BMS) applications but also in various aspects of daily life. Distributed Denial of Service (DDoS) attacks targeting core BMS software, particularly those launched by botnets, pose significant risks to assets and safety. In this paper, we propose a novel algorithm that combines the power of the Slime Mould Optimization Algorithm (SMOA) for feature selection with an Artificial Neural Network (ANN) predictor and the Support Vector Machine (SVM) algorithm. Our enhanced algorithm achieves an outstanding accuracy of 97.44% in estimating DDoS attack risk factors in the context of BMS. Additionally, it showcases a remarkable 99.19% accuracy in predicting DDoS attacks, effectively preventing system disruptions, and managing cyber threats. To further validate our work, we perform a comparative analysis using the K-Nearest Neighbor Classifier (KNN), which yields an accuracy rate of 96.46%. Our model is trained on the Canadian Institute for Cybersecurity (CIC) IoT Dataset 2022, enabling behavioral analysis and vulnerability testing on diverse IoT devices utilizing various protocols, such as IEEE 802.11, Zigbee-based, and Z-Wave.
19

Battaglia, Elena, Simone Celano and Ruggero G. Pensa. "Differentially Private Distance Learning in Categorical Data". Data Mining and Knowledge Discovery 35, no. 5 (July 13, 2021): 2050–88. http://dx.doi.org/10.1007/s10618-021-00778-0.

Full text
Abstract:
Most privacy-preserving machine learning methods are designed around continuous or numeric data, but categorical attributes are common in many application scenarios, including clinical and health records, census and survey data. Distance-based methods, in particular, have limited applicability to categorical data, since they do not capture the complexity of the relationships among different values of a categorical attribute. Although distance learning algorithms exist for categorical data, they may disclose private information about individual records if applied to a secret dataset. To address this problem, we introduce a differentially private family of algorithms for learning distances between any pair of values of a categorical attribute according to the way they are co-distributed with the values of other categorical attributes forming the so-called context. We define different variants of our algorithm and we show empirically that our approach consumes little privacy budget while providing accurate distances, making it suitable in distance-based applications, such as clustering and classification.
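A minimal sketch of the general idea, perturbing each value's context co-occurrence statistics with Laplace noise before deriving distances, is shown below. It illustrates the generic Laplace mechanism, not the authors' specific family of algorithms; epsilon and the toy counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
values = ["A", "B", "C"]              # values of a categorical attribute
# Toy co-occurrence counts of each value with 4 context-attribute values.
counts = np.array([[30, 5, 2, 1],
                   [28, 6, 3, 1],
                   [2, 3, 25, 30]], dtype=float)

epsilon = 1.0
# Laplace mechanism: each count has sensitivity 1 per individual record.
noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
profiles = np.clip(noisy, 0, None)
profiles /= profiles.sum(axis=1, keepdims=True)   # context distributions

# Distance between two category values = distance between their
# (privatized) context distributions.
for i in range(len(values)):
    for j in range(i + 1, len(values)):
        d = np.linalg.norm(profiles[i] - profiles[j])
        print(f"dist({values[i]},{values[j]}) = {d:.3f}")
```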
20

Kim, Bockjoo, and Dimitri Bourilkov. "Automatic Monitoring of Large-Scale Computing Infrastructure". EPJ Web of Conferences 295 (2024): 07007. http://dx.doi.org/10.1051/epjconf/202429507007.

Full text
Abstract:
Modern distributed computing systems produce large amounts of monitoring data. For these systems to operate smoothly, underperforming or failing components must be identified quickly, and preferably automatically, enabling the system managers to react accordingly. In this contribution, we analyze jobs and transfer data collected in the running of the LHC computing infrastructure. The monitoring data is harvested from the Elasticsearch database and converted to formats suitable for further processing. Based on various machine and deep learning techniques, we develop automatic tools for continuous monitoring of the health of the underlying systems. Our initial implementation is based on publicly available deep learning tools, PyTorch or TensorFlow packages, running on state-of-the-art GPU systems.
21

Baltescu, Paul, Phil Blunsom and Hieu Hoang. "OxLM: A Neural Language Modelling Framework for Machine Translation". Prague Bulletin of Mathematical Linguistics 102, no. 1 (September 11, 2014): 81–92. http://dx.doi.org/10.2478/pralin-2014-0016.

Full text
Abstract:
This paper presents an open source implementation of a neural language model for machine translation. Neural language models deal with the problem of data sparsity by learning distributed representations for words in a continuous vector space. The language modelling probabilities are estimated by projecting a word's context in the same space as the word representations and by assigning probabilities proportional to the distance between the words and the context's projection. Neural language models are notoriously slow to train and test. Our framework is designed with scalability in mind and provides two optional techniques for reducing the computational cost: the so-called class decomposition trick and a training algorithm based on noise contrastive estimation. Our models may be extended to incorporate direct n-gram features to learn weights for every n-gram in the training data. Our framework comes with wrappers for the cdec and Moses translation toolkits, allowing our language models to be incorporated as normalized features in their decoders (inside the beam search).
22

Sánchez-Reolid, Roberto, Arturo S. García, Miguel A. Vicente-Querol, Beatriz García-Martinez and Antonio Fernández-Caballero. "Distributed Architecture for Acquisition and Processing of Physiological Signals". Proceedings 31, no. 1 (November 20, 2019): 30. http://dx.doi.org/10.3390/proceedings2019031030.

Full text
Abstract:
The increase in the number of devices equipped with physiological sensors and their low price mean that they can be used in many fields. One of these fields is health-care and home-care for the elderly or people with disabilities. The development of such devices makes it possible to monitor their condition continuously and at all times. A continuous monitoring not only establishes an image of the user’s status, but also detects possible anomalies. Therefore, it is necessary to develop a distributed architecture that allows expert analysts to access the data provided by the sensors at all times and from anywhere. This paper introduces the development and implementation of the concept of distributed architecture, focusing on the minimum requirements needed to carry it out. All the necessary modules are described for different stages: acquisition, communication and processing of physiological signals. The last stage is carried out by a machine learning system. The complete reporting and storage system is also described. Finally, the most important conclusions that have emerged during the development are reported.
23

Ojugo, Arnold, and Andrew Okonji Eboka. "An Empirical Evaluation On Comparative Machine Learning Techniques For Detection of The Distributed Denial of Service (DDoS) Attacks". Journal of Applied Science, Engineering, Technology, and Education 2, no. 1 (June 13, 2020): 18–27. http://dx.doi.org/10.35877/454ri.asci2192.

Full text
Abstract:
The advent of the Internet has aided the efficient sharing of resources, but it has also introduced adversaries who are today restless in their continued efforts to find effective, non-detectable means to invade secure systems, either for fun or for personal gain. They achieve these feats via the use of malware, which is on the rise and wreaks havoc, causing heavy financial losses to users. To counter these escapades, users and businesses today seek means to detect the evolving behaviors and patterns of these adversaries. It is also worthy of note that adversaries have evolved, changing their own structure to make signature detection somewhat unreliable and anomaly detection tedious for network administrators. Our study investigates the detection of distributed denial of service (DDoS) attacks using machine learning techniques. Results show that although evolutionary models have been successfully implemented for DDoS detection, the search for optima is an inconclusive and continuous task, and no single method yields better optima than hybrids. With hybrids, users must adequately resolve the issues of data conflicts arising from the dataset to be used, conflicts in the adapted statistical methods arising from data encoding, and conflicts in parameter selection to avoid model overtraining, over-fitting and over-parameterization.
24

Kareem, Amer, Haiming Liu and Vladan Velisavljevic. "A Privacy-Preserving Approach to Effectively Utilize Distributed Data for Malaria Image Detection". Bioengineering 11, no. 4 (March 30, 2024): 340. http://dx.doi.org/10.3390/bioengineering11040340.

Full text
Abstract:
Malaria is a life-threatening disease caused by the parasite known as Plasmodium falciparum, which affects human red blood cells. It is therefore important to have an effective computer-aided system in place for early detection and treatment. The visual heterogeneity of the malaria dataset is highly complex and dynamic, so a higher number of images is needed to train machine learning (ML) models effectively. However, hospitals and medical institutions do not share medical image data for collaboration due to the general data protection regulation (GDPR) and the data protection act (DPA). To overcome this collaborative challenge, our research utilised real-time medical image data in the framework of federated learning (FL). We used state-of-the-art ML models, including ResNet-50 and DenseNet, in a federated learning framework. We experimented with both models in different settings on a malaria dataset of 27,560 publicly available images. Our preliminary results showed that with eight clients the DenseNet model performed better in accuracy (75%) than ResNet-50 (72%); with four clients both models achieved a similar accuracy of 94%; and with six clients the DenseNet model performed quite well with an accuracy of 92%, while ResNet-50 achieved only 72%. The federated learning framework enhances accuracy due to its decentralised nature, continuous learning, effective communication among clients, and efficient local adaptation. The use of a federated learning architecture among distinct clients to ensure data privacy and GDPR compliance is the contribution of this research work.
25

Melton, Joe R., Ed Chan, Koreen Millard, Matthew Fortier, R. Scott Winton, Javier M. Martín-López, Hinsby Cadillo-Quiroz, Darren Kidd and Louis V. Verchot. "A map of global peatland extent created using machine learning (Peat-ML)". Geoscientific Model Development 15, no. 12 (June 20, 2022): 4709–38. http://dx.doi.org/10.5194/gmd-15-4709-2022.

Full text
Abstract:
Abstract. Peatlands store large amounts of soil carbon and freshwater, constituting an important component of the global carbon and hydrologic cycles. Accurate information on the global extent and distribution of peatlands is presently lacking but is needed by Earth system models (ESMs) to simulate the effects of climate change on the global carbon and hydrologic balance. Here, we present Peat-ML, a spatially continuous global map of peatland fractional coverage generated using machine learning (ML) techniques suitable for use as a prescribed geophysical field in an ESM. Inputs to our statistical model follow drivers of peatland formation and include spatially distributed climate, geomorphological and soil data, and remotely sensed vegetation indices. Available maps of peatland fractional coverage for 14 relatively extensive regions were used along with mapped ecoregions of non-peatland areas to train the statistical model. In addition to qualitative comparisons to other maps in the literature, we estimated model error in two ways. The first estimate used the training data in a blocked leave-one-out cross-validation strategy designed to minimize the influence of spatial autocorrelation. That approach yielded an average r2 of 0.73 with a root-mean-square error and mean bias error of 9.11 % and −0.36 %, respectively. Our second error estimate was generated by comparing Peat-ML against a high-quality, extensively ground-truthed map generated by Ducks Unlimited Canada for the Canadian Boreal Plains region. This comparison suggests our map to be of comparable quality to mapping products generated through more traditional approaches, at least for boreal peatlands.
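The blocked leave-one-out strategy, holding out whole spatial regions rather than random samples to limit the influence of spatial autocorrelation, behaves like grouped cross-validation. A sketch with scikit-learn's LeaveOneGroupOut on invented block labels and synthetic predictors:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                # climate/soil/vegetation predictors
y = X[:, 0] * 2 + np.sin(X[:, 1]) + 0.3 * rng.normal(size=400)  # peat fraction
blocks = np.repeat(np.arange(8), 50)         # 8 spatial blocks (regions)

scores = []
for tr, te in LeaveOneGroupOut().split(X, y, groups=blocks):
    # Train with one whole region held out, mimicking blocked leave-one-out.
    model = RandomForestRegressor(random_state=0).fit(X[tr], y[tr])
    scores.append(r2_score(y[te], model.predict(X[te])))
print("blocked CV r2:", round(float(np.mean(scores)), 3))
```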
26

Li, Boyuan, Shengbo Chen and Zihao Peng. "New Generation Federated Learning". Sensors 22, no. 21 (November 3, 2022): 8475. http://dx.doi.org/10.3390/s22218475.

Full text
Abstract:
With the development of the Internet of things (IoT), federated learning (FL) has received increasing attention as a distributed machine learning (ML) framework that does not require data exchange. However, current FL frameworks follow an idealized setup in which the task size is fixed and the storage space is unlimited, which is impossible in the real world. In fact, new classes of these participating clients always emerge over time, and some samples are overwritten or discarded due to storage limitations. We urgently need a new framework to adapt to the dynamic task sequences and strict storage constraints in the real world. Continuous learning or incremental learning is the ultimate goal of deep learning, and we introduce incremental learning into FL to describe a new federated learning framework. New generation federated learning (NGFL) is probably the most desirable framework for FL, in which, in addition to the basic task of training the server, each client needs to learn its private tasks, which arrive continuously independent of communication with the server. We give a rigorous mathematical representation of this framework, detail several major challenges faced under this framework, and address the main challenges of combining incremental learning with federated learning (aggregation of heterogeneous output layers and the task transformation mutual knowledge problem), and show the lower and upper baselines of the framework.
27

Sitokonstantinou, Vasileios, Alkiviadis Koukos, Thanassis Drivas, Charalampos Kontoes, Ioannis Papoutsis and Vassilia Karathanassi. "A Scalable Machine Learning Pipeline for Paddy Rice Classification Using Multi-Temporal Sentinel Data". Remote Sensing 13, no. 9 (May 1, 2021): 1769. http://dx.doi.org/10.3390/rs13091769.

Full text
Abstract:
The demand for rice production in Asia is expected to increase by 70% in the next 30 years, which makes evident the need for balanced productivity and effective food security management at a national and continental level. Consequently, the timely and accurate mapping of paddy rice extent and its productivity assessment is of utmost significance. In turn, this requires continuous area monitoring and large-scale mapping, at the parcel level, through the processing of big satellite data of high spatial resolution. This work designs and implements a paddy rice mapping pipeline in South Korea that is based on a time-series of Sentinel-1 and Sentinel-2 data for the year 2018. There are two challenges that we address; the first one is the ability of our model to manage big satellite data and scale to a nationwide application. The second one is the algorithm's capacity to cope with scarce labeled data for training supervised machine learning algorithms. Specifically, we implement an approach that combines unsupervised and supervised learning. First, we generate pseudo-labels for rice classification from a single site (Seosan-Dangjin) by using a dynamic k-means clustering approach. The pseudo-labels are then used to train a Random Forest (RF) classifier that is fine-tuned to generalize in two other sites (Haenam and Cheorwon). The optimized model was then tested against 40 labeled plots, evenly distributed across the country. The paddy rice mapping pipeline is scalable, as it has been deployed in a High Performance Data Analytics (HPDA) environment using distributed implementations of both k-means and the RF classifier. When tested across the country, our model provided an overall accuracy of 96.69% and a kappa coefficient of 0.87. Moreover, the accurate paddy rice area mapping was returned early in the year (late July), which is key for timely decision-making. Finally, the performance of the generalized paddy rice classification model, when applied in the sites of Haenam and Cheorwon, was compared to the performance of two equivalent models that were trained with locally sampled labels. The results were comparable and highlighted the success of the model's generalization and its applicability to other regions.
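The combination of unsupervised and supervised learning described, k-means clusters converted to pseudo-labels that then train a Random Forest, can be sketched as follows; the synthetic blobs stand in for per-parcel Sentinel time-series features.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for per-parcel Sentinel-1/2 time-series features;
# the last 5 samples play the role of parcels from another region.
X_all, _ = make_blobs(n_samples=1005, centers=2, n_features=8, random_state=0)
X_site, X_new = X_all[:1000], X_all[1000:]

# Step 1: unsupervised pseudo-labels (rice vs. non-rice clusters).
pseudo = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_site)

# Step 2: supervised classifier trained on the pseudo-labels.
rf = RandomForestClassifier(random_state=0).fit(X_site, pseudo)

print("predicted classes for unseen parcels:", rf.predict(X_new))
```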
28

Zeng, Liang, Wenxin Wang and Wei Zuo. "A Federated Learning Latency Minimization Method for UAV Swarms Aided by Communication Compression and Energy Allocation". Sensors 23, no. 13 (June 21, 2023): 5787. http://dx.doi.org/10.3390/s23135787.

Full text
Abstract:
Unmanned aerial vehicle swarms (UAVSs) can carry out numerous tasks such as detection and mapping when outfitted with machine learning (ML) models. However, due to the flying height and mobility of UAVs, it is very difficult to ensure a continuous and stable connection between ground base stations and UAVs, as a result of which distributed machine learning approaches, such as federated learning (FL), perform better than centralized machine learning approaches in some circumstances when utilized by UAVs. However, in practice, functions that UAVs must perform often, such as emergency obstacle avoidance, require a high sensitivity to latency. This work attempts to provide a comprehensive analysis of energy consumption and latency sensitivity of FL in UAVs and present a set of solutions based on an efficient asynchronous federated learning mechanism for edge network computing (EAFLM) combined with ant colony optimization (ACO) for the cases where UAVs execute such latency-sensitive jobs. Specifically, UAVs participating in each round of communication are screened, and only the UAVs that meet the conditions will participate in the regular round of communication so as to compress the communication times. At the same time, the transmit power and CPU frequency of the UAV are adjusted to obtain the shortest time of an individual iteration round. This method is verified using the MNIST dataset and numerical results are provided to support the usefulness of our proposed method. It greatly reduces the communication times between UAVs with a relatively low influence on accuracy and optimizes the allocation of UAVs’ communication resources.
29

Kawabata, Kuniaki, Zhi-Wei Luo and Jie Huang. "Special Issue on Machine Intelligence for Robotics and Mechatronics". Journal of Robotics and Mechatronics 22, no. 4 (August 20, 2010): 417. http://dx.doi.org/10.20965/jrm.2010.p0417.

Full text
Abstract:
Machine intelligence is important in realizing intelligent recognition, control, and task execution in robotics and mechatronics research. One major approach involves developing machine learning / computational intelligence. This exciting field displays continuous dramatic progress based on new computer performance advances and trends. The 15 papers in this special issue present the latest machine intelligence for robotics and mechatronics and their applications. The first four papers propose interactive human-machine systems and human interfacing supporting human activities and service operations. One example of the major applications of robotics and mechatronics research is supporting daily life and work. The next four papers cover the issues of multiagents and multirobot systems, including intelligent design approach to control based on advanced distributed computational intelligence. Two papers on visual/pattern recognition discuss the asbestos fiber counting problem in qualitative analysis as a typical machine intelligence application. The next two papers deal with bio-related issues - social insects (termites) inspiring labor control of multirobots and “nonsocial” insects (crickets) inspiring a novel experimental interactive robot-insect tool. The last three papers present intelligent control of robot manipulators, mainly using learning algorithms as computational intelligence. All explore cutting-edge research machine intelligence for robotics and mechatronics. We thank the authors for their invaluable contributions in submitting their most recent research results to this issue. We are grateful to the reviewers for their generous time and effort. We also thank the Editorial Board member of the Journal of Robotics and Mechatronics for helping to make this issue possible.
30

Greifeneder, Felix, Claudia Notarnicola and Wolfgang Wagner. "A Machine Learning-Based Approach for Surface Soil Moisture Estimations with Google Earth Engine". Remote Sensing 13, no. 11 (May 27, 2021): 2099. http://dx.doi.org/10.3390/rs13112099.

Full text
Abstract:
Due to its relation to the Earth’s climate and weather and phenomena like drought, flooding, or landslides, knowledge of the soil moisture content is valuable to many scientific and professional users. Remote-sensing offers the unique possibility for continuous measurements of this variable. Especially for agriculture, there is a strong demand for high spatial resolution mapping. However, operationally available soil moisture products exist with medium to coarse spatial resolution only (≥1 km). This study introduces a machine learning (ML)—based approach for the high spatial resolution (50 m) mapping of soil moisture based on the integration of Landsat-8 optical and thermal images, Copernicus Sentinel-1 C-Band SAR images, and modelled data, executable in the Google Earth Engine. The novelty of this approach lies in applying an entirely data-driven ML concept for global estimation of the surface soil moisture content. Globally distributed in situ data from the International Soil Moisture Network acted as an input for model training. Based on the independent validation dataset, the resulting overall estimation accuracy, in terms of Root-Mean-Squared-Error and R², was 0.04 m3·m−3 and 0.81, respectively. Beyond the retrieval model itself, this article introduces a framework for collecting training data and a stand-alone Python package for soil moisture mapping. The Google Earth Engine Python API facilitates the execution of data collection and retrieval which is entirely cloud-based. For soil moisture retrieval, it eliminates the requirement to download or preprocess any input datasets.
31

Hao, Yixue, Yiming Miao, Min Chen, Hamid Gharavi and Victor Leung. "6G Cognitive Information Theory: A Mailbox Perspective". Big Data and Cognitive Computing 5, no. 4 (October 16, 2021): 56. http://dx.doi.org/10.3390/bdcc5040056.

Full text
Abstract:
With the rapid development of 5G communications, enhanced mobile broadband, massive machine type communications and ultra-reliable low latency communications are widely supported. However, a 5G communication system is still based on Shannon’s information theory, while the meaning and value of information itself are not taken into account in the process of transmission. Therefore, it is difficult to meet the requirements of intelligence, customization, and value transmission of 6G networks. In order to solve the above challenges, we propose a 6G mailbox theory, namely a cognitive information carrier to enable distributed algorithm embedding for intelligence networking. Based on Mailbox, a 6G network will form an intelligent agent with self-organization, self-learning, self-adaptation, and continuous evolution capabilities. With the intelligent agent, redundant transmission of data can be reduced while the value transmission of information can be improved. Then, the features of the mailbox principle are introduced, including polarity, traceability, dynamics, convergence, figurability, and dependence. Furthermore, key technologies with which value transmission of information can be realized are introduced, including knowledge graph, distributed learning, and blockchain. Finally, we establish a cognitive communication system assisted by deep learning. The experimental results show that, compared with a traditional communication system, our communication system transmits a smaller quantity of data with fewer errors.
32

Sun, Hanqi, Wanquan Zhu, Ziyu Sun, Mingsheng Cao and Wenbin Liu. "FMDL: Federated Mutual Distillation Learning for Defending Backdoor Attacks". Electronics 12, no. 23 (November 30, 2023): 4838. http://dx.doi.org/10.3390/electronics12234838.

Full text
Abstract:
Federated learning is a distributed machine learning algorithm that enables collaborative training among multiple clients without sharing sensitive information. Unlike centralized learning, it emphasizes the distinctive benefits of safeguarding data privacy. However, two challenging issues, namely heterogeneity and backdoor attacks, pose severe challenges to standardizing federated learning algorithms. Data heterogeneity affects model accuracy, target heterogeneity fragments model applicability, and model heterogeneity compromises model individuality. Backdoor attacks inject trigger patterns into data to deceive the model during training, thereby undermining the performance of federated learning. In this work, we propose an advanced federated learning paradigm called Federated Mutual Distillation Learning (FMDL). FMDL allows clients to collaboratively train a global model while independently training their private models, subject to server requirements. Continuous bidirectional knowledge transfer is performed between local models and private models to achieve model personalization. FMDL utilizes the technique of attention distillation, conducting mutual distillation during the local update phase and fine-tuning on clean data subsets to effectively erase the backdoor triggers. Our experiments demonstrate that FMDL benefits clients from different data, tasks, and models, effectively defends against six types of backdoor attacks, and validates the effectiveness and efficiency of our proposed approach.
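The continuous bidirectional knowledge transfer described is typically implemented as a pair of KL-divergence distillation losses over softened predictions. Below is a PyTorch sketch of one mutual-distillation step on dummy logits; the temperature and weighting are assumptions, and the sketch omits FMDL's attention distillation and backdoor-erasing fine-tuning.

```python
import torch
import torch.nn.functional as F

def mutual_distillation_losses(logits_a, logits_b, labels, T=2.0, alpha=0.5):
    # Softened peer predictions; detached so each model treats the
    # other's output as a fixed teacher signal for this step.
    soft_a = F.softmax(logits_a.detach() / T, dim=1)
    soft_b = F.softmax(logits_b.detach() / T, dim=1)
    # Each model combines its task loss with a KL term toward its peer.
    loss_a = (1 - alpha) * F.cross_entropy(logits_a, labels) + alpha * T * T * \
        F.kl_div(F.log_softmax(logits_a / T, dim=1), soft_b, reduction="batchmean")
    loss_b = (1 - alpha) * F.cross_entropy(logits_b, labels) + alpha * T * T * \
        F.kl_div(F.log_softmax(logits_b / T, dim=1), soft_a, reduction="batchmean")
    return loss_a, loss_b

# Dummy batch: outputs of the global model (a) and a private model (b).
logits_a, logits_b = torch.randn(8, 10), torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
la, lb = mutual_distillation_losses(logits_a, logits_b, labels)
print(float(la), float(lb))
```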
33

Karlgren, Jussi, and Pentti Kanerva. "High-dimensional distributed semantic spaces for utterances". Natural Language Engineering 25, no. 4 (July 2019): 503–17. http://dx.doi.org/10.1017/s1351324919000226.

Abstract:
High-dimensional distributed semantic spaces have proven useful and effective for aggregating and processing visual, auditory and lexical information for many tasks related to human-generated data. Human language makes use of a large and varying number of features, lexical and constructional items as well as contextual and discourse-specific data of various types, which all interact to represent various aspects of communicative information. Some of these features are mostly local and useful for the organisation of, for example, argument structure of a predication; others are persistent over the course of a discourse and necessary for achieving a reasonable level of understanding of the content. This paper describes a model for high-dimensional representation for utterance and text-level data including features such as constructions or contextual data, based on a mathematically principled and behaviourally plausible approach to representing linguistic information. The implementation of the representation is a straightforward extension of Random Indexing models previously used for lexical linguistic items. The paper shows how the implemented model is able to represent a broad range of linguistic features in a common integral framework of fixed dimensionality, which is computationally habitable, and which is suitable as a bridge between symbolic representations such as dependency analysis and continuous representations used, for example, in classifiers or further machine-learning approaches. This is achieved with operations on vectors that constitute a powerful computational algebra, accompanied with an associative memory for the vectors. The paper provides a technical overview of the framework and a worked through implemented example of how it can be applied to various types of linguistic features.
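As a rough illustration of the underlying mechanics, here is a minimal Random Indexing sketch in Python. It assumes dense bipolar index vectors and uses coordinate permutation (np.roll) as the binding operation; the paper's framework is richer (a full vector algebra plus an associative memory), so treat this as a toy model only.

import numpy as np

D = 10_000  # fixed dimensionality of the semantic space
rng = np.random.default_rng(0)
index_vectors = {}  # one quasi-orthogonal random vector per lexical item

def index_vector(word):
    if word not in index_vectors:
        index_vectors[word] = rng.choice([-1.0, 1.0], size=D)
    return index_vectors[word]

def encode_utterance(words):
    # Bundle (sum) position-bound items: np.roll permutes coordinates,
    # binding each word to its position while staying in the same space.
    v = np.zeros(D)
    for pos, w in enumerate(words):
        v += np.roll(index_vector(w), pos)
    return v

def similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

Because every structure, from a single word to a whole utterance, lives in the same fixed-dimensional space, such vectors can be fed directly to classifiers or other machine-learning components, which is the bridging property the paper emphasizes.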
34

Ashush, Nerya, Shlomo Greenberg, Erez Manor and Yehuda Ben-Shimol. "Unsupervised Drones Swarm Characterization Using RF Signals Analysis and Machine Learning Methods". Sensors 23, no. 3 (1 February 2023): 1589. http://dx.doi.org/10.3390/s23031589.

Abstract:
Autonomous unmanned aerial vehicles (UAVs) have attracted increasing academic and industrial attention during the last decade. Using drones has broad benefits in diverse areas, such as civil and military applications, aerial photography and videography, mapping and surveying, agriculture, and disaster management. However, recent developments and innovation in the field of drone (UAV) technology have led to malicious usage of the technology, including the penetration of secure areas (such as airports) and the carrying out of terrorist attacks. Autonomous weapon systems might use drone swarms to perform more complex military tasks. Utilizing a large number of drones simultaneously increases the risk and the reliability of the mission in terms of redundancy, survivability, scalability, and the quality of autonomous performance in a complex environment. This research suggests a new approach for drone swarm characterization and detection using RF signals analysis and various machine learning methods. While most of the existing drone detection and classification methods are typically related to single-drone classification using supervised approaches, this research work proposes an unsupervised approach for drone swarm characterization. The proposed method utilizes the different radio frequency (RF) signatures of the drones' transmitters. Various kinds of frequency transform, such as the continuous, discrete, and wavelet scattering transforms, have been applied to extract RF features from the radio frequency fingerprint, which have then been used as input for the unsupervised classifier. To reduce the input data dimension, we suggest using unsupervised approaches such as principal component analysis (PCA), independent component analysis (ICA), uniform manifold approximation and projection (UMAP), and the t-distributed stochastic neighbor embedding (t-SNE) algorithms. The proposed clustering approach is based on common unsupervised methods, including the K-means, mean shift, and X-means algorithms. The proposed approach has been evaluated using self-built and common drone swarm datasets. The results demonstrate a classification accuracy of about 95% under additive white Gaussian noise with different levels of SNR.
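The overall pipeline (spectral features, dimensionality reduction, clustering) can be sketched in a few lines of Python with scikit-learn. Here a plain FFT magnitude spectrum stands in for the wavelet-based features of the paper, and PCA plus K-means stand in for the broader sets of reduction and clustering methods compared; all array shapes and parameter values are illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def rf_features(signals):
    # signals: (n_recordings, n_samples) array of raw RF captures.
    spectra = np.abs(np.fft.rfft(signals, axis=1))
    return spectra / spectra.max(axis=1, keepdims=True)  # per-recording scaling

def characterize_swarm(signals, n_components=10, n_clusters=3):
    feats = rf_features(signals)
    reduced = PCA(n_components=n_components).fit_transform(feats)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reduced)
    return labels  # ideally one cluster per distinct transmitter signature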
35

Naqvi, Sardar Shan Ali, Yuancheng Li and Muhammad Uzair. "DDoS attack detection in smart grid network using reconstructive machine learning models". PeerJ Computer Science 10 (9 January 2024): e1784. http://dx.doi.org/10.7717/peerj-cs.1784.

Abstract:
Network attacks pose a significant challenge for smart grid networks, mainly due to the existence of several multi-directional communication devices coupling consumers to the grid. One of the network attacks that can affect the smart grid is the distributed denial of service (DDoS) attack, where numerous compromised communication devices/nodes of the grid flood the smart grid network with false data and requests, leading to disruptions in smart meters, data servers, and the state estimator, ultimately affecting the services for end-users. Machine learning-based strategies show distinctive benefits in resolving the challenge of securing the network from DDoS attacks. However, a notable hindrance in deploying machine learning-based techniques is the requirement of model retraining whenever new attack classes arise; in practice, disrupting the normal operation of the smart grid is strongly discouraged. To handle this challenge effectively and detect DDoS attacks without major disruptions, we propose the deployment of reconstructive deep learning techniques. A primary benefit of our proposed technique is the minimal disruption during the introduction of a new attack class, even after complete deployment. We trained several deep and shallow reconstructive models to obtain representations for each attack type separately, and we performed attack detection by class-specific reconstruction-error-based classification. Our technique underwent rigorous evaluation via multiple experiments using two well-acknowledged standard databases exclusively for DDoS attacks, including their subsets. We then compared our results against six prevalent methods within the same domain. Our results reveal that our technique attained higher accuracy and notably eliminates the requirement of complete model retraining when new attack classes are introduced. This method will not only boost the security of smart grid networks but also ensure the stability and reliability of normal operations, protecting the critical infrastructure from ever-evolving network attacks. As smart grids advance rapidly, our approach offers a robust and adaptive way to overcome the continuous challenges posed by network attacks.
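The core idea, one reconstructive model per traffic class and classification by the smallest reconstruction error, can be sketched as follows in Python; the autoencoder architecture (a small MLP trained to reproduce its input) is an illustrative stand-in for the deep and shallow models of the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor

class ReconstructiveDetector:
    def __init__(self):
        self.models = {}  # one autoencoder per known traffic class

    def add_class(self, name, X):
        # Adding a new attack class trains one new model; the existing
        # models are untouched, which is the paper's key deployment benefit.
        ae = MLPRegressor(hidden_layer_sizes=(32, 8, 32), max_iter=500)
        ae.fit(X, X)  # learn to reconstruct samples of this class only
        self.models[name] = ae

    def classify(self, x):
        errors = {name: float(np.mean((ae.predict(x[None, :]) - x) ** 2))
                  for name, ae in self.models.items()}
        return min(errors, key=errors.get)  # class with lowest error wins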
36

Flores, Juan J., Jose L. Garcia-Nava, Jose R. Cedeno Gonzalez, Victor M. Tellez, Felix Calderon and Arturo Medrano. "A Machine-Learning Pipeline for Large-Scale Power-Quality Forecasting in the Mexican Distribution Grid". Applied Sciences 12, no. 17 (24 August 2022): 8423. http://dx.doi.org/10.3390/app12178423.

Abstract:
Electric power distribution networks face increasing factors for power-quality (PQ) deterioration, such as distributed, renewable-energy generation units and countless high-end electronic devices loaded as controllers or in standalone mode. Consequently, government regulations are issued worldwide to set up strict PQ distribution standards, and the distribution grids must comply with those regulations. This situation drives research towards PQ forecasting as a crucial part of early-warning systems. However, most of the approaches in the literature disregard the big-data nature of the problem by working on small datasets. These datasets come from short-scale off-grid configurations or selected portions of a larger power grid. This article addresses a study case from a region-sized, state-owned Mexican distribution grid, where the company must preserve essential PQ standards in approximately 700 distribution circuits and 150 quality-control nodes. We implemented a machine-learning pipeline with nearly 4000 univariate forecasting models to address this challenge. The system executes a weekly forecasting pipeline and a daily data-ingestion and preprocessing pipeline, processing the massive amounts of data ingested. The implemented system, MIRD (an acronym for Monitoreo Inteligente de Redes de Distribution—Intelligent Monitoring of Distribution Networks), is an unprecedented effort in the production, deployment, and continuous use of forecasting models for the monitoring of PQ indices. To the extent of the authors' best knowledge, there is no similar work of this type in any other Latin-American distribution grid.
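At its core, such a pipeline fits one independent univariate model per monitored series and re-runs the fits on a schedule. A minimal sketch in Python follows; the use of Holt-Winters exponential smoothing is an assumption for illustration, since the article does not tie the pipeline to a single model family.

import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def weekly_forecasts(series_by_circuit, horizon=7):
    # series_by_circuit: dict mapping a circuit id to a pd.Series holding
    # the history of one PQ index for that circuit.
    forecasts = {}
    for circuit, series in series_by_circuit.items():
        model = ExponentialSmoothing(series, trend="add").fit()
        forecasts[circuit] = model.forecast(horizon)
    return pd.DataFrame(forecasts)  # one forecast column per circuit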
37

Manavalan, Mani. "Intersection of Artificial Intelligence, Machine Learning, and Internet of Things – An Economic Overview". Global Disclosure of Economics and Business 9, no. 2 (25 December 2020): 119–28. http://dx.doi.org/10.18034/gdeb.v9i2.584.

Abstract:
The Internet of Things (IoT) has become one of the mainstream advancements and a supreme domain of research for the technical as well as the scientific world, and it is financially appealing for the business world. It supports the interconnection of different gadgets and the connection of gadgets to people. IoT requires a distributed computing setup to deal with the rigorous data processing and training; simultaneously, it requires artificial intelligence (AI) and machine learning (ML) to analyze the information stored on various cloud frameworks and make extremely quick and smart decisions with respect to the data. Moreover, the continuous developments in these three areas of IT present a strong opportunity to collect real-time data about every activity of a business. AI and ML are playing a supporting role in the applications and use cases offered by the Internet of Things, a shift evident in the behavior of enterprises trying to adopt this paradigm around the world. Small as well as large-scale organizations across the globe are leveraging these applications to develop the latest offerings of services and products that will present a new set of business opportunities and direct new developments in the technical landscape. This transformation will also present another opportunity for various industries to run their operations and connect with their users through the power of AI, ML, and IoT combined. Moreover, there is still huge scope for those who can convert raw information into valuable business insights, and the way forward lies in effective data analytics. Organizations are presently looking further into their data streams to identify new and inventive approaches to elevate proficiency and effectiveness in the technical as well as business landscape. Organizations are taking on bigger, more exhaustive research approaches with the assistance of continuous progress in science and technology, especially in machine learning and artificial intelligence. If companies want to realize the full potential of this innovation, they need to integrate their IoT frameworks with persuasive AI and ML algorithms that allow 'smart' devices to imitate the behavioral patterns of humans and take wise decisions with little intervention. Integrating both artificial intelligence and machine learning with IoT networks is proving to be a challenging task for the accomplishment of present IoT-based digital ecosystems. Hence, organizations should chart the necessary course of action to identify how they will drive value from intersecting AI, ML, and IoT in order to maintain a satisfactory market position in the years to come. In this review, we also discuss the progress of IoT so far and what role AI and ML can play in accomplishing new heights for businesses in the future. The paper then discusses the opportunities and challenges faced during the implementation of this hybrid model.
38

Rashid, Kanwal, Yousaf Saeed, Abid Ali, Faisal Jamil, Reem Alkanhel and Ammar Muthanna. "An Adaptive Real-Time Malicious Node Detection Framework Using Machine Learning in Vehicular Ad-Hoc Networks (VANETs)". Sensors 23, no. 5 (26 February 2023): 2594. http://dx.doi.org/10.3390/s23052594.

Abstract:
Modern vehicle communication development is a continuous process in which cutting-edge security systems are required. Security is a main problem in the vehicular ad hoc network (VANET). Malicious node detection is one of the critical issues found in the VANET environment, and addressing it requires enhancing the communication and detection mechanisms across the field. Vehicles are attacked by malicious nodes, especially through DDoS attacks. Several solutions have been presented to overcome the issue, but none of them solve it in a real-time scenario using machine learning. During a DDoS attack, multiple vehicles flood the targeted vehicle with traffic, so communication packets are not received and replies to requests are never delivered. In this research, we address the problem of malicious node detection and propose a real-time malicious node detection system using machine learning. We propose a distributed multi-layer classifier and evaluate the results using OMNET++ and SUMO, with machine learning classification using GBT, LR, MLPC, RF, and SVM models. A dataset comprising groups of normal and attacking vehicles is used to apply the proposed model. The simulation results show effective attack classification with an accuracy of 99%. Under LR and SVM, the system achieved 94% and 97%, respectively. The RF and GBT models achieved better performance, with accuracy values of 98% and 97%, respectively. Since we adopted Amazon Web Services, the network's performance improved because training and testing times do not increase when more nodes are included in the network.
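The comparison across classifier families reported above can be reproduced in outline with scikit-learn; the code below is a hedged sketch, with data loading, feature engineering from the OMNET++/SUMO traces, and hyperparameters all left as assumptions.

from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def compare_models(X, y):
    # X: feature matrix derived from vehicle traces; y: normal/malicious labels.
    models = {
        "LR": LogisticRegression(max_iter=1000),
        "RF": RandomForestClassifier(n_estimators=200),
        "GBT": GradientBoostingClassifier(),
        "SVM": SVC(),
        "MLPC": MLPClassifier(max_iter=500),
    }
    return {name: cross_val_score(m, X, y, cv=5).mean()
            for name, m in models.items()}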
39

Zhang, Nan, Mingjie Chen, Fan Yang, Cancan Yang, Penghui Yang, Yushan Gao, Yue Shang and Daoli Peng. "Forest Height Mapping Using Feature Selection and Machine Learning by Integrating Multi-Source Satellite Data in Baoding City, North China". Remote Sensing 14, no. 18 (6 September 2022): 4434. http://dx.doi.org/10.3390/rs14184434.

Abstract:
Accurate estimation of forest height is crucial for the estimation of forest aboveground biomass and the monitoring of forest resources. Remote sensing technology makes it feasible to produce high-resolution forest height maps over large geographical areas. In this study, we produced a 25 m spatial resolution wall-to-wall forest height map of Baoding city, north China. We evaluated the effects of three factors on forest height estimation utilizing four types of remote sensing data (Sentinel-1, Sentinel-2, ALOS PALSAR-2, and SRTM DEM) with the National Forest Resources Continuous Inventory (NFCI) data, three feature selection methods (stepwise regression analysis (SR), recursive feature elimination (RFE), and Boruta), and six machine learning algorithms (k-nearest neighbor (k-NN), support vector machine regression (SVR), random forest (RF), gradient boosting decision tree (GBDT), extreme gradient boosting (XGBoost), and categorical boosting (CatBoost)). ANOVA was adopted to quantify the effects of the three factors, namely data source, feature selection method, and modeling algorithm, on forest height estimation. The results showed that all three factors had a significant influence. The combination of multiple sensor data improved the estimation accuracy. Boruta's overall performance was better than that of SR and RFE, and XGBoost outperformed the other five machine learning algorithms. The variables selected based on Boruta, including Sentinel-1, Sentinel-2, and topography metrics, combined with the XGBoost algorithm, provided the optimal model (R2 = 0.67, RMSE = 2.2 m). We then applied the best model to create the forest height map. There were several discrepancies between the generated forest height map and the existing map product, and the values with large differences between the two maps were mostly distributed in steep areas with high slope values. Overall, we propose a methodological framework for quantifying the importance of data source, feature selection method, and machine learning algorithm in forest height estimation, and it proved effective in estimating forest height using freely accessible multi-source data, advanced feature selection methods, and machine learning algorithms.
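The feature-selection-plus-regression workflow can be outlined in Python as below. Recursive feature elimination (one of the three methods compared) and scikit-learn's gradient boosting regressor stand in for Boruta and XGBoost respectively; the split ratio and feature count are illustrative assumptions.

from sklearn.feature_selection import RFE
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

def fit_height_model(X, y, n_features=20):
    # X: (n_plots, n_predictors) array of satellite metrics; y: field heights.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3)
    selector = RFE(GradientBoostingRegressor(),
                   n_features_to_select=n_features).fit(X_tr, y_tr)
    model = GradientBoostingRegressor().fit(X_tr[:, selector.support_], y_tr)
    pred = model.predict(X_te[:, selector.support_])
    rmse = mean_squared_error(y_te, pred) ** 0.5
    return r2_score(y_te, pred), rmse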
40

Rajab Asaad, Renas, and Subhi R. M. Zeebaree. "Enhancing Security and Privacy in Distributed Cloud Environments: A Review of Protocols and Mechanisms". Academic Journal of Nawroz University 13, no. 1 (31 March 2024): 476–88. http://dx.doi.org/10.25007/ajnu.v13n1a2010.

Abstract:
As cloud computing becomes increasingly integral to data management and services, security and privacy concerns remain paramount. This article presents a comprehensive review of the current protocols and mechanisms designed to fortify security and privacy in distributed cloud environments. It synthesizes contributions from various research works, each proposing innovative solutions to address these challenges. Among these are a two-layer cryptographic algorithm that combines genetic techniques with logical-mathematical functions, enhancing encryption beyond traditional methods, and a novel symmetric-key block cipher that increases encryption complexity while maintaining flexibility. The paper also discusses machine learning applications for threat detection, the role of blockchain in trust management, and the importance of multi-cloud strategies to secure big data. Through comparative analyses, the reviewed methodologies show promising advancements in encryption, data integrity, and resource management, suggesting a robust framework for tackling the evolving landscape of cyber threats. The article underscores the need for continuous innovation and research to navigate the dynamic domain of cloud security, aiming for a future where data security and privacy are not just reactive safeguards but proactive measures embedded in the fabric of cloud computing.
41

Naeem, Muhammad, Jian Yu, Muhammad Aamir, Sajjad Ahmad Khan, Olayinka Adeleye and Zardad Khan. "Comparative analysis of machine learning approaches to analyze and predict the COVID-19 outbreak". PeerJ Computer Science 7 (16 December 2021): e746. http://dx.doi.org/10.7717/peerj-cs.746.

Abstract:
Background: Forecasting the timing of a forthcoming pandemic reduces the impact of diseases by enabling precautionary steps such as public health messaging and raising the awareness of doctors. With the continuous and rapid increase in the cumulative incidence of COVID-19, statistical and outbreak prediction models, including various machine learning (ML) models, are being used by the research community to track and predict the trend of the epidemic, and also in developing appropriate strategies to combat and manage its spread. Methods: In this paper, we present a comparative analysis of various ML approaches, including support vector machine, random forest, k-nearest neighbor and artificial neural network, in predicting the COVID-19 outbreak in the epidemiological domain. We first apply the autoregressive distributed lag (ARDL) method to identify and model the short- and long-run relationships of the time-series COVID-19 datasets. That is, we determine the lags between a response variable and its respective explanatory time series variables as independent variables. Then, the resulting significant variables, with their respective lags, are used in the regression model selected by the ARDL for predicting and forecasting the trend of the epidemic. Results: Model accuracy is assessed with statistical measures: root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and symmetric mean absolute percentage error (SMAPE). The values of MAPE for the best-selected models for confirmed, recovered and death cases are 0.003, 0.006 and 0.115, respectively, which fall under the category of highly accurate forecasts. In addition, we computed a 15-day-ahead forecast for the daily deaths, recovered, and confirmed cases, and the cases fluctuated across time in all aspects. Besides, the results reveal the advantages of ML algorithms for supporting the decision-making of evolving short-term policies.
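The ARDL step described in the methods can be sketched with statsmodels, which provides automatic lag-order selection; the column names and lag bounds below are assumptions about the dataset layout, not the paper's settings.

from statsmodels.tsa.ardl import ardl_select_order

def fit_ardl(df):
    # df: DataFrame with a 'confirmed' response column plus explanatory
    # time-series columns (e.g. recovered, deaths, mobility indices).
    y = df["confirmed"]
    X = df.drop(columns=["confirmed"])
    sel = ardl_select_order(y, 8, X, 4, ic="aic")  # search lag orders by AIC
    res = sel.model.fit()
    # Forecasting ahead (e.g. 15 days) additionally needs future values of
    # the exogenous regressors: res.forecast(steps=15, exog=future_X)
    return res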
42

Bonilla-Garzón, Andrea, Shyam Madhusudhana, Robert P. Dziak and Holger Klinck. "Assessing vocal activity patterns of leopard seals (Hydrurga leptonyx) in the Bransfield Strait, Antarctica using machine learning". Journal of the Acoustical Society of America 152, no. 4 (October 2022): A106. http://dx.doi.org/10.1121/10.0015696.

Abstract:
Leopard seals (Hydrurga leptonyx) are widely distributed pack-ice seals in Antarctic and sub-Antarctic waters. As apex predators, these animals play a crucial role in the Southern Ocean food web. Data on population-level changes in their abundance and distribution may be a useful indicator of ecosystem-level changes in this unique and fragile environment. Over the past few decades, many studies have conclusively shown that passive acoustic monitoring (PAM) is an effective tool for monitoring the abundance and distribution of vocally active marine mammals in remote and inaccessible areas for extended periods. However, handling and analyzing the vast amount of PAM data being collected remains challenging. Within the scope of this effort, we explored the use of a machine learning algorithm (a convolutional neural network; CNN) to automatically detect the ‘low double trill’, one of the most common leopard seal vocalizations, in three years of continuous acoustic data recorded in the Bransfield Strait, Antarctica between 2005 and 2008. After optimizing the algorithm, we evaluated its detection performance on various temporal scales (weeks, days, hours) to assess whether CNNs are useful for monitoring leopard seal populations at ecologically relevant scales in the Southern Ocean.
43

Priya, Harshitha. "A Cloud Approach for Melanoma Detection Based On Deep Learning Networks". International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (31 May 2023): 3983–88. http://dx.doi.org/10.22214/ijraset.2023.52571.

Abstract:
The objective is to discover new information and extract knowledge from digital images using computer vision, machine learning, and deep learning. Images can now be used for both early disease detection and treatment, and dermatology uses deep neural networks to tell the difference between images with and without melanoma. Two important melanoma detection research topics are highlighted in this paper. The first variable under investigation is classifier accuracy, which is affected by even minor alterations to a dataset's boundaries; we examine the transfer learning issues this raises. Based on the findings of this initial evaluation, we propose using continuous train-test cycles to create trustworthy prediction models. Second, a more flexible design philosophy that can accommodate changes in the training datasets is essential. We propose the creation and use of a hybrid architecture based on cloud, fog, and edge computing to provide melanoma detection and management from clinical and dermoscopic images. By limiting the duration of each continuous retraining cycle, this architecture must constantly adapt to the quantity of data that needs to be analyzed. This aspect has been highlighted in experiments conducted on a single PC using various distribution methods, demonstrating how a distributed system completes the workload in a far more acceptable amount of time.
44

Anand, Dubey, and Choubey Siddhartha. "Blockchain and machine learning for data analytics, privacy preserving, and security in fraud detection". i-manager’s Journal on Software Engineering 18, no. 1 (2023): 45. http://dx.doi.org/10.26634/jse.18.1.20091.

Abstract:
Blockchain technology has emerged as a revolutionary distributed ledger system with the potential to transform various industries, including finance, supply chain, healthcare, and more. However, the decentralized nature of blockchain introduces unique challenges in terms of fraud detection and prevention. This abstract provides an overview of the current state of research and technologies related to fraud detection in blockchain technology-based systems. The paper begins by discussing the fundamental characteristics of blockchain, highlighting its immutability, transparency, and decentralization. These characteristics provide a promising foundation for ensuring data integrity and security but also pose significant challenges in detecting and mitigating fraudulent activities. Next, the paper explores various types of fraud that can occur in blockchain systems, such as double-spending, Sybil attacks, 51% attacks, smart contract vulnerabilities, and identity theft. Each type of fraud is explained along with its potential impact on the integrity and reliability of blockchain systems. To address these challenges, the paper presents an overview of existing fraud detection techniques in blockchain systems. These techniques encompass a range of approaches, including anomaly detection, machine learning algorithms, consensus mechanisms, cryptographic techniques, and forensic analysis. The strengths and limitations of each technique are discussed to provide a comprehensive understanding of their applicability in different scenarios. Furthermore, the paper highlights emerging trends in fraud detection research within the blockchain domain. These trends include the integration of artificial intelligence and blockchain technology, the use of decentralized and federated machine learning approaches, the development of privacy-preserving fraud detection mechanisms, and the utilization of data analytics and visualization techniques for improved detection and investigation. The paper concludes by emphasizing the importance of continuous research and development in fraud detection for blockchain technology-based systems. As blockchain adoption expands across industries, it is crucial to enhance the security and trustworthiness of these systems by effectively detecting and preventing fraud. Future directions for research and potential challenges are also discussed, encouraging further exploration in this vital area of study.
45

Chen, Shaomin, Jiachen Gao, Fangchuan Lou, Yunfei Tuo, Shuai Tan, Yuyang Shan, Lihua Luo, Zhilin Xu, Zhengfu Zhang and Xiangyu Huang. "Rapid estimation of soil water content based on hyperspectral reflectance combined with continuous wavelet transform, feature extraction, and extreme learning machine". PeerJ 12 (22 August 2024): e17954. http://dx.doi.org/10.7717/peerj.17954.

Abstract:
Background: Soil water content is one of the critical indicators in agricultural systems. Visible/near-infrared hyperspectral remote sensing is an effective method for soil water estimation. However, noise removal from massive spectral datasets and effective feature extraction are challenges for achieving accurate soil water estimation using this technology. Methods: This study proposes a method for hyperspectral remote sensing soil water content estimation based on a combination of continuous wavelet transform (CWT) and competitive adaptive reweighted sampling (CARS). Hyperspectral data were collected from soil samples with different water contents prepared in the laboratory. CWT, with two wavelet basis functions (mexh and gaus2), was used to pre-process the hyperspectral reflectance to eliminate noise interference. A correlation analysis was conducted between soil water content and wavelet coefficients at ten scales. The feature variables were extracted from these wavelet coefficients using the CARS method and used as input variables to build linear and non-linear models, specifically partial least squares regression (PLSR) and extreme learning machine (ELM), to estimate soil water content. Results: The results showed that the correlation between wavelet coefficients and soil water content decreased as the decomposition scale increased. The corresponding bands of the extracted wavelet coefficients were mainly distributed in the near-infrared region. The non-linear model (ELM) was superior to the linear method (PLSR). ELM demonstrated satisfactory accuracy based on the feature wavelet coefficients of CWT with the mexh wavelet basis function at a decomposition scale of 1 (CWT(mexh_1)), with R2, RMSE, and RPD values of 0.946, 1.408%, and 3.759 in the validation dataset, respectively. Overall, the CWT(mexh_1)-CARS-ELM systematic modeling method was feasible and reliable for estimating the water content of sandy clay loam.
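The final estimation step, the extreme learning machine, is simple enough to sketch in full: hidden-layer weights are random and fixed, and the output weights are solved in closed form. The sketch below assumes the CWT/CARS stages have already produced the feature matrix X (the wavelet coefficients could come, e.g., from pywt.cwt(reflectance, scales, "mexh")); the hidden-layer size and tanh activation are illustrative choices.

import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)  # random nonlinear projection

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y  # closed-form least-squares solve
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta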
46

Ahmed, Mehreen, Rafia Mumtaz, Syed Mohammad Hassan Zaidi, Maryam Hafeez, Syed Ali Raza Zaidi and Muneer Ahmad. "Distributed Fog Computing for Internet of Things (IoT) Based Ambient Data Processing and Analysis". Electronics 9, no. 11 (22 October 2020): 1756. http://dx.doi.org/10.3390/electronics9111756.

Abstract:
Urban centers across the globe are under immense environmental distress due to an increase in air pollution, industrialization, and elevated living standards. The unmanaged, mushrooming growth of industries and an exponential rise in population have made the increase in air pollution intractable. To this end, solutions based on the latest technologies, such as the Internet of Things (IoT) and artificial intelligence (AI), are becoming increasingly popular; they have the capability to monitor the extent and scale of air contaminants and are subsequently useful for containing them. With centralized cloud-based IoT platforms, the ubiquitous and continuous monitoring of air quality and data processing can be facilitated for the identification of air pollution hot spots. However, owing to the inherent characteristics of the cloud, such as large end-to-end delay and bandwidth constraints, handling the high velocity and large volume of data generated by distributed IoT sensors would not be feasible in the longer run. To address these issues, fog computing is a powerful paradigm, where the data are processed and filtered near the IoT nodes, improving the quality of service (QoS) of the IoT network. To further improve the QoS, a conceptual model of distributed fog computing and a machine learning-based data processing and analysis model are proposed for the optimal utilization of cloud resources. The proposed model provides a classification accuracy of 99% using a support vector machine (SVM) classifier. The model is also simulated in the iFogSim toolkit. It affords many advantages, such as a reduced load on the central server by locally processing the data and reporting the quality of air. Additionally, it offers scalability by integrating more air quality monitoring nodes into the IoT network.
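The division of labor between fog and cloud can be illustrated with a small sketch: each fog node classifies raw readings locally and forwards only a compact summary upstream. The feature layout, label set, and summary format below are assumptions for illustration, not the paper's exact design.

from sklearn.svm import SVC

class FogNode:
    def __init__(self, X_train, y_train):
        # Train the local SVM once, e.g. on labelled historical readings.
        self.clf = SVC(kernel="rbf").fit(X_train, y_train)

    def process(self, readings):
        # readings: batch of sensor vectors (e.g. PM2.5, PM10, CO, NO2).
        labels = self.clf.predict(readings)
        # Only this summary leaves the fog node, reducing cloud bandwidth.
        return {"n": int(len(labels)), "alerts": int((labels == "poor").sum())}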
47

Bakurov, Illya, Marco Buzzelli, Mauro Castelli, Leonardo Vanneschi and Raimondo Schettini. "General Purpose Optimization Library (GPOL): A Flexible and Efficient Multi-Purpose Optimization Library in Python". Applied Sciences 11, no. 11 (23 May 2021): 4774. http://dx.doi.org/10.3390/app11114774.

Abstract:
Several interesting libraries for optimization have been proposed. Some focus on individual optimization algorithms, or limited sets of them, and others focus on limited sets of problems. Frequently, their implementations do not precisely follow the formal definitions, and they are difficult to customize and compare. This makes it difficult to perform comparative studies and propose novel approaches. In this paper, we propose to solve these issues with the General Purpose Optimization Library (GPOL): a flexible and efficient multipurpose optimization library that covers a wide range of stochastic iterative search algorithms, and whose flexible and modular implementation allows for solving many different problem types from the fields of continuous and combinatorial optimization and supervised machine learning. Moreover, the library supports full-batch and mini-batch learning and allows carrying out computations on a CPU or GPU. The package is distributed under an MIT license. Source code, installation instructions, demos and tutorials are publicly available on our code hosting platform (the reference is provided in the Introduction).
48

Barron, Alfredo, Dante D. Sanchez-Gallegos, Diana Carrizales-Espinoza, J. L. Gonzalez-Compean and Miguel Morales-Sandoval. "On the Efficient Delivery and Storage of IoT Data in Edge–Fog–Cloud Environments". Sensors 22, no. 18 (16 September 2022): 7016. http://dx.doi.org/10.3390/s22187016.

Abstract:
Cloud storage has become a keystone for organizations to manage large volumes of data produced by sensors at the edge, as well as information produced by deep and machine learning applications. Nevertheless, the latency produced by geographically distributed systems deployed on any of the edge, the fog, or the cloud leads to delays that are observed by end-users in the form of high response times. In this paper, we present an efficient scheme for the management and storage of Internet of Things (IoT) data in edge–fog–cloud environments. In our proposal, entities called data containers are coupled, in a logical manner, with nano/microservices deployed on any of the edge, the fog, or the cloud. The data containers implement a hierarchical cache file system, including storage levels such as in-memory, file system, and cloud services, for transparently managing the input/output data operations produced by nano/microservices (e.g., a sensor hub collecting data from sensors at the edge or machine learning applications processing data at the edge). Data containers are interconnected through a secure and efficient content delivery network, which transparently and automatically performs the continuous delivery of data through the edge–fog–cloud. A prototype of our proposed scheme was implemented and evaluated in a case study based on the management of electrocardiogram sensor data. The obtained results reveal the suitability and efficiency of the proposed scheme.
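A data container's hierarchical cache can be sketched as a simple fall-through lookup across the three storage levels; the cloud client and its download method are assumptions standing in for whatever object storage backs a real deployment, and keys are assumed to be file-system safe.

import os

class DataContainer:
    def __init__(self, cache_dir, cloud):
        self.mem = {}          # level 1: in-memory
        self.dir = cache_dir   # level 2: local file system
        self.cloud = cloud     # level 3: cloud storage client (assumed API)
        os.makedirs(cache_dir, exist_ok=True)

    def get(self, key):
        if key in self.mem:                      # fastest level first
            return self.mem[key]
        path = os.path.join(self.dir, key)
        if os.path.exists(path):
            with open(path, "rb") as f:
                data = f.read()
        else:
            data = self.cloud.download(key)      # slowest level last
            with open(path, "wb") as f:          # populate the file cache
                f.write(data)
        self.mem[key] = data                     # promote on access
        return data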
49

Samuel Olaoluwa Folorunsho, Olubunmi Adeolu Adenekan, Chinedu Ezeigweneme, Ike Chidiebere Somadina and Patrick Azuka Okeleke. "Ensuring Cybersecurity in telecommunications: Strategies to protect digital infrastructure and sensitive data". Computer Science & IT Research Journal 5, no. 8 (23 August 2024): 1855–83. http://dx.doi.org/10.51594/csitrj.v5i8.1448.

Abstract:
This review paper aims to address the critical issue of cybersecurity in the telecommunications sector, focusing on strategies to protect digital infrastructure and sensitive data. The objectives of the study are to identify current cybersecurity threats, evaluate existing protective measures, and propose comprehensive strategies to enhance the security of telecommunications networks. The methodology involves a thorough review of industry reports, regulatory frameworks, scholarly articles, and case studies to provide an in-depth understanding of the current cybersecurity landscape. Key findings reveal a diverse array of cybersecurity threats, including malware attacks, data breaches, and Distributed Denial of Service (DDoS) attacks, which pose significant risks to telecommunications infrastructure. The review also highlights gaps in existing cybersecurity practices, such as insufficient encryption, inadequate access controls, and lack of employee training on cybersecurity protocols. To address these vulnerabilities, the paper proposes a multi-layered cybersecurity strategy that includes the implementation of advanced encryption technologies, continuous monitoring and threat detection systems, and robust incident response plans. Emphasis is placed on the need for collaboration between telecommunications companies, regulatory bodies, and cybersecurity experts to develop standardized security protocols and share threat intelligence. The review underscores the importance of a proactive approach to cybersecurity, advocating for regular security audits, investment in cybersecurity training for employees, and the adoption of emerging technologies such as Artificial Intelligence (AI) and Machine Learning (ML) to enhance threat detection and response capabilities. By adopting these strategies, the telecommunications industry can significantly mitigate risks and ensure the protection of its digital infrastructure and sensitive data. Keywords: Telecommunications cybersecurity, Cyber threats, Malware, Ransomware, Phishing, Distributed denial-of-service (DDoS) attacks, Artificial intelligence (AI), Machine learning (ML), Blockchain technology, Quantum-resistant cryptographic algorithms, Zero-trust architecture (ZTA), General Data Protection Regulation (GDPR), Information Sharing and Analysis Centers (ISACs), Cybersecurity education, Workforce development, Threat detection, Policy development, Collaboration, Information sharing, Cybersecurity resilience.
50

Dudeja, Deepak, Sabeena Yasmin Hera, Nitika Vats Doohan, Nilesh Dubey, R. Mahaveerakannan, Tariq Ahamed Ahanger and Simon Karanja Hinga. "Energy Efficient and Secure Information Dissemination in Heterogeneous Wireless Sensor Networks Using Machine Learning Techniques". Wireless Communications and Mobile Computing 2022 (7 June 2022): 1–14. http://dx.doi.org/10.1155/2022/2206530.

Abstract:
The extensive use of sensor technology in every sphere of life, along with the continuous digitization of society, makes it realistic to anticipate that the planet will soon be patched with small-sized devices all over the place. These devices provide monitoring and surveillance capabilities, as well as access to a vast digital universe of information, among other things. Finding data and information, as well as processing enquiries, is made much easier thanks to the seamless transmission of information over wireless media, based on the "anywhere, anytime, everywhere" paradigm. Sensing networks came into existence as a consequence of the downsizing of wireless devices that are capable of receiving information from a source, transferring it, and processing it. Although sensor networks share many of the features, uses, and limits of ad hoc networks, they have their own unique set of capabilities. While performing their responsibilities, sensor networks must contend with a variety of security issues, including unsecured wireless channels, physical compromise, and reprogramming. Because of the small size and ubiquitous availability of wireless sensor networks (WSNs), compromise attacks are the most severe kind of attack in this environment. With the proliferation of WSNs, it is becoming more difficult to rely only on machine learning techniques. We sought to tackle the security challenge by developing a key management system as well as a secure routing mechanism. In our proposed study, we build scalable key management approaches that are resistant to node compromise and node replication attacks by using deployment-driven localization of security information and leveraging distributed key management. Using a security-aware selection of the next hop on the route to the destination, we design secure and effective routing algorithms.