Journal articles on the topic "Multiple Aggregation Learning"

To see the other types of publications on this topic, follow the link: Multiple Aggregation Learning.

Consult the top 50 journal articles for research on the topic "Multiple Aggregation Learning".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its abstract online, provided the relevant parameters are available in the metadata.

Browse the journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Jiang, Ju, Mohamed S. Kamel, and Lei Chen. "Aggregation of Multiple Reinforcement Learning Algorithms." International Journal on Artificial Intelligence Tools 15, no. 05 (October 2006): 855–61. http://dx.doi.org/10.1142/s0218213006002990.

Abstract:
Reinforcement learning (RL) has been successfully used in many fields. With the increasing complexity of environments and tasks, it is difficult for a single learning algorithm to cope with complicated problems with high performance. This paper proposes a new multiple learning architecture, "Aggregated Multiple Reinforcement Learning System (AMRLS)", which aggregates different RL algorithms in each learning step to make more appropriate sequential decisions than those made by individual learning algorithms. This architecture was tested on a Cart-Pole system. The presented simulation results confirm our prediction and reveal that aggregation not only provides robustness and fault tolerance ability, but also produces more smooth learning curves and needs fewer learning steps than individual learning algorithms.
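To make the decision-level aggregation idea concrete, here is a small, hypothetical sketch (not the authors' AMRLS implementation) in which several independently trained tabular Q-learners either vote on, or average, the action to take in a given state; the `q_tables` list and the toy dimensions are invented for illustration.

```python
import numpy as np

def aggregate_action(q_tables, state, method="vote"):
    """Combine the action preferences of several Q-learners for one state.

    q_tables: list of arrays of shape (n_states, n_actions), one per learner.
    method:   "vote" picks the most frequently chosen greedy action;
              "mean" picks the action with the highest average Q-value.
    """
    greedy = [int(np.argmax(q[state])) for q in q_tables]
    if method == "vote":
        return max(set(greedy), key=greedy.count)
    mean_q = np.mean([q[state] for q in q_tables], axis=0)
    return int(np.argmax(mean_q))

# Toy example: three learners, 4 states, 2 actions.
rng = np.random.default_rng(0)
learners = [rng.normal(size=(4, 2)) for _ in range(3)]
print(aggregate_action(learners, state=2, method="vote"))
print(aggregate_action(learners, state=2, method="mean"))
```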
2

Aydin, Bahadir, Yavuz Selim Yilmaz, Yaliang Li, Qi Li, Jing Gao, and Murat Demirbas. "Crowdsourcing for Multiple-Choice Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 28, no. 2 (July 27, 2014): 2946–53. http://dx.doi.org/10.1609/aaai.v28i2.19016.

Abstract:
We leverage crowd wisdom for multiple-choice question answering, and employ lightweight machine learning techniques to improve the aggregation accuracy of crowdsourced answers to these questions. In order to develop more effective aggregation methods and evaluate them empirically, we developed and deployed a crowdsourced system for playing the “Who wants to be a millionaire?” quiz show. Analyzing our data (which consist of more than 200,000 answers), we find that by just going with the most selected answer in the aggregation, we can answer over 90% of the questions correctly, but the success rate of this technique plunges to 60% for the later/harder questions in the quiz show. To improve the success rates of these later/harder questions, we investigate novel weighted aggregation schemes for aggregating the answers obtained from the crowd. By using weights optimized for reliability of participants (derived from the participants’ confidence), we show that we can pull up the accuracy rate for the harder questions by 15%, and to overall 95% average accuracy. Our results provide a good case for the benefits of applying machine learning techniques for building more accurate crowdsourced question answering systems.
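The reliability-weighted aggregation described here can be sketched in a few lines (a generic illustration, not the authors' system); the weights below stand in for the confidence-derived reliability scores mentioned in the abstract.

```python
from collections import defaultdict

def weighted_vote(answers, weights):
    """Aggregate crowdsourced answers to one multiple-choice question.

    answers: {participant_id: chosen_option}
    weights: {participant_id: reliability weight}; equal weights reduce to majority vote.
    """
    score = defaultdict(float)
    for pid, option in answers.items():
        score[option] += weights.get(pid, 1.0)
    return max(score, key=score.get)

answers = {"p1": "A", "p2": "B", "p3": "B", "p4": "A", "p5": "A"}
weights = {"p1": 0.9, "p2": 2.5, "p3": 2.0, "p4": 0.4, "p5": 0.6}
print(weighted_vote(answers, weights))                       # reliability-weighted answer: "B"
print(weighted_vote(answers, dict.fromkeys(answers, 1.0)))   # plain majority vote: "A"
```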
3

Sinnott, Jennifer A., and Tianxi Cai. "Pathway aggregation for survival prediction via multiple kernel learning." Statistics in Medicine 37, no. 16 (April 17, 2018): 2501–15. http://dx.doi.org/10.1002/sim.7681.
4

Azizi, Fityan, and Wahyu Catur Wibowo. "Intermittent Demand Forecasting Using LSTM With Single and Multiple Aggregation." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 6, no. 5 (November 2, 2022): 855–59. http://dx.doi.org/10.29207/resti.v6i5.4435.

Abstract:
Intermittent demand data is data with infrequent demand and varying demand sizes. Intermittent demand forecasting is useful for providing inventory control decisions. It is very important to produce accurate forecasts. Based on previous research, deep learning models, especially MLP and RNN-based architectures, have not been able to provide better intermittent data forecasting results compared to traditional methods. This research will focus on analyzing the results of intermittent data forecasting using deep learning with several levels of aggregation and a combination of several levels of aggregation. In this research, the LSTM model is implemented into two traditional models that use aggregation techniques and are specifically used for intermittent data forecasting, namely ADIDA and MAPA. As a result, based on tests on the six predetermined datasets, the LSTM model with aggregation and disaggregation is able to provide better test results than the LSTM model without aggregation and disaggregation.
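A minimal sketch of the aggregate-forecast-disaggregate loop that ADIDA-style methods are built on, assuming (for brevity) that the paper's LSTM forecaster is replaced by a simple mean forecast; the bucket size and series are illustrative only.

```python
import numpy as np

def adida_style_forecast(demand, bucket=3):
    """Aggregate an intermittent series into non-overlapping buckets, forecast one
    aggregate value, then disaggregate it back to the original frequency
    (equal-weight disaggregation)."""
    demand = np.asarray(demand, dtype=float)
    usable = len(demand) - len(demand) % bucket          # drop the ragged tail
    aggregated = demand[:usable].reshape(-1, bucket).sum(axis=1)
    aggregate_forecast = aggregated.mean()               # stand-in for the LSTM forecaster
    return aggregate_forecast / bucket                   # per-period forecast

series = [0, 0, 3, 0, 5, 0, 0, 2, 0, 0, 0, 4]
print(round(adida_style_forecast(series, bucket=3), 3))
```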
5

Liu, Wei, Xiaodong Yue, Yufei Chen, and Thierry Denoeux. "Trusted Multi-View Deep Learning with Opinion Aggregation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7585–93. http://dx.doi.org/10.1609/aaai.v36i7.20724.

Abstract:
Multi-view deep learning is performed based on the deep fusion of data from multiple sources, i.e. data with multiple views. However, due to the property differences and inconsistency of data sources, the deep learning results based on the fusion of multi-view data may be uncertain and unreliable. It is required to reduce the uncertainty in data fusion and implement the trusted multi-view deep learning. Aiming at the problem, we revisit the multi-view learning from the perspective of opinion aggregation and thereby devise a trusted multi-view deep learning method. Within this method, we adopt evidence theory to formulate the uncertainty of opinions as learning results from different data sources and measure the uncertainty of opinion aggregation as multi-view learning results through evidence accumulation. We prove that accumulating the evidences from multiple data views will decrease the uncertainty in multi-view deep learning and facilitate to achieve the trusted learning results. Experiments on various kinds of multi-view datasets verify the reliability and robustness of the proposed multi-view deep learning method.
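The evidence-accumulation step can be pictured with a toy Dempster-style combination of two sources' mass functions over a two-class frame; this is a generic evidence-theory illustration, not the paper's multi-view architecture or its specific aggregation rule.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                  # mass assigned to contradictory pairs
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B, AB = frozenset("A"), frozenset("B"), frozenset("AB")
view1 = {A: 0.6, B: 0.1, AB: 0.3}   # opinion from view 1 (mass on AB = uncertainty)
view2 = {A: 0.5, B: 0.2, AB: 0.3}   # opinion from view 2
print(dempster_combine(view1, view2))  # combined evidence is less uncertain than either view
```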
6

Wang, Zhiqiang, Xinyue Yu, Haoyu Wang, and Peiyang Xue. "A federated learning scheme for hierarchical protection and multiple aggregation." Computers and Electrical Engineering 117 (July 2024): 109240. http://dx.doi.org/10.1016/j.compeleceng.2024.109240.
7

Li, Shikun, Shiming Ge, Yingying Hua, Chunhui Zhang, Hao Wen, Tengfei Liu, and Weiqiang Wang. "Coupled-View Deep Classifier Learning from Multiple Noisy Annotators." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4667–74. http://dx.doi.org/10.1609/aaai.v34i04.5898.

Abstract:
Typically, learning a deep classifier from massive cleanly annotated instances is effective but impractical in many real-world scenarios. An alternative is collecting and aggregating multiple noisy annotations for each instance to train the classifier. Inspired by that, this paper proposes to learn deep classifier from multiple noisy annotators via a coupled-view learning approach, where the learning view from data is represented by deep neural networks for data classification and the learning view from labels is described by a Naive Bayes classifier for label aggregation. Such coupled-view learning is converted to a supervised learning problem under the mutual supervision of the aggregated and predicted labels, and can be solved via alternate optimization to update labels and refine the classifiers. To alleviate the propagation of incorrect labels, small-loss metric is proposed to select reliable instances in both views. A co-teaching strategy with class-weighted loss is further leveraged in the deep classifier learning, which uses two networks with different learning abilities to teach each other, and the diverse errors introduced by noisy labels can be filtered out by peer networks. By these strategies, our approach can finally learn a robust data classifier which less overfits to label noise. Experimental results on synthetic and real data demonstrate the effectiveness and robustness of the proposed approach.
8

Mansouri, Mohamad, Melek Önen, Wafa Ben Jaballah, and Mauro Conti. "SoK: Secure Aggregation Based on Cryptographic Schemes for Federated Learning." Proceedings on Privacy Enhancing Technologies 2023, no. 1 (January 2023): 140–57. http://dx.doi.org/10.56553/popets-2023-0009.

Abstract:
Secure aggregation consists of computing the sum of data collected from multiple sources without disclosing these individual inputs. Secure aggregation has been found useful for various applications ranging from electronic voting to smart grid measurements. Recently, federated learning emerged as a new collaborative machine learning technology to train machine learning models. In this work, we study the suitability of secure aggregation based on cryptographic schemes to federated learning. We first provide a formal definition of the problem and suggest a systematic categorization of existing solutions. We further investigate the specific challenges raised by federated learning and analyze the recent dedicated secure aggregation solutions based on cryptographic schemes. We finally share some takeaway messages that would help a secure design of federated learning and identify open research directions in this topic. Based on the takeaway messages, we propose an improved definition of secure aggregation that better fits federated learning.
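A toy version of the masking idea behind many of the surveyed cryptographic schemes: each pair of clients shares a random mask that cancels in the sum, so the aggregator sees only masked inputs. This is an illustrative sketch, not one of the protocols from the paper (no key agreement, no dropout handling).

```python
import numpy as np

def mask_updates(updates, seed=0):
    """Add pairwise-cancelling random masks to each client's input vector."""
    rng = np.random.default_rng(seed)
    masked = [u.astype(float) for u in updates]
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            mask = rng.normal(size=updates[i].shape)
            masked[i] += mask      # client i adds the shared mask ...
            masked[j] -= mask      # ... client j subtracts it, so the sum is unchanged
    return masked

clients = [np.array([1.0, 2.0]), np.array([0.5, -1.0]), np.array([2.0, 0.0])]
masked = mask_updates(clients)
print(np.sum(masked, axis=0))      # equals the true sum [3.5, 1.0]
print(masked[0])                   # an individual masked input reveals little on its own
```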
9

Liu, Chang, Zhuocheng Zou, Yuan Miao, and Jun Qiu. "Light field quality assessment based on aggregation learning of multiple visual features." Optics Express 30, no. 21 (September 30, 2022): 38298. http://dx.doi.org/10.1364/oe.467754.

Abstract:
Light field imaging is a way to represent human vision from a computational perspective. It contains more visual information than traditional imaging systems. As a basic problem of light field imaging, light field quality assessment has received extensive attention in recent years. In this study, we explore the characteristics of light field data for different visual domains (spatial, angular, coupled, projection, and depth), study the multiple visual features of a light field, and propose a non-reference light field quality assessment method based on aggregation learning of multiple visual features. The proposed method has four key modules: multi-visual representation of a light field, feature extraction, feature aggregation, and quality assessment. It first extracts the natural scene statistics (NSS) features from the central view image in the spatial domain. It extracts gray-level co-occurrence matrix (GLCM) features both in the angular domain and in the spatial-angular coupled domain. Then, it extracts the rotation-invariant uniform local binary pattern (LBP) features of depth map in the depth domain, and the statistical characteristics of the local entropy (SDLE) features of refocused images in the projection domain. Finally, the multiple visual features are aggregated to form a visual feature vector for the light field. A prediction model is trained by support vector machines (SVM) to establish a light field quality assessment method based on aggregation learning of multiple visual features.
10

Price, Stanton R., Derek T. Anderson, Timothy C. Havens, and Steven R. Price. "Kernel Matrix-Based Heuristic Multiple Kernel Learning." Mathematics 10, no. 12 (June 11, 2022): 2026. http://dx.doi.org/10.3390/math10122026.

Abstract:
Kernel theory is a demonstrated tool that has made its way into nearly all areas of machine learning. However, a serious limitation of kernel methods is knowing which kernel is needed in practice. Multiple kernel learning (MKL) is an attempt to learn a new tailored kernel through the aggregation of a set of valid known kernels. There are generally three approaches to MKL: fixed rules, heuristics, and optimization. Optimization is the most popular; however, a shortcoming of most optimization approaches is that they are tightly coupled with the underlying objective function and overfitting occurs. Herein, we take a different approach to MKL. Specifically, we explore different divergence measures on the values in the kernel matrices and in the reproducing kernel Hilbert space (RKHS). Experiments on benchmark datasets and a computer vision feature learning task in explosive hazard detection demonstrate the effectiveness and generalizability of our proposed methods.
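As a flavour of the heuristic family the paper discusses, the sketch below weights base kernel matrices by their kernel-target alignment and sums them; the authors' specific divergence-based measures are not reproduced here, and the data are synthetic.

```python
import numpy as np

def alignment(K, y):
    """Kernel-target alignment <K, yy^T> / (||K|| ||yy^T||)."""
    Y = np.outer(y, y)
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

def aggregate_kernels(kernels, y):
    """Weight each base kernel by its (clipped) alignment with the labels."""
    w = np.maximum([alignment(K, y) for K in kernels], 0.0)
    w = w / w.sum()
    return sum(wi * Ki for wi, Ki in zip(w, kernels)), w

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))
y = np.sign(X[:, 0])                                   # toy labels in {-1, +1}
linear = X @ X.T
rbf = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
K_combined, weights = aggregate_kernels([linear, rbf], y)
print(weights)                                         # heuristic kernel weights
```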
11

Tam, Prohim, Seungwoo Kang, Seyha Ros, and Seokhoon Kim. "Enhancing QoS with LSTM-Based Prediction for Congestion-Aware Aggregation Scheduling in Edge Federated Learning." Electronics 12, no. 17 (August 27, 2023): 3615. http://dx.doi.org/10.3390/electronics12173615.

Abstract:
The advancement of the sensing capabilities of end devices drives a variety of data-intensive insights, yielding valuable information for modelling intelligent industrial applications. To apply intelligent models in 5G and beyond, edge intelligence integrates edge computing systems and deep learning solutions, which enables distributed model training and inference. Edge federated learning (EFL) offers collaborative edge intelligence learning with distributed aggregation capabilities, promoting resource efficiency, participant inclusivity, and privacy preservation. However, the quality of service (QoS) faces challenges due to congestion problems that arise from the diverse models and data in practical architectures. In this paper, we develop a modified long short-term memory (LSTM)-based congestion-aware EFL (MLSTM-CEFL) approach that aims to enhance QoS in the final model convergence between end devices, edge aggregators, and the global server. Given the diversity of service types, MLSTM-CEFL proactively detects the congestion rates, adequately schedules the edge aggregations, and effectively prioritizes high mission-critical serving resources. The proposed system is formulated to handle time series analysis from local/edge model parameter loading, weighing the configuration of resource pooling properties at specific congestion intervals. The MLSTM-CEFL policy orchestrates the establishment of long-term paths for participant-aggregator scheduling and follows the expected QoS metrics after final averaging in multiple industrial application classes.
12

Borghei, Benny B., and Thomas Magnusson. "Niche aggregation through cumulative learning: A study of multiple electric bus projects." Environmental Innovation and Societal Transitions 28 (September 2018): 108–21. http://dx.doi.org/10.1016/j.eist.2018.01.004.
13

Carbonneau, Marc-Andre, Eric Granger, and Ghyslain Gagnon. "Bag-Level Aggregation for Multiple-Instance Active Learning in Instance Classification Problems." IEEE Transactions on Neural Networks and Learning Systems 30, no. 5 (May 2019): 1441–51. http://dx.doi.org/10.1109/tnnls.2018.2869164.
14

Liu, Fei, Zheng Xiong, Wei Yu, Jia Wu, Zheng Kong, Yunhang Ji, Suwei Xu, and Mingtao Ji. "Efficient Federated Learning for Feature Aggregation with Heterogenous Edge Devices." Journal of Physics: Conference Series 2665, no. 1 (December 1, 2023): 012007. http://dx.doi.org/10.1088/1742-6596/2665/1/012007.

Abstract:
Abstract Federated learning is a powerful distributed machine learning paradigm for feature aggregation and learning from multiple heterogenous edge devices, due to its ability to keep the privacy of data. However, the training is inefficient for heterogenous devices with considerable communication. Progressive learning is a promising approach for improving the efficiency. Since progressive learning partitions the training process into multiple stages, it is necessary to determine the number of rounds for each stage, and balance the trade-off between saving the energy and improving the model accuracy. Through pilot experiments, we find that the profile which reflects the relationship between round allocation and model quality remains similar in different hyper-parameter configurations, and also observe that the model quality is lossless if the complete model gets sufficient training. Based on the phenomena, we formulate an optimization problem which minimizes the energy consumption of all devices, under the constraint of model quality. We then design a polynomial-time algorithm for the problem. Experimental results demonstrate the superiority of our proposed algorithm under various settings.
15

Reiman, Derek, Ahmed Metwally, Jun Sun, and Yang Dai. "Meta-Signer: Metagenomic Signature Identifier based on rank aggregation of features." F1000Research 10 (March 9, 2021): 194. http://dx.doi.org/10.12688/f1000research.27384.1.

Abstract:
The advance of metagenomic studies provides the opportunity to identify microbial taxa that are associated with human diseases. Multiple methods exist for the association analysis. However, the results could be inconsistent, presenting challenges in interpreting the host-microbiome interactions. To address this issue, we develop Meta-Signer, a novel Metagenomic Signature Identifier tool based on rank aggregation of features identified from multiple machine learning models including Random Forest, Support Vector Machines, Logistic Regression, and Multi-Layer Perceptron Neural Networks. Meta-Signer generates ranked taxa lists by training individual machine learning models over multiple training partitions and aggregates the ranked lists into a single list by an optimization procedure to represent the most informative and robust microbial features. A user will receive a speedy assessment on the predictive performance of each machine learning model using different numbers of the ranked features and determine the final models to be used for evaluation on external datasets. Meta-Signer is user-friendly and customizable, allowing users to explore their datasets quickly and efficiently.
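The rank-aggregation step can be illustrated with a simple Borda count over ranked feature lists produced by different models; Meta-Signer itself aggregates via an optimization procedure, so treat this as a generic stand-in with invented taxon names.

```python
from collections import defaultdict

def borda_aggregate(ranked_lists):
    """Aggregate several ranked feature lists into one consensus ranking.

    Each list is ordered from most to least important; a feature missing from
    a list simply receives no points from that list."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        n = len(ranking)
        for position, feature in enumerate(ranking):
            scores[feature] += n - position          # the top rank earns the most points
    return sorted(scores, key=scores.get, reverse=True)

rf  = ["taxonA", "taxonB", "taxonC", "taxonD"]   # ranking from a random forest
svm = ["taxonB", "taxonA", "taxonD", "taxonC"]   # ranking from an SVM
mlp = ["taxonB", "taxonC", "taxonA", "taxonD"]   # ranking from an MLP
print(borda_aggregate([rf, svm, mlp]))           # consensus ranking across models
```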
16

Segev, Aviv, and John Pomerat. "A Comparison of Methods for Neural Network Aggregation." Advances in Artificial Intelligence and Machine Learning 03, no. 02 (2023): 1012–24. http://dx.doi.org/10.54364/aaiml.2023.1160.

Abstract:
Deep learning has been successful in the theoretical aspect. For deep learning to succeed in industry, we need to have algorithms capable of handling many inconsistencies appearing in real data. These inconsistencies can have large effects on the implementation of a deep learning algorithm. Artificial Intelligence is currently changing the medical industry. However, receiving authorization to use medical data for training machine learning algorithms is a huge hurdle. A possible solution is sharing the data without sharing the patient information. We propose a multi-party computation protocol for the deep learning algorithm. The protocol enables to conserve both the privacy and the security of the training data. Three approaches of neural networks assembly are analyzed: transfer learning, average ensemble learning, and series network learning. The results are compared to approaches based on data-sharing in different experiments. We analyze the security issues of the proposed protocol. Although the analysis is based on medical data, the results of multi-party computation of machine learning training are theoretical and can be implemented in multiple research areas.
17

So, Jinhyun, Ramy E. Ali, Başak Güler, Jiantao Jiao, and A. Salman Avestimehr. "Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9864–73. http://dx.doi.org/10.1609/aaai.v37i8.26177.

Abstract:
Secure aggregation is a critical component in federated learning (FL), which enables the server to learn the aggregate model of the users without observing their local models. Conventionally, secure aggregation algorithms focus only on ensuring the privacy of individual users in a single training round. We contend that such designs can lead to significant privacy leakages over multiple training rounds, due to partial user selection/participation at each round of FL. In fact, we show that the conventional random user selection strategies in FL lead to leaking users' individual models within number of rounds that is linear in the number of users. To address this challenge, we introduce a secure aggregation framework, Multi-RoundSecAgg, with multi-round privacy guarantees. In particular, we introduce a new metric to quantify the privacy guarantees of FL over multiple training rounds, and develop a structured user selection strategy that guarantees the long-term privacy of each user (over any number of training rounds). Our framework also carefully accounts for the fairness and the average number of participating users at each round. Our experiments on MNIST, CIFAR-10 and CIFAR-100 datasets in the IID and the non-IID settings demonstrate the performance improvement over the baselines, both in terms of privacy protection and test accuracy.
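The contrast between random per-round selection and the structured selection argued for here can be shown with a toy scheduler in which users are grouped into fixed batches that always participate together; this is an illustrative simplification, not the Multi-RoundSecAgg construction itself.

```python
import random

def random_selection(users, k, rounds, seed=0):
    """Baseline: sample k users independently at every round."""
    rng = random.Random(seed)
    return [sorted(rng.sample(users, k)) for _ in range(rounds)]

def batched_selection(users, k, rounds, seed=0):
    """Partition users into fixed batches of size k and pick one whole batch per
    round, so individuals never appear in new combinations across rounds."""
    rng = random.Random(seed)
    shuffled = users[:]
    rng.shuffle(shuffled)
    batches = [shuffled[i:i + k] for i in range(0, len(shuffled) - k + 1, k)]
    return [sorted(rng.choice(batches)) for _ in range(rounds)]

users = list(range(12))
print(random_selection(users, k=4, rounds=3))
print(batched_selection(users, k=4, rounds=3))
```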
18

Kaltsounis, Anastasios, Evangelos Spiliotis, and Vassilios Assimakopoulos. "Conditional Temporal Aggregation for Time Series Forecasting Using Feature-Based Meta-Learning." Algorithms 16, no. 4 (April 12, 2023): 206. http://dx.doi.org/10.3390/a16040206.

Abstract:
We present a machine learning approach for applying (multiple) temporal aggregation in time series forecasting settings. The method utilizes a classification model that can be used to either select the most appropriate temporal aggregation level for producing forecasts or to derive weights to properly combine the forecasts generated at various levels. The classifier consists of a meta-learner that correlates key time series features with forecasting accuracy, thus enabling a dynamic, data-driven selection or combination. Our experiments, conducted on two large data sets of slow- and fast-moving series, indicate that the proposed meta-learner can outperform standard forecasting approaches.
19

Kim, Sunghun, and Eunjee Lee. "A deep attention LSTM embedded aggregation network for multiple histopathological images." PLOS ONE 18, no. 6 (June 29, 2023): e0287301. http://dx.doi.org/10.1371/journal.pone.0287301.

Abstract:
Recent advancements in computer vision and neural networks have facilitated the medical imaging survival analysis for various medical applications. However, challenges arise when patients have multiple images from multiple lesions, as current deep learning methods provide multiple survival predictions for each patient, complicating result interpretation. To address this issue, we developed a deep learning survival model that can provide accurate predictions at the patient level. We propose a deep attention long short-term memory embedded aggregation network (DALAN) for histopathology images, designed to simultaneously perform feature extraction and aggregation of lesion images. This design enables the model to efficiently learn imaging features from lesions and aggregate lesion-level information to the patient level. DALAN comprises a weight-shared CNN, attention layers, and LSTM layers. The attention layer calculates the significance of each lesion image, while the LSTM layer combines the weighted information to produce an all-encompassing representation of the patient’s lesion data. Our proposed method performed better on both simulated and real data than other competing methods in terms of prediction accuracy. We evaluated DALAN against several naive aggregation methods on simulated and real datasets. Our results showed that DALAN outperformed the competing methods in terms of c-index on the MNIST and Cancer dataset simulations. On the real TCGA dataset, DALAN also achieved a higher c-index of 0.803±0.006 compared to the naive methods and the competing models. Our DALAN effectively aggregates multiple histopathology images, demonstrating a comprehensive survival model using attention and LSTM mechanisms.
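The lesion-to-patient aggregation can be pictured as attention-weighted pooling of per-lesion feature vectors; the sketch below uses a random scoring vector in place of the learned attention layer and omits the CNN and LSTM components of DALAN.

```python
import numpy as np

def attention_pool(lesion_features, score_vector):
    """Aggregate per-lesion feature vectors into one patient-level vector.

    lesion_features: array of shape (n_lesions, dim)
    score_vector:    array of shape (dim,), standing in for a learned attention layer
    """
    scores = lesion_features @ score_vector          # one relevance score per lesion
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax over lesions
    return weights @ lesion_features, weights

rng = np.random.default_rng(42)
lesions = rng.normal(size=(5, 8))        # 5 lesion images, 8-dimensional features each
attn = rng.normal(size=8)                # stand-in for learned attention parameters
patient_vec, lesion_weights = attention_pool(lesions, attn)
print(lesion_weights.round(3), patient_vec.shape)
```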
20

Fu, Fengjie, Dianhai Wang, Meng Sun, Rui Xie, and Zhengyi Cai. "Urban Traffic Flow Prediction Based on Bayesian Deep Learning Considering Optimal Aggregation Time Interval." Sustainability 16, no. 5 (February 22, 2024): 1818. http://dx.doi.org/10.3390/su16051818.

Abstract:
Predicting short-term urban traffic flow is a fundamental and cost-effective strategy in traffic signal control systems. However, due to the interrupted, periodic, and stochastic characteristics of urban traffic flow influenced by signal control, there are still unresolved issues related to the selection of the optimal aggregation time interval and the quantifiable uncertainties in prediction. To tackle these challenges, this research introduces a method for predicting urban interrupted traffic flow, which is based on Bayesian deep learning and considers the optimal aggregation time interval. Specifically, this method utilizes the cross-validation mean square error (CVMSE) method to obtain the optimal aggregation time interval and to establish the relationship between the optimal aggregation time interval and the signal cycle. A Bayesian LSTM-CNN prediction model, which extends the LSTM-CNN model under the Bayesian framework to a probabilistic model to better capture the stochasticity and variation in the data, is proposed. Experimental results derived from real-world data demonstrate gathering traffic flow data based on the optimal aggregation time interval significantly enhances the prediction accuracy of the urban interrupted traffic flow model. The optimal aggregation time interval for urban interrupted traffic flow data corresponds to a multiple of the traffic signal control cycle. Comparative experiments indicate that the Bayesian LSTM-CNN prediction model outperforms the state-of-the-art prediction models.
21

Li, Weisheng, Maolin He, and Minghao Xiang. "Double-Stack Aggregation Network Using a Feature-Travel Strategy for Pansharpening." Remote Sensing 14, no. 17 (August 27, 2022): 4224. http://dx.doi.org/10.3390/rs14174224.

Abstract:
Pansharpening methods based on deep learning can obtain high-quality, high-resolution multispectral images and are gradually becoming an active research topic. To combine deep learning and remote sensing domain knowledge more efficiently, we propose a double-stack aggregation network using a feature-travel strategy for pansharpening. The proposed network comprises two important designs. First, we propose a double-stack feature aggregation module that can efficiently retain useful feature information by aggregating features extracted at different levels. The module introduces a new multiscale, large-kernel convolutional block in the feature extraction stage to maintain the overall computational power while expanding the receptive field and obtaining detailed feature information. We also introduce a feature-travel strategy to effectively complement feature details on multiple scales. By resampling the source images, we use three pairs of source images at various scales as the input to the network. The feature-travel strategy lets the extracted features loop through the three scales to supplement the effective feature details. Extensive experiments on three satellite datasets show that the proposed model achieves significant improvements in both spatial and spectral quality measurements compared to state-of-the-art methods.
22

Zhang, Hesheng, Ping Zhang, Mingkai Hu, Muhua Liu, and Jiechang Wang. "FedUB: Federated Learning Algorithm Based on Update Bias." Mathematics 12, no. 10 (May 20, 2024): 1601. http://dx.doi.org/10.3390/math12101601.

Abstract:
Federated learning, as a distributed machine learning framework, aims to protect data privacy while addressing the issue of data silos by collaboratively training models across multiple clients. However, a significant challenge to federated learning arises from the non-independent and identically distributed (non-iid) nature of data across different clients. non-iid data can lead to inconsistencies between the minimal loss experienced by individual clients and the global loss observed after the central server aggregates the local models, affecting the model’s convergence speed and generalization capability. To address this challenge, we propose a novel federated learning algorithm based on update bias (FedUB). Unlike traditional federated learning approaches such as FedAvg and FedProx, which independently update model parameters on each client before direct aggregation to form a global model, the FedUB algorithm incorporates an update bias in the loss function of local models—specifically, the difference between each round’s local model updates and the global model updates. This design aims to reduce discrepancies between local and global updates, thus aligning the parameters of locally updated models more closely with those of the globally aggregated model, thereby mitigating the fundamental conflict between local and global optima. Additionally, during the aggregation phase at the server side, we introduce a metric called the bias metric, which assesses the similarity between each client’s local model and the global model. This metric adaptively sets the weight of each client during aggregation after each training round to achieve a better global model. Extensive experiments conducted on multiple datasets have confirmed the effectiveness of the FedUB algorithm. The results indicate that FedUB generally outperforms methods such as FedDC, FedDyn, and Scaffold, especially in scenarios involving partial client participation and non-iid data distributions. It demonstrates superior performance and faster convergence in tasks such as image classification.
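The server-side idea of weighting each client by how similar its local model is to the global model can be sketched as cosine-similarity-weighted averaging; this is a hedged illustration of the general mechanism, not the exact FedUB bias metric or loss term.

```python
import numpy as np

def similarity_weighted_aggregate(global_model, local_models, eps=1e-8):
    """Average client models, weighting each by its cosine similarity to the
    current global model (clipped at zero so dissimilar clients cannot get
    negative weight)."""
    sims = []
    for local in local_models:
        cos = np.dot(global_model, local) / (
            np.linalg.norm(global_model) * np.linalg.norm(local) + eps)
        sims.append(max(cos, 0.0))
    weights = np.array(sims) / (np.sum(sims) + eps)
    new_global = np.sum([w * m for w, m in zip(weights, local_models)], axis=0)
    return new_global, weights

global_w = np.array([1.0, 0.0, 1.0])
clients = [np.array([0.9, 0.1, 1.1]),     # close to the global model
           np.array([1.2, -0.2, 0.8]),
           np.array([-1.0, 0.5, -1.0])]   # very different (e.g. heavily non-iid client)
new_global, w = similarity_weighted_aggregate(global_w, clients)
print(w.round(3), new_global.round(3))
```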
23

Liu, Bowen, and Qiang Tang. "Secure Data Sharing in Federated Learning through Blockchain-Based Aggregation." Future Internet 16, no. 4 (April 15, 2024): 133. http://dx.doi.org/10.3390/fi16040133.

Abstract:
In this paper, we explore the realm of federated learning (FL), a distributed machine learning (ML) paradigm, and propose a novel approach that leverages the robustness of blockchain technology. FL, a concept introduced by Google in 2016, allows multiple entities to collaboratively train an ML model without the need to expose their raw data. However, it faces several challenges, such as privacy concerns and malicious attacks (e.g., data poisoning attacks). Our paper examines the existing EIFFeL framework, a protocol for decentralized real-time messaging in continuous integration and delivery pipelines, and introduces an enhanced scheme that leverages the trustworthy nature of blockchain technology. Our scheme eliminates the need for a central server and any other third party, such as a public bulletin board, thereby mitigating the risks associated with the compromise of such third parties.
24

Papageorgiou, Konstantinos, Pramod K. Singh, Elpiniki Papageorgiou, Harpalsinh Chudasama, Dionysis Bochtis, and George Stamoulis. "Fuzzy Cognitive Map-Based Sustainable Socio-Economic Development Planning for Rural Communities." Sustainability 12, no. 1 (December 30, 2019): 305. http://dx.doi.org/10.3390/su12010305.

Abstract:
Every development and production process needs to operate within a circular economy to keep the human being within a safe limit of the planetary boundary. Policymakers are in the quest of a powerful and easy-to-use tool for representing the perceived causal structure of a complex system that could help them choose and develop the right strategies. In this context, fuzzy cognitive maps (FCMs) can serve as a soft computing method for modelling human knowledge and developing quantitative dynamic models. FCM-based modelling includes the aggregation of knowledge from a variety of sources involving multiple stakeholders, thus offering a more reliable final model. The average aggregation method for weighted interconnections among concepts is widely used in FCM modelling. In this research, we applied the OWA (ordered weighted averaging) learning operators in aggregating FCM weights, assigned by various participants/ stakeholders. Our case study involves a complex phenomenon of poverty eradication and socio-economic development strategies in rural areas under the DAY-NRLM (Deendayal Antyodaya Yojana-National Rural Livelihoods Mission) in India. Various scenarios examining the economic sustainability and livelihood diversification of poor women in rural areas were performed using the FCM-based simulation process implemented by the “FCMWizard” tool. The objective of this study was three-fold: (i) to perform a brief comparative analysis between the proposed aggregation method called “OWA learning aggregation” and the conventional average aggregation method, (ii) to identify the significant concepts and their impact on the examined FCM model regarding poverty alleviation, and (iii) to advance the knowledge of circular economy in the context of poverty alleviation. Overall, the proposed method can support policymakers in eliciting accurate outcomes of proposed policies that deal with social resilience and sustainable socio-economic development strategies.
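An ordered weighted averaging (OWA) operator sorts the stakeholders' proposed values and applies position-dependent weights to them; the sketch below shows the bare operator with invented numbers, not the learned OWA weights or the FCMWizard tool used in the study.

```python
def owa(values, owa_weights):
    """Ordered weighted averaging: sort the inputs in descending order, then take a
    weighted sum with position-dependent (not source-dependent) weights."""
    if len(values) != len(owa_weights) or abs(sum(owa_weights) - 1.0) > 1e-9:
        raise ValueError("OWA weights must match the inputs and sum to 1")
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(owa_weights, ordered))

# Four stakeholders propose a causal weight for the same FCM edge.
proposed = [0.8, 0.55, 0.6, 0.2]
print(owa(proposed, [0.25, 0.25, 0.25, 0.25]))  # plain average aggregation: 0.5375
print(owa(proposed, [0.1, 0.4, 0.4, 0.1]))      # de-emphasize the extreme opinions
```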
25

Wardell, Dean C., and Gilbert L. Peterson. "Fuzzy State Aggregation and Policy Hill Climbing for Stochastic Environments." International Journal of Computational Intelligence and Applications 06, no. 03 (September 2006): 413–28. http://dx.doi.org/10.1142/s1469026806001903.

Abstract:
Reinforcement learning is one of the more attractive machine learning technologies, due to its unsupervised learning structure and ability to continually learn even as the operating environment changes. Additionally, by applying reinforcement learning to multiple cooperative software agents (a multi-agent system) not only allows each individual agent to learn from its own experience, but also opens up the opportunity for the individual agents to learn from the other agents in the system, thus accelerating the rate of learning. This research presents the novel use of fuzzy state aggregation, as the means of function approximation, combined with the fastest policy hill climbing methods of Win or Lose Fast (WoLF) and policy-dynamics based WoLF (PD-WoLF). The combination of fast policy hill climbing and fuzzy state aggregation function approximation is tested in two stochastic environments: Tileworld and the simulated robot soccer domain, RoboCup. The Tileworld results demonstrate that a single agent using the combination of FSA and PHC learns quicker and performs better than combined fuzzy state aggregation and Q-learning reinforcement learning alone. Results from the multi-agent RoboCup domain again illustrate that the policy hill climbing algorithms perform better than Q-learning alone in a multi-agent environment. The learning is further enhanced by allowing the agents to share their experience through a weighted strategy sharing.
26

Zhang, Chengdong, Keke Li, Shaoqing Wang, Bin Zhou, Lei Wang, and Fuzhen Sun. "Learning Heterogeneous Graph Embedding with Metapath-Based Aggregation for Link Prediction." Mathematics 11, no. 3 (January 21, 2023): 578. http://dx.doi.org/10.3390/math11030578.

Abstract:
Along with the growth of graph neural networks (GNNs), many researchers have adopted metapath-based GNNs to handle complex heterogeneous graph embedding. The conventional definition of a metapath only distinguishes whether there is a connection between nodes in the network schema, where the type of edge is ignored. This leads to inaccurate node representation and subsequently results in suboptimal prediction performance. In heterogeneous graphs, a node can be connected by multiple types of edges. In fact, each type of edge represents one kind of scene. The intuition is that if the embedding of nodes is trained under different scenes, the complete representation of nodes can be obtained by organically combining them. In this paper, we propose a novel definition of a metapath whereby the edge type, i.e., the relation between nodes, is integrated into it. A heterogeneous graph can be considered as the compound of multiple relation subgraphs from the view of a novel metapath. In different subgraphs, the embeddings of a node are separately trained by encoding and aggregating the neighbors of the intrapaths, which are the instance levels of a novel metapath. Then, the final embedding of the node is obtained by the use of the attention mechanism which aggregates nodes from the interpaths, which is the semantic level of the novel metapaths. Link prediction is a downstream task by which to evaluate the effectiveness of the learned embeddings. We conduct extensive experiments on three real-world heterogeneous graph datasets for link prediction. The empirical results show that the proposed model outperforms the state-of-the-art baselines; in particular, when comparing it to the best baseline, the F1 metric is increased by 10.35% over an Alibaba dataset.
27

Nakai, Tsunato, Ye Wang, Kota Yoshida, and Takeshi Fujino. "SEDMA: Self-Distillation with Model Aggregation for Membership Privacy." Proceedings on Privacy Enhancing Technologies 2024, no. 1 (January 2024): 494–508. http://dx.doi.org/10.56553/popets-2024-0029.

Abstract:
Membership inference attacks (MIAs) are important measures to evaluate potential risks of privacy leakage from machine learning (ML) models. State-of-the-art MIA defenses have achieved favorable privacy-utility trade-offs using knowledge distillation on split training datasets. However, such defenses increase computational costs as a large number of the ML models must be trained on the split datasets. In this study, we proposed a new MIA defense, called SEDMA, based on self-distillation using model aggregation to mitigate the MIAs, inspired by the model parameter averaging as used in federated learning. The key idea of SEDMA is to split the training dataset into several parts and aggregate multiple ML models trained on each split for self-distillation. The intuitive explanation of SEDMA is that model aggregation prevents model over-fitting by smoothing information related to the training data among the multiple ML models and preserving the model utility, such as in federated learning. Through our experiments on major benchmark datasets (Purchase100, Texas100, and CIFAR100), we show that SEDMA outperforms state-of-the-art MIA defenses in terms of membership privacy (MIA accuracy), model accuracy, and computational costs. Specifically, SEDMA incurs at most approximately 3 - 5% model accuracy drop, while achieving the lowest MIA accuracy in state-of-the-art empirical MIA defenses. For computational costs, SEDMA takes significantly less processing time than a defense with the state-of-the-art privacy-utility trade-offs in previous defenses. SEDMA achieves both favorable privacy-utility trade-offs and low computational costs.
28

Gao, Yilin, and Fengzhu Sun. "Batch normalization followed by merging is powerful for phenotype prediction integrating multiple heterogeneous studies." PLOS Computational Biology 19, no. 10 (October 16, 2023): e1010608. http://dx.doi.org/10.1371/journal.pcbi.1010608.

Abstract:
Heterogeneity in different genomic studies compromises the performance of machine learning models in cross-study phenotype predictions. Overcoming heterogeneity when incorporating different studies in terms of phenotype prediction is a challenging and critical step for developing machine learning algorithms with reproducible prediction performance on independent datasets. We investigated the best approaches to integrate different studies of the same type of omics data under a variety of different heterogeneities. We developed a comprehensive workflow to simulate a variety of different types of heterogeneity and evaluate the performances of different integration methods together with batch normalization by using ComBat. We also demonstrated the results through realistic applications on six colorectal cancer (CRC) metagenomic studies and six tuberculosis (TB) gene expression studies, respectively. We showed that heterogeneity in different genomic studies can markedly negatively impact the machine learning classifier’s reproducibility. ComBat normalization improved the prediction performance of machine learning classifier when heterogeneous populations are present, and could successfully remove batch effects within the same population. We also showed that the machine learning classifier’s prediction accuracy can be markedly decreased as the underlying disease model became more different in training and test populations. Comparing different merging and integration methods, we found that merging and integration methods can outperform each other in different scenarios. In the realistic applications, we observed that the prediction accuracy improved when applying ComBat normalization with merging or integration methods in both CRC and TB studies. We illustrated that batch normalization is essential for mitigating both population differences of different studies and batch effects. We also showed that both merging strategy and integration methods can achieve good performances when combined with batch normalization. In addition, we explored the potential of boosting phenotype prediction performance by rank aggregation methods and showed that rank aggregation methods had similar performance as other ensemble learning approaches.
29

Bonawitz, Kallista, Peter Kairouz, Brendan McMahan, and Daniel Ramage. "Federated Learning and Privacy." Queue 19, no. 5 (October 31, 2021): 87–114. http://dx.doi.org/10.1145/3494834.3500240.

Abstract:
Centralized data collection can expose individuals to privacy risks and organizations to legal risks if data is not properly managed. Federated learning is a machine learning setting where multiple entities collaborate in solving a machine learning problem, under the coordination of a central server or service provider. Each client's raw data is stored locally and not exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objective. This article provides a brief introduction to key concepts in federated learning and analytics with an emphasis on how privacy technologies may be combined in real-world systems and how their use charts a path toward societal benefit from aggregate statistics in new domains and with minimized risk to individuals and to the organizations who are custodians of the data.
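The "focused updates intended for immediate aggregation" are most often combined by federated averaging, a data-size-weighted mean of client updates; here is a minimal sketch of that single aggregation step (with no privacy technology layered on top).

```python
import numpy as np

def federated_average(client_updates, client_sizes):
    """FedAvg-style aggregation: weight each client's model update by the number
    of local examples it was trained on."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()
    return np.sum([w * u for w, u in zip(weights, client_updates)], axis=0)

updates = [np.array([0.2, -0.1]),    # update from a client with 100 examples
           np.array([0.4, 0.3]),     # update from a client with 300 examples
           np.array([-0.1, 0.0])]    # update from a client with 600 examples
print(federated_average(updates, [100, 300, 600]))   # [0.08, 0.08]
```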
30

Mu, Shengdong, Boyu Liu, Chaolung Lien, and Nedjah Nadia. "Optimization of Personal Credit Evaluation Based on a Federated Deep Learning Model." Mathematics 11, no. 21 (October 31, 2023): 4499. http://dx.doi.org/10.3390/math11214499.

Abstract:
Financial institutions utilize data for the intelligent assessment of personal credit. However, the privacy of financial data is gradually increasing, and the training data of a single financial institution may exhibit problems regarding low data volume and poor data quality. Herein, by fusing federated learning with deep learning (FL-DL), we innovatively propose a dynamic communication algorithm and an adaptive aggregation algorithm as means of effectively solving the following problems, which are associated with personal credit evaluation: data privacy protection, distributed computing, and distributed storage. The dynamic communication algorithm utilizes a combination of fixed communication intervals and constrained variable intervals, which enables the federated system to utilize multiple communication intervals in a single learning task; thus, the performance of personal credit assessment models is enhanced. The adaptive aggregation algorithm proposes a novel aggregation weight formula. This algorithm enables the aggregation weights to be automatically updated, and it enhances the accuracy of individual credit assessment by exploiting the interplay between global and local models, which entails placing an additional but small computational burden on the powerful server side rather than on the resource-constrained client side. Finally, with regard to both algorithms and the FL-DL model, experiments and analyses are conducted using Lending Club financial company data; the results of the analysis indicate that both algorithms outperform the algorithms that are being compared and that the FL-DL model outperforms the advanced learning model.
31

Wang, Yabin, Zhiheng Ma, Zhiwu Huang, Yaowei Wang, Zhou Su, and Xiaopeng Hong. "Isolation and Impartial Aggregation: A Paradigm of Incremental Learning without Interference." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10209–17. http://dx.doi.org/10.1609/aaai.v37i8.26216.

Abstract:
This paper focuses on the prevalent stage interference and stage performance imbalance of incremental learning. To avoid obvious stage learning bottlenecks, we propose a new incremental learning framework, which leverages a series of stage-isolated classifiers to perform the learning task at each stage, without interference from others. To be concrete, to aggregate multiple stage classifiers as a uniform one impartially, we first introduce a temperature-controlled energy metric for indicating the confidence score levels of the stage classifiers. We then propose an anchor-based energy self-normalization strategy to ensure the stage classifiers work at the same energy level. Finally, we design a voting-based inference augmentation strategy for robust inference. The proposed method is rehearsal-free and can work for almost all incremental learning scenarios. We evaluate the proposed method on four large datasets. Extensive results demonstrate the superiority of the proposed method in setting up new state-of-the-art overall performance. Code is available at https://github.com/iamwangyabin/ESN.
32

Mbonu, Washington Enyinna, Carsten Maple, and Gregory Epiphaniou. "An End-Process Blockchain-Based Secure Aggregation Mechanism Using Federated Machine Learning." Electronics 12, no. 21 (November 5, 2023): 4543. http://dx.doi.org/10.3390/electronics12214543.

Abstract:
Federated Learning (FL) is a distributed Deep Learning (DL) technique that creates a global model through the local training of multiple edge devices. It uses a central server for model communication and the aggregation of post-trained models. The central server orchestrates the training process by sending each participating device an initial or pre-trained model for training. To achieve the learning objective, focused updates from edge devices are sent back to the central server for aggregation. While such an architecture and information flows can support the preservation of the privacy of participating device data, the strong dependence on the central server is a significant drawback of this framework. Having a central server could potentially lead to a single point of failure. Further, a malicious server may be able to successfully reconstruct the original data, which could impact on trust, transparency, fairness, privacy, and security. Decentralizing the FL process can successfully address these issues. Integrating a decentralized protocol such as Blockchain technology into Federated Learning techniques will help to address these issues and ensure secure aggregation. This paper proposes a Blockchain-based secure aggregation strategy for FL. Blockchain is implemented as a channel of communication between the central server and edge devices. It provides a mechanism of masking device local data for secure aggregation to prevent compromise and reconstruction of the training data by a malicious server. It enhances the scalability of the system, eliminates the threat of a single point of failure of the central server, reduces vulnerability in the system, ensures security, and transparent communication. Furthermore, our framework utilizes a fault-tolerant server to assist in handling dropouts and stragglers which can occur in federated environments. To reduce the training time, we synchronously implemented a callback or end-process mechanism once sufficient post-trained models have been returned for aggregation (threshold accuracy achieved). This mechanism resynchronizes clients with a stale and outdated model, minimizes the wastage of resources, and increases the rate of convergence of the global model.
33

Pires, Jorge Manuel, and Manuel Pérez Cota. "Metadata as an Aggregation Final Model in Learning Environments." International Journal of Technology Diffusion 7, no. 4 (October 2016): 36–59. http://dx.doi.org/10.4018/ijtd.2016100103.

Abstract:
Knowledge is a concept - like gravity. You cannot see it, but you can observe its effects. Minimize knowledge is an invisible, intangible asset and cannot be directly observed. Many people and organizations do not explicitly recognize the importance of knowledge, in contrast to their more visible financial and capital assets (Pires, 2016). To measure in a proper and impartial way it is necessary to teach in an imaginative and diverse way providing students with the maximum amount of information on a given problem, by means of multiple paths (Pires, 2016). Measuring knowledge or academic performance changing the learning curves of different cognitive functions it would be something that would change completely the learning/study methods and the ways of monitoring the progression of any student. More, it would be possible to achieve individually objectives for certain cognitive functions, through a learning curve less extensive because we would focus the attention in the fundamental details (Pires, 2016). The computer analysis of the answers and self-assessment provides multidimensional scores about the subject knowledge (Hunt, 2003). As intelligent living creatures that we are, we are not isolated from the surround space. We live on it, breath from it and have influence on us in many ways. For a correct evaluation of our behavior's we need to include in the equation all the possible factors that have the condition to affect us. That is only possible if we are always connected to everything and everything is connected to us. (Chen, 2002) defines the generic metadata attributes as a tight relation of: space, time, contents persons, events and objects related between them. (Chen, 2002) also use a layer description to establish from the ground up the structure of a lesson and a course. If we can establish links between all the subjects above we will achieve the ultimate learning experience. This is the objective of this paper, demonstrate that it is possible based in a ten years research - phase I.
34

Zhang, Yani, Huailin Zhao, Zuodong Duan, Liangjun Huang, Jiahao Deng, and Qing Zhang. "Congested Crowd Counting via Adaptive Multi-Scale Context Learning." Sensors 21, no. 11 (May 29, 2021): 3777. http://dx.doi.org/10.3390/s21113777.

Abstract:
In this paper, we propose a novel congested crowd counting network for crowd density estimation, i.e., the Adaptive Multi-scale Context Aggregation Network (MSCANet). MSCANet efficiently leverages the spatial context information to accomplish crowd density estimation in a complicated crowd scene. To achieve this, a multi-scale context learning block, called the Multi-scale Context Aggregation module (MSCA), is proposed to first extract different scale information and then adaptively aggregate it to capture the full scale of the crowd. Employing multiple MSCAs in a cascaded manner, the MSCANet can deeply utilize the spatial context information and modulate preliminary features into more distinguishing and scale-sensitive features, which are finally applied to a 1 × 1 convolution operation to obtain the crowd density results. Extensive experiments on three challenging crowd counting benchmarks showed that our model yielded compelling performance against the other state-of-the-art methods. To thoroughly prove the generality of MSCANet, we extend our method to two relevant tasks: crowd localization and remote sensing object counting. The extension experiment results also confirmed the effectiveness of MSCANet.
35

Lu, Yao, Keweiqi Wang, and Erbao He. "Many-to-Many Data Aggregation Scheduling Based on Multi-Agent Learning for Multi-Channel WSN." Electronics 11, no. 20 (October 18, 2022): 3356. http://dx.doi.org/10.3390/electronics11203356.

Abstract:
Many-to-many data aggregation has become an indispensable technique to realize the simultaneous executions of multiple applications with less data traffic load and less energy consumption in a multi-channel WSN (wireless sensor network). The problem of how to efficiently allocate time slot and channel for each node is one of the most critical problems for many-to-many data aggregation in multi-channel WSNs, and this problem can be solved with the new distributed scheduling method without communication conflict outlined in this paper. The many-to-many data aggregation scheduling process is abstracted as a decentralized partially observable Markov decision model in a multi-agent system. In the case of embedding cooperative multi-agent learning technology, sensor nodes with group observability work in a distributed manner. These nodes cooperated and exploit local feedback information to automatically learn the optimal scheduling strategy, then select the best time slot and channel for wireless communication. Simulation results show that the new scheduling method has advantages in performance when comparing with the existing methods.
36

Wang, Rong, and Wei-Tek Tsai. "Asynchronous Federated Learning System Based on Permissioned Blockchains." Sensors 22, no. 4 (February 21, 2022): 1672. http://dx.doi.org/10.3390/s22041672.

Abstract:
The existing federated learning framework is based on the centralized model coordinator, which still faces serious security challenges such as device differentiated computing power, single point of failure, poor privacy, and lack of Byzantine fault tolerance. In this paper, we propose an asynchronous federated learning system based on permissioned blockchains, using permissioned blockchains as the federated learning server, which is composed of a main-blockchain and multiple sub-blockchains, with each sub-blockchain responsible for partial model parameter updates and the main-blockchain responsible for global model parameter updates. Based on this architecture, a federated learning asynchronous aggregation protocol based on permissioned blockchain is proposed that can effectively alleviate the synchronous federated learning algorithm by integrating the learned model into the blockchain and performing two-order aggregation calculations. Therefore, the overhead of synchronization problems and the reliability of shared data is also guaranteed. We conducted some simulation experiments and the experimental results showed that the proposed architecture could maintain good training performances when dealing with a small number of malicious nodes and differentiated data quality, which has good fault tolerance, and can be applied to edge computing scenarios.
APA, Harvard, Vancouver, ISO and other citation styles
37

Zhou, Chendi, Ji Liu, Juncheng Jia, Jingbo Zhou, Yang Zhou, Huaiyu Dai and Dejing Dou. "Efficient Device Scheduling with Multi-Job Federated Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, No. 9 (28.06.2022): 9971–79. http://dx.doi.org/10.1609/aaai.v36i9.21235.

Full text of the source
Annotation:
Recent years have witnessed a large amount of decentralized data in multiple (edge) devices of end-users, while the aggregation of the decentralized data remains difficult for machine learning jobs due to laws or regulations. Federated Learning (FL) emerges as an effective approach to handling decentralized data without sharing the sensitive raw data, while collaboratively training global machine learning models. The servers in FL need to select (and schedule) devices during the training process. However, the scheduling of devices for multiple jobs with FL remains a critical and open problem. In this paper, we propose a novel multi-job FL framework to enable the parallel training process of multiple jobs. The framework consists of a system model and two scheduling methods. In the system model, we propose a parallel training process of multiple jobs, and construct a cost model based on the training time and the data fairness of various devices during the training process of diverse jobs. We propose a reinforcement learning-based method and a Bayesian optimization-based method to schedule devices for multiple jobs while minimizing the cost. We conduct extensive experimentation with multiple jobs and datasets. The experimental results show that our proposed approaches significantly outperform baseline approaches in terms of training time (up to 8.67 times faster) and accuracy (up to 44.6% higher).
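A hedged sketch of the kind of scheduling cost the abstract describes, trading off round time against fairness of device participation; the concrete formula and the parameter lam are assumptions for illustration only.

# Illustrative per-job scheduling cost: a round waits for the slowest selected
# device, and uneven participation across devices is penalised.
import numpy as np

def job_cost(round_times, selection_counts, lam=0.5):
    """round_times: estimated time of each selected device this round.
    selection_counts: how often each device has been selected so far."""
    time_cost = max(round_times)              # round is bounded by the slowest device
    fairness_cost = np.var(selection_counts)  # uneven participation is penalised
    return time_cost + lam * fairness_cost

# A scheduler (RL- or Bayesian-optimisation-based in the paper) would compare
# candidate device sets by such a cost:
print(job_cost([3.2, 4.1, 2.8], [5, 5, 6]))
print(job_cost([9.0, 2.1, 2.8], [9, 1, 6]))   # slower and less fair -> higher cost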
APA, Harvard, Vancouver, ISO and other citation styles
38

Yang, Fangfang, Yanxu Liu, Linlin Xu, Kui Li, Panpan Hu and Jixing Chen. "Vegetation-Ice-Bare Land Cover Conversion in the Oceanic Glacial Region of Tibet Based on Multiple Machine Learning Classifications". Remote Sensing 12, No. 6 (20.03.2020): 999. http://dx.doi.org/10.3390/rs12060999.

Full text of the source
Annotation:
Oceanic glaciers are one of the most sensitive indicators of climate change. However, remotely sensed evidence of land cover change in the oceanic glacial region is still limited due to the cloudy weather during the growing season. In addition, the performance of common machine learning classification algorithms is also worth testing in this cloudy, frigid and mountainous region. In this study, three algorithms, namely the random forest, back-propagation neural network (BPNN) and convolutional neural network algorithms, were compared in interpreting land cover change in south-eastern Tibet, yielding three findings. (1) The BPNN achieved the highest overall accuracy and Kappa coefficient of the three algorithms. The overall accuracy was 97.82%, 98.07%, 98.92%, and 94.63% in 1990, 2000, 2007, and 2016, and the Kappa coefficient was 0.958, 0.959, 0.980, and 0.918 in these four years, respectively. (2) From 1990 to 2000, the dominant land cover was ice at the landscape level. Landscape fragmentation decreased and landscape aggregation increased. From 2000 to 2016, the dominant land cover transformed from ice to vegetation. Vegetation aggregation increased, while ice aggregation decreased. (3) When the elevation was less than 4 km, vegetation was usually transformed into bare land; otherwise, the probability of direct transformation between vegetation and ice increased. These findings on land cover transformation in the oceanic glacial region, obtained with multiple classification algorithms, provide both long-term evidence and methodological indications for understanding recent environmental change in the “third pole”.
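For readers who want the comparison protocol in outline, the sketch below trains two of the three classifier families on synthetic data and reports overall accuracy and the Kappa coefficient; the data and model settings are placeholders, not the study's Landsat features.

# Sketch of the comparison protocol: train several classifiers on the same
# labelled samples and compare overall accuracy and the Kappa coefficient.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

X, y = make_classification(n_samples=2000, n_features=6, n_classes=3,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "bp_neural_net": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                   random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))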
APA, Harvard, Vancouver, ISO and other citation styles
39

Jin, Xuan, Yuanzhi Yao and Nenghai Yu. "Efficient secure aggregation for privacy-preserving federated learning based on secret sharing". JUSTC 53, No. 4 (2023): 1. http://dx.doi.org/10.52396/justc-2022-0116.

Full text of the source
Annotation:
Federated learning allows multiple mobile participants to jointly train a global model without revealing their local private data. Communication-computation cost and privacy preservation are key fundamental issues in federated learning. Existing secret sharing-based secure aggregation mechanisms for federated learning still suffer from significant additional costs, insufficient privacy preservation, and vulnerability to participant dropouts. In this paper, we aim to solve these issues by introducing flexible and effective secret sharing mechanisms into federated learning. We propose two novel privacy-preserving federated learning schemes: federated learning based on one-way secret sharing (FLOSS) and federated learning based on multishot secret sharing (FLMSS). Compared with the state-of-the-art works, FLOSS enables high privacy preservation while significantly reducing the communication cost by dynamically designing secretly shared content and objects. Meanwhile, FLMSS further reduces the additional cost and has the ability to efficiently enhance the robustness of participant dropouts in federated learning. Foremost, FLMSS achieves a satisfactory tradeoff between privacy preservation and communication-computation cost. Security analysis and performance evaluations on real datasets demonstrate the superiority of our proposed schemes in terms of model accuracy, privacy preservation, and cost reduction.
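The secret-sharing principle behind such secure aggregation can be shown in a few lines: each client splits its update into additive shares so that no single server sees the raw update, yet the sum of all shares reconstructs the sum of the updates. This is the generic additive scheme, not FLOSS or FLMSS themselves.

# Generic additive secret sharing for secure aggregation: servers only ever
# see shares, but the sum of all server totals equals the sum of the updates.
import numpy as np

rng = np.random.default_rng(0)

def additive_shares(update, n_shares):
    shares = [rng.normal(size=update.shape) for _ in range(n_shares - 1)]
    shares.append(update - sum(shares))      # shares sum back to the update
    return shares

client_updates = [np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 0.5])]
NUM_SERVERS = 2

# Each server s accumulates the s-th share of every client.
server_sums = [np.zeros(2) for _ in range(NUM_SERVERS)]
for update in client_updates:
    for s, share in enumerate(additive_shares(update, NUM_SERVERS)):
        server_sums[s] += share

aggregate = sum(server_sums)                 # equals the sum of the raw updates
print(aggregate, sum(client_updates))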
APA, Harvard, Vancouver, ISO and other citation styles
40

Speck, David, André Biedenkapp, Frank Hutter, Robert Mattmüller and Marius Lindauer. "Learning Heuristic Selection with Dynamic Algorithm Configuration". Proceedings of the International Conference on Automated Planning and Scheduling 31 (17.05.2021): 597–605. http://dx.doi.org/10.1609/icaps.v31i1.16008.

Full text of the source
Annotation:
A key challenge in satisficing planning is to use multiple heuristics within one heuristic search. An aggregation of multiple heuristic estimates, for example by taking the maximum, has the disadvantage that bad estimates of a single heuristic can negatively affect the whole search. Since the performance of a heuristic varies from instance to instance, approaches such as algorithm selection can be successfully applied. In addition, alternating between multiple heuristics during the search makes it possible to use all heuristics equally and improve performance. However, all these approaches ignore the internal search dynamics of a planning system, which can help to select the most useful heuristics for the current expansion step. We show that dynamic algorithm configuration can be used for dynamic heuristic selection which takes into account the internal search dynamics of a planning system. Furthermore, we prove that this approach generalizes over existing approaches and that it can exponentially improve the performance of the heuristic search. To learn dynamic heuristic selection, we propose an approach based on reinforcement learning and show empirically that domain-wise learned policies, which take the internal search dynamics of a planning system into account, can exceed existing approaches.
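A toy skeleton of dynamic heuristic selection, assuming a simple epsilon-greedy policy over running reward estimates; the paper learns a much richer RL policy over internal search-dynamics features, and the reward here is a placeholder.

# Skeleton: before each expansion step, pick one of several heuristics from
# running reward estimates; rewards would come from observed search progress.
import random

class HeuristicSelector:
    def __init__(self, num_heuristics, epsilon=0.1):
        self.values = [0.0] * num_heuristics   # running mean reward per heuristic
        self.counts = [0] * num_heuristics
        self.epsilon = epsilon

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda i: self.values[i])

    def update(self, i, reward):
        self.counts[i] += 1
        self.values[i] += (reward - self.values[i]) / self.counts[i]

# Usage inside a (pseudo) search loop:
selector = HeuristicSelector(num_heuristics=3)
for step in range(1000):
    h = selector.select()            # heuristic to use for this expansion step
    reward = random.random()         # placeholder for observed search progress
    selector.update(h, reward)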
APA, Harvard, Vancouver, ISO and other citation styles
41

Li, Lu, Jiwei Qin and Jintao Luo. "A Blockchain-Based Federated-Learning Framework for Defense against Backdoor Attacks". Electronics 12, No. 11 (01.06.2023): 2500. http://dx.doi.org/10.3390/electronics12112500.

Full text of the source
Annotation:
Federated learning (FL) is a technique in which multiple participants update their local models with private data and aggregate these models via a central server. Unfortunately, central servers are prone to single points of failure during the aggregation process, which can lead to data leakage and other problems. Although many studies have shown that a blockchain can solve the single point of failure of servers, blockchains cannot identify or mitigate the effect of backdoor attacks. Therefore, this paper proposes a blockchain-based FL framework for defense against backdoor attacks. The framework utilizes blockchains to record transactions in an immutable distributed ledger network and enables decentralized FL. Furthermore, by incorporating the reverse layer-wise relevance (RLR) aggregation strategy into the participants' aggregation algorithm and adding gradient noise, the framework substantially limits the effectiveness of backdoor attacks. In addition, we designed a new proof-of-stake mechanism that considers both the historical stakes of participants and model accuracy when selecting the miners of the local models, thereby reducing the stake rewards of malicious participants and motivating them to upload honest model parameters. Our simulation results confirm that, with 10% malicious participants, the success rate of backdoor injection is reduced by nearly 90% compared to Vanilla FL, and the stake income of malicious devices is the lowest.
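One ingredient of the defence, adding noise to bound each participant's influence, can be sketched as follows; the clipping norm and noise scale are illustrative, and the paper's RLR aggregation strategy is not reproduced here.

# Hedged sketch: clip each participant's update and add Gaussian noise before
# aggregation, which limits the influence of any single (possibly backdoored)
# contribution.
import numpy as np

rng = np.random.default_rng(0)

def clip_and_noise(update, clip_norm=1.0, noise_std=0.05):
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        update = update * (clip_norm / norm)    # bound each contribution
    return update + rng.normal(scale=noise_std, size=update.shape)

updates = [np.array([0.3, -0.2]), np.array([5.0, 4.0]), np.array([0.1, 0.0])]
aggregated = np.mean([clip_and_noise(u) for u in updates], axis=0)
print(aggregated)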
APA, Harvard, Vancouver, ISO and other citation styles
42

Mao, Axiu, Endai Huang, Haiming Gan and Kai Liu. "FedAAR: A Novel Federated Learning Framework for Animal Activity Recognition with Wearable Sensors". Animals 12, No. 16 (21.08.2022): 2142. http://dx.doi.org/10.3390/ani12162142.

Full text of the source
Annotation:
Deep learning dominates automated animal activity recognition (AAR) tasks due to its high performance on large-scale datasets. However, constructing centralised datasets across diverse farms raises data privacy issues. Federated learning (FL) provides a distributed learning solution that trains a shared model by coordinating multiple farms (clients) without sharing their private data, but directly applying FL to AAR tasks often faces two challenges: client drift during local training and local gradient conflicts during global aggregation. In this study, we develop a novel FL framework called FedAAR to achieve AAR with wearable sensors. Specifically, we devise a prototype-guided local update module to alleviate the client-drift issue, which introduces a global prototype as shared knowledge to force clients to learn consistent features. To reduce gradient conflicts between clients, we design a gradient-refinement-based aggregation module that eliminates conflicting components between local gradients during global aggregation, thereby improving agreement between clients. Experiments are conducted on a public dataset consisting of 87,621 two-second accelerometer and gyroscope samples to verify FedAAR's effectiveness. The results demonstrate that FedAAR outperforms the state-of-the-art in precision (75.23%), recall (75.17%), F1-score (74.70%), and accuracy (88.88%). Ablation experiments show FedAAR's robustness to various factors (i.e., data sizes, communication frequency, and client numbers).
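The gradient-refinement idea can be loosely illustrated by projecting out conflicting components between client gradients before averaging; this is a generic conflict-removal sketch, not the exact FedAAR module.

# Loose sketch: whenever two client gradients point in opposing directions,
# subtract the conflicting projection before averaging.
import numpy as np

def refine(gradients):
    refined = [g.astype(float).copy() for g in gradients]
    for i, g_i in enumerate(refined):
        for j, g_j in enumerate(gradients):
            if i == j:
                continue
            dot = g_i @ g_j
            if dot < 0:                                   # conflicting directions
                g_i -= dot / (np.linalg.norm(g_j) ** 2 + 1e-12) * g_j
    return np.mean(refined, axis=0)

grads = [np.array([1.0, 1.0]), np.array([-1.0, 0.5]), np.array([0.8, 0.9])]
print(refine(grads))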
APA, Harvard, Vancouver, ISO and other citation styles
43

Liu, Tong, Akash Venkatachalam, Pratik Sanjay Bongale and Christopher M. Homan. "Learning to Predict Population-Level Label Distributions". Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (28.10.2019): 68–76. http://dx.doi.org/10.1609/hcomp.v7i1.5286.

Full text of the source
Annotation:
As machine learning (ML) plays an ever-increasing role in commerce, government, and daily life, reports of bias in ML systems against groups traditionally underrepresented in computing technologies have also increased. The problem appears to be extensive, yet it remains challenging even to fully assess its scope, let alone fix it. A fundamental reason is that ML systems are typically trained to predict one correct answer or set of answers; disagreements between the annotators who provide the training labels are resolved either by discarding minority opinions (which may or may not correspond to demographic minorities) or by presenting all opinions flatly, with no attempt to quantify how different answers might be distributed in society. Label distribution learning associates with each data item a probability distribution over the labels for that item. Whether or not such distributions are representative of minority beliefs, they at least preserve diversities of opinion that conventional learning hides or ignores and represent a fundamental first step toward ML systems that can model diversity. We introduce a strategy for learning label distributions with only five to ten labels per item (a range that is typical of supervised learning datasets) by aggregating human-annotated labels over multiple, similarly rated data items. Our results suggest that specific label aggregation methods can help provide reliable, representative predictions at the population level.
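The aggregation step can be illustrated with a short sketch: pool the few labels collected for an item with the labels of similarly rated items and normalise the counts into a distribution. The grouping rule used here (items already placed in one bucket) is a placeholder for the paper's similarity criterion.

# Pool the labels of similarly rated items and normalise counts into a
# label distribution.
from collections import Counter

def label_distribution(label_sets, labels=range(1, 6)):
    counts = Counter(l for labels_for_item in label_sets for l in labels_for_item)
    total = sum(counts.values())
    return {l: counts[l] / total for l in labels}

# Three items that ended up in the same rating bucket, five labels each.
similar_items = [
    [3, 3, 4, 2, 3],
    [3, 4, 4, 3, 2],
    [2, 3, 3, 3, 4],
]
print(label_distribution(similar_items))
# {1: 0.0, 2: 0.2, 3: 0.533..., 4: 0.266..., 5: 0.0}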
APA, Harvard, Vancouver, ISO and other citation styles
44

Li, Qingtie, Xuemei Wang and Shougang Ren. "A Privacy Robust Aggregation Method Based on Federated Learning in the IoT". Electronics 12, No. 13 (05.07.2023): 2951. http://dx.doi.org/10.3390/electronics12132951.

Full text of the source
Annotation:
Federated learning has been widely applied because it enables a large number of IoT devices to conduct collaborative training while keeping private data local. However, the security risks and threats faced by federated learning in IoT applications are becoming increasingly prominent. Beyond direct data leakage, there is also the threat that attackers interpret gradients to infer private information. This paper proposes a Privacy Robust Aggregation method based on Federated Learning (PBA), which can be applied to multi-server scenarios. PBA filters outliers by using the approximate Euclidean distance calculated from binary sequences together with the 3σ criterion. The paper then provides correctness and computational complexity analyses of the PBA aggregation process, and evaluates PBA's performance in terms of privacy and robustness. The results indicate that PBA can resist both Byzantine attacks and a state-of-the-art privacy inference attack, i.e., PBA ensures both privacy and robustness.
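A rough sketch of the outlier-filtering step: updates whose distance to a robust centre violates the 3σ rule are dropped before averaging. Exact Euclidean distance is used below as a stand-in for PBA's approximate, binary-sequence-based distance.

# 3-sigma outlier filtering of client updates before averaging.
import numpy as np

rng = np.random.default_rng(0)

def filter_and_aggregate(updates):
    updates = np.asarray(updates, dtype=float)
    centre = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - centre, axis=1)
    keep = dists <= dists.mean() + 3 * dists.std()   # 3-sigma rule
    return updates[keep].mean(axis=0), keep

benign = rng.normal(loc=[0.1, 0.2], scale=0.05, size=(20, 2))
malicious = np.array([[9.0, -7.0]])
aggregate, kept = filter_and_aggregate(np.vstack([benign, malicious]))
print(aggregate, kept.sum(), "of", kept.size, "updates kept")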
APA, Harvard, Vancouver, ISO and other citation styles
45

Wu, Xia, Lei Xu and Liehuang Zhu. "Local Differential Privacy-Based Federated Learning under Personalized Settings". Applied Sciences 13, No. 7 (24.03.2023): 4168. http://dx.doi.org/10.3390/app13074168.

Full text of the source
Annotation:
Federated learning is a distributed machine learning paradigm that utilizes multiple clients' data to train a model. Although federated learning does not require clients to disclose their original data, studies have shown that attackers can infer clients' private information by analyzing the local models shared by clients. Local differential privacy (LDP) can help to solve this privacy issue. However, most existing LDP-based federated learning studies rarely consider the diverse privacy requirements of clients. In this paper, we propose an LDP-based federated learning framework that can meet the personalized privacy requirements of clients. We consider both independent and identically distributed (IID) and non-IID datasets, and design model perturbation methods for each. Moreover, we propose two model aggregation methods, a weighted average method and a probability-based selection method. The main idea is to weaken the impact on the federated model of privacy-conscious clients who choose relatively small privacy budgets. Experiments on three commonly used datasets, namely MNIST, Fashion-MNIST, and forest cover-types, show that the proposed aggregation methods perform better than the classic arithmetic average method in the personalized privacy-preserving scenario.
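The weighted-average idea can be sketched in a few lines: clients that chose smaller privacy budgets (stronger perturbation) contribute less to the federated model. Weighting directly by a normalised epsilon is an illustrative choice, not the paper's exact weighting or its probability-based variant.

# Weight client models by their (normalised) privacy budgets: heavily
# perturbed models get less influence.
import numpy as np

def privacy_weighted_average(models, epsilons):
    eps = np.asarray(epsilons, dtype=float)
    weights = eps / eps.sum()
    return sum(w * m for w, m in zip(weights, models))

models = [np.array([0.2, 0.4]), np.array([0.3, 0.1]), np.array([0.25, 0.3])]
epsilons = [0.5, 2.0, 8.0]          # larger epsilon = weaker perturbation
print(privacy_weighted_average(models, epsilons))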
APA, Harvard, Vancouver, ISO and other citation styles
46

Wang, Mengdi, Anna Bodonhelyi, Efe Bozkir and Enkelejda Kasneci. "TurboSVM-FL: Boosting Federated Learning through SVM Aggregation for Lazy Clients". Proceedings of the AAAI Conference on Artificial Intelligence 38, No. 14 (24.03.2024): 15546–54. http://dx.doi.org/10.1609/aaai.v38i14.29481.

Full text of the source
Annotation:
Federated learning is a distributed collaborative machine learning paradigm that has gained strong momentum in recent years. In federated learning, a central server periodically coordinates models with clients and aggregates the models trained locally by clients without requiring access to local data. Despite its potential, the implementation of federated learning continues to encounter several challenges, predominantly slow convergence that is largely due to data heterogeneity. Slow convergence becomes particularly problematic in cross-device federated learning scenarios where clients may be strongly limited in computing power and storage space, so countermeasures that add computation or memory cost on the client side, such as auxiliary objective terms and longer local training, can be impractical. In this paper, we propose a novel federated aggregation strategy, TurboSVM-FL, that poses no additional computation burden on the client side and can significantly accelerate convergence for federated classification tasks, especially when clients are "lazy" and train their models for only a few epochs before the next global aggregation. TurboSVM-FL extensively utilizes support vector machines to conduct selective aggregation and max-margin spread-out regularization on class embeddings. We evaluate TurboSVM-FL on multiple datasets, including FEMNIST, CelebA, and Shakespeare, using user-independent validation with non-IID data distributions. Our results show that TurboSVM-FL significantly outperforms existing popular algorithms in convergence rate and reduces communication rounds while delivering better test metrics, including accuracy, F1 score, and MCC.
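As a loose reading of the abstract (not the published TurboSVM-FL algorithm), the sketch below fits a linear SVM on per-client class embeddings and aggregates only the embeddings that end up as support vectors; all data and dimensions are synthetic.

# Selective aggregation guided by an SVM's support vectors, as a free
# illustration of the idea of SVM-based aggregation of class embeddings.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

num_clients, dim = 30, 8
# Each client uploads one embedding per class (two classes here).
class0 = rng.normal(loc=-1.0, size=(num_clients, dim))
class1 = rng.normal(loc=+1.0, size=(num_clients, dim))
X = np.vstack([class0, class1])
y = np.array([0] * num_clients + [1] * num_clients)

svm = SVC(kernel="linear").fit(X, y)
support_mask = np.zeros(len(X), dtype=bool)
support_mask[svm.support_] = True

# Aggregate each class embedding from its support vectors only.
global_class0 = X[support_mask & (y == 0)].mean(axis=0)
global_class1 = X[support_mask & (y == 1)].mean(axis=0)
print(len(svm.support_), "of", len(X), "embeddings used for aggregation")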
APA, Harvard, Vancouver, ISO and other citation styles
47

Peng, Cheng, Ke Chen, Lidan Shou and Gang Chen. "CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 38, No. 13 (24.03.2024): 14581–89. http://dx.doi.org/10.1609/aaai.v38i13.29374.

Full text of the source
Annotation:
Multi-modal multi-label emotion recognition (MMER) aims to identify relevant emotions from multiple modalities. The challenge of MMER is how to effectively capture discriminative features for multiple labels from heterogeneous data. Recent studies are mainly devoted to exploring various fusion strategies to integrate multi-modal information into a unified representation for all labels. However, such a learning scheme not only overlooks the specificity of each modality but also fails to capture individual discriminative features for different labels. Moreover, dependencies of labels and modalities cannot be effectively modeled. To address these issues, this paper presents ContrAstive feature Reconstruction and AggregaTion (CARAT) for the MMER task. Specifically, we devise a reconstruction-based fusion mechanism to better model fine-grained modality-to-label dependencies by contrastively learning modal-separated and label-specific features. To further exploit the modality complementarity, we introduce a shuffle-based aggregation strategy to enrich co-occurrence collaboration among labels. Experiments on two benchmark datasets CMU-MOSEI and M3ED demonstrate the effectiveness of CARAT over state-of-the-art methods. Code is available at https://github.com/chengzju/CARAT.
APA, Harvard, Vancouver, ISO and other citation styles
48

Djebrouni, Yasmine, Nawel Benarba, Ousmane Touat, Pasquale De Rosa, Sara Bouchenak, Angela Bonifati, Pascal Felber, Vania Marangozova and Valerio Schiavoni. "Bias Mitigation in Federated Learning for Edge Computing". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, No. 4 (19.12.2023): 1–35. http://dx.doi.org/10.1145/3631455.

Full text of the source
Annotation:
Federated learning (FL) is a distributed machine learning paradigm that enables data owners to collaborate on training models while preserving data privacy. As FL effectively leverages decentralized and sensitive data sources, it is increasingly used in ubiquitous computing including remote healthcare, activity recognition, and mobile applications. However, FL raises ethical and social concerns as it may introduce bias with regard to sensitive attributes such as race, gender, and location. Mitigating FL bias is thus a major research challenge. In this paper, we propose Astral, a novel bias mitigation system for FL. Astral provides a novel model aggregation approach to select the most effective aggregation weights to combine FL clients' models. It guarantees a predefined fairness objective by constraining bias below a given threshold while keeping model accuracy as high as possible. Astral handles the bias of single and multiple sensitive attributes and supports all bias metrics. Our comprehensive evaluation on seven real-world datasets with three popular bias metrics shows that Astral outperforms state-of-the-art FL bias mitigation techniques in terms of bias mitigation and model accuracy. Moreover, we show that Astral is robust against data heterogeneity and scalable in terms of data size and number of FL clients. Astral's code base is publicly available.
APA, Harvard, Vancouver, ISO and other citation styles
49

Fallah, Mahdi, Parya Mohammadi, Mohammadreza NasiriFard and Pedram Salehpour. "Optimizing QoS Metrics for Software-Defined Networking in Federated Learning". Mobile Information Systems 2023 (09.10.2023): 1–10. http://dx.doi.org/10.1155/2023/3896267.

Full text of the source
Annotation:
In the modern and complex realm of networking, the pursuit of ideal QoS metrics is a fundamental objective aimed at maximizing network efficiency and user experiences. Nonetheless, the accomplishment of this task is hindered by the diversity of networks, the unpredictability of network conditions, and the rapid growth of multimedia traffic. This manuscript presents an innovative method for enhancing the QoS in SDN by combining the load-balancing capabilities of FL and genetic algorithms. The proposed solution aims to improve the dispersed aggregation of multimedia traffic by prioritizing data privacy and ensuring secure network load distribution. By using federated learning, multiple clients can collectively participate in the training process of a global model without compromising the privacy of their sensitive information. This method safeguards user privacy while facilitating the aggregation of distributed multimedia traffic. In addition, genetic algorithms are used to optimize network load balancing, thereby ensuring the efficient use of network resources and mitigating the risk of individual node overload. As a result of extensive testing, this research has demonstrated significant improvements in QoS measurements compared to traditional methods. Our proposed technique outperforms existing techniques such as RR, weighted RR, server load, LBBSRT, and dynamic server approaches in terms of CPU and memory utilization, as well as server requests across three testing servers. This novel methodology has applications in multiple industries, including telecommunications, multimedia streaming, and cloud computing. The proposed method presents an innovative strategy for addressing the optimization of QoS metrics in SDN environments, while preserving data privacy and optimizing network resource usage.
APA, Harvard, Vancouver, ISO and other citation styles
50

Wang, Shuohang, Yunshi Lan, Yi Tay, Jing Jiang and Jingjing Liu. "Multi-Level Head-Wise Match and Aggregation in Transformer for Textual Sequence Matching". Proceedings of the AAAI Conference on Artificial Intelligence 34, No. 05 (03.04.2020): 9209–16. http://dx.doi.org/10.1609/aaai.v34i05.6458.

Full text of the source
Annotation:
Transformer has been successfully applied to many natural language processing tasks. However, for textual sequence matching, simple matching between the representation of a pair of sequences might bring in unnecessary noise. In this paper, we propose a new approach to sequence pair matching with Transformer, by learning head-wise matching representations on multiple levels. Experiments show that our proposed approach can achieve new state-of-the-art performance on multiple tasks that rely only on pre-computed sequence-vector-representation, such as SNLI, MNLI-match, MNLI-mismatch, QQP, and SQuAD-binary.
APA, Harvard, Vancouver, ISO and other citation styles