Academic literature on the topic 'Federate learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Federate learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Federate learning"

1

Oktian, Yustus Eko, Brian Stanley, and Sang-Gon Lee. "Building Trusted Federated Learning on Blockchain." Symmetry 14, no. 7 (July 8, 2022): 1407. http://dx.doi.org/10.3390/sym14071407.

Abstract:
Federated learning enables multiple users to collaboratively train a global model using the users' private data on the users' local machines. This way, users are not required to share their training data with other parties, maintaining user privacy; however, the vanilla federated learning proposal is mainly assumed to run in a trusted environment, while actual implementations of federated learning are expected to be performed in untrusted domains. This paper aims to use blockchain as a trusted federated learning platform to realize the missing "running on untrusted domains" requirement. First, we investigate vanilla federated learning issues such as low client motivation, client dropouts, model poisoning, model stealing, and unauthorized access. From those issues, we design building-block solutions such as an incentive mechanism, a reputation system, peer-reviewed models, commitment hashes, and model encryption. We then construct the full-fledged blockchain-based federated learning protocol, including client registration, training, aggregation, and reward distribution. Our evaluations show that the proposed solutions make federated learning more reliable. Moreover, the proposed system can motivate participants to be honest and perform best-effort training to obtain higher rewards, while punishing malicious behaviors. Hence, running federated learning in an untrusted environment becomes possible.
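The commitment-hash building block mentioned above can be illustrated with a short, self-contained sketch: each client first publishes a hash of its update, later reveals the update, and the aggregator drops any submission whose revealed bytes no longer match the commitment. This is a minimal illustration of the general idea, not the authors' implementation; the helper names (`commit_update`, `aggregate_verified`) and the plain averaging are assumptions.

```python
import hashlib
import pickle

import numpy as np

def commit_update(update: np.ndarray) -> str:
    """Client side: publish a SHA-256 commitment before revealing the update."""
    return hashlib.sha256(pickle.dumps(update)).hexdigest()

def aggregate_verified(commitments, revealed_updates):
    """Server side: average only the updates that match their prior commitments."""
    valid = []
    for client_id, update in revealed_updates.items():
        digest = hashlib.sha256(pickle.dumps(update)).hexdigest()
        if digest == commitments.get(client_id):
            valid.append(update)
        # Mismatched commitments are evidence of tampering; those clients are skipped.
    return np.mean(valid, axis=0) if valid else None

# Toy round with three clients, one of which tampers after committing.
updates = {c: np.random.randn(4) for c in ("a", "b", "c")}
commitments = {c: commit_update(u) for c, u in updates.items()}
updates["c"] = updates["c"] + 10.0  # client "c" swaps its update after committing
print(aggregate_verified(commitments, updates))  # averages only "a" and "b"
```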
2

Li, Yanbin, Yue Li, Huanliang Xu, and Shougang Ren. "An Adaptive Communication-Efficient Federated Learning to Resist Gradient-Based Reconstruction Attacks." Security and Communication Networks 2021 (April 22, 2021): 1–16. http://dx.doi.org/10.1155/2021/9919030.

Abstract:
The widely deployed devices in the Internet of Things (IoT) have opened up a large amount of IoT data. Recently, federated learning has emerged as a promising solution that aims to protect user privacy on IoT devices by training a globally shared model. However, the devices in complex IoT environments pose great challenges to federated learning, which is vulnerable to gradient-based reconstruction attacks. In this paper, we comprehensively discuss the relationship between the security of the federated learning model and optimization technologies for decreasing communication overhead. To promote efficiency and security, we propose a defence strategy for federated learning that is suitable for resource-constrained IoT devices. The adaptive communication strategy adjusts the communication frequency and parameter compression by analysing the training loss so as to ensure the security of the model. The experiments show the efficiency of our proposed method in decreasing communication overhead while preventing private data leakage.
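As a rough sketch of an adaptive communication strategy of this kind (the exact rule in the paper may differ), a client can widen the interval between uploads and compress more aggressively while the training loss is still falling, and fall back to frequent, lightly compressed updates once progress stalls. The thresholds, the doubling/halving policy, and the top-k sparsifier below are illustrative assumptions.

```python
import numpy as np

def adapt_communication(loss_history, rounds_between_uploads, keep_ratio):
    """Illustrative policy: widen the upload interval and sparsify harder
    while the loss improves; tighten both when progress stalls."""
    if len(loss_history) < 2:
        return rounds_between_uploads, keep_ratio
    improvement = loss_history[-2] - loss_history[-1]
    if improvement > 1e-3:  # still improving: communicate less, compress more
        return min(rounds_between_uploads * 2, 32), max(keep_ratio / 2, 0.01)
    return max(rounds_between_uploads // 2, 1), min(keep_ratio * 2, 1.0)

def top_k_sparsify(grad, keep_ratio):
    """Keep only the largest-magnitude fraction of gradient entries."""
    k = max(1, int(grad.size * keep_ratio))
    idx = np.argsort(np.abs(grad), axis=None)[-k:]
    mask = np.zeros_like(grad)
    mask.flat[idx] = 1.0
    return grad * mask
```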
3

Bektemyssova, G. U., G. S. Bakirova, Sh G. Yermukhanbetova, A. Shyntore, D. B. Umutkulov, and Zh S. Mangysheva. "Analysis of the relevance and prospects of application of federate training." Bulletin of the National Engineering Academy of the Republic of Kazakhstan 92, no. 2 (June 30, 2024): 56–65. http://dx.doi.org/10.47533/2024.1606-146x.26.

Abstract:
This article examines federated learning (FL) as an innovative approach to machine learning that differs from traditional methods. In conventional machine learning (ML), data is collected on a central server to train the model. In FL, by contrast, the learning model is sent to the data distributed across local devices, and learning takes place directly on those devices. The article also discusses methods and algorithms of federated learning and identifies its advantages and real areas of application. FL is used in various fields, including work with medical data and with customers' personal data in sales companies. This approach is especially valuable for ensuring data confidentiality and privacy.
4

Shkurti, Lamir, and Mennan Selimi. "AdaptiveMesh: Adaptive Federate Learning for Resource-Constrained Wireless Environments." International Journal of Online and Biomedical Engineering (iJOE) 20, no. 14 (November 14, 2024): 22–37. http://dx.doi.org/10.3991/ijoe.v20i14.50559.

Abstract:
Federated learning (FL) presents a decentralized approach to model training, particularly beneficial in scenarios prioritizing data privacy, such as healthcare. This paper introduces AdaptiveMesh, an adaptive FL algorithm designed to optimize training efficiency in heterogeneous wireless environments. Through dynamic adjustment of training parameters based on client performance metrics, including central processing unit (CPU) utilization and accuracy trends, AdaptiveMesh aims to enhance model convergence and resource utilization. Experimental evaluations on heterogeneous client devices demonstrate the algorithm's effectiveness in improving model accuracy, stability, and training efficiency. Results indicate a significant impact of CPU adaptation on preventing client overloading and mitigating overheating risks. Furthermore, the results of one-way analysis of variance (ANOVA) and regression analysis highlight significant differences in CPU usage, accuracy, and epochs between devices with varying levels of hardware capability. These findings underscore the algorithm's potential for practical deployment in real-world edge computing environments, addressing challenges posed by heterogeneous device capabilities and resource constraints.
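A CPU-driven adjustment of the kind AdaptiveMesh describes can be sketched as follows; the thresholds and the choice to scale local epochs and batch size are illustrative assumptions rather than the published rules (psutil is a real library, used here only to sample utilization).

```python
import psutil

def adjust_training_load(local_epochs: int, batch_size: int):
    """Illustrative rule: back off when the device is near overload,
    ramp up the local workload when there is headroom."""
    cpu = psutil.cpu_percent(interval=1.0)  # % utilization sampled over one second
    if cpu > 85.0:                      # overload / overheating risk
        return max(1, local_epochs - 1), max(8, batch_size // 2)
    if cpu < 40.0:                      # headroom: train a little harder
        return local_epochs + 1, min(256, batch_size * 2)
    return local_epochs, batch_size
```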
5

Kholod, Ivan, Evgeny Yanaki, Dmitry Fomichev, Evgeniy Shalugin, Evgenia Novikova, Evgeny Filippov, and Mats Nordlund. "Open-Source Federated Learning Frameworks for IoT: A Comparative Review and Analysis." Sensors 21, no. 1 (December 29, 2020): 167. http://dx.doi.org/10.3390/s21010167.

Abstract:
The rapid development of Internet of Things (IoT) systems has led to the problem of managing and analyzing the large volumes of data that they generate. Traditional approaches that involve collecting data from IoT devices into one centralized repository for further analysis are not always applicable due to the large amount of collected data, the use of communication channels with limited bandwidth, security and privacy requirements, etc. Federated learning (FL) is an emerging approach that allows one to analyze data directly on the data sources and to federate the results of each analysis, yielding a result equivalent to that of traditional centralized data processing. FL is being actively developed, and currently there are several open-source frameworks that implement it. This article presents a comparative review and analysis of the existing open-source FL frameworks, including their applicability in IoT systems. The authors evaluated the following features of the frameworks: ease of use and deployment, development, analysis capabilities, accuracy, and performance. Three different data sets were used in the experiments: two signal data sets of different volumes and one image data set. To model low-power IoT devices, computing nodes with small resources were defined in the testbed. The research results revealed FL frameworks that can be applied in IoT systems now, but with certain restrictions on their use.
6

Srinivas, C., S. Venkatramulu, V. Chandra Shekar Rao, B. Raghuram, K. Vinay Kumar, and Sreenivas Pratapagiri. "Decentralized Machine Learning based Energy Efficient Routing and Intrusion Detection in Unmanned Aerial Network (UAV)." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 6s (June 13, 2023): 517–27. http://dx.doi.org/10.17762/ijritcc.v11i6s.6960.

Abstract:
Decentralized machine learning here takes the form of federated learning (FL), which enables multiple clients to work together to solve conventional distributed ML problems, coordinated by a central server, without disclosing locally stored sensitive information. This research relies heavily on machine learning and deep learning techniques. The next generation of wireless networks is anticipated to incorporate unmanned aerial vehicles (UAVs) such as drones into both civilian and military applications, and the use of artificial intelligence (AI), specifically machine learning (ML) methods, to enhance the intelligence of UAV networks is desirable and necessary for such uses. Unfortunately, most existing FL paradigms are still centralized, with a single entity accountable for network-wide ML model aggregation and fusion. This is inappropriate for UAV networks, which frequently feature unreliable nodes and connections, and it introduces a possible single point of failure. The high mobility of UAVs also brings challenges such as frequent packet loss and weak links between UAVs, which affect reliability when delivering data; unbalanced energy consumption causes early UAV failures and shortens the lifetime of the network, degrading the overall network. In this paper, we focus mainly on security techniques for maintaining a UAV network in a surveillance context, where information is collected from different kinds of sources. The trust policies are based on peer-to-peer information confirmed by the UAV network, and the proposed system uses a pre-shared UAV list or asymmetric encryption, so that false information can be identified when a UAV in the network is physically hijacked. A secure routing path is provided by a Secure Location with Intrusion Detection System (SLIDS), and conservation-of-energy-based prediction of link breakage is done by location-based energy-efficient routing (LEER) for discovering paths by degree connectivity. The proposed novel architecture, named Decentralized Federate Learning-Secure Location with Intrusion Detection System (DFL-SLIDS), achieves 98% routing overhead, 93% end-to-end delay, 92% energy efficiency, 86.4% PDR, and 97% throughput.
7

Tabaszewski, Maciej, Paweł Twardowski, Martyna Wiciak-Pikuła, Natalia Znojkiewicz, Agata Felusiak-Czyryca, and Jakub Czyżycki. "Machine Learning Approaches for Monitoring of Tool Wear during Grey Cast-Iron Turning." Materials 15, no. 12 (June 20, 2022): 4359. http://dx.doi.org/10.3390/ma15124359.

Abstract:
The dynamic development of new technologies enables choosing the optimal computing technique to achieve the required quality in today's manufacturing industries. One method of improving this determination process is machine learning. This paper compares different intelligent system methods to identify tool wear during the turning of gray cast-iron EN-GJL-250 using carbide cutting inserts. In these studies, the experimental investigation was conducted with three different cutting speeds vc (216, 314, and 433 m/min) and fixed values of the depth of cut ap and feed rate f. Furthermore, based on the vibration acceleration signals, appropriate measures were developed that were correlated with the tool condition. In this work, machine learning methods were used to predict tool condition; therefore, two tool classes were proposed, namely usable and unsuitable, and tool corner wear VBc = 0.3 mm was assumed as the wear criterion. The diagnostic measures based on acceleration vibration signals were selected as input to the models. Additionally, an assessment of the features significant for the division into the usable and unsuitable classes was carried out. Finally, this study evaluated the chosen methods (classification and regression tree, induced fuzzy rules, and artificial neural network) and selected the most effective model.
8

Launet, Laëtitia, Yuandou Wang, Adrián Colomer, Jorge Igual, Cristian Pulgarín-Ospina, Spiros Koulouzis, Riccardo Bianchi, et al. "Federating Medical Deep Learning Models from Private Jupyter Notebooks to Distributed Institutions." Applied Sciences 13, no. 2 (January 9, 2023): 919. http://dx.doi.org/10.3390/app13020919.

Abstract:
Deep learning-based algorithms have led to tremendous progress over recent years, but they face a bottleneck as their optimal development highly relies on access to large datasets. To mitigate this limitation, cross-silo federated learning has emerged as a way to train collaborative models among multiple institutions without having to share the raw data used for model training. However, although artificial intelligence experts have the expertise to develop state-of-the-art models and actively share their code through notebook environments, implementing a federated learning system in real-world applications entails significant engineering and deployment efforts. To reduce the complexity of federation setups and bridge the gap between federated learning and notebook users, this paper introduces the Notebook Federator, a solution that leverages the Jupyter environment as part of the federated learning pipeline and simplifies its automation. The feasibility of this approach is then demonstrated with a collaborative model solving a digital pathology image analysis task, in which the federated model reaches an accuracy of 0.8633 on the test set, compared to the centralized configurations for each institution obtaining 0.7881, 0.6514, and 0.8096, respectively. As a fast and reproducible tool, the proposed solution enables the deployment of a cross-country federated environment in only a few minutes.
9

Parekh, Nisha Harish, and Vrushali Shinde. "Federated Learning: A Paradigm Shift in Collaborative Machine Learning." International Journal of Scientific Research in Engineering and Management 08, no. 11 (November 10, 2024): 1–6. http://dx.doi.org/10.55041/ijsrem38501.

Abstract:
Federated learning (FL) has emerged as an exceptionally promising method within the realm of machine learning, enabling multiple entities to jointly train a global model while maintaining decentralized data. This paper presents a comprehensive review of federated learning methodologies, applications, and challenges. We begin by elucidating the fundamental concepts underlying FL, including federated optimization algorithms, communication protocols, and privacy-preserving techniques. Subsequently, we delve into various domains where FL has found significant traction, including healthcare, finance, and the Internet of Things (IoT), showcasing successful deployments and innovative strategies. Furthermore, we discuss the inherent challenges associated with federated learning, such as communication overhead, heterogeneity of data sources, and privacy concerns, and explore state-of-the-art solutions proposed in the literature. Finally, we outline future research directions in federated learning, including advancements in privacy-preserving techniques, scalability improvements, and the extension of FL to emerging domains. This thorough examination provides a valuable asset for researchers, practitioners, and policymakers keen on grasping the panorama of federated learning and its ramifications for collaborative machine learning in dispersed settings.
10

Shubin, B., T. Maksymyuk, O. Yaremko, L. Fabri, and D. Mrozek. "A Model for Integrating Federated Learning into 5th-Generation Mobile Networks" [МОДЕЛЬ ІНТЕГРАЦІЇ ФЕДЕРАТИВНОГО НАВЧАННЯ В МЕРЕЖІ МОБІЛЬНОГО ЗВ’ЯЗКУ 5-ГО ПОКОЛІННЯ]. Information and communication technologies, electronic engineering 2, no. 1 (August 2022): 26–35. http://dx.doi.org/10.23939/ictee2022.01.026.

Abstract:
This paper investigates the main advantages of using Federated Learning (FL) for sharing experiences between intelligent devices in the environment of 5th generation mobile communication networks. This approach makes it possible to build effective machine learning algorithms using confidential data, the loss of which may be undesirable or even dangerous for users. Therefore, for the tasks where the confidentiality of the data is required for processing and analysis, we suggest using Federated Learning (FL) approaches. In this case, all users' personal information will be processed locally on their devices. FL ensures the security of confidential data for subscribers, allows mobile network operators to reduce the amount of redundant information in the radio channel, and also allows optimizing the functioning of the mobile network. The paper presents a three-level model of integration of Federated Learning into the mobile network and describes the main features of this approach, as well as experimental studies that demonstrate the results of the proposed approach.

Dissertations / Theses on the topic "Federate learning"

1

Eriksson, Henrik. "Federated Learning in Large Scale Networks: Exploring Hierarchical Federated Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292744.

Abstract:
Federated learning faces a challenge when dealing with highly heterogeneous data, and it can sometimes be inadequate to adopt an approach where a single model is trained for use at all nodes in the network. Different approaches have been investigated to overcome this issue, such as adapting the trained model to each node, or clustering the nodes in the network and training a different model for each cluster, within which the data is less heterogeneous. In this work we study the possibilities of improving local model performance by utilizing the hierarchical setup that comes with clustering the participating clients in the network. Experiments are carried out featuring a Long Short-Term Memory network performing time-series forecasting to evaluate different approaches that utilize the hierarchical setup, comparing them to standard federated learning approaches. The experiments use a dataset collected by Ericsson AB consisting of handovers recorded at base stations in a European city. The hierarchical approaches did not show any benefit over common two-level approaches.
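The hierarchical setup studied in the thesis can be summarized with a generic cluster-then-global averaging sketch: members of a cluster are averaged first, and the resulting cluster models are then averaged into a global model. This is a plain illustration of the structure, not the thesis code; uniform weights and representing models as NumPy parameter vectors are assumptions.

```python
import numpy as np

def fedavg(models, weights=None):
    """Weighted average of model parameter vectors."""
    weights = np.ones(len(models)) if weights is None else np.asarray(weights, float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, models))

def hierarchical_round(clusters):
    """clusters: list of lists of client parameter vectors.
    Level 1 averages within each cluster; level 2 averages cluster models."""
    cluster_models = [fedavg(members) for members in clusters]
    return cluster_models, fedavg(cluster_models)

# Clients can then be served their cluster model (fitted to less
# heterogeneous data) instead of, or blended with, the global model.
```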
2

Taiello, Riccardo. "Apprentissage automatique sécurisé pour l'analyse collaborative des données de santé à grande échelle" [Secure machine learning for large-scale collaborative health data analysis]. Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4031.

Abstract:
This PhD thesis explores the integration of privacy preservation, medical imaging, and Federated Learning (FL) using advanced cryptographic methods. Within the context of medical image analysis, we develop a privacy-preserving image registration (PPIR) framework. This framework addresses the challenge of registering images confidentially, without revealing their contents. By extending classical registration paradigms, we incorporate cryptographic tools like secure multi-party computation and homomorphic encryption to perform these operations securely. These tools are vital as they prevent data leakage during processing. Given the challenges associated with the performance and scalability of cryptographic methods in high-dimensional data, we optimize our image registration operations using gradient approximations. Our focus extends to increasingly complex registration methods, such as rigid, affine, and non-linear approaches using cubic splines or diffeomorphisms, parameterized by time-varying velocity fields. We demonstrate how these sophisticated registration methods can integrate privacy-preserving mechanisms effectively across various tasks. Concurrently, the thesis addresses the challenge of stragglers in FL, emphasizing the role of Secure Aggregation (SA) in collaborative model training. We introduce "Eagle", a synchronous SA scheme designed to optimize participation by late-arriving devices, significantly enhancing computational and communication efficiencies. We also present "Owl", tailored for buffered asynchronous FL settings, consistently outperforming earlier solutions. Furthermore, in the realm of Buffered AsyncSA, we propose two novel approaches: "Buffalo" and "Buffalo+". "Buffalo" advances SA techniques for Buffered AsyncSA, while "Buffalo+" counters sophisticated attacks that traditional methods fail to detect, such as model replacement. This solution leverages the properties of incremental hash functions and explores the sparsity in the quantization of local gradients from client models. Both Buffalo and Buffalo+ are validated theoretically and experimentally, demonstrating their effectiveness in a new cross-device FL task for medical devices. Finally, this thesis has devoted particular attention to the translation of privacy-preserving tools into real-world applications, notably through the open-source FL framework Fed-BioMed. Contributions include one of the first practical SA implementations specifically designed for cross-silo FL among hospitals, showcasing several practical use cases.
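Secure aggregation schemes in this family generally build on the classic pairwise-masking idea: each pair of clients derives a shared random mask, one adds it and the other subtracts it, so all masks cancel in the sum and the server learns only the aggregate. The sketch below is that textbook construction, not Eagle, Owl, or Buffalo themselves; deriving the shared mask from a fixed seed is a simplification (a real scheme would use key agreement and handle dropouts).

```python
import numpy as np

def masked_update(client_id, all_ids, update, seed_base=1234):
    """For each pair (i, j), both peers derive the same mask; the smaller id
    adds it and the larger id subtracts it, so masks cancel in the sum."""
    masked = update.astype(np.float64)
    for other in all_ids:
        if other == client_id:
            continue
        pair_seed = seed_base + 1000 * min(client_id, other) + max(client_id, other)
        mask = np.random.default_rng(pair_seed).normal(size=update.shape)
        masked = masked + mask if client_id < other else masked - mask
    return masked

ids = [0, 1, 2]
updates = {i: np.random.default_rng(i).normal(size=4) for i in ids}
masked = [masked_update(i, ids, updates[i]) for i in ids]
assert np.allclose(sum(masked), sum(updates.values()))  # masks cancel exactly
```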
3

Mäenpää, Dylan. "Towards Peer-to-Peer Federated Learning: Algorithms and Comparisons to Centralized Federated Learning." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176778.

Abstract:
Due to privacy and regulatory reasons, sharing data between institutions can be difficult. Because of this, real-world data are not fully exploited by machine learning (ML). An emerging method is to train ML models with federated learning (FL), which enables clients to collaboratively train ML models without sharing raw training data. We explored peer-to-peer FL by extending a prominent centralized FL algorithm called Fedavg to function in a peer-to-peer setting, naming the extended algorithm FedavgP2P. Deep neural networks at 100 simulated clients were trained to recognize digits using FedavgP2P and the MNIST data set, and scenarios with IID and non-IID client data were studied. We compared FedavgP2P to Fedavg with respect to the models' convergence behaviors and communication costs. Additionally, we analyzed the connection between local client computation, the number of neighbors each client communicates with, and how that affects performance. We also attempted to improve the FedavgP2P algorithm with heuristics based on client identities and per-class F1-scores. The findings showed that with FedavgP2P, the mean model convergence behavior was comparable to that of a model trained with Fedavg. However, this came with a varying degree of variation in the 100 models' convergence behaviors and much greater communication costs (at least 14.9x more communication with FedavgP2P). By increasing the amount of local computation up to a certain level, communication costs could be saved. When the number of neighbors a client communicated with increased, the variation in the models' convergence behaviors decreased. The FedavgP2P heuristics did not show improved performance. In conclusion, the overall findings indicate that peer-to-peer FL is a promising approach.
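A FedavgP2P-style round can be sketched as follows: instead of uploading to a server, each client trains locally and then averages its parameters with those of its neighbors. This is a generic gossip-averaging illustration consistent with the thesis description; `local_train` stands in for any local SGD routine, and the uniform mixing over neighborhoods is an assumption.

```python
import numpy as np

def p2p_round(params, neighbors, local_train):
    """params: dict client -> parameter vector; neighbors: dict client -> peers.
    Each client trains locally, then averages over its own neighborhood."""
    trained = {c: local_train(p) for c, p in params.items()}
    return {
        c: np.mean([trained[c]] + [trained[n] for n in neighbors[c]], axis=0)
        for c in params
    }

# Toy run on a ring of four clients; 'training' is a stand-in update.
rng = np.random.default_rng(0)
params = {c: rng.normal(size=3) for c in range(4)}
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(10):
    params = p2p_round(params, ring, lambda p: 0.9 * p)
```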
4

Liang, Jiarong. "Federated Learning for Bioimage Classification." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420615.

5

Zhao, Qiwei. "Federated Learning with Heterogeneous Challenge." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/27399.

Abstract:
Federated learning allows the training of a model from the distributed data of many clients under the orchestration of a central server. With increasing concern about privacy, federated learning draws great attention from both academia and industry. However, the heterogeneity challenges introduced by the natural characteristics of federated learning settings significantly degrade the performance of federated learning methods. Specifically, these challenges include heterogeneous data challenges and heterogeneous scenario challenges. Data heterogeneity challenges refer to the significant differences between the datasets of numerous users; in federated learning, the data is stored separately on many distant clients, causing these challenges. Heterogeneous scenario challenges refer to the differences between the devices participating in federated learning; furthermore, the suitable models vary among the different scenarios. However, many existing federated learning methods use a single global model for all the devices' scenarios, which is not optimal under these two challenges. We propose a novel federated learning framework called local union in federated learning (LU-FL) to address these challenges. LU-FL incorporates a hierarchical knowledge distillation mechanism that effectively transfers knowledge among different models, so LU-FL can enable any number of models to be used on each client. Allocating specially designed models to different clients can mitigate the adverse effects caused by these challenges while further improving the accuracy of the output models. Extensive experimental results over several popular datasets demonstrate the effectiveness of our proposed method: it effectively reduces the harmful effects of the heterogeneity challenges, improving the accuracy of the final output models and the adaptability of the clients to various scenarios, and thereby letting federated learning methods be applied in more diverse scenarios. Keywords: federated learning, neural networks, knowledge distillation, computer vision
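The hierarchical knowledge distillation mechanism mentioned in the abstract rests on the standard distillation loss: a student model is trained to match both the true labels and a teacher model's temperature-softened output distribution. Below is that standard (Hinton-style) objective as a generic PyTorch sketch, not LU-FL's exact mechanism; the temperature and weighting values are assumptions.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy on hard labels plus KL divergence between the
    temperature-softened teacher and student distributions."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # conventional temperature-squared scaling of the soft term
    return alpha * hard + (1 - alpha) * soft
```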
6

Carlsson, Robert. "Privacy-Preserved Federated Learning: A survey of applicable machine learning algorithms in a federated environment." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-424383.

Abstract:
There is potential for collaborative machine learning in the fields of medicine and finance. These areas gather data that can be used to develop machine learning models capable of predicting everything from sickness in patients to acts of economic crime such as fraud. The problem is that the collected data is mostly of a confidential nature and must be handled with caution. This makes the standard way of doing machine learning, gathering data at one centralized server, undesirable; the safety of the data has to be taken into account. In this project we explore the federated learning approach of "bringing the code to the data, instead of the data to the code". It is a decentralized way of doing machine learning where models are trained on connected devices and data is never shared, keeping the data privacy-preserved.
7

Dinh, The Canh. "Distributed Algorithms for Fast and Personalized Federated Learning." Thesis, The University of Sydney, 2023. https://hdl.handle.net/2123/30019.

Abstract:
The significant increase in the number of cutting-edge user equipment (UE) results in phenomenal growth of the data volume generated at the edge. This shift fuels the booming trend of an emerging technique named federated learning (FL). In contrast to traditional methods in which data is collected and processed centrally, FL builds a global model from contributions of the UEs' models without sending private data, thereby effectively ensuring data privacy. However, FL faces challenges in non-identically distributed (non-IID) data, communication cost, and convergence rate. First, we propose first-order optimization FL algorithms named FedApprox and FEDL to improve the convergence rate. FedApprox exploits proximal stochastic variance-reduced gradient methods, and we extract insights from its convergence conditions via the algorithm's parameter control. We then propose FEDL to handle heterogeneous UE data and characterize the trade-off between local computation and global communication. Experimentally, FedApprox outperforms vanilla FedAvg, while FEDL outperforms FedApprox and FedAvg. Second, we consider communication between edges to be more costly than local computational overhead. We propose DONE, a distributed approximate Newton-type algorithm for communication-efficient federated edge learning. DONE approximates the Newton direction using classical Richardson iteration on each edge. Experimentally, DONE attains performance comparable to Newton's method and outperforms first-order algorithms. Finally, we address the non-IID issue by proposing pFedMe, a personalized FL algorithm using Moreau envelopes. pFedMe achieves quadratic speedup for strongly convex objectives and sublinear speedup of order 2/3 for smooth nonconvex objectives. We then propose FedU, a federated multitask learning algorithm using Laplacian regularization to leverage the relationships among the users' models. Experimentally, pFedMe outperforms FedAvg and Per-FedAvg, while FedU outperforms pFedMe and MOCHA.
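The Moreau-envelope formulation behind pFedMe can be stated compactly: each client i keeps a personalized model θ_i that is regularized toward the shared global model w, and the server optimizes the average of the resulting envelopes. This follows the general form in the pFedMe paper, with λ controlling the personalization strength:

```latex
\min_{w \in \mathbb{R}^d} F(w) := \frac{1}{N} \sum_{i=1}^{N} F_i(w),
\qquad
F_i(w) = \min_{\theta_i \in \mathbb{R}^d}
\left\{ f_i(\theta_i) + \frac{\lambda}{2}\,\lVert \theta_i - w \rVert^2 \right\}
```

Here f_i is client i's local loss; a large λ ties the personalized models closely to w, while a small λ lets them specialize to local data.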
8

Felix, Johannes Morsbach. "Hardened Model Aggregation for Federated Learning backed by Distributed Trust: Towards decentralizing Federated Learning using a Blockchain." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-423621.

Abstract:
Federated learning enables the training of machine learning models on isolated data islands but also introduces new security challenges. Besides training-data-poisoning and model-update-poisoning, centralized federated learning systems are subject to a third type of poisoning attack: model-aggregation-poisoning. In this type of attack an adversary tampers with the model aggregation in order to bias the model. This can cause immense harm and severely weaken the trust a model-consumer puts into federatively trained models. This thesis proposes a hardened model aggregation scheme based on decentralization to close such attack vectors by design. It replaces the central aggregation server with a combination of decentralized computing and decentralized storage. A reference implementation based on the Ethereum platform and the Interplanetary File System (IPFS) is compared to a classic centralized federated learning system in terms of model performance, communication cost and resilience against said attacks. This thesis shows that such a decentralized federated learning system effectively eliminates model-aggregation-poisoning and training-disruption attacks at the cost of increased network traffic while achieving identical model performance.
9

Leconte, Louis. "Compression and federated learning: an approach to frugal machine learning." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS107.

Abstract:
"Intelligent" devices and tools are gradually becoming the standard, as the implementation of algorithms based on artificial neural networks is experiencing widespread development. Neural networks are non-linear machine learning models with many parameters that manipulate high-dimensional objects and obtain state-of-the-art performance in various areas, such as image recognition, speech recognition, natural language processing, and recommendation systems. However, training a neural network on a device with lower computing capacity can be challenging, as it can imply cutting back on memory, computing time, or power. A natural approach to simplify this training is to use quantized neural networks, whose parameters and operations use efficient low-bit primitives. However, optimizing a function over a discrete set in high dimension is complex and can still be prohibitively expensive in terms of computational power. For this reason, many modern applications use a network of devices to store individual data and share the computational load. A new approach, federated learning, considers a distributed environment: data is stored on devices and a centralized server orchestrates the training process across multiple devices. In this thesis, we investigate different aspects of (stochastic) optimization with the goal of reducing energy costs for potentially very heterogeneous devices. The first two contributions of this work are dedicated to the case of quantized neural networks. Our first idea is based on an annealing strategy: we formulate the discrete optimization problem as a constrained optimization problem (where the size of the constraint is reduced over iterations). We then focus on a heuristic for training binary deep neural networks, a particular framework in which the parameters of the neural networks can have only two values. The rest of the thesis is about efficient federated learning. Following our contributions developed for training quantized neural networks, we integrate them into a federated environment. Then, we propose a novel unbiased compression technique that can be used in any gradient-based distributed optimization framework. Our final contributions address the particular case of asynchronous federated learning, where devices have different computational speeds and/or access to bandwidth. We first propose a contribution that reweights the contributions of distributed devices. Then, in our final work, through a detailed queuing dynamics analysis, we propose a significant improvement to the complexity bounds provided in the literature on asynchronous federated learning. In summary, this thesis presents novel contributions to the field of quantized neural networks and federated learning by addressing critical challenges and providing innovative solutions for efficient and sustainable learning in a distributed and heterogeneous environment. Although the potential benefits are promising, especially in terms of energy savings, caution is needed as a rebound effect could occur.
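A common family of unbiased compressors of the kind this thesis discusses is stochastic quantization: each coordinate is randomly rounded to a nearby grid level with probabilities chosen so that the compressed vector equals the original in expectation. The sketch below shows generic unbiased stochastic rounding, not the thesis's specific operator; the grid size is an assumption.

```python
import numpy as np

def stochastic_round(x, levels=16, rng=None):
    """Unbiased stochastic rounding to a uniform grid: each coordinate is
    rounded up with probability equal to its fractional part, so E[q(x)] = x."""
    rng = np.random.default_rng() if rng is None else rng
    scale = float(np.max(np.abs(x))) or 1.0
    y = x / scale * (levels - 1)
    low = np.floor(y)
    q = low + (rng.random(x.shape) < (y - low))  # Bernoulli(frac) round-up
    return q / (levels - 1) * scale

# Empirical unbiasedness check: the average of many compressions approaches x.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
mean_q = np.mean([stochastic_round(x, rng=rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(mean_q - x)))  # close to zero
```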
10

Adapa, Supriya. "TensorFlow Federated Learning: Application to Decentralized Data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
Machine learning is a complex discipline, but implementing machine learning models is far less daunting and difficult than it used to be, thanks to machine learning frameworks such as Google's TensorFlow Federated that ease the process of acquiring data, training models, serving predictions, and refining future results. There are an estimated 3 billion smartphones in the world and 7 billion connected devices, and these phones and devices are constantly generating new data. Traditional analytics and machine learning need that data to be centrally collected before it is processed to yield insights, ML models, and ultimately better products. This centralized approach can be problematic if the data is sensitive or expensive to centralize. Wouldn't it be better if we could run the data analysis and machine learning right on the devices where that data is generated, and still be able to aggregate together what's been learned? TensorFlow Federated (TFF) is an open-source framework for experimenting with machine learning and other computations on decentralized data. It implements an approach called Federated Learning (FL), which enables many participating clients to train shared ML models while keeping their data locally. TFF was designed based on Google's experience developing federated learning technology, where it powers ML models for mobile keyboard predictions and on-device search, and it puts a flexible, open framework for locally simulating decentralized computations into the hands of all TensorFlow users. Using Twitter datasets, we performed machine learning text classification of positive and negative tweets from a Twitter account.
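The federated averaging loop that TFF automates can be illustrated framework-free: each simulated client improves the model on its own data shard, and the server averages the results weighted by shard size. This is a conceptual sketch of FedAvg on a toy least-squares problem, not TFF's API (which varies across versions); all names here are illustrative.

```python
import numpy as np

def client_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient steps on squared error."""
    for _ in range(epochs):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_round(w, shards):
    """Average locally trained models, weighted by each shard's size."""
    new_ws = [client_update(w, X, y) for X, y in shards]
    sizes = np.array([len(y) for _, y in shards], dtype=float)
    return sum(s * nw for s, nw in zip(sizes / sizes.sum(), new_ws))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
shards = []
for _ in range(3):  # three clients; raw data never leaves its shard
    X = rng.normal(size=(50, 2))
    shards.append((X, X @ true_w + 0.1 * rng.normal(size=50)))
w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, shards)
print(w)  # approaches true_w without pooling the raw data
```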

Books on the topic "Federate learning"

1

Yang, Qiang, Yang Liu, Yong Cheng, Yan Kang, Tianjian Chen, and Han Yu. Federated Learning. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-01585-4.

2

Ludwig, Heiko, and Nathalie Baracaldo, eds. Federated Learning. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96896-0.

3

Yang, Qiang, Lixin Fan, and Han Yu, eds. Federated Learning. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63076-8.

4

Jin, Yaochu, Hangyu Zhu, Jinjin Xu, and Yang Chen. Federated Learning. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-7083-2.

5

Uddin, M. Irfan, and Wali Khan Mashwani. Federated Learning. Boca Raton: CRC Press, 2024. http://dx.doi.org/10.1201/9781003466581.

6

Sahoo, Jayakrushna, Mariya Ouaissa, and Akarsh K. Nair. Federated Learning. New York: Apple Academic Press, 2024. http://dx.doi.org/10.1201/9781003497196.

7

Rehman, Muhammad Habib ur, and Mohamed Medhat Gaber, eds. Federated Learning Systems. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70604-3.

8

Goebel, Randy, Han Yu, Boi Faltings, Lixin Fan, and Zehui Xiong, eds. Trustworthy Federated Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-28996-5.

9

Razavi-Far, Roozbeh, Boyu Wang, Matthew E. Taylor, and Qiang Yang, eds. Federated and Transfer Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-11748-0.

10

Krishnan, Saravanan, A. Jose Anand, R. Srinivasan, R. Kavitha, and S. Suresh. Handbook on Federated Learning. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003384854.


Book chapters on the topic "Federate learning"

1

Rehman, Atiq Ur, Samir Brahim Belhaouari, Tanya Stanko, and Vladimir Gorovoy. "Divide to Federate Clustering Concept for Unsupervised Learning." In Proceedings of Seventh International Congress on Information and Communication Technology, 19–29. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2397-5_3.

2

Huang, Hai, Wei Wu, Xin Tang, and Zhong Zhou. "Federate Migration in Grid-Based Virtual Wargame Collaborative Environment." In Technologies for E-Learning and Digital Entertainment, 606–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11736639_75.

3

Jin, Yaochu, Hangyu Zhu, Jinjin Xu, and Yang Chen. "Summary and Outlook." In Federated Learning, 213–15. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-7083-2_5.

4

Jin, Yaochu, Hangyu Zhu, Jinjin Xu, and Yang Chen. "Introduction." In Federated Learning, 1–92. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-7083-2_1.

5

Jin, Yaochu, Hangyu Zhu, Jinjin Xu, and Yang Chen. "Communication Efficient Federated Learning." In Federated Learning, 93–137. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-7083-2_2.

6

Jin, Yaochu, Hangyu Zhu, Jinjin Xu, and Yang Chen. "Secure Federated Learning." In Federated Learning, 165–212. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-7083-2_4.

7

Jin, Yaochu, Hangyu Zhu, Jinjin Xu, and Yang Chen. "Evolutionary Multi-objective Federated Learning." In Federated Learning, 139–64. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-7083-2_3.

8

Yang, Qiang, Yang Liu, Yong Cheng, Yan Kang, Tianjian Chen, and Han Yu. "Incentive Mechanism Design for Federated Learning." In Federated Learning, 95–105. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-01585-4_7.

9

Yang, Qiang, Yang Liu, Yong Cheng, Yan Kang, Tianjian Chen, and Han Yu. "Introduction." In Federated Learning, 1–15. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-01585-4_1.

10

Yang, Qiang, Yang Liu, Yong Cheng, Yan Kang, Tianjian Chen, and Han Yu. "Vertical Federated Learning." In Federated Learning, 69–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-01585-4_5.


Conference papers on the topic "Federate learning"

1

Seo, Seonguk, Jinkyu Kim, Geeho Kim, and Bohyung Han. "Relaxed Contrastive Learning for Federated Learning." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12279–88. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.01167.

2

Albuquerque, R. A., L. P. Dias, Momo Ziazet, K. Vandikas, S. Ickin, B. Jaumard, C. Natalino, L. Wosinska, P. Monti, and E. Wong. "Asynchronous Federated Split Learning." In 2024 IEEE 8th International Conference on Fog and Edge Computing (ICFEC), 11–18. IEEE, 2024. http://dx.doi.org/10.1109/icfec61590.2024.00010.

3

Oh, Seungeun, Jihong Park, Praneeth Vepakomma, Sihun Baek, Ramesh Raskar, Mehdi Bennis, and Seong-Lyun Kim. "LocFedMix-SL: Localize, Federate, and Mix for Improved Scalability, Convergence, and Latency in Split Learning." In WWW '22: The ACM Web Conference 2022. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3485447.3512153.

4

Costa, Arthur N. F. Martins da, and Pedro Silva. "Computação, Saúde e Segurança: Explorando o Potencial da Aprendizagem Federada na Detecção de Arritmias Cardíacas." In Escola Regional de Computação Aplicada à Saúde. Sociedade Brasileira de Computação - SBC, 2024. http://dx.doi.org/10.5753/ercas.2024.238587.

Abstract:
This article evaluates the use of federated learning in the context of detecting cardiac arrhythmias. A traditional deep learning approach was compared with a federated one, and it was observed that the federated approach was able to maintain predictive performance and training time similar to the original proposal while ensuring data security and privacy.
5

Liu, Gaoyang, Xiaoqiang Ma, Yang Yang, Chen Wang, and Jiangchuan Liu. "FedEraser: Enabling Efficient Client-Level Data Removal from Federated Learning Models." In 2021 IEEE/ACM 29th International Symposium on Quality of Service (IWQOS). IEEE, 2021. http://dx.doi.org/10.1109/iwqos52092.2021.9521274.

6

Dupuy, Christophe, Tanya G. Roosta, Leo Long, Clement Chung, Rahul Gupta, and Salman Avestimehr. "Learnings from Federated Learning in the Real World." In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022. http://dx.doi.org/10.1109/icassp43922.2022.9747113.

7

da Silva, Vinicios B., Renan R. de Oliveira, Antonio Oliveira-Jr, and Ronaldo M. da Costa. "Treinamento Federado Aplicado à Segmentação do Ventrículo Esquerdo." In Escola Regional de Informática de Goiás. Sociedade Brasileira de Computação, 2023. http://dx.doi.org/10.5753/erigo.2023.237317.

Abstract:
The sensitive nature of medical data is a challenge for the use of centralized machine learning (ML) models. In contrast to traditional ML, federated learning (FL) allows models to be trained across institutions without data sharing. Accordingly, this article presents a comparative analysis of the use of a traditional ML model for medical image segmentation against the FL paradigm, highlighting its benefits in the development of collaborative models.
8

Chen, Zhikun, Daofeng Li, Ming Zhao, Sihai Zhang, and Jinkang Zhu. "Semi-Federated Learning." In 2020 IEEE Wireless Communications and Networking Conference (WCNC). IEEE, 2020. http://dx.doi.org/10.1109/wcnc45663.2020.9120453.

9

Rizk, Elsa, Stefan Vlaski, and Ali H. Sayed. "Dynamic Federated Learning." In 2020 IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). IEEE, 2020. http://dx.doi.org/10.1109/spawc48557.2020.9154327.


Reports on the topic "Federate learning"

1

Shteyn, Anastasia, Konrad Kollnig, and Calum Inverarity. Federated learning: an introduction [report]. Open Data Institute, January 2023. http://dx.doi.org/10.61557/vnfu8593.

2

Wang, Yixuan. Federated Learning User Friendly Web App. Ames (Iowa): Iowa State University, August 2023. http://dx.doi.org/10.31274/cc-20240624-720.

3

Sokolovsky, Dmitry, Sergey Sokolov, and Alexey Rezaykin. e-learning course "Informatics". SIB-Expertise, January 2024. http://dx.doi.org/10.12731/er0785.29012024.

Abstract:
The e-learning course "Informatics" is compiled in accordance with the requirements of the Federal State Educational Standard of Higher Education in the specialty 33.05.01 Pharmacy (specialty level), approved by Order No. 1037 of the Ministry of Education and Science of the Russian Federation dated August 11, 2016, and taking into account the requirements of the professional standard 02.006 "Pharmacist", approved by Order No. 91n of the Ministry of Labor and Social Protection of the Russian Federation dated March 9, 2016. The purpose of the course is to provide graduates with the necessary amount of theoretical and practical knowledge in computer science to master competencies in accordance with the Federal State Educational Standard of Higher Education, so that they are capable of and ready for performing the work functions required by the professional standard. Course objectives: to provide knowledge about the rules of working with spreadsheets; to provide knowledge about working in medical information systems and on the Internet; to provide skills in working with the computer science software and hardware used at various stages of obtaining and analyzing biomedical information; and to teach students to use the knowledge gained to solve problems of pharmaceutical and biomedical content. The workload of the course is 72 hours. The course consists of 12 didactic units.
4

Ferraz, Claudio, Frederico Finan, and Diana Moreira. Corrupting Learning: Evidence from Missing Federal Education Funds in Brazil. Cambridge, MA: National Bureau of Economic Research, June 2012. http://dx.doi.org/10.3386/w18150.

5

Inman, Robert, and Daniel Rubinfeld. Federal Institutions and the Democratic Transition: Learning from South Africa. Cambridge, MA: National Bureau of Economic Research, January 2008. http://dx.doi.org/10.3386/w13733.

6

Eugenio, Evercita. Federated Learning and Differential Privacy: What might AI-Enhanced co-design of microelectronics learn?. Office of Scientific and Technical Information (OSTI), May 2022. http://dx.doi.org/10.2172/1868417.

7

Worley, Sean, Scott Palmer, and Nathan Woods. Building, Sustaining and Improving: Using Federal Funds for Summer Learning and Afterschool. Education Counsel, July 2022. http://dx.doi.org/10.59656/yd-os9931.001.

8

Davis, Allison Crean, John Hitchcock, Beth-Ann Tek, Holly Bozeman, Kristen Pugh, Clarissa McKithen, and Molly Hershey-Arista. A National Call to Action for Summer Learning: How Did States Respond? Westat, July 2023. http://dx.doi.org/10.59656/yd-os6574.001.

9

Badrinarayan, Aneesha, and Linda Darling-Hammond. Developing State Assessment Systems That Support Teaching and Learning: What Can the Federal Government Do? Learning Policy Institute, April 2023. http://dx.doi.org/10.54300/885.821.

Abstract:
The Every Student Succeeds Act (ESSA) invited states to use multiple measures of "higher-order thinking skills and understanding," including "extended-performance tasks," to create state assessment systems that support teaching for deeper learning. However, few states have been able to navigate federal assessment requirements in ways that result in tests with these features that can support high-quality instruction. This report describes three ways that federal executive action can help states realize their visions for more meaningful assessments: (1) better align technical expectations for assessment quality with ESSA's intentions; (2) enable ESSA's Innovative Assessment Demonstration Authority to better support innovation; and (3) create additional pathways to higher-quality assessments through existing or new funding mechanisms.
10

Hart, Nick, Sara Stefanik, Christopher Murell, and Karol Olejniczak. Blueprints for Learning: A Synthesis of Federal Evidence-Building Plans Under the Evidence Act. Data Foundation, June 2024. http://dx.doi.org/10.15868/socialsector.43901.
