Theses on the topic "Continuous and distributed machine learning"
Below are the top 50 dissertations and doctoral theses relevant to research on the topic "Continuous and distributed machine learning".
Armond, Kenneth C. Jr. "Distributed Support Vector Machine Learning". ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/711.
Addanki, Ravichandra. "Learning generalizable device placement algorithms for distributed machine learning". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122746.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-50).
We present Placeto, a reinforcement learning (RL) approach to efficiently find device placements for distributed neural network training. Unlike prior approaches that only find a device placement for a specific computation graph, Placeto can learn generalizable device placement policies that can be applied to any graph. We propose two key ideas in our approach: (1) we represent the policy as performing iterative placement improvements, rather than outputting a placement in one shot; (2) we use graph embeddings to capture relevant information about the structure of the computation graph, without relying on node labels for indexing. These ideas allow Placeto to train efficiently and generalize to unseen graphs. Our experiments show that Placeto requires up to 6.1 x fewer training steps to find placements that are on par with or better than the best placements found by prior approaches. Moreover, Placeto is able to learn a generalizable placement policy for any given family of graphs, which can then be used without any retraining to predict optimized placements for unseen graphs from the same family. This eliminates the large overhead incurred by prior RL approaches whose lack of generalizability necessitates re-training from scratch every time a new graph is to be placed.
by Ravichandra Addanki.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
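As an illustrative aside to the Addanki entry above: the iterative placement improvement idea can be sketched in a few lines of Python. The toy graph and the greedy cost model below are hypothetical stand-ins, not Placeto's learned RL policy over graph embeddings.

    # Minimal, illustrative sketch of iterative placement improvement
    # (hypothetical cost model; Placeto itself trains an RL policy over graph embeddings).
    import random

    def placement_cost(graph, placement):
        # Toy cost: count edges whose endpoints sit on different devices
        # (a stand-in for measured step time in the real system).
        return sum(1 for u, vs in graph.items() for v in vs if placement[u] != placement[v])

    def improve_placement(graph, devices, steps=100, seed=0):
        rng = random.Random(seed)
        placement = {node: rng.choice(devices) for node in graph}
        for _ in range(steps):
            node = rng.choice(list(graph))      # visit one node per step
            best = min(devices, key=lambda d: placement_cost(graph, {**placement, node: d}))
            placement[node] = best              # apply the proposed improvement
        return placement

    if __name__ == "__main__":
        graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
        print(improve_placement(graph, devices=["gpu:0", "gpu:1"]))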
Johansson, Samuel, and Karol Wojtulewicz. "Machine learning algorithms in a distributed context". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148920.
Karimi, Ahmad Maroof. "Distributed Machine Learning Based Intrusion Detection System". University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1470401374.
Zam, Anton. "Evaluating Distributed Machine Learning using IoT Devices". Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42388.
The Internet of things is growing every year, with new devices being added all the time. Although some of these devices are continuously in use, a large share of them are mostly idle, sitting on untapped processing power that could be used for machine learning computations. Many methods already exist for combining the processing power of multiple devices to compute machine learning tasks; these are often called distributed machine learning methods. The main focus of this thesis is to evaluate these distributed machine learning methods to see whether they can be implemented on IoT devices and, if so, to measure how efficient and scalable they are. The method chosen for implementation was "MultiWorkerMirroredStrategy", and it was evaluated by comparing the training time, training accuracy and evaluation accuracy of 2, 3 and 4 Raspberry Pis against a non-distributed machine learning method on a single Raspberry Pi. The results showed that although the computational power increased with every added device, the training time increased while the other measurements stayed the same. After the results were analyzed and discussed, the conclusion was that the overhead added by communication between devices was too high, making the method very inefficient; it would not scale without some form of optimization.
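As an aside to the Zam entry above: the strategy it evaluates is TensorFlow's tf.distribute.MultiWorkerMirroredStrategy. A minimal sketch of how such a multi-worker job is typically wired is shown below; the host addresses, model and dataset are placeholders rather than the thesis code, and the same script has to be launched on every worker with its own task index.

    # Minimal multi-worker sketch using TensorFlow's tf.distribute API
    # (addresses, dataset and model are illustrative placeholders).
    import json, os
    import tensorflow as tf

    os.environ["TF_CONFIG"] = json.dumps({
        "cluster": {"worker": ["pi-1:12345", "pi-2:12345"]},  # one entry per Raspberry Pi
        "task": {"type": "worker", "index": 0},               # differs on each device
    })

    # Creating the strategy joins the cluster, so every listed worker must run this script.
    strategy = tf.distribute.MultiWorkerMirroredStrategy()

    with strategy.scope():                      # variables are created and mirrored here
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

    # model.fit(dataset) would then run synchronous all-reduce training across workers.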
Thompson, Simon Giles. "Distributed boosting algorithms". Thesis, University of Portsmouth, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285529.
Dahlberg, Leslie. "Evolutionary Computation in Continuous Optimization and Machine Learning". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35674.
Ouyang, Hua. "Optimal stochastic and distributed algorithms for machine learning". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49091.
Prueller, Hans. "Distributed online machine learning for mobile care systems". Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10875.
Konečný, Jakub. "Stochastic, distributed and federated optimization for machine learning". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/31478.
Wang, Sinong. "Coded Computation for Speeding up Distributed Machine Learning". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555336880521062.
Drolia, Utsav. "Adaptive Distributed Caching for Scalable Machine Learning Services". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1004.
Sheikholeslami, Sina. "Ablation Programming for Machine Learning". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258413.
Testo completoEftersom maskininlärningssystem används i ett ökande antal applikationer från analys av data från satellitsensorer samt sjukvården till smarta virtuella assistenter och självkörande bilar blir de också mer och mer komplexa. Detta innebär att mer tid och beräkningsresurser behövs för att träna modellerna och antalet designval och hyperparametrar kommer också att öka. På grund av denna komplexitet är det ofta svårt att förstå vilken effekt varje komponent samt designval i ett maskininlärningssystem har på slutresultatet.En enkel metod för att få insikt om vilken påverkan olika komponenter i ett maskinlärningssytem har på systemets prestanda är att utföra en ablationsstudie. En ablationsstudie är en vetenskaplig undersökning av maskininlärningssystem för att få insikt om effekterna av var och en av dess byggstenar på dess totala prestanda. Men i praktiken så är ablationsstudier ännu inte vanligt förekommande inom maskininlärning. Ett av de viktigaste skälen till detta är det faktum att för närvarande så krävs både stora ändringar av koden för att utföra en ablationsstudie, samt extra beräkningsoch tidsresurser.Vi har försökt att ta itu med dessa utmaningar genom att använda en kombination av distribuerad asynkron beräkning och maskininlärning. Vi introducerar maggy, ett ramverk med öppen källkodsram för asynkron och parallell hyperparameteroptimering och ablationsstudier med PySpark och TensorFlow. Detta ramverk möjliggör bättre resursutnyttjande samt ablationsstudier och hyperparameteroptimering i ett enhetligt och utbyggbart API.
Ngo, Ha Nhi. "Apprentissage continu et prédiction coopérative basés sur les systèmes de multi-agents adaptatifs appliqués à la prévision de la dynamique du trafic". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES043.
The rapid development of hardware, software and communication technologies in transportation systems has brought promising opportunities as well as significant challenges for society. Alongside improvements in transport quality, the growing number of vehicles has led to frequent congestion, especially in large cities at peak hours. Traffic jams have many consequences for economic cost, the environment, drivers' mental health and road safety. It is therefore important to predict traffic dynamics and to anticipate the emergence of congestion, in order to prevent and mitigate disturbed traffic situations as well as dangerous collisions at the tail of a traffic jam. Today, innovative intelligent transportation system technologies produce large and diverse traffic datasets that are continuously collected and transferred between devices as real-time data streams. Many intelligent transportation services, including traffic forecasting, have therefore been built on the analysis of massive data. Traffic, however, involves many varied and unpredictable factors that make modelling, analysing and learning from its historical evolution difficult. The proposed system therefore aims to fulfil the following five requirements of a traffic forecasting system: temporal analysis, spatial analysis, interpretability, stream analysis and adaptability to multiple data scales, so as to capture historical traffic patterns from data streams, provide an explicit explanation of input-output causality and support different applications with various scenarios. To reach these objectives, we propose an agent model based on dynamic clustering and the theory of adaptive multi-agent systems (AMAS) that provides continuous learning and cooperative prediction mechanisms. The proposed agent model comprises two interdependent processes running in parallel: continuous local learning and cooperative prediction. The learning process aims to detect, at the agent level, different representative states from the received data streams; based on dynamic clustering, it continuously updates the learned database as new data arrive. Simultaneously, the prediction process exploits the learned database to estimate the potential future states that may be observed. This process accounts for spatial dependency by integrating cooperation between agents and their neighbourhood. The interactions between agents are designed on the basis of AMAS theory with a set of self-adaptation mechanisms, including self-organization, self-correction and self-evolution, which allow the system to avoid disturbances, manage prediction quality and incorporate newly learned information into the prediction computation. Experiments conducted in the context of traffic dynamics forecasting evaluate the system on synthetic and real datasets at different scales and in different scenarios.
The results obtained show that our proposal outperforms existing methods when traffic data exhibit strong variations. Moreover, the consistent conclusions drawn from different case studies reinforce the system's ability to adapt to multi-scale applications.
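As an aside to the Ngo entry above, here is a deliberately simplified agent sketch; all names are hypothetical and it makes no claim to match the thesis implementation. Local learning keeps a small set of representative states updated from the stream, and cooperative prediction averages the agent's own estimate with those of its neighbours.

    # Simplified sketch of an agent that learns representative states from a stream
    # and predicts cooperatively with its neighbours (illustrative only).
    class TrafficAgent:
        def __init__(self, threshold=5.0):
            self.states = []            # learned representative speed levels
            self.threshold = threshold

        def learn(self, observation):
            # continuous local learning: create a new state only if nothing similar exists
            for i, s in enumerate(self.states):
                if abs(s - observation) < self.threshold:
                    self.states[i] = 0.9 * s + 0.1 * observation   # refine existing state
                    return
            self.states.append(observation)

        def local_prediction(self, observation):
            # nearest learned state is the agent's own estimate of the coming condition
            return min(self.states, key=lambda s: abs(s - observation)) if self.states else observation

        def cooperative_prediction(self, observation, neighbours):
            estimates = [self.local_prediction(observation)]
            estimates += [n.local_prediction(observation) for n in neighbours]
            return sum(estimates) / len(estimates)

    a, b = TrafficAgent(), TrafficAgent()
    for speed in [90, 88, 45, 47, 91]:
        a.learn(speed)
        b.learn(speed + 2)
    print(a.cooperative_prediction(50, [b]))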
Patvarczki, Jozsef. "Layout Optimization for Distributed Relational Databases Using Machine Learning". Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/291.
Emeagwali, Ijeoma. "Using distributed machine learning to predict arterial blood pressure". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91441.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 57).
This thesis describes how to build a flow for machine learning on large volumes of data. The end result is EC-Flow, an end-to-end tool for using the EC-Star distributed machine learning system. The current problem is that analyzing datasets on the order of hundreds of gigabytes requires overcoming many engineering challenges apart from the theory and algorithms used in performing the machine learning and analyzing the results. EC-Star is a software package that can be used to perform such learning and analysis in a highly distributed fashion. However, there are many complexities to running very large datasets through such a system that increase its difficulty of use, because the user is still exposed to the low-level engineering challenges inherent to manipulating big data and configuring distributed systems. EC-Flow attempts to abstract away these difficulties, providing users with a simple interface for each step in the machine learning pipeline.
by Ijeoma Emeagwali.
M. Eng.
Shi, Shaohuai. "Communication optimizations for distributed deep learning". HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/813.
Ramakrishnan, Naveen. "Distributed Learning Algorithms for Sensor Networks". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1284991632.
Yaokai, Yang. "Effective Phishing Detection Using Machine Learning Approach". Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1544189633297122.
Ferdowsi, Khosrowshahi Aidin. "Distributed Machine Learning for Autonomous and Secure Cyber-physical Systems". Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99466.
Testo completoDoctor of Philosophy
In order to deliver innovative technological services to their residents, smart cities will rely on autonomous cyber-physical systems (CPSs) such as cars, drones, sensors, power grids, and other networks of digital devices. Maintaining stability, robustness, and security (SRS) of those smart city CPSs is essential for the functioning of our modern economies and societies. SRS can be defined as the ability of a CPS, such as an autonomous vehicular system, to operate without disruption in its quality of service. In order to guarantee SRS of CPSs one must overcome many technical challenges such as CPSs' vulnerability to various disruptive events such as natural disasters or cyber attacks, limited resources, scale, and interdependency. Such challenges must be considered for CPSs in order to design vehicles that are controlled autonomously and whose motion is robust against unpredictable events in their trajectory, to implement stable Internet of digital devices that work with a minimum communication delay, or to secure critical infrastructure to provide services such as electricity, gas, and water systems. The primary goal of this dissertation is, thus, to develop novel foundational analytical tools, that weave together notions from machine learning, game theory, and control theory, in order to study, analyze, and optimize SRS of autonomous CPSs which eventually will improve the quality of service provided by smart cities. To this end, various frameworks and effective algorithms are proposed in order to enhance the SRS of CPSs and pave the way toward the practical deployment of autonomous CPSs and applications. The results show that the developed solutions can enable a CPS to operate efficiently while maintaining its SRS. As such, the outcomes of this research can be used as a building block for the large deployment of smart city technologies that can be of immense benefit to tomorrow's societies.
Chapala, Usha Kiran, and Sridhar Peteti. "Continuous Video Quality of Experience Modelling using Machine Learning Model Trees". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 1996. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17814.
Dinh, The Canh. "Distributed Algorithms for Fast and Personalized Federated Learning". Thesis, The University of Sydney, 2023. https://hdl.handle.net/2123/30019.
Dai, Wei. "Learning with Staleness". Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1209.
Wagy, Mark David. "Enabling Machine Science through Distributed Human Computing". ScholarWorks @ UVM, 2016. http://scholarworks.uvm.edu/graddis/618.
Jeon, Sung-eok. "Near-Optimality of Distributed Network Management with a Machine Learning Approach". Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16136.
Lee, Dong Ryeol. "A distributed kernel summation framework for machine learning and scientific applications". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44727.
Zhang, Bingwen. "Change-points Estimation in Statistical Inference and Machine Learning Problems". Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/344.
Reddi, Sashank Jakkam. "New Optimization Methods for Modern Machine Learning". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1116.
Tummala, Akhil. "Self-learning algorithms applied in Continuous Integration system". Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16675.
Abd, Gaus Yona Falinie. "Artificial intelligence system for continuous affect estimation from naturalistic human expressions". Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16348.
Alzubi, Omar A. "Designing machine learning ensembles : a game coalition approach". Thesis, Swansea University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.678293.
Svensson, Frida. "Scalable Distributed Reinforcement Learning for Radio Resource Management". Thesis, Linköpings universitet, Tillämpad matematik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177822.
Testo completoDet finns en stor potential automatisering och optimering inom radionätverk (RAN, radio access network) genom att använda datadrivna lösningar för att på ett effektivt sätt hantera den ökade komplexiteten på grund av trafikökningar and nya teknologier som introducerats i samband med 5G. Förstärkningsinlärning (RL, reinforcement learning) har naturliga kopplingar till reglerproblem i olika tidsskalor, såsom länkanpassning, interferenshantering och kraftkontroll, vilket är vanligt förekommande i radionätverk. Att förhöja statusen på datadrivna lösningar i radionätverk kommer att vara nödvändigt för att hantera utmaningarna som uppkommer med framtida 5G nätverk. I detta arbete föreslås vi en syetematisk metodologi för att applicera RL på ett reglerproblem. I första hand används den föreslagna metodologin på ett välkänt reglerporblem. Senare anpassas metodologin till ett äkta RAN-scenario. Arbetet inkluderar utförliga resultat från simuleringar för att visa effektiviteten och potentialen hos den föreslagna metoden. En lyckad metodologi skapades men resultaten på RAN-simulatorn saknade mognad.
Immaneni, Raghu Nandan. "An efficient approach to machine learning based text classification through distributed computing". Thesis, California State University, Long Beach, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1603338.
Text Classification is one of the classical problems in computer science, which is primarily used for categorizing data, spam detection, anonymization, information extraction, text summarization etc. Given the large amounts of data involved in the above applications, automated and accurate training models and approaches to classify data efficiently are needed.
In this thesis, an extensive study of the interaction between natural language processing, information retrieval and text classification has been performed. A case study named “keyword extraction” that deals with ‘identifying keywords and tags from millions of text questions’ is used as a reference. Different classifiers are implemented using MapReduce paradigm on the case study and the experimental results are recorded using two newly built distributed computing Hadoop clusters. The main aim is to enhance the prediction accuracy, to examine the role of text pre-processing for noise elimination and to reduce the computation time and resource utilization on the clusters.
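As an aside to the Immaneni entry above: a MapReduce text-processing job of the kind described is often written as a Hadoop Streaming mapper/reducer pair that reads stdin and writes stdout. The sketch below counts tag/keyword co-occurrences, a typical preprocessing step for such a classifier; the input layout is an assumption, not the thesis format.

    # Hadoop Streaming-style mapper and reducer for tag/keyword co-occurrence counts
    # (the input format "tag<TAB>question text" is an assumption, not the thesis layout).
    import sys
    from itertools import groupby

    def mapper(lines):
        for line in lines:
            tag, _, text = line.rstrip("\n").partition("\t")
            for word in text.lower().split():
                print(f"{tag}:{word}\t1")

    def reducer(lines):
        # Hadoop sorts map output by key before the reduce phase, so groupby is safe here.
        pairs = (line.rstrip("\n").split("\t") for line in lines)
        for key, group in groupby(pairs, key=lambda kv: kv[0]):
            print(f"{key}\t{sum(int(v) for _, v in group)}")

    if __name__ == "__main__":
        # run as: hadoop streaming ... -mapper "python job.py map" -reducer "python job.py reduce"
        (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)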
Costantini, Marina. "Optimization methods over networks for popular content delivery and distributed machine learning". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS182.
The ever-increasing number of users and applications on the Internet sets a number of challenges for network operators and engineers in order to keep up with the high traffic demands. In this scenario, making efficient use of the resources available has become imperative. In this thesis, we develop optimization methods to improve the utilization of the network in two specific applications enabled by the Internet: network edge caching and distributed model training. Network edge caching is a recent technique that proposes to store at the network edge copies of contents that have a high probability of being requested to reduce latency and improve the overall user experience. Traditionally, when a user requests a web page or application, the request is sent to a remote server that stores the data. The server retrieves the requested data and sends it back to the user. This process can be slow and can lead to latency and congestion issues, especially when multiple users are accessing the same data simultaneously. To address this issue, network operators can deploy edge caching servers close to end-users. These servers are then filled during off-peak hours with contents that have high probability of being requested, so that during times of high traffic the user can still retrieve them in a short time and high quality. On the other hand, distributed model training, or more generally, distributed optimization, is a method for training large-scale machine learning models using multiple agents that work together to find the optimal parameters of the model. In such settings, the agents interleave local computation steps with communication steps to train a model that takes into account the data of all agents. To achieve this, agents may exchange optimization values (parameters, gradients) but not the data. Here we consider two such distributed training settings: the decentralized and the federated. In the decentralized setting, agents are interconnected in a network and communicate their optimization values only to their direct neighbors. In the federated, the agents communicate with a central server that regularly averages the most recent values of (usually a subset of) the agents and broadcasts the result to all of them. Naturally, the success of such techniques relies on the frequent communication of the agents between them or with the server. Therefore, there is a great interest in designing distributed optimization algorithms that achieve state-of-the-art performance at lower communication costs. In this thesis, we propose algorithms that improve the performance of existing methods for popular content delivery and distributed machine learning by making a better utilization of the network resources. In Chapter 2, we propose an algorithm that exploits recommendation engines to design jointly the contents cached at the network edge and the recommendations shown to the user. This algorithm achieves a higher fraction of requests served by the cache than its competitors, and thus requires less communication with the remote server. In Chapter 3, we design an asynchronous algorithm for decentralized optimization that requires minimum coordination between the agents and thus allows for connection interruptions and link failures.
We then show that, if the agents are allowed to increase the amount of data they transmit by a factor equal to their node degree, the convergence of this algorithm can be made much faster by letting the agents decide their communication scheme according to the gains provided by communicating with each of their neighbors. Finally, in Chapter 4 we propose an algorithm that exploits inter-agent communication within the classical federated learning setup (where, in principle, agents communicate only with the server), and which can achieve the same convergence speed as the classical setup with fewer communication rounds with the server, which constitute the main bottleneck in this setting.
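As an aside to the Costantini entry above: the federated setting it describes is commonly illustrated by federated averaging. The NumPy sketch below shows a few simplified rounds of that baseline, not the algorithms proposed in the thesis.

    # Minimal federated-averaging rounds (illustrative; not the thesis algorithms).
    import numpy as np

    def local_update(w, X, y, lr=0.1, steps=10):
        for _ in range(steps):                      # a few local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
            w = w - lr * grad
        return w

    rng = np.random.default_rng(0)
    w_server = np.zeros(3)
    clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

    for round_ in range(5):
        local_models = [local_update(w_server.copy(), X, y) for X, y in clients]
        w_server = np.mean(local_models, axis=0)    # server averages and broadcasts
    print(w_server)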
Tron, Gianet Eric. "A Continuous Bond". Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-24020.
Vedanbhatla, Naga V. K. Abhinav. "Distributed Approach for Peptide Identification". TopSCHOLAR®, 2015. http://digitalcommons.wku.edu/theses/1546.
Johansson, Tobias. "Managed Distributed TensorFlow with YARN : Enabling Large-Scale Machine Learning on Hadoop Clusters". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-248007.
Apache Hadoop is the leading open-source platform for storing and processing big data. With data stored in Hadoop clusters, it is advantageous to be able to run TensorFlow applications on the same cluster that holds the input datasets for training machine learning models. TensorFlow supports distributed execution, in which deep neural networks can be trained using a large number of compute nodes. Configuring and launching distributed TensorFlow applications manually is complex and impractical, and becomes worse as the number of nodes grows. This project presents a framework that uses Hadoop's resource manager YARN to manage distributed TensorFlow applications. The proposal is a native YARN application with one ApplicationMaster (AM) per job that uses the AM as a discovery registry before the job runs. Adapting TensorFlow code to the framework typically amounts to a few lines of code. Compared with TensorFlowOnSpark, the user experience is very similar, and the collected performance data indicate that there is an advantage to running TensorFlow directly on YARN without an extra layer in between.
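As an aside to the Johansson entry above: the role of the ApplicationMaster as a discovery registry amounts to collecting worker addresses and turning them into the TF_CONFIG cluster specification that TensorFlow expects. The sketch below fakes the registry with a local dict; all names are hypothetical and this is not the framework's code.

    # Illustrative discovery step: each task registers its address, and the collected
    # addresses are turned into the TF_CONFIG cluster spec TensorFlow expects.
    # The shared dict stands in for the per-job ApplicationMaster registry.
    import json, os

    def build_tf_config(registry, my_host, my_port):
        registry.setdefault("worker", []).append(f"{my_host}:{my_port}")   # register self
        index = len(registry["worker"]) - 1                                # this task's rank
        return json.dumps({
            "cluster": {"worker": registry["worker"]},
            "task": {"type": "worker", "index": index},
        })

    registry = {}
    os.environ["TF_CONFIG"] = build_tf_config(registry, "node-1.example", 2222)
    print(os.environ["TF_CONFIG"])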
Sherry, Dylan J. (Dylan Jacob). "FlexGP 2.0 : multiple levels of parallelism in distributed machine learning via genetic programming". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85498.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-107).
This thesis presents FlexGP 2.0, a distributed cloud-backed machine learning system. FlexGP 2.0 features multiple levels of parallelism which provide a significant improvement in accuracy vs. elapsed time. The amount of computational resources in FlexGP 2.0 can be scaled along several dimensions to support large, complex data. FlexGP 2.0's core genetic programming (GP) learner includes multithreaded C++ model evaluation and a multi-objective optimization algorithm which is extensible to pursue any number of objectives simultaneously in parallel. FlexGP 2.0 parallelizes the entire learner to obtain a large distributed population size and leverages communication between learners to increase performance via transferral of search progress between learners. FlexGP 2.0 factors training data to boost performance and enable support for increased data size and complexity. Several experiments are performed which verify the efficacy of FlexGP 2.0's multilevel parallelism. Experiments run on a large dataset from a real-world regression problem. The results demonstrate both less time to achieve the same accuracy and overall increased accuracy, and illustrate the value of FlexGP 2.0 as a platform for machine learning.
by Dylan J. Sherry.
M. Eng.
Ewing, Gabriel. "Knowledge Transfer from Expert Demonstrations in Continuous State-Action Spaces". Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1512748071082221.
Staffolani, Alessandro. "A Reinforcement Learning Agent for Distributed Task Allocation". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20051/.
Letourneau, Sylvain. "Identification of attribute interactions and generation of globally relevant continuous features in machine learning". Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/29029.
Lu, Haoye. "Function Optimization-based Schemes for Designing Continuous Action Learning Automata". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39097.
Fält, Markus. "Multi-factor Authentication : System proposal and analysis of continuous authentication methods". Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-39212.
Mäenpää, Dylan. "Towards Peer-to-Peer Federated Learning: Algorithms and Comparisons to Centralized Federated Learning". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176778.
Giaretta, Lodovico. "Pushing the Limits of Gossip-Based Decentralised Machine Learning". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-253794.
Testo completoUnder de senaste åren har vi sett en kraftig ökning av närvaron och kraften hos anslutna enheter, såsom smartphones, smarta hushållsmaskiner, och smarta sensorer. Dessa enheter producerar stora mängder data som kan vara extremt värdefulla för att träna stora och avancerade maskininlärningsmodeller. Dessvärre är det ibland inte möjligt att samla in och bearbeta dessa dataset på ett centralt system, detta på grund av deras storlek eller de växande sekretesskraven för digital datahantering.För att lösa problemet har forskare utvecklar protokoller för att träna globala modeller på ett decentraliserat sätt och utnyttja beräkningsförmågan hos dessa enheter. Dessa protokoll kräver inte datan på enheter delas utan förlitar sig istället på att kommunicera delvis tränade modeller.Dessvärre så är verkliga system svåra att kontrollera och kan presentera ett brett spektrum av utmaningar som lätt överskådas i akademiska studier och simuleringar. Denna forskning analyserar gossip inlärning protokollet vilket är av de viktigaste resultaten inom decentraliserad maskininlärning, för att bedöma dess tillämplighet på verkliga scenarier.Detta arbete identifierar de huvudsakliga antagandena om protokollet och utför noggrant utformade simuleringar för att testa protokollets beteende när dessa antaganden tas bort. Resultaten visar att protokollet redan kan tillämpas i vissa miljöer, men att det misslyckas när det utsätts för vissa förhållanden som i verklighetsbaserade scenarier. Mer specifikt så kan modellerna som utbildas av protokollet vara partiska och fördelaktiga mot data lagrade i noder med snabbare kommunikationshastigheter eller ett högre antal grannar. Vidare kan vissa kommunikationstopologier få en stark negativ inverkan på modellernas konvergenshastighet.Även om denna studie kom fram till en förmildrande effekt för vissa av dessa problem så verkar det som om gossip inlärning protokollet kräver ytterligare forskningsinsatser för att säkerställa en bredare industriell tillämplighet.
Abu, Salih Bilal Ahmad Abdal Rahman. "Trustworthiness in Social Big Data Incorporating Semantic Analysis, Machine Learning and Distributed Data Processing". Thesis, Curtin University, 2018. http://hdl.handle.net/20.500.11937/70285.
ABBASS, YAHYA. "Human-Machine Interfaces using Distributed Sensing and Stimulation Systems". Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1069056.
Razavian, Narges Sharif. "Continuous Graphical Models for Static and Dynamic Distributions: Application to Structural Biology". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/340.
Feraudo, Angelo. "Distributed Federated Learning in Manufacturer Usage Description (MUD) Deployment Environments". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Quintal, Kyle. "Context-Awareness for Adversarial and Defensive Machine Learning Methods in Cybersecurity". Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40835.