Follow this link to see other types of publications on the topic: Continuous and distributed machine learning.

Theses / dissertations on the topic "Continuous and distributed machine learning"

Create a precise reference in APA, MLA, Chicago, Harvard, and other styles


See the 50 best works (theses / dissertations) for research on the topic "Continuous and distributed machine learning".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf and read its abstract online whenever it is available in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile an accurate bibliography.

1

Armond, Kenneth C. Jr. "Distributed Support Vector Machine Learning". ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/711.

Full text of the source
Abstract:
Support Vector Machines (SVMs) are used for a growing number of applications. A fundamental constraint on SVM learning is the management of the training set. This is because the order of computations goes as the square of the size of the training set. Typically, training sets of 1000 (500 positives and 500 negatives, for example) can be managed on a PC without hard-drive thrashing. Training sets of 10,000, however, simply cannot be managed with PC-based resources. For this reason most SVM implementations must contend with some kind of chunking process to train parts of the data at a time (10 chunks of 1000, for example, to learn the 10,000). Sequential and multi-threaded chunking methods provide a way to run the SVM on large datasets while retaining accuracy. The multi-threaded distributed SVM described in this thesis is implemented using Java RMI, and has been developed to run on a network of multi-core/multi-processor computers.
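To make the chunking idea above concrete, the following is a minimal single-process sketch (a cascade-style variant using scikit-learn): each chunk is trained separately, the support vectors are pooled, and a final SVM is fitted on the pool. The multi-threaded Java RMI implementation from the thesis is not reproduced here, and the synthetic data is only for illustration.

# Minimal sketch of SVM "chunking": train per-chunk SVMs, pool their support
# vectors, and fit a final SVM on the pooled set. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
chunks = np.array_split(np.arange(len(X)), 10)           # 10 chunks of ~1000

sv_idx = []
for idx in chunks:
    clf = SVC(kernel="rbf", C=1.0).fit(X[idx], y[idx])   # learn one chunk
    sv_idx.extend(idx[clf.support_])                      # keep its support vectors

final = SVC(kernel="rbf", C=1.0).fit(X[sv_idx], y[sv_idx])
print("pooled support vectors:", len(sv_idx), "final SVs:", final.n_support_.sum())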
ABNT, Harvard, Vancouver, APA styles, etc.
2

Addanki, Ravichandra. "Learning generalizable device placement algorithms for distributed machine learning". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122746.

Full text of the source
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-50).
We present Placeto, a reinforcement learning (RL) approach to efficiently find device placements for distributed neural network training. Unlike prior approaches that only find a device placement for a specific computation graph, Placeto can learn generalizable device placement policies that can be applied to any graph. We propose two key ideas in our approach: (1) we represent the policy as performing iterative placement improvements, rather than outputting a placement in one shot; (2) we use graph embeddings to capture relevant information about the structure of the computation graph, without relying on node labels for indexing. These ideas allow Placeto to train efficiently and generalize to unseen graphs. Our experiments show that Placeto requires up to 6.1 x fewer training steps to find placements that are on par with or better than the best placements found by prior approaches. Moreover, Placeto is able to learn a generalizable placement policy for any given family of graphs, which can then be used without any retraining to predict optimized placements for unseen graphs from the same family. This eliminates the large overhead incurred by prior RL approaches whose lack of generalizability necessitates re-training from scratch every time a new graph is to be placed.
by Ravichandra Addanki.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
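The abstract describes Placeto's central idea of iterative placement improvement over a computation graph. The toy sketch below substitutes a greedy hill-climbing loop and a crude cost model (maximum device load plus cross-device edge cost) for the learned RL policy and graph embeddings, purely to illustrate what "iteratively improving a placement" means; the operator graph, timings and devices are made up.

# Toy illustration of iterative placement improvement on a computation graph:
# repeatedly move one op to the device that lowers a crude cost. Placeto learns
# this improvement policy with RL and graph embeddings; this greedy loop is only
# a stand-in for the idea.
import itertools

ops = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 4.0}            # op -> compute time
edges = [("a", "b", 1.0), ("a", "c", 2.0), ("b", "d", 1.5), ("c", "d", 0.5)]
devices = [0, 1]

def cost(placement):
    load = {d: sum(t for o, t in ops.items() if placement[o] == d) for d in devices}
    comm = sum(w for u, v, w in edges if placement[u] != placement[v])
    return max(load.values()) + comm

placement = {o: 0 for o in ops}                           # start: everything on device 0
improved = True
while improved:
    improved = False
    for o, d in itertools.product(ops, devices):
        candidate = dict(placement, **{o: d})
        if cost(candidate) < cost(placement):
            placement, improved = candidate, True
print(placement, "cost =", cost(placement))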
ABNT, Harvard, Vancouver, APA styles, etc.
3

Johansson, Samuel, and Karol Wojtulewicz. "Machine learning algorithms in a distributed context". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148920.

Full text of the source
Abstract:
Interest in distributed approaches to machine learning has increased significantly in recent years due to continuously increasing data sizes for training machine learning models. In this thesis we describe three popular machine learning algorithms: decision trees, Naive Bayes and support vector machines (SVM), and present existing ways of distributing them. We also perform experiments with decision trees distributed with bagging, boosting and hard data partitioning and evaluate them in terms of performance measures such as accuracy, F1 score and execution time. Our experiments show that the execution time of bagging and boosting increases linearly with the number of workers, and that boosting performs significantly better than bagging and hard data partitioning in terms of F1 score. The hard data partitioning algorithm works well for large datasets, where the execution time decreases as the number of workers increases without any significant loss in accuracy or F1 score, while the algorithm performs poorly on small data, with an increase in execution time and a loss in accuracy and F1 score as the number of workers increases.
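As a rough illustration of the bagging variant discussed above, the sketch below trains one decision tree per bootstrap sample in parallel worker processes and combines them by majority vote. joblib processes on one machine stand in for the distributed workers evaluated in the thesis, and the synthetic dataset replaces the datasets used there.

# Sketch of bagging spread over workers: each worker fits a decision tree on a
# bootstrap sample; predictions are combined by majority vote.
import numpy as np
from joblib import Parallel, delayed
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

def fit_one(seed):
    idx = np.random.default_rng(seed).integers(0, len(X), size=len(X))  # bootstrap sample
    return DecisionTreeClassifier(random_state=seed).fit(X[idx], y[idx])

trees = Parallel(n_jobs=4)(delayed(fit_one)(s) for s in range(8))       # 8 "workers"
votes = np.mean([t.predict(X) for t in trees], axis=0)                  # fraction voting 1
print("bagged training accuracy:", np.mean((votes > 0.5) == y))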
ABNT, Harvard, Vancouver, APA styles, etc.
4

Karimi, Ahmad Maroof. "Distributed Machine Learning Based Intrusion Detection System". University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1470401374.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
5

Zam, Anton. "Evaluating Distributed Machine Learning using IoT Devices". Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42388.

Full text of the source
Abstract:
The Internet of things is growing every year, with new devices being added all the time. Although some of the devices are continuously in use, a large share of them are mostly idle, sitting on untapped processing power that could be used for machine learning computations. There currently exist many different methods to combine the processing power of multiple devices to compute machine learning tasks; these are often called distributed machine learning methods. The main focus of this thesis is to evaluate these distributed machine learning methods to see if they could be implemented on IoT devices and, if so, to measure how efficient and scalable these methods are. The method chosen for implementation was called "MultiWorkerMirrorStrategy", and it was evaluated by comparing the training time, training accuracy and evaluation accuracy of 2, 3 and 4 Raspberry Pis with a non-distributed machine learning method on 1 Raspberry Pi. The results showed that although the computational power increased with every added device, the training time increased while the rest of the measurements stayed the same. After the results were analyzed and discussed, the conclusion was that the overhead added by communication between the devices was too high, making this method very inefficient; it would not scale without some sort of optimization being added.
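The strategy evaluated in the thesis appears to be TensorFlow's tf.distribute.MultiWorkerMirroredStrategy. A minimal per-device sketch is shown below, assuming TensorFlow 2.x is installed on each Raspberry Pi; the worker addresses and task index in TF_CONFIG are placeholders that each device would set to its own values, and all listed workers must be running for training to proceed.

# Minimal sketch of multi-worker data-parallel training with
# tf.distribute.MultiWorkerMirroredStrategy. Each device runs this same script
# with its own TF_CONFIG; the addresses below are placeholders.
import json, os
import tensorflow as tf

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["192.168.0.10:12345", "192.168.0.11:12345"]},  # placeholder IPs
    "task": {"type": "worker", "index": 0},    # set 0..N-1 per device
})

strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():                          # variables are mirrored across workers
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0
model.fit(x, y, epochs=1, batch_size=64)        # gradients are all-reduced each step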
ABNT, Harvard, Vancouver, APA styles, etc.
6

Thompson, Simon Giles. "Distributed boosting algorithms". Thesis, University of Portsmouth, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285529.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
7

Dahlberg, Leslie. "Evolutionary Computation in Continuous Optimization and Machine Learning". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35674.

Full text of the source
Abstract:
Evolutionary computation is a field which uses natural computational processes to optimize mathematical and industrial problems. Differential Evolution, Particle Swarm Optimization and the Estimation of Distribution Algorithm are some of the newer emerging varieties which have attracted great interest among researchers. This work has compared these three algorithms on a set of mathematical and machine learning benchmarks and has also synthesized a new algorithm from the other three and compared it to them. The results from the benchmarks show which algorithm is best suited to handle various machine learning problems and present the advantages of using the new algorithm. The new algorithm, called DEDA (Differential Estimation of Distribution Algorithms), has shown promising results on both machine learning and mathematical optimization tasks.
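For readers unfamiliar with the building blocks being compared, the sketch below is a compact DE/rand/1/bin loop on a toy objective (the sphere function). It shows only plain Differential Evolution; the thesis's DEDA hybrid and the PSO/EDA baselines are not reproduced, and all parameter values are arbitrary.

# Plain Differential Evolution (DE/rand/1/bin) minimizing the sphere function.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, pop_size, F, CR = 10, 30, 0.8, 0.9
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fitness = np.array([sphere(p) for p in pop])

for _ in range(200):
    for i in range(pop_size):
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                        # differential mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                 # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if (f := sphere(trial)) < fitness[i]:           # greedy selection
            pop[i], fitness[i] = trial, f
print("best value:", fitness.min())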
ABNT, Harvard, Vancouver, APA styles, etc.
8

Ouyang, Hua. "Optimal stochastic and distributed algorithms for machine learning". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49091.

Full text of the source
Abstract:
Stochastic and data-distributed optimization algorithms have received lots of attention from the machine learning community due to the tremendous demand from the large-scale learning and the big-data related optimization. A lot of stochastic and deterministic learning algorithms are proposed recently under various application scenarios. Nevertheless, many of these algorithms are based on heuristics and their optimality in terms of the generalization error is not sufficiently justified. In this talk, I will explain the concept of an optimal learning algorithm, and show that given a time budget and proper hypothesis space, only those achieving the lower bounds of the estimation error and the optimization error are optimal. Guided by this concept, we investigated the stochastic minimization of nonsmooth convex loss functions, a central problem in machine learning. We proposed a novel algorithm named Accelerated Nonsmooth Stochastic Gradient Descent, which exploits the structure of common nonsmooth loss functions to achieve optimal convergence rates for a class of problems including SVMs. It is the first stochastic algorithm that can achieve the optimal O(1/t) rate for minimizing nonsmooth loss functions. The fast rates are confirmed by empirical comparisons with state-of-the-art algorithms including the averaged SGD. The Alternating Direction Method of Multipliers (ADMM) is another flexible method to explore function structures. In the second part we proposed stochastic ADMM that can be applied to a general class of convex and nonsmooth functions, beyond the smooth and separable least squares loss used in lasso. We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic function: O(1/sqrt{t}) for convex functions and O(log t/t) for strongly convex functions. A novel application named Graph-Guided SVM is proposed to demonstrate the usefulness of our algorithm. We also extend the scalability of stochastic algorithms to nonlinear kernel machines, where the problem is formulated as a constrained dual quadratic optimization. The simplex constraint can be handled by the classic Frank-Wolfe method. The proposed stochastic Frank-Wolfe methods achieve comparable or even better accuracies than state-of-the-art batch and online kernel SVM solvers, and are significantly faster. The last part investigates the problem of data-distributed learning. We formulate it as a consensus-constrained optimization problem and solve it with ADMM. It turns out that the underlying communication topology is a key factor in achieving a balance between a fast learning rate and computation resource consumption. We analyze the linear convergence behavior of consensus ADMM so as to characterize the interplay between the communication topology and the penalty parameters used in ADMM. We observe that given optimal parameters, the complete bipartite and the master-slave graphs exhibit the fastest convergence, followed by bi-regular graphs.
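To make the problem class concrete, the sketch below runs a plain Pegasos-style stochastic subgradient method on the nonsmooth, l2-regularized hinge loss, reporting the averaged iterate. It is not the accelerated ANSGD method or the stochastic ADMM proposed in the dissertation, and the data and constants are synthetic.

# Stochastic subgradient descent on lambda/2 ||w||^2 + mean(max(0, 1 - y x.w)),
# the nonsmooth SVM objective, with a 1/(lambda t) step size and iterate averaging.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 2000, 20, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

w = np.zeros(d)
w_avg = np.zeros(d)
for t in range(1, 20001):
    i = rng.integers(n)
    margin = y[i] * (X[i] @ w)
    grad = lam * w - (y[i] * X[i] if margin < 1 else 0)   # subgradient of the sampled loss
    w -= (1.0 / (lam * t)) * grad
    w_avg += (w - w_avg) / t                              # running average of iterates
print("mean hinge loss of averaged iterate:",
      np.maximum(0, 1 - y * (X @ w_avg)).mean())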
ABNT, Harvard, Vancouver, APA styles, etc.
9

Prueller, Hans. "Distributed online machine learning for mobile care systems". Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10875.

Full text of the source
Abstract:
Telecare and especially Mobile Care Systems are getting more and more popular. They have two major benefits: first, they drastically improve the living standards and even health outcomes for patients. In addition, they allow significant cost savings for adult care by reducing the needs for medical staff. A common drawback of current Mobile Care Systems is that they are rather stationary in most cases and firmly installed in patients’ houses or flats, which makes them stay very near to or even in their homes. There is also an upcoming second category of Mobile Care Systems which are portable without restricting the moving space of the patients, but with the major drawback that they have either very limited computational abilities and only a rather low classification quality or, which is most frequently, they only have a very short runtime on battery and therefore indirectly restrict the freedom of moving of the patients once again. These drawbacks are inherently caused by the restricted computational resources and mainly the limitations of battery based power supply of mobile computer systems. This research investigates the application of novel Artificial Intelligence (AI) and Machine Learning (ML) techniques to improve the operation of 2 Mobile Care Systems. As a result, based on the Evolving Connectionist Systems (ECoS) paradigm, an innovative approach for a highly efficient and self-optimising distributed online machine learning algorithm called MECoS - Moving ECoS - is presented. It balances the conflicting needs of providing a highly responsive complex and distributed online learning classification algorithm by requiring only limited resources in the form of computational power and energy. This approach overcomes the drawbacks of current mobile systems and combines them with the advantages of powerful stationary approaches. The research concludes that the practical application of the presented MECoS algorithm offers substantial improvements to the problems as highlighted within this thesis.
ABNT, Harvard, Vancouver, APA styles, etc.
10

Konečný, Jakub. "Stochastic, distributed and federated optimization for machine learning". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/31478.

Full text of the source
Abstract:
We study optimization algorithms for the finite sum problems frequently arising in machine learning applications. First, we propose novel variants of stochastic gradient descent with a variance reduction property that enables linear convergence for strongly convex objectives. Second, we study the distributed setting, in which the data describing the optimization problem does not fit into a single computing node. In this case, traditional methods are inefficient, as the communication costs inherent in distributed optimization become the bottleneck. We propose a communication-efficient framework which iteratively forms local subproblems that can be solved with arbitrary local optimization algorithms. Finally, we introduce the concept of Federated Optimization/Learning, where we try to solve the machine learning problems without having data stored in any centralized manner. The main motivation comes from industry when handling user-generated data. The current prevalent practice is that companies collect vast amounts of user data and store them in datacenters. An alternative we propose is not to collect the data in the first place, and instead occasionally use the computational power of users' devices to solve the very same optimization problems, while alleviating privacy concerns at the same time. In such a setting, minimization of communication rounds is the primary goal, and we demonstrate that solving the optimization problems in such circumstances is conceptually tractable.
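A minimal federated-averaging-style round is sketched below to illustrate the "compute locally, communicate rarely" setting the thesis studies: each client runs a few local gradient steps on its own least-squares data and the server averages the resulting weights. This is only an illustrative baseline, not the thesis's algorithms, and all constants are arbitrary.

# Federated averaging on a least-squares model: local gradient steps per client,
# then the server averages the client weights once per communication round.
import numpy as np

rng = np.random.default_rng(0)
d, clients = 5, 4
w_true = rng.normal(size=d)
data = []
for _ in range(clients):                       # each client keeps its data locally
    X = rng.normal(size=(100, d))
    y = X @ w_true + 0.01 * rng.normal(size=100)
    data.append((X, y))

w_global = np.zeros(d)
for _round in range(20):                       # one communication per round
    local_weights = []
    for X, y in data:
        w = w_global.copy()
        for _ in range(10):                    # local epochs of gradient descent
            w -= 0.01 * (2 / len(y)) * X.T @ (X @ w - y)
        local_weights.append(w)
    w_global = np.mean(local_weights, axis=0)  # server-side averaging
print("distance to ground truth:", np.linalg.norm(w_global - w_true))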
ABNT, Harvard, Vancouver, APA styles, etc.
11

Wang, Sinong. "Coded Computation for Speeding up Distributed Machine Learning". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555336880521062.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
12

Drolia, Utsav. "Adaptive Distributed Caching for Scalable Machine Learning Services". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1004.

Full text of the source
Abstract:
Applications for Internet-enabled devices use machine learning to process captured data to make intelligent decisions or provide information to users. Typically, the computation to process the data is executed in cloud-based backends. The devices are used for sensing data, offloading it to the cloud, receiving responses and acting upon them. However, this approach leads to high end-to-end latency due to communication over the Internet. This dissertation proposes reducing this response time by minimizing offloading, and pushing computation close to the source of the data, i.e. to edge servers and devices themselves. To adapt to the resource constrained environment at the edge, it presents an approach that leverages spatiotemporal locality to push subparts of the model to the edge. This approach is embodied in a distributed caching framework, Cachier. Cachier is built upon a novel caching model for recognition, and is distributed across edge servers and devices. The analytical caching model for recognition provides a formulation for expected latency for recognition requests in Cachier. The formulation incorporates the effects of compute time and accuracy. It also incorporates network conditions, thus providing a method to compute expected response times under various conditions. This is utilized as a cost function by Cachier, at edge servers and devices. By analyzing requests at the edge server, Cachier caches relevant parts of the trained model at edge servers, which is used to respond to requests, minimizing the number of requests that go to the cloud. Then, Cachier uses context-aware prediction to prefetch parts of the trained model onto devices. The requests can then be processed on the devices, thus minimizing the number of offloaded requests. Finally, Cachier enables cooperation between nearby devices to allow exchanging prefetched data, reducing the dependence on remote servers even further. The efficacy of Cachier is evaluated by using it with an art recognition application. The application is driven using real world traces gathered at museums. By conducting a large-scale study with different control variables, we show that Cachier can lower latency, increase scalability and decrease infrastructure resource usage, while maintaining high accuracy.
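The expected-latency formulation can be illustrated with a toy calculation of the kind below: a request either hits the edge cache or pays an extra network round trip plus cloud compute. The functional form and constants are placeholders for illustration only, not the dissertation's measured model.

# Illustrative expected-latency calculation in the spirit of an edge caching
# model: cache hits are served at the edge, misses are offloaded to the cloud.
def expected_latency(hit_rate, t_edge_compute, t_cloud_compute, t_network):
    hit = t_edge_compute
    miss = t_edge_compute + t_network + t_cloud_compute   # lookup fails, offload to cloud
    return hit_rate * hit + (1.0 - hit_rate) * miss

for cached_fraction in (0.2, 0.5, 0.9):
    print(cached_fraction, expected_latency(cached_fraction, 30.0, 50.0, 120.0), "ms")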
ABNT, Harvard, Vancouver, APA styles, etc.
13

Sheikholeslami, Sina. "Ablation Programming for Machine Learning". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258413.

Full text of the source
Abstract:
As machine learning systems are being used in an increasing number of applications, from analysis of satellite sensory data and health-care analytics to smart virtual assistants and self-driving cars, they are also becoming more and more complex. This means that more time and computing resources are needed in order to train the models, and the number of design choices and hyperparameters will increase as well. Due to this complexity, it is usually hard to explain the effect of each design choice or component of the machine learning system on its performance. A simple approach for addressing this problem is to perform an ablation study, a scientific examination of a machine learning system in order to gain insight on the effects of its building blocks on its overall performance. However, ablation studies are currently not part of the standard machine learning practice. One of the key reasons for this is the fact that currently, performing an ablation study requires major modifications in the code as well as extra compute and time resources. On the other hand, experimentation with a machine learning system is an iterative process that consists of several trials. A popular approach for execution is to run these trials in parallel, on an Apache Spark cluster. Since Apache Spark follows the Bulk Synchronous Parallel model, parallel execution of trials includes several stages, between which there will be barriers. This means that in order to execute a new set of trials, all trials from the previous stage must be finished. As a result, we usually end up wasting a lot of time and computing resources on unpromising trials that could have been stopped soon after their start. We have attempted to address these challenges by introducing MAGGY, an open-source framework for asynchronous and parallel hyperparameter optimization and ablation studies with Apache Spark and TensorFlow. This framework allows for better resource utilization as well as ablation studies and hyperparameter optimization in a unified and extendable API.
ABNT, Harvard, Vancouver, APA styles, etc.
14

Ngo, Ha Nhi. "Apprentissage continu et prédiction coopérative basés sur les systèmes de multi-agents adaptatifs appliqués à la prévision de la dynamique du trafic". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES043.

Full text of the source
Abstract:
The rapid development of the hardware, software and communication technologies of transportation systems has brought promising opportunities as well as significant challenges for human society. Alongside the improvement in transport quality, the growing number of vehicles has led to frequent traffic congestion, especially in large cities at peak hours. Congestion has many consequences for economic cost, the environment, drivers' mental health and road safety. It is therefore important to forecast traffic dynamics and anticipate the onset of congestion, in order to prevent and mitigate disturbed traffic situations as well as dangerous collisions at the tail of a traffic jam. Nowadays, innovative intelligent transportation system technologies provide diverse, large-scale traffic datasets that are continuously collected and transferred between devices as real-time data streams. Consequently, many intelligent transportation system services have been developed based on big-data analysis, including traffic forecasting. However, traffic involves many varied and unpredictable factors that make modelling, analysing and learning from the historical evolution of traffic difficult. The system we propose therefore aims to fulfil the following five components of a traffic forecasting system: temporal analysis, spatial analysis, interpretability, stream analysis and adaptability to multiple data scales, in order to capture historical traffic patterns from data streams, provide an explicit explanation of input-output causality and enable different applications with various scenarios. To reach these objectives, we propose an agent model based on dynamic clustering and the theory of adaptive multi-agent systems (AMAS) in order to provide continuous learning and cooperative prediction mechanisms. The proposed agent model comprises two interdependent processes running in parallel: continuous local learning and cooperative prediction. The learning process aims to detect, at the agent level, different representative states from the received data streams. Based on dynamic clustering, this process allows the learning database to be updated continuously by adapting to new data. Simultaneously, the prediction process exploits the learned database in order to estimate the potential future states that may be observed. This process takes spatial-dependence analysis into account by integrating cooperation between agents and their neighbourhood. The interactions between agents are designed on the basis of AMAS theory with a set of self-adaptation mechanisms comprising self-organisation, self-correction and self-evolution, allowing the system to avoid disturbances, manage prediction quality and take newly learned information into account in the prediction computation. Experiments conducted in the context of traffic dynamics forecasting evaluate the system on generated and real datasets at different scales and in different scenarios.
The results obtained showed that our proposal performs better than existing methods when the traffic data exhibit strong variations. Moreover, the same conclusions drawn from different case studies reinforce the system's ability to adapt to multi-scale applications.
ABNT, Harvard, Vancouver, APA styles, etc.
15

Patvarczki, Jozsef. "Layout Optimization for Distributed Relational Databases Using Machine Learning". Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/291.

Full text of the source
Abstract:
A common problem when running Web-based applications is how to scale-up the database. The solution to this problem usually involves having a smart Database Administrator determine how to spread the database tables out amongst computers that will work in parallel. Laying out database tables across multiple machines so they can act together as a single efficient database is hard. Automated methods are needed to help eliminate the time required for database administrators to create optimal configurations. There are four operators that we consider that can create a search space of possible database layouts: 1) denormalizing, 2) horizontally partitioning, 3) vertically partitioning, and 4) fully replicating. Textbooks offer general advice that is useful for dealing with extreme cases - for instance you should fully replicate a table if the level of insert to selects is close to zero. But even this seemingly obvious statement is not necessarily one that will lead to a speed up once you take into account that some nodes might be a bottle neck. There can be complex interactions between the 4 different operators which make it even more difficult to predict what the best thing to do is. Instead of using best practices to do database layout, we need a system that collects empirical data on when these 4 different operators are effective. We have implemented a state based search technique to try different operators, and then we used the empirically measured data to see if any speed up occurred. We recognized that the costs of creating the physical database layout are potentially large, but it is necessary since we want to know the "Ground Truth" about what is effective and under what conditions. After creating a dataset where these four different operators have been applied to make different databases, we can employ machine learning to induce rules to help govern the physical design of the database across an arbitrary number of computer nodes. This learning process, in turn, would allow the database placement algorithm to get better over time as it trains over a set of examples. What this algorithm calls for is that it will try to learn 1) "What is a good database layout for a particular application given a query workload?" and 2) "Can this algorithm automatically improve itself in making recommendations by using machine learned rules to try to generalize when it makes sense to apply each of these operators?" There has been considerable research done in parallelizing databases where large amounts of data are shipped from one node to another to answer a single query. Sometimes the costs of shipping the data back and forth might be high, so in this work we assume that it might be more efficient to create a database layout where each query can be answered by a single node. To make this assumption requires that all the incoming query templates are known beforehand. This requirement can easily be satisfied in the case of a Web-based application due to the characteristic that users typically interact with the system through a web interface such as web forms. In this case, unseen queries are not necessarily answerable, without first possibly reconstructing the data on a single machine. Prior knowledge of these exact query templates allows us to select the best possible database table placements across multiple nodes. 
But in the case of trying to improve the efficiency of a Web-based application, a web site provider might feel that they are willing to suffer the inconvenience of not being able to answer an arbitrary query, if they are in turn provided with a system that runs more efficiently.
ABNT, Harvard, Vancouver, APA styles, etc.
16

Emeagwali, Ijeoma. "Using distributed machine learning to predict arterial blood pressure". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91441.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 57).
This thesis describes how to build a flow for machine learning on large volumes of data. The end result is EC-Flow, an end-to-end tool for using the EC-Star distributed machine learning system. The current problem is that analysing datasets on the order of hundreds of gigabytes requires overcoming many engineering challenges apart from the theory and algorithms used in performing the machine learning and analysing the results. EC-Star is a software package that can be used to perform such learning and analysis in a highly distributed fashion. However, there are many complexities to running very large datasets through such a system that increase its difficulty of use, because the user is still exposed to the low-level engineering challenges inherent to manipulating big data and configuring distributed systems. EC-Flow attempts to abstract away these difficulties, providing users with a simple interface for each step in the machine learning pipeline.
by Ijeoma Emeagwali.
M. Eng.
ABNT, Harvard, Vancouver, APA styles, etc.
17

Shi, Shaohuai. "Communication optimizations for distributed deep learning". HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/813.

Full text of the source
Abstract:
With the increasing amount of data and the growing computing power, deep learning techniques using deep neural networks (DNNs) have been successfully applied in many practical artificial intelligence applications. The mini-batch stochastic gradient descent (SGD) algorithm and its variants are the most widely used algorithms in training deep models. The SGD algorithm is an iterative algorithm that needs to update the model parameters many times by traversing the training data, which is very time-consuming even using a single powerful GPU or TPU. Therefore, it has become common practice to exploit multiple processors (e.g., GPUs or TPUs) to accelerate the training process using distributed SGD. However, the iterative nature of distributed SGD requires multiple processors to iteratively communicate with each other to collaboratively update the model parameters. The intensive communication cost easily becomes the system bottleneck and limits the system scalability. In this thesis, we study communication-efficient techniques for distributed SGD to improve the system scalability and thus accelerate the training process. We identify the performance issues in distributed SGD through benchmarking and modeling and then propose several communication optimization algorithms to address the communication issues. First, we build a performance model with a directed acyclic graph (DAG) to model the training process of distributed SGD and verify the model with extensive benchmarks on existing state-of-the-art deep learning frameworks including Caffe, MXNet, TensorFlow, and CNTK. Our benchmarking and modeling point out that existing optimizations for the communication problems are sub-optimal, which we need to address in this thesis. Second, to address the startup problem (due to the high latency of each communication) of layer-wise communications with wait-free backpropagation (WFBP), we propose an optimal gradient merging solution for WFBP, named MG-WFBP, that exploits the layer-wise property to well overlap the communication tasks with the computing tasks and can be adaptive to the training environments. Experiments are conducted on dense-GPU clusters with Ethernet and InfiniBand, and the results show that MG-WFBP can well address the startup problem in distributed training of layer-wise structured DNNs. Third, to make the highly computing-intensive training tasks possible in GPU clusters with low-bandwidth interconnects, we investigate gradient compression techniques in distributed training. Top-k sparsification can well compress the communication traffic with little impact on the model convergence, but it suffers from a communication complexity that is linear in the number of workers, so top-k sparsification cannot scale well in large-scale clusters. To address the problem, we propose a global top-k (gTop-k) sparsification algorithm that reduces the communication complexity to be logarithmic in the number of workers. We also provide detailed theoretical analysis for the gTop-k SGD training algorithm, and the theoretical results show that our gTop-k SGD has the same order of convergence rate as SGD. Experiments are conducted on up to a 64-GPU cluster to verify that gTop-k SGD significantly improves the system scalability with only a slight impact on the model convergence.
Lastly, to enjoy both the benefits of the pipelining technique and the gradient sparsification algorithm, we propose a new distributed training algorithm, layer-wise adaptive gradient sparsification SGD (LAGS-SGD), which supports layer-wise sparsification and communication, and we theoretically and empirically prove that LAGS-SGD preserves the convergence properties. To further alleviate the impact of the startup problem of layer-wise communications in LAGS-SGD, we also propose an optimal gradient merging solution for LAGS-SGD, named OMGS-SGD, and theoretically prove its optimality. The experimental results on a 16-node GPU cluster connected by 1Gbps Ethernet show that OMGS-SGD can always improve the system scalability while the model convergence properties are not affected.
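The compression primitive discussed above, top-k gradient sparsification with local error feedback, can be sketched in a few lines; the global selection step that makes gTop-k logarithmic in the number of workers is not shown here, and the gradient values are random placeholders.

# Top-k gradient sparsification: send only the k largest-magnitude entries and
# keep the dropped mass in a local residual (error feedback) for later steps.
import numpy as np

def sparsify_top_k(grad, residual, k):
    acc = grad + residual                          # add back previously dropped mass
    idx = np.argpartition(np.abs(acc), -k)[-k:]    # indices of the k largest magnitudes
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]
    return sparse, acc - sparse                    # (what is sent, new residual)

rng = np.random.default_rng(0)
residual = np.zeros(1000)
for step in range(3):
    grad = rng.normal(size=1000)
    sparse, residual = sparsify_top_k(grad, residual, k=10)
    print("step", step, "nonzeros sent:", np.count_nonzero(sparse))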
ABNT, Harvard, Vancouver, APA styles, etc.
18

Ramakrishnan, Naveen. "Distributed Learning Algorithms for Sensor Networks". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1284991632.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
19

Yaokai, Yang. "Effective Phishing Detection Using Machine Learning Approach". Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1544189633297122.

Full text of the source
ABNT, Harvard, Vancouver, APA styles, etc.
20

Ferdowsi, Khosrowshahi Aidin. "Distributed Machine Learning for Autonomous and Secure Cyber-physical Systems". Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99466.

Full text of the source
Abstract:
Autonomous cyber-physical systems (CPSs) such as autonomous connected vehicles (ACVs), unmanned aerial vehicles (UAVs), critical infrastructure (CI), and the Internet of Things (IoT) will be essential to the functioning of our modern economies and societies. Therefore, maintaining the autonomy of CPSs as well as their stability, robustness, and security (SRS) in face of exogenous and disruptive events is a critical challenge. In particular, it is crucial for CPSs to be able to not only operate optimally in the vicinity of a normal state but to also be robust and secure so as to withstand potential failures, malfunctions, and intentional attacks. However, to evaluate and improve the SRS of CPSs one must overcome many technical challenges such as the unpredictable behavior of a CPS's cyber-physical environment, the vulnerability to various disruptive events, and the interdependency between CPSs. The primary goal of this dissertation is, thus, to develop novel foundational analytical tools, that weave together notions from machine learning, game theory, and control theory, in order to study, analyze, and optimize SRS of autonomous CPSs. Towards achieving this overarching goal, this dissertation led to several major contributions. First, a comprehensive control and learning framework was proposed to thwart cyber and physical attacks on ACV networks. This framework brings together new ideas from optimal control and reinforcement learning (RL) to derive a new optimal safe controller for ACVs in order to maximize the street traffic flow while minimizing the risk of accidents. Simulation results show that the proposed optimal safe controller outperforms the current state of the art controllers by maximizing the robustness of ACVs to physical attacks. Furthermore, using techniques from convex optimization and deep RL a joint trajectory and scheduling policy is proposed in UAV-assisted networks that aims at maintaining the freshness of ground node data at the UAV. The analytical and simulation results show that the proposed policy can outperform policies such as discretized state RL and value-based methods in terms of maximizing the freshness of data. Second, in the IoT domain, a novel watermarking algorithm, based on long short term memory cells, is proposed for dynamic authentication of IoT signals. The proposed watermarking algorithm is coupled with a game-theoretic framework so as to enable efficient authentication in massive IoT systems. Simulation results show that using our approach, IoT messages can be transmitted from IoT devices with an almost 100% reliability. Next, a brainstorming generative adversarial network (BGAN) framework is proposed. It is shown that this framework can learn to generate real-looking data in a distributed fashion while preserving the privacy of agents (e.g. IoT devices, ACVs, etc). The analytical and simulation results show that the proposed BGAN architecture allows heterogeneous neural network designs for agents, works without reliance on a central controller, and has a lower communication overhead compared to other state-of-the-art distributed architectures. Last, but not least, the SRS challenges of interdependent CI (ICI) are addressed. Novel game-theoretic frameworks are proposed that allow the ICI administrator to assign different protection levels on ICI components so as to maximize the expected ICI security. The mixed-strategy Nash equilibria of the games are derived analytically.
Simulation results coupled with theoretical analysis show that, using the proposed games, the administrator can maximize the security level in ICI components. In summary, this dissertation provided major contributions across the areas of CPSs, machine learning, game theory, and control theory with the goal of ensuring SRS across various domains such as autonomous vehicle networks, IoT systems, and ICIs. The proposed approaches provide the necessary fundamentals that can lay the foundations of SRS in CPSs and pave the way toward the practical deployment of autonomous CPSs and applications.
Doctor of Philosophy
In order to deliver innovative technological services to their residents, smart cities will rely on autonomous cyber-physical systems (CPSs) such as cars, drones, sensors, power grids, and other networks of digital devices. Maintaining stability, robustness, and security (SRS) of those smart city CPSs is essential for the functioning of our modern economies and societies. SRS can be defined as the ability of a CPS, such as an autonomous vehicular system, to operate without disruption in its quality of service. In order to guarantee SRS of CPSs one must overcome many technical challenges such as CPSs' vulnerability to various disruptive events such as natural disasters or cyber attacks, limited resources, scale, and interdependency. Such challenges must be considered for CPSs in order to design vehicles that are controlled autonomously and whose motion is robust against unpredictable events in their trajectory, to implement stable Internet of digital devices that work with a minimum communication delay, or to secure critical infrastructure to provide services such as electricity, gas, and water systems. The primary goal of this dissertation is, thus, to develop novel foundational analytical tools, that weave together notions from machine learning, game theory, and control theory, in order to study, analyze, and optimize SRS of autonomous CPSs which eventually will improve the quality of service provided by smart cities. To this end, various frameworks and effective algorithms are proposed in order to enhance the SRS of CPSs and pave the way toward the practical deployment of autonomous CPSs and applications. The results show that the developed solutions can enable a CPS to operate efficiently while maintaining its SRS. As such, the outcomes of this research can be used as a building block for the large deployment of smart city technologies that can be of immense benefit to tomorrow's societies.
ABNT, Harvard, Vancouver, APA styles, etc.
21

Chapala, Usha Kiran, and Sridhar Peteti. "Continuous Video Quality of Experience Modelling using Machine Learning Model Trees". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 1996. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17814.

Full text of the source
Abstract:
Adaptive video streaming is perpetually influenced by unpredictable network conditions, which cause playback interruptions like stalling, rebuffering and video bit rate fluctuations. This leads to potential degradation of end-user Quality of Experience (QoE) and may make users churn from the service. Video QoE modelling that precisely predicts the end users' QoE under these unstable conditions is therefore an immediate concern, and root cause analysis of these degradations is required by the service provider. These sudden changes in trend are not visible from monitoring the data from the underlying network service, so it is challenging to detect such a change and model the instantaneous QoE. For this modelling, continuous-time QoE ratings are taken into consideration rather than the overall end QoE rating per video. To reduce the risk of users churning, the network providers should give the best quality to the users. In this thesis, we propose QoE modelling to analyze how user reactions change over time using machine learning models. The machine learning models are used to predict the QoE ratings and the change patterns in ratings. We test the model on a publicly available video quality dataset which contains the users' subjective QoE ratings for the network distortions. The M5P model tree algorithm is used for the prediction of user ratings over time. The M5P model gives mathematical equations and leads to more insights through those equations. Results of the algorithm show that the model tree is a good approach for predicting the continuous QoE and for detecting change points in ratings, and they show to which extent these algorithms can be used to estimate changes. The analysis of the model provides valuable insights by examining exponential transitions between different levels of predicted ratings. The outcome explains the user behavior: when the quality decreases, the user ratings decrease faster than they increase when the quality improves over time. The earlier work on the exponential transitions of instantaneous QoE over time is supported by the model tree with respect to user reactions to sudden changes such as video freezes.
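M5P model trees (as in Weka) split the input space with a tree and fit a linear model in each leaf, which is what yields the interpretable per-region equations mentioned above. The sketch below approximates that idea with a shallow scikit-learn regression tree plus per-leaf linear regressions on synthetic ratings; it is a stand-in, not the Weka M5P implementation or the public QoE dataset used in the thesis.

# Rough model-tree stand-in: a shallow regression tree defines regions, and a
# linear model is fitted inside each leaf, giving per-region rating equations.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(2000, 2))               # e.g. bitrate level, stall duration
y = np.where(X[:, 1] > 5, 4.5 - 0.4 * X[:, 1],
             2.0 + 0.3 * X[:, 0]) + 0.1 * rng.normal(size=2000)

tree = DecisionTreeRegressor(max_depth=2).fit(X, y)  # tree picks the region
leaf_of = tree.apply(X)
leaf_models = {leaf: LinearRegression().fit(X[leaf_of == leaf], y[leaf_of == leaf])
               for leaf in np.unique(leaf_of)}       # linear equation per leaf

x_new = np.array([[6.0, 7.0]])
leaf = tree.apply(x_new)[0]
print("leaf", leaf, "coefficients", leaf_models[leaf].coef_,
      "prediction", leaf_models[leaf].predict(x_new)[0])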
ABNT, Harvard, Vancouver, APA styles, etc.
22

Dinh, The Canh. "Distributed Algorithms for Fast and Personalized Federated Learning". Thesis, The University of Sydney, 2023. https://hdl.handle.net/2123/30019.

Full text of the source
Abstract:
The significant increase in the number of cutting-edge user equipment (UE) results in the phenomenal growth of the data volume generated at the edge. This shift fuels the booming trend of an emerging technique named Federated Learning (FL). In contrast to traditional methods in which data is collected and processed centrally, FL builds a global model from the contributions of the UEs' models without sending private data, thus effectively ensuring data privacy. However, FL faces challenges in non-identically distributed (non-IID) data, communication cost, and convergence rate. Firstly, we propose first-order optimization FL algorithms named FedApprox and FEDL to improve the convergence rate. We propose FedApprox exploiting proximal stochastic variance-reduced gradient methods and extract insights from convergence conditions via the algorithm's parameter control. We then propose FEDL to handle heterogeneous UE data and characterize the trade-off between local computation and global communication. Experimentally, FedApprox outperforms vanilla FedAvg while FEDL outperforms FedApprox and FedAvg. Secondly, we consider the communication between edges to be more costly than local computational overhead. We propose DONE, a distributed approximate Newton-type algorithm for communication-efficient federated edge learning. DONE approximates the Newton direction using classical Richardson iteration on each edge. Experimentally, DONE attains a comparable performance to Newton's method and outperforms first-order algorithms. Finally, we address the non-IID issue by proposing pFedMe, a personalized FL algorithm using Moreau envelopes. pFedMe achieves quadratic speedup for strongly convex objectives and sublinear speedup of order 2/3 for smooth nonconvex objectives. We then propose FedU, a Federated Multitask Learning algorithm using Laplacian regularization to leverage the relationships among the users' models. Experimentally, pFedMe outperforms FedAvg and Per-FedAvg, while FedU outperforms pFedMe and MOCHA.
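The personalization idea behind pFedMe can be illustrated by its per-client proximal step: the personal model minimizes the local loss plus a Moreau-envelope term (lambda/2)||theta - w||^2 tying it to the global model w. The sketch below solves this step in closed form for a quadratic local loss; the full pFedMe update schedule, and FedApprox/FEDL/DONE/FedU, are not reproduced, and all data are synthetic.

# Personalization step of a pFedMe-style client: minimize the local least-squares
# loss plus a proximal term toward the global model w_global.
import numpy as np

def personalized_model(X, y, w_global, lam):
    d = X.shape[1]
    # argmin_theta ||X theta - y||^2 / n + (lam/2) ||theta - w_global||^2
    A = 2 * X.T @ X / len(y) + lam * np.eye(d)
    b = 2 * X.T @ y / len(y) + lam * w_global
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
w_global = np.zeros(3)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
for lam in (0.1, 10.0):          # small lam -> personal fit, large lam -> stays near global
    theta = personalized_model(X, y, w_global, lam)
    print("lam", lam, "theta", np.round(theta, 2))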
ABNT, Harvard, Vancouver, APA styles, etc.
23

Dai, Wei. "Learning with Staleness". Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1209.

Full text of the source
Abstract:
A fundamental assumption behind most machine learning (ML) algorithms and analyses is the sequential execution. That is, any update to the ML model can be immediately applied and the new model is always available for the next algorithmic step. This basic assumption, however, can be costly to realize, when the computation is carried out across multiple machines, linked by commodity networks that are usually 104 times slower than the memory speed due to fundamental hardware limitations. As a result, concurrent ML computation in the distributed settings often needs to handle delayed updates and perform learning in the presence of staleness. This thesis characterizes learning with staleness from three directions: (1) We extend the theoretical analyses of a number of classical ML algorithms, including stochastic gradient descent, proximal gradient descent on non-convex problems, and Frank-Wolfe algorithms, to explicitly incorporate staleness into their convergence characterizations. (2)We conduct simulation and large-scale distributed experiments to study the empirical effects of staleness on ML algorithms under indeterministic executions. Our results reveal that staleness is a key parameter governing the convergence speed for all considered ML algorithms, with varied ramifications. (3) We design staleness-minimizing parameter server systems by optimizing synchronization methods to effectively reduce the runtime staleness. The proposed optimization of a bounded consistency model utilizes the additional network bandwidths to communicate updates eagerly, relieving users of the burden to tune the staleness level. By minimizing staleness at the framework level, our system stabilizes diverging optimization paths and substantially accelerates convergence across ML algorithms without any modification to the ML programs.
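The effect described above can be reproduced on a toy problem: the sketch below applies, at step t, a gradient computed on the parameters from s steps earlier, mimicking delayed updates in a parameter-server setting. The quadratic objective, step size and delays are arbitrary choices for illustration; increasing the staleness s typically slows convergence.

# Toy simulation of gradient descent with stale gradients on a quadratic
# objective 0.5 x'Ax: the gradient used at step t was computed s steps ago.
import numpy as np

def run(staleness, steps=300, lr=0.02, d=10, seed=0):
    rng = np.random.default_rng(seed)
    A = np.diag(rng.uniform(0.5, 2.0, size=d))
    x = np.ones(d)
    history = [x.copy()]
    for t in range(steps):
        x_old = history[max(0, t - staleness)]   # parameters the worker actually saw
        x = x - lr * (A @ x_old)                 # apply the stale gradient
        history.append(x.copy())
    return 0.5 * x @ A @ x

for s in (0, 4, 16):
    print("staleness", s, "final objective", run(s))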
ABNT, Harvard, Vancouver, APA styles, etc.
24

Wagy, Mark David. "Enabling Machine Science through Distributed Human Computing". ScholarWorks @ UVM, 2016. http://scholarworks.uvm.edu/graddis/618.

Full text of the source
Abstract:
Distributed human computing techniques have been shown to be effective ways of accessing the problem-solving capabilities of a large group of anonymous individuals over the World Wide Web. They have been successfully applied to such diverse domains as computer security, biology and astronomy. The success of distributed human computing in various domains suggests that it can be utilized for complex collaborative problem solving. Thus it could be used for "machine science": utilizing machines to facilitate the vetting of disparate human hypotheses for solving scientific and engineering problems. In this thesis, we show that machine science is possible through distributed human computing methods for some tasks. By enabling anonymous individuals to collaborate in a way that parallels the scientific method -- suggesting hypotheses, testing and then communicating them for vetting by other participants -- we demonstrate that a crowd can together define robot control strategies, design robot morphologies capable of fast-forward locomotion and contribute features to machine learning models for residential electric energy usage. We also introduce a new methodology for empowering a fully automated robot design system by seeding it with intuitions distilled from the crowd. Our findings suggest that increasingly large, diverse and complex collaborations that combine people and machines in the right way may enable problem solving in a wide range of fields.
25

Jeon, Sung-eok. "Near-Optimality of Distributed Network Management with a Machine Learning Approach". Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16136.

Full text of the source
Abstract:
An analytical framework is developed for distributed management of large networks in which each node makes its decisions locally. Two issues remain open. One is whether a distributed algorithm would result in near-optimal management. The other is complexity, i.e., whether a distributed algorithm would scale gracefully with the network size. We study these issues through modeling, approximation, and randomized distributed algorithms. For the near-optimality issue, we first derive a global probabilistic model of network management variables which characterizes the complex spatial dependence of the variables. The spatial dependence results from externally imposed management constraints and internal properties of communication environments. We then apply probabilistic graphical models from machine learning to show when and whether the global model can be approximated by a local model. This study results in a sufficient condition for distributed management to be nearly optimal. We then show how to obtain a near-optimal configuration through decentralized adaptation of local configurations. We next derive a near-optimal distributed inference algorithm based on the derived local model. We characterize the trade-off between near-optimality and complexity of distributed and statistical management. We validate our formulation and theory through simulations.
26

Lee, Dong Ryeol. "A distributed kernel summation framework for machine learning and scientific applications". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44727.

Full text of the source
Abstract:
The class of computational problems I consider in this thesis shares the common trait of requiring consideration of pairs (or higher-order tuples) of data points. I focus on the problem of kernel summation operations ubiquitous in many data mining and scientific algorithms. In machine learning, kernel summations appear in popular kernel methods which can model nonlinear structures in data. Kernel methods include many non-parametric methods such as kernel density estimation, kernel regression, Gaussian process regression, kernel PCA, and kernel support vector machines (SVM). In computational physics, kernel summations occur inside the classical N-body problem for simulating positions of a set of celestial bodies or atoms. This thesis attempts to marry, for the first time, the best relevant techniques in parallel computing, where kernel summations are in low dimensions, with the best general-dimension algorithms from the machine learning literature. We provide a unified, efficient parallel kernel summation framework that can utilize: (1) various types of deterministic and probabilistic approximations that may be suitable for both low and high-dimensional problems with a large number of data points; (2) indexing the data using any multi-dimensional binary tree with both distributed memory (MPI) and shared memory (OpenMP/Intel TBB) parallelism; (3) a dynamic load balancing scheme to adjust work imbalances during the computation. I will first summarize my previous research in serial kernel summation algorithms. This work started from Greengard/Rokhlin's earlier work on fast multipole methods for the purpose of approximating potential sums of many particles. The contributions of this part of the thesis include the following: (1) a reinterpretation of Greengard/Rokhlin's work for the computer science community; (2) the extension of the algorithms to use a larger class of approximation strategies, i.e. probabilistic error bounds via Monte Carlo techniques; (3) the multibody series expansion: the generalization of the theory of fast multipole methods to handle interactions of more than two entities; (4) the first O(N) proof of the batch approximate kernel summation using a notion of intrinsic dimensionality. Then I move on to the problem of parallelizing the kernel summations and tackling the scaling of two other kernel methods, Gaussian process regression (kernel matrix inversion) and kernel PCA (kernel matrix eigendecomposition). The artifact of this thesis has contributed to an open-source machine learning package called MLPACK, which was first demonstrated at NIPS 2008 and subsequently at the NIPS 2011 Big Learning Workshop. Completing a portion of this thesis involved the utilization of high performance computing resources at XSEDE (eXtreme Science and Engineering Discovery Environment) and NERSC (National Energy Research Scientific Computing Center).
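For reference, the sketch below contrasts an exact Gaussian kernel summation with a plain Monte Carlo subsampling estimate, a crude stand-in for the probabilistic approximations mentioned above; the tree indexing, error bounds and MPI/OpenMP parallelism of the actual framework are not reproduced, and the bandwidth and sample size are arbitrary.

```python
import numpy as np

# Kernel summation: for each query q, compute sum_j exp(-||q - r_j||^2 / (2 h^2)).
# Exact evaluation touches every reference point; a Monte Carlo subsample gives
# an unbiased estimate of the same sum at a fraction of the cost.
rng = np.random.default_rng(1)
refs = rng.normal(size=(20000, 3))      # reference points
queries = rng.normal(size=(5, 3))       # a few query points
h = 0.7                                 # assumed kernel bandwidth

def exact_sum(q):
    d2 = np.sum((refs - q) ** 2, axis=1)
    return np.exp(-d2 / (2 * h * h)).sum()

def mc_sum(q, m=1000):
    idx = rng.integers(0, len(refs), size=m)
    d2 = np.sum((refs[idx] - q) ** 2, axis=1)
    return np.exp(-d2 / (2 * h * h)).mean() * len(refs)

for q in queries:
    print(f"exact = {exact_sum(q):9.2f}   monte carlo = {mc_sum(q):9.2f}")
```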
27

Zhang, Bingwen. "Change-points Estimation in Statistical Inference and Machine Learning Problems". Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/344.

Full text of the source
Abstract:
"Statistical inference plays an increasingly important role in science, finance and industry. Despite the extensive research and wide application of statistical inference, most of the efforts focus on uniform models. This thesis considers the statistical inference in models with abrupt changes instead. The task is to estimate change-points where the underlying models change. We first study low dimensional linear regression problems for which the underlying model undergoes multiple changes. Our goal is to estimate the number and locations of change-points that segment available data into different regions, and further produce sparse and interpretable models for each region. To address challenges of the existing approaches and to produce interpretable models, we propose a sparse group Lasso (SGL) based approach for linear regression problems with change-points. Then we extend our method to high dimensional nonhomogeneous linear regression models. Under certain assumptions and using a properly chosen regularization parameter, we show several desirable properties of the method. We further extend our studies to generalized linear models (GLM) and prove similar results. In practice, change-points inference usually involves high dimensional data, hence it is prone to tackle for distributed learning with feature partitioning data, which implies each machine in the cluster stores a part of the features. One bottleneck for distributed learning is communication. For this implementation concern, we design communication efficient algorithm for feature partitioning data sets to speed up not only change-points inference but also other classes of machine learning problem including Lasso, support vector machine (SVM) and logistic regression."
28

Reddi, Sashank Jakkam. "New Optimization Methods for Modern Machine Learning". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1116.

Full text of the source
Abstract:
Modern machine learning systems pose several new statistical, scalability, privacy and ethical challenges. With the advent of massive datasets and increasingly complex tasks, scalability has especially become a critical issue in these systems. In this thesis, we focus on fundamental challenges related to scalability, such as computational and communication efficiency, in modern machine learning applications. The underlying central message of this thesis is that classical statistical thinking leads to highly effective optimization methods for modern big data applications. The first part of the thesis investigates optimization methods for solving large-scale nonconvex Empirical Risk Minimization (ERM) problems. Such problems have surged into prominence, notably through deep learning, and have led to exciting progress. However, our understanding of optimization methods suitable for these problems is still very limited. We develop and analyze a new line of optimization methods for nonconvex ERM problems, based on the principle of variance reduction. We show that our methods exhibit fast convergence to stationary points and improve the state-of-the-art in several nonconvex ERM settings, including nonsmooth and constrained ERM. Using similar principles, we also develop novel optimization methods that provably converge to second-order stationary points. Finally, we show that the key principles behind our methods can be generalized to overcome challenges in other important problems such as Bayesian inference. The second part of the thesis studies two critical aspects of modern distributed machine learning systems: asynchronicity and communication efficiency of optimization methods. We study various asynchronous stochastic algorithms with fast convergence for convex ERM problems and show that these methods achieve near-linear speedups in sparse settings common to machine learning. Another key factor governing the overall performance of a distributed system is its communication efficiency. Traditional optimization algorithms used in machine learning are often ill-suited for distributed environments with high communication cost. To address this issue, we discuss two different paradigms to achieve communication efficiency of algorithms in distributed environments and explore new algorithms with better communication complexity.
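The variance-reduction principle underlying the first part can be sketched, for a smooth least-squares finite sum, with a bare-bones SVRG loop; the step size and problem are illustrative assumptions, and the nonconvex, nonsmooth and second-order variants studied in the thesis are not reproduced here.

```python
import numpy as np

# Bare-bones SVRG on a least-squares finite sum f(w) = (1/n) * sum_i f_i(w):
# the outer loop computes the full gradient at a snapshot, and the inner loop
# uses the control variate  g_i(w) - g_i(snapshot) + full_grad  as a
# variance-reduced stochastic gradient.
rng = np.random.default_rng(3)
n, d = 500, 20
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.01 * rng.normal(size=n)

def grad_i(w, i):
    return (A[i] @ w - b[i]) * A[i]

w = np.zeros(d)
lr, outer_epochs = 0.01, 20
for _ in range(outer_epochs):
    snapshot = w.copy()
    full_grad = A.T @ (A @ snapshot - b) / n
    for _ in range(n):
        i = rng.integers(n)
        v = grad_i(w, i) - grad_i(snapshot, i) + full_grad
        w = w - lr * v
print("distance to w_true:", np.linalg.norm(w - w_true))
```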
29

Tummala, Akhil. "Self-learning algorithms applied in Continuous Integration system". Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16675.

Full text of the source
Abstract:
Context: Continuous Integration (CI) is a software development practice in which a developer integrates code into a shared repository, after which an automated system verifies the code and runs automated test cases to find integration errors. For this research, Ericsson's CI system is used. The tests performed in CI are regression tests. Based on their time scope, the regression test suites are categorized into hourly and daily test suites. The hourly test is performed on all the commits made in a day, whereas the daily test is performed at night on the latest build that passed the hourly test. Here, the hourly and daily test suites are static, and the hourly test suite is a subset of the daily test suite. Since the daily test is performed at the end of the day, its results are obtained only on the next day, which delays the feedback to the developers regarding integration errors. To mitigate this problem, research is performed to find the possibility of creating a learning model and integrating it into the CI system, which can then create a dynamic hourly test suite for faster feedback. Objectives: This research aims to find a suitable machine learning algorithm for the CI system and investigate the feasibility of creating self-learning test machinery. This goal is achieved by examining the CI system and finding out what type of data is required for creating a learning model for prioritizing the test cases. Once the necessary data is obtained, the selected algorithms are evaluated to find the suitable learning algorithm for creating self-learning test machinery. It is then investigated whether the created learning model can be integrated into the CI workflow to create the self-learning test machinery. Methods: In this research, an experiment is conducted to evaluate the learning algorithms. For this experimentation, the data is provided by Ericsson AB, Gothenburg. The dataset consists of the daily test information and the test case results. The algorithms evaluated in this experiment are Naïve Bayes, support vector machines, and decision trees. The evaluation is done by performing leave-one-out cross-validation, and the learning algorithms' performance is measured by prediction accuracy. After obtaining the accuracies, the algorithms are compared to find the suitable machine learning algorithm for the CI system. Results: Based on the experiment results, it is found that support vector machines outperformed Naïve Bayes and decision tree algorithms. However, due to challenges present in the current CI system, it is not feasible to integrate the created learning model into the CI. The primary challenge faced by the CI system is that mapping a test case failure to its respective commit is not possible (it cannot be determined which commit caused the test case to fail). This is because the daily test is performed on the latest build, which is the combination of the commits made that day. Another challenge is low data storage. Due to this, problems like the curse of dimensionality and class imbalance have occurred. Conclusions: By conducting this research, a suitable learning algorithm is identified for creating self-learning machinery, and the challenges of integrating the model into CI are identified. Based on the results obtained from the experiment, it is recognized that support vector machines have higher prediction accuracy in test case result classification compared to Naïve Bayes and decision trees.
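The evaluation protocol described here, leave-one-out cross-validation of Naïve Bayes, support vector machines and decision trees compared by prediction accuracy, can be reproduced in outline with scikit-learn; since the Ericsson test-result data are not public, the sketch substitutes a synthetic binary dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Leave-one-out comparison of the three classifier families considered in the
# thesis, on a synthetic stand-in for the (non-public) test-case result data.
X, y = make_classification(n_samples=120, n_features=10, n_informative=5,
                           class_sep=1.0, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf", C=1.0),
    "Decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
}
loo = LeaveOneOut()
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=loo).mean()
    print(f"{name:14s} LOO accuracy: {acc:.3f}")
```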
30

Abd, Gaus Yona Falinie. "Artificial intelligence system for continuous affect estimation from naturalistic human expressions". Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16348.

Full text of the source
Abstract:
The analysis of human expressions and automatic affect estimation from them has been acknowledged as an active research topic in the computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined acted expressions under very controlled conditions, so they are not robust enough for real-life recognition tasks with subject variation, acoustic surroundings and illumination change. In this thesis, an artificial intelligence system is proposed to continuously (represented along a continuum, e.g., from -1 to +1) estimate affect behaviour in terms of latent dimensions (e.g., arousal and valence) from naturalistic human expressions. To tackle these issues, feature representation and machine learning strategies are addressed. In feature representation, human expression is represented by modalities such as audio, video, physiological signals and text. Hand-crafted features are extracted from each modality per frame, in order to match the consecutive affect labels. However, the extracted features may be missing information due to factors such as background noise or lighting conditions. The Haar wavelet transform is employed to determine whether a noise cancellation mechanism in feature space should be considered in the design of the affect estimation system. Besides hand-crafted features, deep learning features are also analysed layer-wise, across convolutional and fully connected layers. Convolutional neural networks such as AlexNet, VGGFace and ResNet have been selected as deep learning architectures for feature extraction on top of facial expression images. Then, a multimodal fusion scheme is applied, fusing deep learning features and hand-crafted features together to improve performance. In machine learning strategies, a two-stage regression approach is introduced. In the first stage, baseline regression methods such as support vector regression are applied to estimate each affect dimension per time step. Then, in the second stage, a subsequent model such as a time delay neural network, long short-term memory network or Kalman filter is proposed to model the temporal relationships between consecutive estimates of each affect dimension. In doing so, the temporal information employed by the subsequent model is not biased by the high variability present in consecutive frames and, at the same time, the network can exploit the slowly changing emotional dynamics more efficiently. Following the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is elaborated. Continuous emotion recognition in the wild is leveraged by investigating mathematical modelling for each emotion dimension. Linear regression, exponent weighted decision fusion and multi-gene genetic programming are implemented to quantify the relationship between the modalities. In summary, the research work presented in this thesis reveals a fundamental approach to automatically estimating affect values continuously from naturalistic human expressions. The proposed system, which consists of feature smoothing, deep learning features, a two-stage regression framework and fusion between modalities using mathematical equations, is demonstrated. It offers a strong basis for the development of artificial intelligence systems for continuous affect estimation, and more broadly for building real-time emotion recognition systems for human-computer interaction.
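The two-stage regression idea, frame-wise prediction followed by temporal modelling, can be sketched as follows; a plain exponential smoother stands in for the TDNN, LSTM or Kalman filter second stage described above, and the synthetic features and valence signal are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Two-stage continuous affect estimation (sketch): stage 1 predicts valence
# frame by frame with support vector regression; stage 2 models temporal
# structure, here with a simple exponential smoother standing in for the
# TDNN / LSTM / Kalman filter options described above.
rng = np.random.default_rng(4)
T = 600
valence = 0.8 * np.sin(np.linspace(0, 6 * np.pi, T))             # slowly varying ground truth
features = np.column_stack([valence + 0.3 * rng.normal(size=T) for _ in range(8)])

split = 400
stage1 = SVR(kernel="rbf", C=1.0).fit(features[:split], valence[:split])
raw_pred = stage1.predict(features[split:])

alpha, smoothed = 0.3, [raw_pred[0]]
for p in raw_pred[1:]:
    smoothed.append(alpha * p + (1 - alpha) * smoothed[-1])       # stage 2: temporal model
smoothed = np.array(smoothed)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("stage 1 (frame-wise) RMSE:", rmse(raw_pred, valence[split:]))
print("two-stage RMSE:           ", rmse(smoothed, valence[split:]))
```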
31

Alzubi, Omar A. "Designing machine learning ensembles : a game coalition approach". Thesis, Swansea University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.678293.

Full text of the source
32

Svensson, Frida. "Scalable Distributed Reinforcement Learning for Radio Resource Management". Thesis, Linköpings universitet, Tillämpad matematik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177822.

Full text of the source
Abstract:
There is large potential for automation and optimization in radio access networks (RANs) using a data-driven approach, to efficiently handle the increase in complexity due to the steep growth in traffic and the new technologies introduced with the development of 5G. Reinforcement learning (RL) has natural applications in RAN control loops, such as link adaptation, interference management and power control, at the different timescales commonly occurring in the RAN context. Elevating the status of data-driven solutions in the RAN and building a new, scalable, distributed and data-friendly RAN architecture will be needed to competitively tackle the challenges of coming 5G networks. In this work, we propose a systematic, efficient and robust methodology for applying RL to different control problems. First, the proposed methodology is evaluated using a well-known control problem. Then, it is adapted to a real-world RAN scenario. Extensive simulation results are provided to show the effectiveness and potential of the proposed approach. The methodology was successfully created, but the results on the RAN simulator were not yet mature.
33

Immaneni, Raghu Nandan. "An efficient approach to machine learning based text classification through distributed computing". Thesis, California State University, Long Beach, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1603338.

Full text of the source
Abstract:

Text Classification is one of the classical problems in computer science, which is primarily used for categorizing data, spam detection, anonymization, information extraction, text summarization etc. Given the large amounts of data involved in the above applications, automated and accurate training models and approaches to classify data efficiently are needed.

In this thesis, an extensive study of the interaction between natural language processing, information retrieval and text classification has been performed. A case study named “keyword extraction” that deals with ‘identifying keywords and tags from millions of text questions’ is used as a reference. Different classifiers are implemented using MapReduce paradigm on the case study and the experimental results are recorded using two newly built distributed computing Hadoop clusters. The main aim is to enhance the prediction accuracy, to examine the role of text pre-processing for noise elimination and to reduce the computation time and resource utilization on the clusters.
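The MapReduce formulation of the keyword-extraction case study can be illustrated with a pure-Python map and reduce phase; the Hadoop job machinery is omitted, and the tiny question/tag records below are made-up examples.

```python
from collections import defaultdict
from itertools import chain
import re

# MapReduce-style sketch for the keyword-extraction case study: the map phase
# emits ((tag, token), 1) pairs from each question, the reduce phase counts
# token frequencies per tag, which could then feed a simple classifier.
questions = [
    {"text": "How do I tune garbage collection in the JVM?", "tags": ["java"]},
    {"text": "Vectorised loops with numpy arrays", "tags": ["python", "numpy"]},
    {"text": "Why is my java ArrayList slow?", "tags": ["java"]},
]

STOPWORDS = {"how", "do", "i", "in", "the", "with", "why", "is", "my"}

def mapper(record):
    tokens = re.findall(r"[a-z]+", record["text"].lower())
    for tag in record["tags"]:
        for tok in tokens:
            if tok not in STOPWORDS:          # simple pre-processing / noise removal
                yield (tag, tok), 1

def reducer(pairs):
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return counts

counts = reducer(chain.from_iterable(mapper(r) for r in questions))
for (tag, tok), c in sorted(counts.items()):
    print(f"{tag:8s} {tok:12s} {c}")
```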

34

Costantini, Marina. "Optimization methods over networks for popular content delivery and distributed machine learning". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS182.

Full text of the source
Abstract:
The ever-increasing number of users and applications on the Internet poses a number of challenges for network operators and engineers in keeping up with high traffic demands. In this scenario, making efficient use of the available resources has become imperative. In this thesis, we develop optimization methods to improve the utilization of the network in two specific applications enabled by the Internet: network edge caching and distributed model training. Network edge caching is a recent technique that proposes to store, at the network edge, copies of contents that have a high probability of being requested, in order to reduce latency and improve the overall user experience. Traditionally, when a user requests a web page or application, the request is sent to a remote server that stores the data. The server retrieves the requested data and sends it back to the user. This process can be slow and can lead to latency and congestion issues, especially when multiple users access the same data simultaneously. To address this issue, network operators can deploy edge caching servers close to end users. These servers are then filled during off-peak hours with contents that have a high probability of being requested, so that during times of high traffic the user can still retrieve them quickly and at high quality. On the other hand, distributed model training, or more generally distributed optimization, is a method for training large-scale machine learning models using multiple agents that work together to find the optimal parameters of the model. In such settings, the agents interleave local computation steps with communication steps to train a model that takes into account the data of all agents. To achieve this, agents may exchange optimization values (parameters, gradients) but not the data. Here we consider two such distributed training settings: the decentralized and the federated. In the decentralized setting, agents are interconnected in a network and communicate their optimization values only to their direct neighbors. In the federated setting, the agents communicate with a central server that regularly averages the most recent values of (usually a subset of) the agents and broadcasts the result to all of them. Naturally, the success of such techniques relies on frequent communication among the agents or with the server. Therefore, there is great interest in designing distributed optimization algorithms that achieve state-of-the-art performance at lower communication costs. In this thesis, we propose algorithms that improve the performance of existing methods for popular content delivery and distributed machine learning by making better use of the network resources. In Chapter 2, we propose an algorithm that exploits recommendation engines to jointly design the contents cached at the network edge and the recommendations shown to the user. This algorithm achieves a higher fraction of requests served by the cache than its competitors, and thus requires less communication with the remote server. In Chapter 3, we design an asynchronous algorithm for decentralized optimization that requires minimal coordination between the agents and thus tolerates connection interruptions and link failures. We then show that, if the agents are allowed to increase the amount of data they transmit by a factor equal to their node degree, the convergence of this algorithm can be made much faster by letting the agents decide their communication scheme according to the gains provided by communicating with each of their neighbors. Finally, in Chapter 4 we propose an algorithm that exploits inter-agent communication within the classical federated learning setup (where, in principle, agents communicate only with the server), and which can achieve the same convergence speed as the classical setup with fewer communication rounds with the server, which constitute the main bottleneck in this setting.
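A much-simplified, synchronous version of the decentralized setting described above is sketched below: each agent takes a local gradient step and then averages its parameters with its ring neighbours. The asynchronous algorithm and neighbour-selection rules of Chapter 3 are not reproduced, and the quadratic local losses are assumptions.

```python
import numpy as np

# Decentralised optimisation sketch: agents on a ring minimise the average of
# their local losses f_i(x) = 0.5 * ||x - c_i||^2 by interleaving a local
# gradient step with parameter averaging over their two direct neighbours.
# With a constant step size the agents keep a small residual disagreement;
# the asynchronous variant discussed above lets each agent run on its own clock.
rng = np.random.default_rng(5)
n_agents, d = 10, 3
targets = rng.normal(size=(n_agents, d))      # c_i; the global optimum is their mean
x = rng.normal(size=(n_agents, d))            # each agent's current iterate
lr = 0.1

for step in range(300):
    x = x - lr * (x - targets)                                        # local gradient step
    x = (x + np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0)) / 3.0    # neighbour averaging

disagreement = np.linalg.norm(x - x.mean(axis=0))
distance_to_opt = np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0))
print(f"disagreement between agents: {disagreement:.3f}")
print(f"average iterate vs optimum:  {distance_to_opt:.3e}")
```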
35

Tron, Gianet Eric. "A Continuous Bond". Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-24020.

Full text of the source
Abstract:
As Digital Personal Assistants become increasingly present in our lives, repositioning Conversational Interfaces within Interaction Design could be beneficial. More contributions seem possible beyond the commercial vision of Conversational Agents as digital assistants. In this thesis, design fiction is adopted as an approach to explore a future for these technologies, focusing on the possible social and ritual practices that might arise when Conversational Agents and Artificial Intelligence are employed in contexts such as mortality and grief. As a secondary but related concern, it is argued that designers need to find ways to work with Artificial Intelligence and Machine Learning, uncovering the "AI black box" and understanding its basic functioning; therefore, machine learning is explored as a design material. This research-through-design project presents a scenario where the data we leave behind us are used after we die to build conversational models that become digital altars, shaping the way we deal with grief and death. This is presented through a semi-functional prototype and a diegetic prototype in the form of a short video.
36

Vedanbhatla, Naga V. K. Abhinav. "Distributed Approach for Peptide Identification". TopSCHOLAR®, 2015. http://digitalcommons.wku.edu/theses/1546.

Full text of the source
Abstract:
A crucial step in protein identification is peptide identification. The set of peptide-spectrum matches (PSMs) is enormous, so processing it on a single machine is time-consuming. PSMs are ranked by a cross-correlation, a statistical score, or a probability that the match between the experimental and hypothetical spectra is correct and genuine, and this procedure takes a long time to execute. There is therefore a demand for better performance when handling large peptide data sets, and appropriately designed distributed frameworks are expected to reduce the processing time. The designed framework uses a peptide processing algorithm named C-Ranker, which takes peptide data as input and identifies the correct PSMs. The framework has two steps: execute the C-Ranker algorithm on servers specified by the user, and compare the correct PSM data generated via the distributed approach with that of the normal, single-machine execution of C-Ranker. The objective of this framework is to process large peptide datasets using a distributed approach. The nature of the solution calls for parallel execution, and hence the decision was taken to implement it in Java. The results clearly show that distributed C-Ranker executes in less time than the conventional centralized C-Ranker application, with an overall reduction of around 66.67% in execution time. There is also a reduction in average memory usage with the distributed system running C-Ranker on multiple servers. A significant benefit that may be overlooked is that the distributed C-Ranker can be used to solve extraordinarily large problems without incurring the expense of a powerful computer or a supercomputer. A comparison of this approach with an Apache Hadoop framework for peptide identification, with respect to cost, execution time and flexibility, is also discussed.
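The distribution pattern described above, splitting the PSM set into chunks, scoring each chunk on a separate worker and merging the results, can be sketched with Python's concurrent.futures; the score_psm function below is a made-up placeholder, since C-Ranker itself is a separate Java application invoked on remote servers.

```python
from concurrent.futures import ProcessPoolExecutor
import random

# Sketch of the distribution pattern: split the PSM set into chunks, process
# each chunk on a separate worker, and merge the per-chunk results. The
# placeholder `score_psm` stands in for the actual C-Ranker scoring.
def score_psm(psm):
    # made-up acceptance rule, for illustration only
    return psm["xcorr"] > 2.5

def process_chunk(chunk):
    return [psm["id"] for psm in chunk if score_psm(psm)]

def main():
    random.seed(0)
    psms = [{"id": i, "xcorr": random.uniform(0.0, 5.0)} for i in range(100_000)]
    n_workers = 4
    chunks = [psms[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(process_chunk, chunks)
    accepted = sorted(pid for part in results for pid in part)
    print("accepted PSMs:", len(accepted))

if __name__ == "__main__":
    main()
```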
37

Johansson, Tobias. "Managed Distributed TensorFlow with YARN : Enabling Large-Scale Machine Learning on Hadoop Clusters". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-248007.

Full text of the source
Abstract:
Apache Hadoop is the dominant open source platform for the storage and processing of Big Data. With data stored in Hadoop clusters, it is advantageous to be able to run TensorFlow applications on the same cluster that holds the input data sets for training machine learning models. TensorFlow supports distributed execution, where deep neural networks can be trained utilizing a large number of compute nodes. Configuring and launching distributed TensorFlow applications manually is complex and impractical, and gets worse with more nodes. This project presents a framework that utilizes Hadoop's resource manager YARN to manage distributed TensorFlow applications. The proposal is a native YARN application with one ApplicationMaster (AM) per job, utilizing the AM as a registry for discovery prior to job execution. Conforming TensorFlow code to the framework typically requires only a few lines of code. In comparison to TensorFlowOnSpark, the user experience is very similar, and the collected performance data indicate that there is an advantage to running TensorFlow directly on YARN with no extra layer in between.
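For reference, the kind of cluster description such a framework must assemble for every task is shown below in TensorFlow 1.x style (tf.train.ClusterSpec and tf.train.Server); the localhost addresses are placeholders, and in the design described above the ApplicationMaster would perform this discovery and inject the values into each container rather than having them hard-coded.

```python
import tensorflow as tf  # TensorFlow 1.x-style distributed API

# What a managed framework has to assemble for every task: a cluster description
# plus this task's role and index. The addresses below are placeholders; in a
# YARN-managed job they would be discovered at runtime (e.g. via the
# ApplicationMaster acting as a registry) and injected into each container.
cluster = tf.train.ClusterSpec({
    "ps":     ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"],
})

job_name, task_index = "worker", 0            # normally provided per container
server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

if job_name == "ps":
    server.join()                             # parameter servers only serve variables
else:
    # place variables on the parameter servers and ops on this worker
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task_index, cluster=cluster)):
        global_step = tf.train.get_or_create_global_step()
        # ... model definition and training loop would follow here ...
```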
38

Sherry, Dylan J. (Dylan Jacob). "FlexGP 2.0 : multiple levels of parallelism in distributed machine learning via genetic programming". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85498.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-107).
This thesis presents FlexGP 2.0, a distributed cloud-backed machine learning system. FlexGP 2.0 features multiple levels of parallelism which provide a significant improvement in accuracy versus elapsed time. The amount of computational resources in FlexGP 2.0 can be scaled along several dimensions to support large, complex data. FlexGP 2.0's core genetic programming (GP) learner includes multithreaded C++ model evaluation and a multi-objective optimization algorithm which is extensible to pursue any number of objectives in parallel. FlexGP 2.0 parallelizes the entire learner to obtain a large distributed population size and leverages communication between learners to increase performance via the transfer of search progress between learners. FlexGP 2.0 factors the training data to boost performance and enable support for increased data size and complexity. Several experiments are performed which verify the efficacy of FlexGP 2.0's multilevel parallelism. The experiments run on a large dataset from a real-world regression problem. The results demonstrate both less time to achieve the same accuracy and overall increased accuracy, and illustrate the value of FlexGP 2.0 as a platform for machine learning.
by Dylan J. Sherry.
M. Eng.
39

Ewing, Gabriel. "Knowledge Transfer from Expert Demonstrations in Continuous State-Action Spaces". Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1512748071082221.

Full text of the source
40

Staffolani, Alessandro. "A Reinforcement Learning Agent for Distributed Task Allocation". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20051/.

Full text of the source
Abstract:
Nowadays, reinforcement learning has proven to be very effective in machine learning across a range of fields, such as games, speech recognition and many others. We therefore decided to apply reinforcement learning to allocation problems, since they are a research area not yet studied with this technique and because these problems encompass in their formulation a broad set of sub-problems with similar characteristics, so that a solution for one of them extends to each of these sub-problems. In this project we built an application called Service Broker which, through reinforcement learning, learns how to distribute the execution of tasks over asynchronous and distributed workers. The analogy is that of a cloud data center, which owns internal resources (possibly distributed across the server farm), receives tasks from its clients and executes them on these resources. The goal of the application, and therefore of the data center, is to allocate these tasks so as to minimize the execution cost. Moreover, in order to test the reinforcement learning agents that were developed, an environment (a simulator) was created so that we could concentrate on developing the components needed by the agents, instead of also having to deal with the implementation aspects required in a real data center, such as communication with the various nodes and the associated latency. The results obtained confirmed the theory studied, achieving better performance than some of the classical methods for task allocation.
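As a minimal stand-in for the learning agent described above, the sketch below allocates tasks to simulated workers with an epsilon-greedy rule over estimated execution costs; the Service Broker, its simulator and its actual state and reward design are not reproduced, and the workers' cost distributions are made up.

```python
import random

# Minimal epsilon-greedy task allocation: the agent keeps a running estimate of
# each worker's execution cost and mostly routes tasks to the cheapest worker,
# occasionally exploring. Unvisited workers start at an optimistic estimate of 0,
# so every worker gets tried at least once.
random.seed(0)
true_cost = [1.0, 0.6, 1.4, 0.8]                  # hidden mean cost per worker
estimates = [0.0] * len(true_cost)
counts = [0] * len(true_cost)
epsilon, total_cost, n_tasks = 0.1, 0.0, 5000

for task in range(n_tasks):
    if random.random() < epsilon:
        w = random.randrange(len(true_cost))       # explore
    else:
        w = min(range(len(true_cost)),
                key=lambda i: estimates[i] if counts[i] else 0.0)
    cost = random.expovariate(1.0 / true_cost[w])  # observed execution cost
    counts[w] += 1
    estimates[w] += (cost - estimates[w]) / counts[w]   # incremental mean update
    total_cost += cost

print("cost estimates:", [round(e, 2) for e in estimates])
print("average cost per task:", round(total_cost / n_tasks, 3))
```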
41

Letourneau, Sylvain. "Identification of attribute interactions and generation of globally relevant continuous features in machine learning". Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/29029.

Full text of the source
Abstract:
Datasets found in real-world applications of machine learning are often characterized by low-level attributes with important interactions among them. Such interactions may increase the complexity of the learning task by limiting the usefulness of the attributes to dispersed regions of the representation space. In such cases, we say that the attributes are locally relevant. To obtain adequate performance with locally relevant attributes, the learning algorithm must be able to analyse the interacting attributes simultaneously and fit an appropriate model for the type of interactions observed. This is a complex task that surpasses the ability of most existing machine learning systems. This research proposes a solution to this problem by extending the initial representation with new globally relevant features. The new features make explicit the important information that was previously hidden by the initial interactions, thus reducing the complexity of the learning task. This dissertation first proposes an idealized study of the potential benefits of globally relevant features, assuming perfect knowledge of the interactions among the initial attributes. This study involves synthetic data and a variety of machine learning systems. Recognizing that not all interactions produce a negative effect on performance, the dissertation introduces a novel technique named relevance graphs to identify the interactions that negatively affect the performance of existing learning systems. The tool of interactive relevance graphs addresses another important need by providing the user with an opportunity to participate in the construction of a new representation that cancels the effects of the negative attribute interactions. The dissertation extends the concept of relevance graphs by introducing a series of algorithms for the automatic discovery of appropriate transformations. We use the name GLOREF (GLObally RElevant Features) to designate the approach that integrates these algorithms. The dissertation fully describes the GLOREF approach along with an extensive empirical evaluation on both synthetic and UCI datasets. This evaluation shows that the features produced by the GLOREF approach significantly improve accuracy on both synthetic and real-world data.
42

Lu, Haoye. "Function Optimization-based Schemes for Designing Continuous Action Learning Automata". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39097.

Full text of the source
Abstract:
The field of Learning Automata (LA) has been studied and analyzed extensively for more than four decades; however, almost all the papers have concentrated on LA working in Environments that have a finite number of actions. This is a well-established model of computation, and expedient, epsilon-optimal and absolutely expedient machines have been designed for stationary and non-stationary Environments. There are only a few papers which deal with Environments possessing an infinite number of actions. These papers assume a well-defined and rather simple uni-modal functional form, like the Gaussian function, for the Environment's infinite reward probabilities. This thesis pioneers the concept and presents a series of continuous action LA (CALA) algorithms that do not require the function of the Environment's infinite reward probabilities to obey a well-established uni-modal functional form. Instead, this function can be, but is not limited to, a multi-modal function, as long as it satisfies some weak constraints. Moreover, as our discussion evolves, the constraints are further relaxed. In all these cases, we demonstrate that the underlying machines converge in an epsilon-optimal manner to the optimal action of an infinite action set. Based on the proposed CALA algorithms, we report a global maximum search algorithm, which can find the maximum points of a real-valued function by sampling the function's values, which may be contaminated by noise. This thesis also investigates the performance limit of the action-taking scheme, sampling actions based on probability density functions, which is used by all currently available CALA algorithms. In more detail, given a reward function, we define an index of the function which is the least upper bound of the performance that a CALA algorithm can possibly achieve. We also report a CALA algorithm that meets this upper bound in an epsilon-optimal manner. By investigating the problem from a different perspective, we argue that the proposed algorithms are closely related to the family of "Stochastic Point Location" problems involving either discretized steps or d-ary parallel machines. The thesis includes detailed proofs of the assertions and highlights the niche contributions within the broader theory of learning. To the best of our knowledge, there are no comparable results reported in the literature.
43

Fält, Markus. "Multi-factor Authentication : System proposal and analysis of continuous authentication methods". Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-39212.

Full text of the source
Abstract:
It is common knowledge that the average user has multiple online accounts which all require a password. Some studies have shown that the number of passwords for the average user is around 25. Considering this, one can see that it is unreasonable to expect the average user to have 25 truly unique passwords. Because of this, multi-factor authentication could potentially be used to reduce the number of passwords to remember while maintaining, and possibly exceeding, the security of unique passwords. This thesis therefore aims to examine continuous authentication methods as well as to propose an authentication system for combining various authentication methods. This was done by developing an authentication system using three different authentication factors. The system uses a secret sharing scheme so that the authentication factors can be weighted according to their perceived security. The system also proposes a secure storage method for the secret shares, and its feasibility is shown. The continuous authentication methods were tested by evaluating various machine learning methods on two public datasets. The methods were graded on accuracy and on the rate at which the wrong user was accepted. This showed that random forests and decision trees worked well on the particular datasets. Ensemble learning was then tested to see how the two continuous factors performed once combined into a single classifier. This gave an equal error rate of around 5%, which is comparable to state-of-the-art methods used for similar datasets.
44

Mäenpää, Dylan. "Towards Peer-to-Peer Federated Learning: Algorithms and Comparisons to Centralized Federated Learning". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176778.

Full text of the source
Abstract:
Due to privacy and regulatory reasons, sharing data between institutions can be difficult. Because of this, real-world data are not fully exploited by machine learning (ML). An emerging method is to train ML models with federated learning (FL), which enables clients to collaboratively train ML models without sharing raw training data. We explored peer-to-peer FL by extending a prominent centralized FL algorithm called Fedavg to function in a peer-to-peer setting. We named this extended algorithm FedavgP2P. Deep neural networks at 100 simulated clients were trained to recognize digits using FedavgP2P and the MNIST data set. Scenarios with IID and non-IID client data were studied. We compared FedavgP2P to Fedavg with respect to the models' convergence behaviors and communication costs. Additionally, we analyzed the connection between local client computation, the number of neighbors each client communicates with, and how these affect performance. We also attempted to improve the FedavgP2P algorithm with heuristics based on client identities and per-class F1-scores. The findings showed that by using FedavgP2P, the mean model convergence behavior was comparable to a model trained with Fedavg. However, this came with considerable variation in the 100 models' convergence behaviors and much greater communication costs (at least 14.9x more communication with FedavgP2P). By increasing the amount of local computation up to a certain level, communication costs could be saved. When the number of neighbors a client communicated with increased, the variation in the models' convergence behaviors decreased. The FedavgP2P heuristics did not show improved performance. In conclusion, the overall findings indicate that peer-to-peer FL is a promising approach.
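The difference between the two aggregation schemes compared above can be reduced to a few lines: Fedavg averages client models at a server every round, while a FedavgP2P-style scheme lets each client average with its neighbours. The scalar models, quadratic local losses and ring topology below are simplifications for illustration, not the thesis's implementation.

```python
import numpy as np

# Round-based comparison of server-side averaging (Fedavg) and neighbour
# averaging (FedavgP2P-style) using scalar "models" and quadratic local losses.
# Each client's local optimum is c_i; the global optimum is the mean of the c_i.
rng = np.random.default_rng(6)
n_clients = 100
c = rng.normal(size=n_clients)                    # heterogeneous local optima
lr, rounds, local_steps = 0.1, 50, 5

def local_update(w, ci):
    for _ in range(local_steps):
        w = w - lr * (w - ci)                     # gradient step on 0.5 * (w - ci)^2
    return w

# Fedavg: clients train locally, the server averages and broadcasts.
w_global = 0.0
for _ in range(rounds):
    w_global = np.mean([local_update(w_global, ci) for ci in c])

# FedavgP2P-style: every client keeps its own model and averages with its
# two ring neighbours after local training (no central server).
w = np.zeros(n_clients)
for _ in range(rounds):
    w = np.array([local_update(w[i], c[i]) for i in range(n_clients)])
    w = (w + np.roll(w, 1) + np.roll(w, -1)) / 3.0

target = c.mean()
print("Fedavg error to global optimum: ", abs(w_global - target))
print("P2P mean-model error:           ", abs(w.mean() - target))
print("P2P spread across clients (std):", w.std())
```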
45

Giaretta, Lodovico. "Pushing the Limits of Gossip-Based Decentralised Machine Learning". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-253794.

Full text of the source
Abstract:
Recent years have seen a sharp increase in the ubiquity and power of connected devices, such as smartphones, smart appliances and smart sensors. These devices produce large amounts of data that can be extremely precious for training larger, more advanced machine learning models. Unfortunately, it is sometimes not possible to collect and process these datasets on a central system, due either to their size or to the growing privacy requirements of digital data handling. To overcome this limit, researchers developed protocols to train global models in a decentralised fashion, exploiting the computational power of these edge devices. These protocols do not require any of the data on the device to be shared, relying instead on communicating partially-trained models. Unfortunately, real-world systems are notoriously hard to control, and may present a wide range of challenges that are easily overlooked in academic studies and simulations. This research analyses the gossip learning protocol, one of the main results in the area of decentralised machine learning, to assess its applicability to real-world scenarios. Specifically, this work identifies the main assumptions built into the protocol, and performs carefully-crafted simulations in order to test its behaviour when these assumptions are lifted. The results show that the protocol can already be applied to certain environments, but that it fails when exposed to certain conditions that appear in some real-world scenarios. In particular, the models trained by the protocol may be biased towards the data stored in nodes with faster communication speeds or a higher number of neighbours. Furthermore, certain communication topologies can have a strong negative impact on the convergence speed of the models. While this study also suggests effective mitigations for some of these issues, it appears that the gossip learning protocol requires further research efforts, in order to ensure a wider industrial applicability.
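A stripped-down version of the gossip learning loop analysed above is sketched below: each node periodically sends its model to a random neighbour, which averages the incoming model with its own and then takes a local update step. The random graph, linear models and uniform merge weights are simplifying assumptions, and they gloss over exactly the degree and speed biases the study investigates.

```python
import numpy as np

# Stripped-down gossip learning: each node holds a linear model, periodically
# sends it to a random neighbour, and on receipt the neighbour merges (averages)
# the incoming model with its own before taking one local SGD step on its
# private data.
rng = np.random.default_rng(7)
n_nodes, d = 30, 5
w_true = rng.normal(size=d)
X = [rng.normal(size=(40, d)) for _ in range(n_nodes)]          # private local data
y = [Xi @ w_true + 0.05 * rng.normal(size=40) for Xi in X]

# random neighbour lists (each node linked to a few others, made symmetric)
neighbours = [set() for _ in range(n_nodes)]
for i in range(n_nodes):
    for j in rng.choice(n_nodes, size=4, replace=False):
        if int(j) != i:
            neighbours[i].add(int(j))
            neighbours[int(j)].add(i)

models = [np.zeros(d) for _ in range(n_nodes)]
lr = 0.05

def local_step(w, i):
    idx = rng.integers(0, 40, size=8)
    g = X[i][idx].T @ (X[i][idx] @ w - y[i][idx]) / 8
    return w - lr * g

for cycle in range(3000):
    sender = int(rng.integers(n_nodes))
    receiver = int(rng.choice(list(neighbours[sender])))
    merged = (models[sender] + models[receiver]) / 2.0           # merge on receipt
    models[receiver] = local_step(merged, receiver)

errors = [np.linalg.norm(m - w_true) for m in models]
print(f"mean model error: {np.mean(errors):.3f}   worst node: {np.max(errors):.3f}")
```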
46

Abu, Salih Bilal Ahmad Abdal Rahman. "Trustworthiness in Social Big Data Incorporating Semantic Analysis, Machine Learning and Distributed Data Processing". Thesis, Curtin University, 2018. http://hdl.handle.net/20.500.11937/70285.

Full text of the source
Abstract:
This thesis presents several state-of-the-art approaches constructed for the purpose of (i) studying the trustworthiness of users in Online Social Network platforms, (ii) deriving concealed knowledge from their textual content, and (iii) classifying and predicting the domain knowledge of users and their content. The developed approaches are refined through proof-of-concept experiments, several benchmark comparisons, and appropriate and rigorous evaluation metrics to verify and validate their effectiveness and efficiency, and hence, those of the applied frameworks.
47

ABBASS, YAHYA. "Human-Machine Interfaces using Distributed Sensing and Stimulation Systems". Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1069056.

Full text of the source
Abstract:
As technology moves towards more natural human-machine interfaces (e.g. bionic limbs, teleoperation, virtual reality), it is necessary to develop a sensory feedback system in order to foster embodiment and achieve better immersion in the control system. Contemporary feedback interfaces presented in research use few sensors and stimulation units to feed back at most two discrete feedback variables (e.g. grasping force and aperture), whereas the human sense of touch relies on a distributed network of mechanoreceptors providing a wide bandwidth of information. To provide this type of feedback, it is necessary to develop a distributed sensing system that can extract a wide range of information during the interaction between the robot and the environment. In addition, a distributed feedback interface is needed to deliver such information to the user. This thesis proposes the development of a distributed sensing system (e-skin) to acquire tactile sensation, a first integration of the distributed sensing system on a robotic hand, the development of a sensory feedback system that comprises the distributed sensing system and a distributed stimulation system, and finally the implementation of deep learning methods for the classification of tactile data. Its core focus is the development and testing of a sensory feedback system based on the latest distributed sensing and stimulation techniques. To this end, the thesis comprises two introductory chapters that describe the state of the art in the field, the objectives, the methodology used and the contributions, as well as six studies that tackled the development of human-machine interfaces.
48

Razavian, Narges Sharif. "Continuous Graphical Models for Static and Dynamic Distributions: Application to Structural Biology". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/340.

Full text of the source
Abstract:
Generative models of protein structure enable researchers to predict the behavior of proteins under different conditions. Continuous graphical models are powerful and efficient tools for modeling static and dynamic distributions, which can be used for learning generative models of molecular dynamics. In this thesis, we develop new and improved continuous graphical models to be used in modeling protein structure. We first present von Mises graphical models and develop consistent and efficient algorithms for sparse structure learning, parameter estimation, and inference. We compare our model to the sparse Gaussian graphical model and show that it outperforms GGMs on synthetic and Engrailed protein molecular dynamics datasets. Next, we develop algorithms to estimate mixtures of von Mises graphical models using Expectation Maximization, and show that these models outperform von Mises, Gaussian and mixture-of-Gaussian graphical models in terms of prediction accuracy in an imputation test on non-redundant protein structure datasets. We then use nonparanormal and nonparametric graphical models, which have extensive representation power, and compare several state-of-the-art structure learning methods that can be used prior to nonparametric inference in reproducing kernel Hilbert space embedded graphical models. To be able to take advantage of the nonparametric models, we also propose feature space embedded belief propagation, and use random Fourier based feature approximation in our proposed feature belief propagation to scale the inference algorithm to larger datasets. To improve scalability further, we also show the integration of a coreset selection algorithm with the nonparametric inference, and show that the combined model scales to large datasets with very little adverse effect on the quality of predictions. Finally, we present time-varying sparse Gaussian graphical models to learn smoothly varying graphical models of molecular dynamics simulation data, and present results on the CypA protein.
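The random Fourier feature approximation used to scale the feature-space belief propagation can be checked in a few lines: an explicit feature map whose inner products approximate the Gaussian kernel in expectation. The bandwidth and number of features below are arbitrary choices for the demonstration.

```python
import numpy as np

# Random Fourier features for the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 s^2)):
# z(x) = sqrt(2/D) * cos(W x + b), with rows of W ~ N(0, s^-2 I) and b ~ U(0, 2*pi),
# satisfies E[z(x)' z(y)] = k(x, y), giving an explicit finite-dimensional map.
rng = np.random.default_rng(8)
d, D, s = 4, 2000, 1.0                      # input dim, number of features, bandwidth

W = rng.normal(scale=1.0 / s, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

def gaussian_kernel(x, y):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * s * s))

for _ in range(5):
    x, y = rng.normal(size=d), rng.normal(size=d)
    print(f"exact: {gaussian_kernel(x, y):.4f}   RFF approx: {z(x) @ z(y):.4f}")
```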
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Feraudo, Angelo. "Distributed Federated Learning in Manufacturer Usage Description (MUD) Deployment Environments". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Encontre o texto completo da fonte
Resumo:
The steady spread of Internet of Things (IoT) devices across different environments has created the need for new security and monitoring mechanisms within a network. These devices are often considered sources of vulnerabilities that attackers can exploit to gain access to the network or mount further attacks. This is due to the very nature of the devices: they offer services that handle sensitive data (e.g. cameras) while having very limited resources. One solution in this direction is the adoption of the Manufacturer Usage Description (MUD) specification, which requires device manufacturers to provide files describing the communication pattern that the devices they produce must follow. However, this specification only partially reduces the aforementioned vulnerabilities: it becomes impractical to define a communication pattern for IoT devices whose network traffic is highly generic (e.g. Alexa). It is therefore of great interest to study an anomaly detection system based on machine learning techniques that can close these gaps. In this work, three prototype implementations of the MUD specification are explored, and one of them is selected. A Proof-of-Concept compliant with the specification is then produced, containing an additional entity that gives the network administrator greater authority in this environment. In a second phase, a distributed architecture is analyzed that performs anomaly learning directly on the devices by exploiting the concept of Federated Learning, thereby guaranteeing data privacy. The fundamental idea of this work is thus to propose an architecture based on these two new technologies, capable of minimizing the vulnerabilities inherent to IoT devices in a distributed environment while preserving data privacy as much as possible.
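For orientation, the following is a minimal sketch of the aggregation step at the heart of Federated Learning (federated averaging): each device trains locally and only model parameters are shared, so raw traffic data never leaves the device. The tiny linear "anomaly scorer", client counts, and names are assumptions for illustration, not the architecture described in the thesis.

```python
# Illustrative sketch (not the thesis code) of federated averaging (FedAvg):
# the server computes a sample-weighted average of per-client parameters.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client model parameters."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Three hypothetical IoT devices, each holding a tiny linear anomaly scorer
# (a weight vector and a bias) trained on its own local traffic.
clients = [[np.random.randn(4), np.random.randn(1)] for _ in range(3)]
sizes = [120, 300, 80]                        # local sample counts per device
global_model = federated_average(clients, sizes)
print([p.shape for p in global_model])        # aggregated weights and bias
```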
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Quintal, Kyle. "Context-Awareness for Adversarial and Defensive Machine Learning Methods in Cybersecurity". Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40835.

Texto completo da fonte
Resumo:
Machine learning has shown great promise when combined with large volumes of historical data, and it produces particularly strong results when contextual properties are added. In the world of the Internet of Things, the extraction of contextual information is increasingly practical thanks to scientific advances. Combining such advances with artificial intelligence is one of the themes of this thesis. In particular, there are two major areas of interest: context-aware attacker modelling and context-aware defensive methods. Both areas use authentication methods to either infiltrate or protect digital systems. After a brief introduction in Chapter 1, Chapter 2 discusses the contextual information currently extracted in cybersecurity studies and how machine learning accomplishes a variety of cybersecurity goals. Chapter 3 introduces an attacker injection model, championing the adversarial methods. Chapter 4 then extracts contextual data and provides an intelligent machine learning technique to mitigate anomalous behaviours. Chapter 5 explores the feasibility of adopting a similar defensive methodology in the cyber-physical domain, and future directions are presented in Chapter 6. We begin this thesis by explaining the need for further improvements in cybersecurity using contextual information and discussing its feasibility now that ubiquitous sensors exist in our everyday lives; these sensors often show a high correlation with user identity in surprising combinations. Our first contribution lies within the domain of Mobile CrowdSensing (MCS). Despite its benefits, MCS requires proper security solutions to prevent various attacks, notably injection attacks. Our smart injection model, SINAM, monitors data traffic in an online-learning manner, simulating an injection model that evades detection 99% of the time. SINAM leverages contextual similarities within a given sensing campaign to mimic anomalous injections. On the defensive side, we investigate how contextual features can be used to improve authentication methods in an enterprise context. Also motivated by the emergence of omnipresent mobile devices, we expand the spatio-temporal features of unfolding contexts by introducing three contextual metrics: document shareability, document valuation, and user cooperation. These metrics are vetted against modern machine learning techniques and achieve an average of 87% successful authentication attempts. Our third contribution aims to further improve these results by introducing a Smart Enterprise Access Control (SEAC) technique. Combining the new contextual metrics with SEAC achieves an authenticity precision of 99% and a recall of 97%. Finally, the last contribution is an introductory study on risk analysis and mitigation using context. Here, cyber-physical coupling metrics are created to extract a precise representation of unfolding contexts in the medical field. The presented consensus algorithm achieves initial system convenience and security ratings of 88% and 97% with these new metrics. Even as a feasibility study, physical context extraction shows good promise for improving cybersecurity decisions. In short, machine learning is a powerful tool when coupled with contextual data and is applicable across many industries. Our contributions show how the engineering of contextual features, together with adversarial and defensive methods, can produce applicable solutions in cybersecurity, despite minor shortcomings.
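To illustrate the general pattern of feeding contextual metrics into a classifier and reporting precision and recall, here is a minimal, self-contained sketch. The three feature columns stand in for contextual metrics like those named above, but the data, model choice, and thresholds are invented assumptions, not the thesis's dataset or SEAC technique.

```python
# Hedged sketch: train a classifier on synthetic "contextual" features to
# accept or reject an authentication attempt, then report precision/recall.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 3))                                  # 3 contextual metrics
y = (X @ np.array([0.5, 0.3, 0.2]) > 0.5).astype(int)      # 1 = legitimate user

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
```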
Estilos ABNT, Harvard, Vancouver, APA, etc.