To see the other types of publications on this topic, follow the link: Dynamic machine learning.

Dissertations on the topic "Dynamic machine learning"

Create a source citation in APA, MLA, Chicago, Harvard, and other styles

Select a type of source:

Consult the top 50 dissertations for your research on the topic "Dynamic machine learning".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and your citation of the selected work will be formatted automatically in the required style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the publication as a PDF and read its online abstract, whenever the relevant details are available in the metadata.

Browse dissertations from a wide range of disciplines and put together a correctly formatted bibliography.

1

Höstklint, Niklas, and Jesper Larsson. „Dynamic Test Case Selection using Machine Learning“. Thesis, KTH, Hälsoinformatik och logistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296634.

Full text of the source
Abstract:
Testing code is a vital part of any software-producing company's workflow, ensuring that no faulty code with potentially detrimental consequences slips through. At Ericsson, testing code before publishing is a very costly process that can take several hours. Currently, every single test is run for all submitted code. This report aims to address the issue by building a machine learning model that determines which tests need to be run, so that unnecessary tests are left out, saving time and resources. It is nevertheless important to catch the failures, as letting certain failures pass through into production could have all kinds of economic, environmental, and social consequences. The results show that there is great potential in several different types of models. A linear regression model found 92% of all failures within running 25% of all test categories; the linear model, however, plateaus before finding the final failures. If finding 100% of failures is essential, a support vector regression model proved the most efficient, as it was the only model to find 100% of failures within 90% of test categories being run.
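As a hedged illustration of the selection idea described above, here is a minimal Python sketch that ranks test categories by historical failure rate and runs only a budgeted fraction. The data and the failure-rate heuristic are illustrative assumptions, not the regression models the thesis actually evaluates.

```python
# Minimal sketch (not the thesis's model): rank test categories by historical
# failure rate and keep only the top `budget` fraction. The history below is
# hypothetical; a real system would use per-commit code features instead.

def rank_tests(history):
    """history: {test_name: [0/1 outcomes, 1 = failed]} -> names by failure rate."""
    rate = {t: sum(runs) / len(runs) for t, runs in history.items()}
    return sorted(rate, key=rate.get, reverse=True)

def select_tests(history, budget=0.25):
    """Keep only the top `budget` fraction of test categories."""
    ranked = rank_tests(history)
    k = max(1, int(len(ranked) * budget))
    return ranked[:k]

history = {
    "net":   [1, 1, 0, 1],   # fails often
    "ui":    [0, 0, 0, 0],
    "db":    [0, 1, 0, 0],
    "build": [0, 0, 1, 0],
}
selected = select_tests(history, budget=0.25)
```

With a 25% budget over four categories, only the most failure-prone category is kept.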
APA, Harvard, Vancouver, ISO, and other citation styles
2

Rowe, Michael C. (Michael Charles). „A Machine Learning Method Suitable for Dynamic Domains“. Thesis, University of North Texas, 1996. https://digital.library.unt.edu/ark:/67531/metadc278720/.

Full text of the source
Abstract:
The efficacy of a machine learning technique is domain dependent. Some machine learning techniques work very well for certain domains but are ill-suited for other domains. One area that is of real-world concern is the flexibility with which machine learning techniques can adapt to dynamic domains. Currently, there are no known reports of any system that can learn dynamic domains, short of starting over (i.e., re-running the program). Starting over is neither time nor cost efficient for real-world production environments. This dissertation studied a method, referred to as Experience Based Learning (EBL), that attempts to deal with conditions related to learning dynamic domains. EBL is an extension of Instance Based Learning methods. The hypothesis of the study related to this research was that the EBL method would automatically adjust to domain changes and still provide classification accuracy similar to methods that require starting over. To test this hypothesis, twelve widely studied machine learning datasets were used. A dynamic domain was simulated by presenting these datasets in an uninterrupted cycle of train, test, and retrain. The order of the twelve datasets and the order of records within each dataset were randomized to control for order biases in each of ten runs. As a result, these methods provided datasets that represent extreme levels of domain change. Using the above datasets, EBL's mean classification accuracies for each dataset were compared to the published static domain results of other machine learning systems. The results indicated that the EBL's system performance was not statistically different (p>0.30) from the other machine learning methods. These results indicate that the EBL system is able to adjust to an extreme level of domain change and yet produce satisfactory results. This finding supports the use of the EBL method in real-world environments that incur rapid changes to both variables and values.
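The "adjust without starting over" idea behind EBL can be sketched as an instance-based learner that simply accumulates experiences. This toy 1-nearest-neighbour version, with made-up points and a Euclidean metric, is an assumption-laden stand-in for EBL itself, which the dissertation describes only as an extension of instance-based methods.

```python
# Hedged sketch of the instance-based idea: the learner stores
# (features, label) "experiences" and classifies by nearest neighbour, so a
# domain change is handled by adding new experiences rather than retraining
# from scratch. Distance metric and storage policy are assumptions.

class ExperienceLearner:
    def __init__(self):
        self.experiences = []                 # list of (feature_vector, label)

    def learn(self, x, label):
        self.experiences.append((x, label))   # no restart needed

    def classify(self, x):
        # 1-nearest-neighbour by squared Euclidean distance
        def dist(e):
            return sum((a - b) ** 2 for a, b in zip(e[0], x))
        return min(self.experiences, key=dist)[1]

model = ExperienceLearner()
model.learn((0.0, 0.0), "A")
model.learn((1.0, 1.0), "B")
before = model.classify((0.9, 0.9))       # nearest experience is "B"
model.learn((0.9, 0.9), "C")              # domain change: a new concept appears
after = model.classify((0.9, 0.9))        # the new experience now wins
```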
APA, Harvard, Vancouver, ISO, and other citation styles
3

Narmack, Kirilll. „Dynamic Speed Adaptation for Curves using Machine Learning“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233545.

Full text of the source
Abstract:
The vehicles of tomorrow will be more sophisticated, intelligent, and safe than the vehicles of today. The future is leaning towards fully autonomous vehicles. This degree project provides a data-driven solution for a speed adaptation system that computes a vehicle speed for curves suited to the driver's underlying driving style, the road properties, and the weather conditions. Such a system aims to compute a curve speed that can be used in Advanced Driver Assistance Systems (ADAS) or Autonomous Driving (AD) applications. The project was carried out at Volvo Car Corporation. Literature on speed adaptation systems and on factors affecting vehicle speed in curves was reviewed. Naturalistic driving data was both collected by driving and extracted from Volvo's database, then further processed. A novel speed adaptation system for curves was devised, implemented, and evaluated; it is able to compute a vehicle speed suited to the driver's driving style, the road properties, and the weather conditions. Two different artificial neural networks and two mathematical models were used to compute the desired curve speed, and these methods were compared and evaluated.
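For context, the simplest mathematical baseline for a curve speed is the lateral-acceleration limit v = sqrt(a_lat * r). The sketch below is a hedged illustration of that baseline only, not the thesis's neural-network models; the 3.0 m/s² limit and the 0.7 wet-road factor are assumptions.

```python
import math

# Physics baseline: the highest comfortable curve speed for a given curve
# radius and lateral-acceleration limit, v = sqrt(a_lat * r). Weather is
# folded in by lowering the lateral limit (assumed factor for wet roads).

def curve_speed(radius_m, a_lat_limit=3.0, wet=False):
    """Return a curve speed in m/s for the given curve radius."""
    if wet:
        a_lat_limit *= 0.7        # assumed grip reduction in wet conditions
    return math.sqrt(a_lat_limit * radius_m)

dry = curve_speed(100.0)              # sqrt(3.0 * 100) ≈ 17.3 m/s
wet = curve_speed(100.0, wet=True)    # lower speed on a wet road
```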
APA, Harvard, Vancouver, ISO, and other citation styles
4

Sîrbu, Adela-Maria. „Dynamic machine learning for supervised and unsupervised classification“. Thesis, Rouen, INSA, 2016. http://www.theses.fr/2016ISAM0002/document.

Full text of the source
Abstract:
The research direction we focus on in the thesis is applying dynamic machine learning models to solve supervised and unsupervised classification problems. We live in a dynamic environment, where data is continuously changing and the need to obtain fast and accurate solutions to our problems has become a real necessity. The particular problems that we have decided to approach in the thesis are pedestrian recognition (a supervised classification problem) and clustering of gene expression data (an unsupervised classification problem). The approached problems are representative of the two main types of classification and are very challenging, having great importance in real life. The first research direction that we approach in the field of dynamic unsupervised classification is the problem of dynamic clustering of gene expression data. Gene expression represents the process by which the information from a gene is converted into functional gene products: proteins or RNA having different roles in the life of a cell. Modern microarray technology is nowadays used to experimentally detect the expression levels of thousands of genes, across different conditions and over time. Once the gene expression data has been gathered, the next step is to analyze it and extract useful biological information. One of the most popular algorithms for the analysis of gene expression data is clustering, which involves partitioning a certain data set into groups whose components are similar to each other. In the case of gene expression data sets, each gene is represented by its expression values (features) at distinct points in time, under the monitored conditions.
The process of gene clustering is at the foundation of genomic studies that aim to analyze the functions of genes, because it is assumed that genes that are similar in their expression levels are also relatively similar in terms of biological function. The problem that we address within the dynamic unsupervised classification research direction is the dynamic clustering of gene expression data. In our case, the term dynamic indicates that the data set is not static but subject to change. Still, as opposed to the incremental approaches from the literature, where the data set is enriched with new genes (instances) during the clustering process, our approaches tackle the cases when new features (expression levels for new points in time) are added to the genes already existing in the data set. To the best of our knowledge, there are no approaches in the literature that deal with the problem of dynamic clustering of gene expression data defined as above. In this context we introduced three dynamic clustering algorithms that are able to handle newly collected gene expression levels by starting from a previously obtained partition, without the need to re-run the algorithm from scratch. Experimental evaluation shows that our methods are faster and more accurate than applying the clustering algorithm from scratch on the feature-extended data set.
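The warm-start idea described above can be sketched as follows; the thesis's three algorithms are more sophisticated, and this toy k-means version (with made-up data, assuming no cluster ever empties) is only an illustration of starting from a previous partition after features are appended.

```python
# Hedged sketch: when new expression columns arrive for existing genes,
# recompute centroids from the previous partition in the extended feature
# space, then refine with a few k-means steps instead of restarting from a
# random initialisation. Assumes clusters never become empty.

def assign(points, centroids):
    """Label each point with the index of its nearest centroid."""
    def nearest(p):
        return min(range(len(centroids)),
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(p, centroids[c])))
    return [nearest(p) for p in points]

def update(points, labels, k):
    """Mean of each cluster's members, in the current feature space."""
    cents = []
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        cents.append(tuple(sum(col) / len(members) for col in zip(*members)))
    return cents

def extend_clustering(points, old_labels, k, steps=3):
    """Refine a previous partition after features were appended to points."""
    centroids = update(points, old_labels, k)   # warm start, extended space
    labels = old_labels
    for _ in range(steps):
        labels = assign(points, centroids)
        centroids = update(points, labels, k)
    return labels

# genes previously clustered in 2-D, now carrying a third expression value
points = [(0.0, 0.0, 0.1), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0), (5.1, 5.0, 5.0)]
labels = extend_clustering(points, [0, 0, 1, 1], k=2)
```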
APA, Harvard, Vancouver, ISO, and other citation styles
5

Boulegane, Dihia. „Machine learning algorithms for dynamic Internet of Things“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT048.

Full text of the source
Abstract:
With the rapid growth of Internet-of-Things (IoT) devices and sensors, numerous sources continuously release and curate vast amounts of data at a high pace, in the form of streams. These ubiquitous data streams are essential for data-driven decision-making in different business sectors, using Artificial Intelligence (AI) and Machine Learning (ML) techniques to extract valuable knowledge and turn it into appropriate actions. Moreover, the collected data is often associated with a temporal indicator; such a temporal data stream is a potentially infinite sequence of observations captured over time, usually (but not necessarily) at regular intervals. Forecasting is a challenging task in the field of AI that aims at understanding the process generating the observations over time, based on past data, in order to accurately predict future behavior. Stream Learning is the emerging research field which focuses on learning from infinite and evolving data streams. The thesis tackles dynamic model combination, which achieves competitive results despite its high computational cost in terms of memory and time. We study several approaches to estimate the predictive performance of individual forecasting models according to the data, and contribute novel windowing- and meta-learning-based methods to cope with evolving data streams. Subsequently, we propose different selection methods that aim at constituting a committee of accurate and diverse models; the predictions of these models are then weighted and aggregated. The second part addresses model compression, which aims at building a single model that mimics the behavior of a highly performing and complex ensemble while reducing its complexity. Finally, we present the first streaming competition, "Real-time Machine Learning Competition on Data Streams", held at the IEEE Big Data 2019 conference using the new SCALAR platform.
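The windowing idea mentioned above can be sketched as an ensemble whose member weights follow recent errors over a sliding window; the window size, the inverse-error weighting, and the toy data are assumptions, simplified far beyond the thesis's actual methods.

```python
from collections import deque

# Hedged sketch of windowing-based ensemble weighting: each model's recent
# absolute errors over a sliding window yield an inverse-error weight, and
# the ensemble forecast is the weighted average of the members' forecasts.

class WindowedEnsemble:
    def __init__(self, n_models, window=5):
        self.errors = [deque(maxlen=window) for _ in range(n_models)]

    def record(self, forecasts, actual):
        """Store each member's absolute error for the latest observation."""
        for errs, f in zip(self.errors, forecasts):
            errs.append(abs(f - actual))

    def combine(self, forecasts):
        """Weighted average; epsilon avoids division by zero."""
        weights = [1.0 / (sum(e) / len(e) + 1e-9) if e else 1.0
                   for e in self.errors]
        total = sum(weights)
        return sum(w * f for w, f in zip(weights, forecasts)) / total

ens = WindowedEnsemble(2)
ens.record([10.0, 20.0], actual=10.0)   # model 0 is perfect, model 1 off by 10
blend = ens.combine([10.0, 20.0])       # pulled almost entirely toward model 0
```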
APA, Harvard, Vancouver, ISO, and other citation styles
6

Brun, Yuriy 1981. „Software fault identification via dynamic analysis and machine learning“. Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/17939.

Full text of the source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 65-67).
I propose a technique that identifies program properties that may indicate errors. The technique generates machine learning models of run-time program properties known to expose faults, and applies these models to program properties of user-written code to classify and rank properties that may lead the user to errors. I evaluate an implementation of the technique, the Fault Invariant Classifier, that demonstrates the efficacy of the error-finding technique. The implementation uses dynamic invariant detection to generate program properties. It uses support vector machine and decision tree learning tools to classify those properties. Given a set of properties produced by the program analysis, some of which are indicative of errors, the technique selects a subset of properties that are most likely to reveal an error. The experimental evaluation over 941,000 lines of code showed that a user must examine only the 2.2 highest-ranked properties for C programs and 1.7 for Java programs to find a fault-revealing property. The technique increases the relevance (the concentration of properties that reveal errors) by a factor of 50 on average for C programs, and 4.8 for Java programs.
by Yuriy Brun.
M.Eng.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Emani, Murali Krishna. „Adaptive parallelism mapping in dynamic environments using machine learning“. Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10469.

Full text of the source
Abstract:
Modern hardware platforms are parallel and diverse, ranging from mobiles to data centers. Mainstream parallel applications execute in the same system, competing for resources. This resource contention may lead to a drastic degradation in a program's performance. In addition, the execution environment, composed of workloads and hardware resources, is dynamic and unpredictable. Efficient matching of program parallelism to machine parallelism under uncertainty is hard. The mapping policies that determine the optimal allocation of work to threads should anticipate these variations. This thesis proposes solutions to the mapping of parallel programs in dynamic environments. It employs predictive modelling techniques to determine the best degree of parallelism. Firstly, this thesis proposes a machine learning-based model to determine the optimal thread number for a target program co-executing with varying workloads. For this purpose, the offline-trained model uses static code features and dynamic runtime information as input. Next, this thesis proposes a novel solution to monitor the proposed offline model and adjust its decisions in response to environment changes. It develops a second predictive model for determining what the future environment should look like if the current thread prediction were optimal; depending on how close this prediction is to the actual environment, the predicted thread numbers are adjusted. Furthermore, considering the multitude of potential execution scenarios where no single policy is best suited in all cases, this work proposes an approach based on the idea of a mixture of experts. It considers a number of offline experts, or mapping policies, each specialized for a given scenario, and learns online which expert is optimal for the current execution. When evaluated on highly dynamic executions, these solutions are shown to surpass default, state-of-the-art adaptive and analytic approaches.
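A classic way to "learn online which expert is optimal" is the Hedge / multiplicative-weights rule, sketched below; the thesis's scheme is more elaborate, and the learning rate and loss values here are illustrative assumptions.

```python
import math

# Hedged sketch of online expert selection: each offline mapping policy is an
# expert; after every scheduling interval, each expert's weight decays
# exponentially with its observed loss, so the consistently best expert wins.

class Hedge:
    def __init__(self, n_experts, eta=0.5):
        self.w = [1.0] * n_experts
        self.eta = eta

    def best(self):
        """Index of the currently highest-weighted expert."""
        return max(range(len(self.w)), key=lambda i: self.w[i])

    def update(self, losses):
        """losses in [0, 1], one per expert, for the last interval."""
        self.w = [w * math.exp(-self.eta * l) for w, l in zip(self.w, losses)]

h = Hedge(3)
for _ in range(10):
    h.update([0.9, 0.1, 0.5])   # expert 1 consistently has the lowest loss
chosen = h.best()                # converges to expert 1
```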
APA, Harvard, Vancouver, ISO, and other citation styles
8

Dahlberg, Love. „Dynamic algorithm selection for machine learning on time series“. Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-72576.

Full text of the source
Abstract:
We present software that can dynamically determine which machine learning algorithm is best to use in a certain situation, given predefined traits. The produced software uses ideal conditions to exemplify how such a solution could function. It is designed to train a selection algorithm that predicts the behavior of the specified testing algorithms in order to derive which among them is best. The software is then used to summarize and evaluate a collection of selection-algorithm predictions to determine which testing algorithm was best over the entire period. The goal of this project is to provide a prediction evaluation software solution that can lead towards a realistic implementation.
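One minimal form of such a selector is to score candidate algorithms on a held-out tail of the series and pick the lowest-error one. The sketch below assumes two toy baseline forecasters and a three-step holdout, chosen only for illustration; the thesis's software is more involved.

```python
# Hedged sketch of dynamic algorithm selection: candidate forecasters are
# scored on a held-out tail of the time series and the one with the lowest
# total absolute error is selected.

def naive_last(history):            # predict the last observed value
    return history[-1]

def mean_of_window(history, w=3):   # predict the mean of the last w values
    return sum(history[-w:]) / min(w, len(history))

def select_algorithm(series, candidates, holdout=3):
    """Return the candidate with the lowest absolute error on the tail."""
    def score(algo):
        err = 0.0
        for t in range(len(series) - holdout, len(series)):
            err += abs(algo(series[:t]) - series[t])
        return err
    return min(candidates, key=score)

trend = [1, 2, 3, 4, 5, 6, 7, 8]    # steadily increasing series
best = select_algorithm(trend, [naive_last, mean_of_window])
```

On a trending series the lagging window mean underestimates every step, so the last-value baseline wins the selection.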
APA, Harvard, Vancouver, ISO, and other citation styles
9

Caceres, Carlos Antonio. „Machine Learning Techniques for Gesture Recognition“. Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/52556.

Full text of the source
Abstract:
Classification of human movement is a large field of interest to Human-Machine Interface researchers. The reason for this lies in the large emphasis humans place on gestures while communicating with each other and while interacting with machines. Such gestures can be digitized in a number of ways, including both passive methods, such as cameras, and active methods, such as wearable sensors. While passive methods might be ideal, they are not always feasible, especially in unstructured environments. Instead, wearable sensors have gained interest as a method of gesture classification, especially for the upper limbs. Lower-arm movements are made up of a combination of multiple electrical signals known as Motor Unit Action Potentials (MUAPs). These signals can be recorded from electrodes placed on the surface of the skin and used for prosthetic control, sign language recognition, human-machine interfaces, and a myriad of other applications. In order to move a step closer to these goal applications, this thesis compares three different machine learning tools, namely Hidden Markov Models (HMMs), Support Vector Machines (SVMs), and Dynamic Time Warping (DTW), to recognize a number of different gesture classes. It further contrasts the applicability of these tools to noisy data in the form of the Ninapro dataset, a benchmarking tool put forth by a consortium of universities. Using this dataset as a basis, this work paves a path for the analysis required to optimize each of the three classifiers. Ultimately, care is taken to compare the three classifiers for their utility against noisy data, and a comparison is made against classification results put forth by other researchers in the field. The outcome of this work is over 90% recognition of individual gestures from the Ninapro dataset with two of the three distinct classifiers. Comparison against previous works by other researchers shows these results to outperform all others thus far.
Through further work with these tools, an end user might control a robotic or prosthetic arm, or translate sign language, or perhaps simply interact with a computer.
Master of Science
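Of the three compared tools, DTW is the most self-contained to sketch: a warping distance between signal templates, used inside a nearest-neighbour classifier. Real sEMG data is multichannel and noisy; the 1-D toy sequences and gesture names below are assumptions for illustrating the mechanics only.

```python
# Hedged sketch: classic O(len(a)*len(b)) Dynamic Time Warping distance
# between two 1-D sequences, plus a nearest-template gesture classifier.

def dtw(a, b):
    """DTW distance: minimal summed |a[i]-b[j]| along a monotone warp path."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[len(a)][len(b)]

def classify(signal, templates):
    """templates: {gesture_label: reference sequence} -> nearest label."""
    return min(templates, key=lambda g: dtw(signal, templates[g]))

templates = {"fist": [0, 1, 2, 1, 0], "point": [0, 0, 3, 0, 0]}
label = classify([0, 1, 2, 2, 1, 0], templates)   # a time-stretched "fist"
```

Because DTW aligns the stretched peak with the template's peak at zero extra cost, the distorted signal still matches its gesture class.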
APA, Harvard, Vancouver, ISO, and other citation styles
10

Botlani-Esfahani, Mohsen. „Modeling of Dynamic Allostery in Proteins Enabled by Machine Learning“. Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6804.

Full text of the source
Abstract:
Regulation of protein activity is essential for normal cell functionality. Many proteins are regulated allosterically, that is, with spatial gaps between stimulation and active sites. Biological stimuli that regulate proteins allosterically include, for example, ions and small molecules, post-translational modifications, and intensive state-variables like temperature and pH. These effectors can not only switch activities on-and-off, but also fine-tune activities. Understanding the underpinnings of allostery, that is, how signals are propagated between distant sites, and how transmitted signals manifest themselves into regulation of protein activity, has been one of the central foci of biology for over 50 years. Today, the importance of such studies goes beyond basic pedagogical interests as bioengineers seek design features to control protein function for myriad purposes, including design of nano-biosensors, drug delivery vehicles, synthetic cells and organic-synthetic interfaces. The current phenomenological view of allostery is that signaling and activity control occur via effector-induced changes in protein conformational ensembles. If the structures of two states of a protein differ from each other significantly, then thermal fluctuations can be neglected and an atomically detailed model of regulation can be constructed in terms of how their minimum-energy structures differ between states. However, when the minimum-energy structures of states differ from each other only marginally and the difference is comparable to thermal fluctuations, then a mechanistic model cannot be constructed solely on the basis of differences in protein structure. Understanding the mechanism of dynamic allostery requires not only assessment of high-dimensional conformational ensembles of the various individual states, including inactive, transition and active states, but also relationships between them. 
This challenge faces many diverse protein families, including G-protein coupled receptors, immune cell receptors, heat shock proteins, nuclear transcription factors and viral attachment proteins, whose mechanisms, despite numerous studies, remain poorly understood. This dissertation deals with the development of new methods that significantly boost the applicability of molecular simulation techniques to probe dynamic allostery in these proteins. Specifically, it deals with two different methods, one to obtain quantitative estimates for subtle differences between conformational ensembles, and the other to relate conformational ensemble differences to allosteric signal communication. Both methods are enabled by a new application of the mathematical framework of machine learning. These methods are applied to (a) identify specific effects of employed force fields on conformational ensembles, (b) compare multiple ensembles against each other for determination of common signaling pathways induced by different effectors, (c) identify the effects of point mutations on conformational ensemble shifts in proteins, and (d) understand the mechanism of dynamic allostery in a PDZ domain. These diverse applications essentially demonstrate the generality of the developed approaches, and specifically set the foundation for future studies on PDZ domains and viral attachment proteins.
APA, Harvard, Vancouver, ISO, and other citation styles
11

Zorello, Ligia Maria Moreira. „Dynamic CPU frequency scaling using machine learning for NFV applications“. Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-30012019-100044/.

Full text of the source
Abstract:
Growth in the Information and Communication Technology sector is increasing the need to improve quality of service and energy efficiency, as this industry had already surpassed 12% of global energy consumption in 2017. Data centers account for a large part of this consumption, roughly 15% of the energy expenditure of the Information and Communication Technology domain; moreover, the subsystem that generates the most cost for data center operators is servers and storage. Many solutions have been proposed to reduce server consumption, such as dynamic voltage and frequency scaling, a technology that adapts energy consumption to the workload by modifying the operating voltage and frequency, although these are not optimized for network traffic. In this thesis, a control method was developed using a prediction engine based on analysis of the ongoing traffic. Machine learning algorithms based on Neural Networks and Support Vector Machines were used, and it was verified that power consumption can be reduced by up to 12% on servers with Intel Sandy Bridge processors and up to 21% on servers with Intel Haswell processors, compared to running at maximum frequency, which is currently the most common solution in industry.
O crescimento do setor de Tecnologia da Informação e Comunicação está aumentando a necessidade de melhorar a qualidade de serviço e a eficiência energética, pois o setor já ultrapassou a marca de 12% do consumo energético global em 2017. Data centers correspondem a grande parte desse consumo, representando cerca de 15% dos gastos com energia do setor Tecnologia Informação e Comunicação; além disso, o subsistema que gera mais custos para operadores de data centers é o de servidores e armazenamento. Muitas soluções foram propostas a fim de reduzir o consumo de energia com servidores, como o uso de escalonamento dinâmico de tensão e frequência, uma tecnologia que permite adaptar o consumo de energia à carga de trabalho, embora atualmente não sejam otimizadas para o processamento do tráfego de rede. Nessa dissertação, foi desenvolvido um método de controle usando um mecanismo de previsão baseado na análise do tráfego que chega aos servidores. Os algoritmos de aprendizado de máquina baseados em Redes Neurais e em Máquinas de Vetores de Suporte foram utilizados, e foi verificado que é possível reduzir o consumo de energia em até 12% em servidores com processador Intel Sandy Bridge e em até 21% em servidores com processador Intel Haswell quando comparado com a frequência máxima, que é atualmente a solução mais utilizada na indústria.
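The control loop described above, predict the upcoming traffic and then scale the CPU frequency to match, might be sketched as follows. The P-state table, the capacity constant and the linear-extrapolation "predictor" are hypothetical placeholders for the thesis's NN/SVM predictors and a real cpufreq driver.

```python
import numpy as np

# Hypothetical P-state table in GHz; real values come from the CPU driver.
FREQS = [1.2, 1.6, 2.0, 2.6, 3.4]

def predict_load(history, horizon=1):
    """Toy stand-in for the NN/SVM predictor: linear extrapolation
    of recent traffic (requests/s) one step ahead."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * (len(history) - 1 + horizon) + intercept

def pick_frequency(predicted_load, capacity_per_ghz=1000.0, headroom=1.2):
    """Choose the lowest frequency whose capacity covers the predicted
    load plus a safety headroom; fall back to the maximum frequency."""
    needed = predicted_load * headroom / capacity_per_ghz
    for f in FREQS:
        if f >= needed:
            return f
    return FREQS[-1]

history = [800, 900, 1000, 1100, 1200]   # rising traffic
f = pick_frequency(predict_load(history))
```

The point of the design is that the server runs at the slowest frequency that still absorbs the predicted traffic, instead of pinning the maximum frequency at all times.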
APA, Harvard, Vancouver, ISO und andere Zitierweisen
12

Renner, Michael Robert. „Machine Learning Simulation: Torso Dynamics of Robotic Biped“. Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/34602.

Der volle Inhalt der Quelle
Annotation:
Military, medical, exploratory, and commercial robots have much to gain from exchanging wheels for legs. However, the equations of motion of dynamic bipedal walker models are highly coupled and non-linear, making the selection of an appropriate control scheme difficult. A temporal-difference reinforcement learning method known as Q-learning develops complex control policies through environmental exploration and exploitation. As a proof of concept, Q-learning was applied in simulation to a benchmark single-pendulum swing-up/balance task; the value function was approximated first with a look-up table and then with an artificial neural network. We then applied Evolutionary Function Approximation for Reinforcement Learning to control the swing leg and torso of a 3-degree-of-freedom active dynamic bipedal walker in simulation. The model began each episode in a stationary vertical configuration. At each time step the learning agent was rewarded for horizontal hip displacement scaled by torso altitude, which promoted faster walking while maintaining an upright posture, and one of six coupled torque activations was applied through two first-order filters. Over the course of 23 generations, an approximation of the value function was evolved that enabled walking at an average speed of 0.36 m/s. The agent oscillated the torso forward and then backward at each step, driving the walker forward for forty-two steps in thirty seconds without falling over. This work lays a foundation for improvements in anthropomorphic bipedal robots, exoskeleton mechanisms to assist walking, and smart prosthetics.
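As a rough illustration of the tabular, look-up-table form of Q-learning used in the proof of concept, here is the standard temporal-difference update on a toy chain world; the chain task and all hyperparameters are invented for illustration and are unrelated to the pendulum and walker models in the thesis.

```python
import numpy as np

def q_learning_chain(n_states=6, episodes=400, alpha=0.5, gamma=0.9,
                     eps=0.1, seed=0):
    """Tabular Q-learning on a toy chain: start in state 0, actions
    {0: left, 1: right}, reward 1 for reaching the last state."""
    rng = np.random.default_rng(seed)
    q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection with random tie-breaking
            if rng.random() < eps or q[s, 0] == q[s, 1]:
                a = int(rng.integers(2))
            else:
                a = int(np.argmax(q[s]))
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
            s = s2
    return q

q = q_learning_chain()
policy = q.argmax(axis=1)   # greedy action in each non-terminal state
```

After training, the greedy policy walks right toward the reward from every state, which is the optimal behavior on this chain.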
Master of Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
13

Kulich, Martin. „Dynamic Template Adjustment in Continuous Keystroke Dynamics“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234927.

Der volle Inhalt der Quelle
Annotation:
Keystroke dynamics is a behavioural biometric characteristic that can be used for continuous user authentication. Since typing style changes over time, the biometric template must be updated as well. To the author's knowledge, no previous study has addressed this problem, and this master's thesis attempts to fill that gap. Using keystroke-timing data from 22 volunteers, several classification techniques were tested to determine whether they can be turned into online classifiers that improve without supervision. A marked improvement in impostor detection was observed for a one-class statistical classifier based on the normalized Euclidean distance, on average 23.7% better than the original non-adaptive version, with improvement on every test set. The change in genuine-user recognition varied, but remained at acceptable levels.
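The adaptive one-class scheme described in the abstract might look like the following outline: a per-user template of keystroke timings, a normalized Euclidean distance score, and an unsupervised update of the template after each accepted sample. The threshold, adaptation rate and synthetic timing data are assumptions for illustration, not the thesis's tuned values.

```python
import numpy as np

class AdaptiveTemplate:
    """One-class keystroke verifier: normalized Euclidean distance to a
    per-user template of hold/flight times, with optional online
    adaptation of the template mean after each accepted sample."""

    def __init__(self, samples, threshold=2.0):
        self.mean = samples.mean(axis=0)
        self.std = samples.std(axis=0) + 1e-9   # avoid division by zero
        self.threshold = threshold

    def distance(self, x):
        return np.sqrt(np.mean(((x - self.mean) / self.std) ** 2))

    def verify(self, x, adapt=True, rate=0.05):
        ok = self.distance(x) < self.threshold
        if ok and adapt:   # fold the accepted sample into the template
            self.mean = (1 - rate) * self.mean + rate * x
        return ok

rng = np.random.default_rng(1)
train = rng.normal(100.0, 10.0, size=(50, 8))    # genuine timings (ms)
tpl = AdaptiveTemplate(train)
genuine = rng.normal(100.0, 10.0, size=8)
impostor = rng.normal(160.0, 10.0, size=8)
```

Adapting only on accepted samples is what lets the template drift along with the genuine user's typing style without supervised relabelling.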
APA, Harvard, Vancouver, ISO und andere Zitierweisen
14

Yang, Donghai, und 杨东海. „Dynamic planning and scheduling in manufacturing systems with machine learning approaches“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B41757968.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
15

Arslan, Oktay. „Machine learning and dynamic programming algorithms for motion planning and control“. Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54317.

Der volle Inhalt der Quelle
Annotation:
Robot motion planning is one of the central problems in robotics and has received a considerable amount of attention not only from roboticists but also from the control and artificial intelligence (AI) communities. Despite the different types of applications and physical properties of robotic systems, many high-level tasks of autonomous systems can be decomposed into subtasks that require point-to-point navigation while avoiding infeasible regions due to obstacles in the workspace. This dissertation aims at developing a new class of sampling-based motion planning algorithms that are fast, efficient and asymptotically optimal by employing ideas from Machine Learning (ML) and Dynamic Programming (DP). First, we interpret the robot motion planning problem as a form of machine learning problem, since the underlying search space is not known a priori, and utilize random geometric graphs to compute consistent discretizations of the underlying continuous search space. Then, we integrate existing DP and ML algorithms into the framework of sampling-based algorithms for better exploitation and exploration, respectively. We introduce a novel sampling-based algorithm, called RRT#, that improves upon the well-known RRT* algorithm by leveraging value and policy iteration methods as new information is collected. The proposed algorithms yield provable guarantees on correctness, completeness and asymptotic optimality. We also develop an adaptive sampling strategy by treating exploration as a classification (or regression) problem, and use online machine learning algorithms to learn the relevant region of a query, i.e., the region that contains the optimal solution, without significant computational overhead. We then extend the application of sampling-based algorithms to a class of stochastic optimal control problems and problems with differential constraints.
Specifically, we introduce the Path Integral - RRT algorithm for solving optimal control of stochastic systems, and the CL-RRT# algorithm, which uses closed-loop prediction for trajectory generation for differential systems. One key benefit of CL-RRT# is that for many systems, given a low-level tracking controller, it is easier to handle differential constraints, so complex steering procedures are not needed, unlike in most existing kinodynamic sampling-based algorithms. Implementation results of sampling-based planners for route planning of a full-scale autonomous helicopter under the Autonomous Aerial Cargo/Utility System (AACUS) program are provided.
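The dynamic-programming core that RRT#-style planners run on their random geometric graph is a Bellman cost-to-go recursion. A minimal sketch on a tiny hand-made graph (not an actual sampling-based planner, and with an invented edge list) looks like this:

```python
import math

def value_iteration(edges, goal, n_iters=100):
    """Bellman updates for shortest-path cost-to-go on a weighted
    directed graph: V(u) = min over edges (u, v, w) of w + V(v)."""
    nodes = {u for u, v, w in edges} | {v for u, v, w in edges}
    succ = {}
    for u, v, w in edges:
        succ.setdefault(u, []).append((v, w))
    value = {n: math.inf for n in nodes}
    value[goal] = 0.0
    for _ in range(n_iters):
        for u in nodes:
            if u == goal:
                continue
            candidates = [w + value[v] for v, w in succ.get(u, [])]
            if candidates:
                value[u] = min(candidates)
    return value

# toy graph: the direct s -> g edge is more expensive than going via a
edges = [("s", "a", 1.0), ("a", "g", 1.0), ("s", "g", 3.0), ("a", "s", 1.0)]
v = value_iteration(edges, goal="g")
```

In an RRT#-style planner the same recursion is re-run incrementally as sampling adds vertices and edges to the graph, so cost-to-go estimates stay consistent with the best information collected so far.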
APA, Harvard, Vancouver, ISO und andere Zitierweisen
16

Xu, Jin. „Machine Learning – Based Dynamic Response Prediction of High – Speed Railway Bridges“. Thesis, KTH, Bro- och stålbyggnad, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278538.

Der volle Inhalt der Quelle
Annotation:
Over the past decades, strategic railway development has targeted heavier freight and higher passenger train speeds, significantly increasing the demands on railway networks. Among the different components of a railway network, bridges constitute a major portion and impose considerable construction and maintenance costs. At the same time, heavier axle loads and higher train speeds may cause resonance in bridges, which in turn limits operational train speeds and lines. Meeting these new expectations therefore requires a large number of dynamic assessments of bridges, especially existing ones. Such assessments demand detailed information, expert engineers and considerable computational effort. To save computational effort and reduce the expertise required for preliminary evaluation of dynamic responses, this study proposes predictive models based on artificial neural networks (ANNs). A previously developed closed-form solution method (based on solving a series of moving forces) was adopted to calculate the dynamic responses (maximum deck deflection and maximum vertical deck acceleration) of randomly generated bridges. The basic variables for generating random bridges were extracted both from the literature and from the geometrical properties of existing bridges in Sweden. Different ANN architectures, varying in the number of inputs and neurons, were considered to train the most accurate and computationally cost-effective model. The most efficient model was then selected by comparing performance in terms of absolute error (Err), Root Mean Square Error (RMSE) and coefficient of determination (R2). The results revealed that the ANN model can acceptably predict the dynamic responses: the proposed model achieves an Err of about 11.1% for maximum acceleration and 9.9% for maximum deflection.
Furthermore, its R2 equals 0.982 for maximum acceleration and 0.998 for maximum deflection, and its RMSE is 0.309 for maximum acceleration and 1.51E-04 for maximum deflection. Finally, sensitivity analyses were conducted to evaluate the importance of each input variable; the span length of the bridge and the speed of the train proved to be the most influential parameters.
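The three reported metrics (Err, RMSE, R2) can be computed as below; the toy vectors are placeholders for the ANN predictions and the closed-form reference responses.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Err (mean absolute relative error, in %), RMSE and R^2, as used
    to compare a surrogate model against reference responses."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = 100.0 * np.mean(np.abs(y_pred - y_true) / np.abs(y_true))
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return err, rmse, r2

# toy reference responses vs. toy surrogate predictions
err, rmse, r2 = regression_metrics([1.0, 2.0, 4.0], [1.1, 1.9, 4.2])
```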
APA, Harvard, Vancouver, ISO und andere Zitierweisen
17

Gyawali, Sanij. „Dynamic Load Modeling from PSSE-Simulated Disturbance Data using Machine Learning“. Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/100591.

Der volle Inhalt der Quelle
Annotation:
Load models have evolved from the simple ZIP model to a composite model that incorporates the transient dynamics of motor loads. This research utilizes recent advances in machine learning to build a reliable and accurate composite load model. A composite load model combines a static (ZIP) model in parallel with a dynamic model. The dynamic model recommended by the Western Electricity Coordinating Council (WECC) is an induction motor representation. In this research, a dual-cage induction motor with 20 parameters pertaining to its dynamic behavior, starting behavior, and per-unit calculations is used as the dynamic model. Machine learning algorithms require a large amount of data. The required PMU field data and the corresponding system models are considered Critical Energy Infrastructure Information (CEII), and access to them is limited. The next best option for obtaining the required amount of data is a simulation environment such as PSSE. The IEEE 118-bus system is used as a test setup in PSSE, and dynamic simulations generate the required data samples. Each sample contains data on bus voltage, bus current, and bus frequency, with the corresponding induction motor parameters as target variables. It was determined that an Artificial Neural Network (ANN) with a multivariate-input, single-parameter-output approach worked best. A Recurrent Neural Network (RNN) was also evaluated side by side to see whether the additional timestamp information would help model prediction. Moreover, a different definition of the dynamic model, based on a transfer-function load, is also studied. Here, the dynamic model is defined as a mathematical representation of the relation between bus voltage, bus frequency, and the active/reactive power flowing in the bus. With this form of load representation, Long Short-Term Memory (LSTM), a variation of RNN, performed better than concurrent algorithms such as Support Vector Regression (SVR).
The result of this study is a load model consisting of parameters defining the load at the load bus, whose predictions are compared against simulated parameters to examine their validity for use in contingency analysis.
Master of Science
Independent System Operators (ISOs) and Distribution System Operators (DSOs) have a responsibility to provide an uninterrupted power supply to consumers. Along with the need to keep operating costs to a minimum, this leads engineers and planners to study the system beforehand and seek the optimum capacity for each power system element, such as generators, transformers, and transmission lines. They then test the overall system using power system models, which are mathematical representations of the real components, to verify the stability and strength of the system. However, the verification is only as good as the system models used. Since most power system components are controlled by the operators themselves, models for them are easy to develop from the operators' perspective. The load is the only component controlled by consumers; hence the need for better load models. Several studies have been made on static load modeling, and its performance is on par with real behavior. Dynamic loading, a load behavior that depends on time, is considerably more difficult to model. Some attempts at dynamic load modeling already exist: physical-component-based and mathematical transfer-function-based dynamic models are widely used, and these load structures are largely accepted as good representations of a system's dynamic behavior. With a load structure in hand, the next task is estimating its parameters. In this research, we tested new machine learning methods to estimate the parameters accurately. Thousands of simulated data samples were used to train the machine learning models, which were then validated on unseen data. The study concludes by recommending the better-performing methods for load modeling.
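The static half of the composite model described above, the ZIP polynomial, is compact enough to sketch directly. The coefficients and voltage below are invented for illustration, and the paralleled induction-motor half of the composite model is omitted entirely.

```python
def zip_power(v, p0, z, i, p):
    """Static ZIP load model: active power as a function of per-unit
    voltage v, with constant-impedance (z), constant-current (i) and
    constant-power (p) fractions that must sum to one."""
    assert abs(z + i + p - 1.0) < 1e-9
    return p0 * (z * v ** 2 + i * v + p)

# Hypothetical 100 MW load: 40% constant-impedance, 30% constant-current,
# 30% constant-power, evaluated during a voltage dip to 0.95 pu.
pw = zip_power(0.95, 100.0, 0.4, 0.3, 0.3)
```

At nominal voltage (1.0 pu) the model returns the nominal power by construction, and during the dip the impedance and current fractions reduce the drawn power while the constant-power fraction does not.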
APA, Harvard, Vancouver, ISO und andere Zitierweisen
18

Yang, Donghai. „Dynamic planning and scheduling in manufacturing systems with machine learning approaches“. Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B41757968.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
19

Sathyan, Anoop. „Intelligent Machine Learning Approaches for Aerospace Applications“. University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1491558309625214.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
20

Jackson, John Taylor. „Improving Swarm Performance by Applying Machine Learning to a New Dynamic Survey“. DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1857.

Der volle Inhalt der Quelle
Annotation:
A company, Unanimous AI, has created a software platform that allows individuals to come together as a group, or human swarm, to make decisions. These human swarms amplify the decision-making capabilities of both the individuals and the group. One way Unanimous AI increases the swarm's collective decision-making capability is by limiting the swarm to individuals who are more informed on the given topic. This study details a new methodology that improves on the way Unanimous AI previously selected users to enter the swarm. The new methodology implements a new type of survey that collects data more indicative of a user's knowledge of the subject than the previous survey. This study also identifies better metrics for predicting each user's performance when predicting Major League Baseball (MLB) game outcomes over a given week. It demonstrates that the new machine learning models and data extraction schemes are approximately 12% more accurate at predicting user performance than the currently implemented methods. Finally, this study shows how predicting users' performance purely from their inputs can increase the average performance of a group by limiting the group to the top predicted performers: across five different weeks of MLB predictions, limiting the group to the top predicted performers increased average group performance by up to 5.5%, making this a superior method.
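The selection step described above, rank users by predicted performance and admit only the top fraction, can be sketched as follows; the prediction scores and accuracy numbers are made up for illustration and are not data from the study.

```python
def select_top_performers(predicted, actual, top_frac=0.25):
    """Rank users by predicted skill, keep the top fraction, and compare
    the subgroup's mean actual accuracy with the whole group's."""
    ranked = sorted(zip(predicted, actual), key=lambda t: -t[0])
    k = max(1, int(len(ranked) * top_frac))
    top = [a for _, a in ranked[:k]]
    everyone = [a for _, a in ranked]
    return sum(top) / len(top), sum(everyone) / len(everyone)

# hypothetical predicted skill scores and realized prediction accuracies
pred = [0.9, 0.4, 0.7, 0.2, 0.6, 0.3, 0.8, 0.5]
acc = [0.62, 0.48, 0.55, 0.41, 0.58, 0.44, 0.60, 0.50]
top_mean, all_mean = select_top_performers(pred, acc)
```

Whenever the predictor is even weakly correlated with actual performance, the filtered subgroup's mean accuracy exceeds the unfiltered group's, which is the uplift the study measures.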
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Hammami, Seif Eddine. „Dynamic network resources optimization based on machine learning and cellular data mining“. Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0015/document.

Der volle Inhalt der Quelle
Annotation:
Real datasets of mobile network traces contain valuable information about network resource usage, and these traces can be used to enhance and optimize network performance. A real dataset of CDR (Call Detail Records) traces, which include spatio-temporal information about mobile users' activities, is analyzed and exploited in this thesis. Given the large size of these real-world datasets, the information extracted from them has been used extensively in our work to develop new algorithms that aim to revolutionize infrastructure management mechanisms and optimize resource usage. We propose a framework for network profile classification, load prediction and dynamic network planning based on machine learning tools, as well as a framework for network anomaly detection. These frameworks are validated using different network topologies, such as wireless mesh networks (WMN) and drone-cell based networks. We show that, using advanced data mining techniques, our frameworks can help network operators manage and optimize their networks dynamically.
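The base-station profile classification described above can be illustrated with a minimal two-means clustering of synthetic 24-hour load curves. The Gaussian "office" and "residential" profiles below are stand-ins for real CDR-derived load profiles, and the deterministic farthest-point initialization is a simplification chosen for reproducibility.

```python
import numpy as np

def cluster_profiles(profiles, iters=50):
    """Two-means clustering of daily load profiles, initialized with the
    first profile and the profile farthest from it."""
    c0 = profiles[0]
    c1 = profiles[np.linalg.norm(profiles - c0, axis=1).argmax()]
    centers = np.stack([c0, c1])
    for _ in range(iters):
        # distance of every profile to each center, then reassign
        d = np.linalg.norm(profiles[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.stack([profiles[labels == j].mean(axis=0)
                            for j in range(2)])
    return labels

hours = np.arange(24)
office = np.exp(-(hours - 13.0) ** 2 / 20.0)   # daytime traffic peak
home = np.exp(-(hours - 20.0) ** 2 / 20.0)     # evening traffic peak
rng = np.random.default_rng(1)
profiles = np.vstack([office + 0.05 * rng.normal(size=(10, 24)),
                      home + 0.05 * rng.normal(size=(10, 24))])
labels = cluster_profiles(profiles)
```

Grouping stations by usage class in this way is the first step before per-class load prediction and planning.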
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Hammami, Seif Eddine. „Dynamic network resources optimization based on machine learning and cellular data mining“. Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0015.

Der volle Inhalt der Quelle
Annotation:
Real datasets of mobile network traces contain valuable information about network resource usage, and these traces can be used to enhance and optimize network performance. A real dataset of CDR (Call Detail Records) traces, which include spatio-temporal information about mobile users' activities, is analyzed and exploited in this thesis. Given the large size of these real-world datasets, the information extracted from them has been used extensively in our work to develop new algorithms that aim to revolutionize infrastructure management mechanisms and optimize resource usage. We propose a framework for network profile classification, load prediction and dynamic network planning based on machine learning tools, as well as a framework for network anomaly detection. These frameworks are validated using different network topologies, such as wireless mesh networks (WMN) and drone-cell based networks. We show that, using advanced data mining techniques, our frameworks can help network operators manage and optimize their networks dynamically.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Damay, Gabriel. „Dynamic Decision Trees and Community-based Graph Embeddings : towards Interpretable Machine Learning“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT047.

Der volle Inhalt der Quelle
Annotation:
Machine learning is the field of computer science concerned with building models and solutions from data without knowing exactly the set of instructions internal to those models and solutions. The field has achieved great results but is now under scrutiny, among other concerns, for the inability to understand or audit its models. Interpretable machine learning addresses these concerns by building models that are inherently interpretable. This thesis contributes to interpretable machine learning in two ways. First, we study decision trees, a very popular group of machine learning methods for classification problems that is interpretable by design. However, real-world data is often dynamic, and few algorithms can maintain a decision tree when data can be both inserted into and deleted from the training set. We propose a new algorithm, called FuDyADT, to solve this problem. Second, when data are represented as graphs, a very common machine learning technique called embedding consists in projecting them onto a vector space. This kind of method, however, is usually not interpretable. We propose a new embedding algorithm, called Parfaite, based on the factorization of the Personalized PageRank matrix and designed to produce interpretable results. We study both algorithms theoretically and experimentally. We show that FuDyADT is at least comparable to state-of-the-art algorithms in the usual setting, while also handling unusual settings such as deletions of data and numerical features. Parfaite, for its part, produces embedding dimensions that align with the communities of the graph, making the embedding interpretable.
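The idea behind a Personalized-PageRank-based embedding can be sketched in a few lines: build the PPR matrix, then factorize it into low-dimensional node vectors. The truncated SVD below is only an illustration (Parfaite relies on its own factorization), and the toy graph, two triangles joined by a bridge edge, is invented for the example.

```python
import numpy as np

def ppr_matrix(adj, alpha=0.15):
    """Personalized PageRank matrix: row s is the PPR vector with
    restart probability alpha and seed node s."""
    p = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transitions
    n = len(adj)
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * p)

def embed(adj, dim=2, alpha=0.15):
    """Embed nodes by factorizing the PPR matrix with a truncated SVD;
    nodes in the same community end up with similar vectors."""
    u, s, _ = np.linalg.svd(ppr_matrix(adj, alpha))
    return u[:, :dim] * s[:dim]

# two triangles (nodes 0-2 and 3-5) joined by the bridge edge 2-3
adj = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[a, b] = adj[b, a] = 1.0
emb = embed(adj)
```

In this sketch, nodes of the same triangle land close together in the embedding, a toy version of the community alignment the thesis aims for.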
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Wenerstrom, Brent K. „Temporal Data Mining in a Dynamic Feature Space“. BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/761.

Der volle Inhalt der Quelle
Annotation:
Many interesting real-world applications for temporal data mining are hindered by concept drift. One particular form of concept drift is characterized by changes to the underlying feature space. Seemingly little has been done to address this issue. This thesis presents FAE, an incremental ensemble approach to mining data subject to concept drift. FAE achieves better accuracies over four large datasets when compared with a similar incremental learning algorithm.
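An accuracy-weighted incremental ensemble in the spirit of (but much simpler than) FAE might look like the following. Learners are plain predicate functions over dict-based feature vectors, so a feature absent from a learner's feature space is simply ignored; all names, weights and features are invented for illustration.

```python
class WeightedVoteEnsemble:
    """Sketch of an accuracy-weighted voting ensemble: each learner
    votes with a weight that tracks its recent accuracy, which lets
    the ensemble shift toward learners that suit the current
    feature space as the space drifts."""

    def __init__(self, learners):
        # learners: list of (predict_fn, weight) pairs
        self.learners = learners

    def predict(self, x):
        score = sum(w * (1 if fn(x) else -1) for fn, w in self.learners)
        return score > 0

    def update_weights(self, x, y, decay=0.9):
        # exponentially decayed accuracy: recent mistakes cost weight
        self.learners = [
            (fn, decay * w + (1 - decay) * (1.0 if fn(x) == y else 0.0))
            for fn, w in self.learners
        ]

# toy spam-style features stored as dicts, so the feature space can grow
h_free = lambda x: x.get("word_free", 0) > 0
h_new = lambda x: x.get("word_new", 0) > 0   # feature that appears later
ens = WeightedVoteEnsemble([(h_free, 0.8), (h_new, 0.2)])
```

As labeled examples using the newer feature arrive, `update_weights` shifts influence toward the learner that handles it, which is the mechanism that lets such ensembles track a changing feature space.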
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Madjar, Nicole, und Filip Lindblom. „Machine Learning implementation for Stress-Detection“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280897.

Der volle Inhalt der Quelle
Annotation:
This project applies machine learning methods to a selection of data points in order to determine whether an improvement of the current methodology for stress detection and measure selection could be applicable for the company Linkura AB. Linkura AB is a medical technology company based in Linköping that handles, among other things, stress measurement for the employees of different companies, as well as health coaching for selecting measures. In this report we experiment with different methods and algorithms under the collective name of unsupervised learning to identify visible patterns and behaviors in the data points, and we analyze them further in relation to the quantity of data received. The methods used during the project were the k-means algorithm and a dynamic hierarchical clustering algorithm. The correlation between the parameters of the data points is analyzed to optimize resource consumption, and experiments with different numbers of parameters are carried out and discussed with an expert in stress coaching. The results showed that both algorithms can create clusters for the risk groups, but the dynamic clustering method clearly indicates the optimal number of clusters to use. Having consulted mentors and health coaches on the analysis of the produced clusters, we concluded that the dynamic hierarchical clustering algorithm gives more accurate clusters for representing risk groups. The conclusion of this project is that the machine learning algorithms used can categorize data points with stress-related correlations, which is useful when recommending measures. Further research should be done with a larger dataset for a more optimal result; this project can form the basis for such implementations.
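The "dynamic" aspect above, letting the data determine the number of clusters instead of fixing k in advance, can be illustrated with a toy single-linkage agglomeration that stops merging once the closest pair of clusters is farther apart than a threshold. The one-dimensional heart-rate readings and the threshold are invented for illustration.

```python
def single_linkage(points, threshold):
    """Toy 1-D agglomerative clustering: repeatedly merge the two
    closest clusters (single linkage) until the closest pair is
    farther apart than the threshold, so the cluster count falls
    out of the data."""
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > threshold:
            break   # remaining clusters are well separated
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# hypothetical resting, moderate and elevated heart-rate readings (bpm)
readings = [62, 64, 66, 88, 90, 92, 120, 123]
groups = single_linkage(readings, threshold=10)
```

Here three groups emerge on their own, which mirrors how a dynamic hierarchical method can surface the number of risk groups rather than requiring it as input.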
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Wenerstrom, Brent. „Temporal data mining in a dynamic feature space /“. Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1317.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Bhardwaj, Ananya. „Biomimetic Detection of Dynamic Signatures in Foliage Echoes“. Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/102299.

Der volle Inhalt der Quelle
Annotation:
Horseshoe bats (family Rhinolophidae) are among the bat species that dynamically deform their reception baffles (pinnae) and emission baffles (noseleaves) during signal reception and emission, respectively. These dynamics have been the focus of prior studies demonstrating that they can introduce time variance into emitted and received signals. Recent lab experiments with biomimetic hardware have shown that these dynamics can also inject time-variant signatures into echoes from simple targets. However, complex foliage echoes, which make up a large portion of the received echoes and contain useful information for these bats, have not been studied in prior research. We used a biomimetic sonarhead that replicated these dynamics to collect a large dataset of foliage echoes (>55,000). To generate a neuromorphic representation of the echoes representative of neural spikes in bat brains, we developed an auditory processing model based on Horseshoe bat physiological data. Machine learning classifiers were then employed to classify these spike representations into distinct groups, based on the presence or absence of the dynamics' effects. Our results showed that classification with up to 80% accuracy was possible, indicating that these effects are present in foliage echoes and persist through the auditory processing. This suggests that the effects might be present in bat brains and therefore have the potential to inform behavioral decisions. Our results also indicated that potential benefits from these effects might be location-specific, as our classifier was more effective at classifying echoes from the same physical location than on a dataset with significant variation in recording locations. This suggests that the advantages of these effects may be limited to the context of particular surroundings if the bat brain similarly fails to generalize over variation in locations.
Master of Science
Horseshoe bats (family Rhinolophidae) are echolocating bats: they emit sound waves and use the corresponding echoes received from the environment to gather information for navigation. These bats deform their emitter (noseleaf) and ears (pinnae) while emitting or receiving echolocation signals. Horseshoe bats are adept at navigating in the dark through dense foliage. Their impressive navigational abilities are of interest to researchers, as their biology can inspire solutions for autonomous drone navigation in foliage and underwater. Prior research, through numerical studies and experimental reproductions, has found that these deformations can introduce time-dependent changes in the emitted and received signals. Furthermore, recent research using a biomimetic robot has found that echoes received from simple shapes, such as cubes and spheres, also contain time-dependent changes. However, prior studies have not used foliage echoes in their analysis, which are more complex, since they include a large number of randomly distributed targets (leaves). Foliage echoes also constitute a large share of echoes from the bats' habitats, hence an understanding of the effects of the dynamic deformations on these foliage echoes is of interest. Since echolocation signals exist within bat brains as neural spikes, it is also important to understand whether these dynamic effects can be identified within such signal representations, as that would indicate that these effects are available to the bats' brains. In this study, a biomimetic robot that mimicked the dynamic pinna and noseleaf deformation was used to collect a large dataset (>55,000) of echoes from foliage. A signal processing model that mimicked the auditory processing of these bats and generated simulated spike responses was also developed.
Supervised machine learning was used to classify these simulated spike responses into two groups based on the presence or absence of these dynamics' effects. The success of the machine learning classifiers, with up to 80% accuracy, suggested that the dynamic effects exist within foliage echoes and also within spike-based representations. The machine learning classifier was more accurate when classifying echoes from a small confined area, as compared to echoes distributed over a larger area with varying foliage. This result suggests that any potential benefits from these effects might be location-specific if the bat brain similarly fails to generalize over the variation in echoes from different locations.
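As a hedged illustration of the classification step described above (not the thesis' actual features or classifier), a nearest-centroid rule over invented spike-count vectors shows the general setup of separating echoes with and without the dynamics' effects:

```python
# Toy sketch: classifying spike-count feature vectors into "static" vs
# "dynamic" echo classes with a nearest-centroid rule. The feature values
# and class structure are illustrative only, not the thesis data.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Return the label of the closest centroid (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical spike counts per frequency channel for each echo class.
train = {
    "static":  [[2, 1, 0], [3, 1, 1], [2, 2, 0]],
    "dynamic": [[5, 4, 3], [6, 4, 2], [5, 5, 3]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}
print(classify([5, 4, 2], centroids))  # closer to the "dynamic" centroid
```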
APA, Harvard, Vancouver, ISO and other citation styles
28

Xue, Yongjian. „Dynamic Transfer Learning for One-class Classification : a Multi-task Learning Approach“. Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0006.

The full text of the source
Annotation:
The aim of this thesis is to minimize the performance loss of a one-class detection system when it encounters a data distribution change. The idea is to use a transfer learning approach to transfer learned information from a related old task to the new one. Following the practical applications, we divide this transfer learning problem into two parts: transfer learning in a homogeneous space and transfer learning in a heterogeneous space. A multi-task learning model is proposed to solve the above problem; it uses one parameter to balance the amount of information brought by the old task versus the new task. This model is formalized so that it can be solved by a classical one-class SVM with a specific kernel matrix. To select the control parameter, a solution-path method is proposed: it computes the solutions for all values of the introduced parameter, and criteria are proposed to choose the corresponding optimal solution for a given number of new samples. Experiments show that this model gives a smooth transition from the old detection system to the new one whenever a data distribution change occurs. Moreover, as the proposed model can be solved by a classical one-class SVM, online learning algorithms for the one-class SVM are then studied with the aim of maintaining a stable false alarm rate during the transition phase; they can be applied directly to online learning of the proposed model.
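As a rough illustration (not the thesis' exact formulation), the idea of balancing old-task against new-task information inside a single one-class SVM kernel matrix can be sketched as a pooled Gram matrix in which cross-task similarities are scaled by a control parameter mu:

```python
import math

# Sketch: a Gram matrix over pooled old-task and new-task samples in which
# cross-task similarities are down-weighted by a control parameter mu in
# [0, 1]. mu = 1 treats both tasks as one; mu = 0 removes the old task's
# influence on the new one. All sample values are invented.

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def transfer_gram(old, new, mu, gamma=1.0):
    samples = [(x, "old") for x in old] + [(x, "new") for x in new]
    K = []
    for xi, ti in samples:
        row = []
        for xj, tj in samples:
            k = rbf(xi, xj, gamma)
            row.append(k if ti == tj else mu * k)  # scale cross-task entries
        K.append(row)
    return K

K = transfer_gram([[0.0], [0.1]], [[1.0]], mu=0.5)
# The diagonal stays 1.0; old/new off-diagonal entries are halved.
```

A matrix of this form could then be passed to any one-class SVM solver that accepts a precomputed kernel.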
APA, Harvard, Vancouver, ISO and other citation styles
29

Curtis, Brian J. „Machine Learning and Cellular Automata| Applications in Modeling Dynamic Change in Urban Environments“. Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10785215.

The full text of the source
Annotation:

There have been several studies advocating the need for, and the feasibility of, using advanced techniques to support decision makers in urban planning and resource monitoring. One such advanced technique is a framework that leverages remote sensing and geographic information systems (GIS) in conjunction with cellular automata (CA) to monitor land use / land cover change phenomena such as urban sprawl. Much research has been conducted using various learning techniques spanning all levels of complexity, from simple logistic regression to advanced artificial intelligence methods (e.g., artificial neural networks). In a high percentage of the published research, simulations are performed leveraging only one or two techniques applied to a case study of a single geographical region. Typically, the findings are favorable and demonstrate that the studied methods are superior. This work found no research comparing the performance of several machine learning techniques across an array of geographical locations. Additionally, the current literature was found to lack investigation of the impact that various scene parameters (e.g., sprawl, urban growth) have on simulation results. Therefore, this research set out to understand the sensitivities and correlations associated with the selection of machine learning methods used in CA-based models. The results from this research indicate that simpler algorithms, which are easier to comprehend and implement, have the potential to perform as well as more complicated algorithms. Also, it is shown that the quantity of urbanization in the studied area directly impacts the simulation results.
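A minimal sketch of the CA-plus-learning coupling discussed above: each non-urban cell converts with a probability given by a logistic function of its urban-neighbour count. The coefficients below are invented for illustration; real studies calibrate them from remote-sensing data:

```python
import math

# Illustrative CA land-use transition step. A cell's conversion probability
# is a logistic function of how many of its 8 neighbours are urban (1)
# versus non-urban (0). Coefficients b0, b1 are assumed, not calibrated.

def urban_neighbours(grid, r, c):
    n = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                rr, cc = r + dr, c + dc
                if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]):
                    n += grid[rr][cc]
    return n

def transition_probability(neighbours, b0=-4.0, b1=1.2):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * neighbours)))

grid = [[1, 1, 0],
        [1, 0, 0],
        [0, 0, 0]]
# Centre cell has three urban neighbours, giving a moderate probability.
p = transition_probability(urban_neighbours(grid, 1, 1))
```

Swapping `transition_probability` for the output of a trained classifier (logistic regression, neural network, etc.) is the substitution the study compares.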

APA, Harvard, Vancouver, ISO and other citation styles
30

Tahkola, M. (Mikko). „Developing dynamic machine learning surrogate models of physics-based industrial process simulation models“. Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201906042313.

The full text of the source
Annotation:
Abstract. Dynamic physics-based models of industrial processes can be computationally heavy, which prevents their use in some applications, e.g. in process operator training. This research studied the suitability of machine learning for creating surrogate models of physics-based unit operation models. The main motivation was to find out whether a machine learning model can be accurate enough to replace the corresponding physics-based components in Apros®, the dynamic modelling and simulation software developed by VTT Technical Research Centre of Finland Ltd and Fortum. This study is part of the COCOP project, which receives funding from the EU, and the INTENS project, which is funded by Business Finland. The research work was divided into a literature study and an experimental part. In the literature study, the steps of modelling with data-driven methods were examined and artificial neural network architectures suitable for dynamic modelling were investigated. Based on that, four neural network architectures were chosen for the case studies. In the first case study, linear and nonlinear autoregressive models with exogenous inputs (ARX and NARX, respectively) were used to model the dynamic behaviour of a water tank process built in Apros®. In the second case study, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks were also considered and compared with the previously mentioned ARX and NARX models. The workflow was defined from selecting the input and output variables for the machine learning model and generating the datasets in Apros® to implementing the machine learning models back into Apros®. Keras, a popular open-source neural network library for Python that allows fast experimentation, was utilised in the model generation framework developed as part of this study. The framework makes use of random hyperparameter search, and during the optimisation each model is tested on a validation dataset in a dynamic manner, i.e.
in a multi-step-ahead configuration. The best model of each type, in terms of average normalised root mean squared error (NRMSE), is selected for further testing. The results of the case studies show that accurate multi-step-ahead models can be built using recurrent artificial neural networks. In the first case study, the linear ARX model achieved a slightly better NRMSE value than the nonlinear one, but the accuracy of both models was at a very good level, with the average NRMSE being lower than 0.1%. The generalisation ability of the models was tested using multiple datasets, and the models proved to generalise well. In the second case study, there were larger differences between the models' accuracies. This was an expected result, as the studied process contains nonlinearities and thus the linear ARX model performed worse than the nonlinear ones in predicting some output variables; on the other hand, the ARX model performed better on some other output variables. However, also in the second case study the model NRMSE values were at a good level, being 1.94–3.60% on the testing dataset. Although the workflow to implement machine learning models in Apros® using its Python binding was defined, the actual implementation needs more work. Experimenting with Keras neural network models in Apros® was observed to slow down the simulation, even though the models were fast when tested outside of Apros®. The Python binding in Apros® does not seem to cause overhead in the calculation process, which is why further investigation is needed. It is obvious that a machine learning model must be very accurate if it is to be implemented in Apros®, because it needs to be able to interact with the physics-based model.
The actual accuracy requirement that Apros® sets should also be studied to determine whether, and in which direction, the framework created for this study needs to be developed.
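Two of the evaluation ingredients described above, multi-step-ahead simulation of an ARX model and the NRMSE metric, can be sketched as follows (the model coefficients are assumed for illustration, not fitted to Apros® data):

```python
# Sketch of multi-step-ahead validation: a one-step linear ARX predictor
# is applied recursively, feeding its own predictions back, and candidate
# models are ranked by normalised root mean squared error (NRMSE).
# The coefficients a, b below are invented.

def arx_multistep(y0, u, a=0.8, b=0.2):
    """Recursive simulation of y[t] = a*y[t-1] + b*u[t-1]."""
    y = [y0]
    for t in range(1, len(u) + 1):
        y.append(a * y[-1] + b * u[t - 1])
    return y

def nrmse(actual, predicted):
    """Root mean squared error normalised by the spread of the actual data."""
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    spread = max(actual) - min(actual)
    return (mse ** 0.5) / spread

u = [1.0] * 4                    # step input held at 1.0
pred = arx_multistep(0.0, u)     # ≈ [0.0, 0.2, 0.36, 0.488, 0.5904]
```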
APA, Harvard, Vancouver, ISO and other citation styles
31

Reis, Ana Flávia dos. „New Baseband Architectures Using Machine Learning and Deep Learning in the Presence of Nonlinearities and Dynamic Environment“. Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS023.

The full text of the source
Annotation:
The forthcoming sixth generation (6G) of wireless communication systems is expected to enable a wide range of new applications in vehicular communication, accompanied by a diverse set of challenges and opportunities resulting from the demands of this cutting-edge technology. In particular, these challenges arise from dynamic channel conditions, including time-varying channels and nonlinearities induced by high-power amplifiers. In this complex context, wireless channel estimation emerges as an essential element in establishing reliable communication. Furthermore, the potential of machine learning and deep learning in the design of receiver architectures adapted to vehicular communication networks is evident, given their capability to harness vast datasets, model complex channel conditions, and optimize receiver performance. Throughout this research, we leveraged these tools to advance the state of the art in receiver design for vehicular communication networks. We delved into the characteristics of wireless channel estimation and the mitigation of nonlinear distortions, recognizing these as significant factors in communication system performance. To this end, we propose new methods and flexible receivers based on hybrid approaches that combine mathematical models and machine learning techniques, taking advantage of the unique characteristics of the vehicular channel to favor accurate estimation. Our analysis covers both the conventional wireless communication waveform and a promising 6G waveform, demonstrating the comprehensiveness of our approach. The results of the proposed approaches are promising, characterized by substantial enhancements in performance and noteworthy reductions in system complexity. These findings hold potential for real-world applications, marking a step toward the future of vehicular communication networks.
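As a hedged aside, the channel-estimation building block mentioned above can be sketched in its simplest least-squares form: with known pilot symbols x and received samples y = h·x, the per-pilot estimate is y/x. All values below are synthetic, and real receivers must additionally cope with noise and time variation:

```python
# Minimal least-squares pilot-based channel estimation sketch.
# y = h * x (noiseless here for clarity); the estimate per pilot is y / x.
# The channel coefficient h and the pilot symbols are invented.

def ls_channel_estimate(pilots, received):
    """Per-pilot least-squares channel estimate for a flat channel."""
    return [y / x for x, y in zip(pilots, received)]

h = 0.8 + 0.3j                           # assumed true channel coefficient
pilots = [1 + 0j, -1 + 0j, 1j]           # known transmitted pilot symbols
received = [h * p for p in pilots]       # noiseless received samples
est = ls_channel_estimate(pilots, received)
```

Hybrid model-based/learned receivers of the kind the thesis studies typically refine such an initial estimate rather than replace it.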
APA, Harvard, Vancouver, ISO and other citation styles
32

Lyubchyk, Leonid, Oleksy Galuza und Galina Grinberg. „Ranking Model Real-Time Adaptation via Preference Learning Based on Dynamic Clustering“. Thesis, ННК "IПСА" НТУУ "КПI iм. Iгоря Сiкорського", 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/36819.

The full text of the source
Annotation:
The proposed method of preference learning on clusters makes it possible to fully realize the advantages of the kernel-based approach: the dimension of the model is determined by a pre-selected number of clusters, and its complexity does not grow with an increasing number of observations. The real-time preference-function identification algorithm based on a training data stream thus includes successive estimation of cluster parameters, updating of average cluster ranks, and recurrent kernel-based nonparametric estimation of the preference model.
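The cluster-level kernel estimate described above can be sketched, with invented centres and ranks, as a kernel-weighted average of per-cluster mean ranks, so that model size is fixed by the number of clusters rather than the number of observations:

```python
import math

# Sketch: preference at a new point x as a kernel-weighted average of the
# running mean ranks of a fixed set of cluster centres. Centres, ranks,
# and the kernel width gamma are all illustrative.

def preference(x, centres, mean_ranks, gamma=1.0):
    weights = [math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, c)))
               for c in centres]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, mean_ranks)) / total

centres = [[0.0], [1.0]]       # fixed cluster centres
mean_ranks = [1.0, 3.0]        # running average rank per cluster
p = preference([0.0], centres, mean_ranks)  # pulled toward rank 1.0
```

In a streaming setting, only `centres` and `mean_ranks` would be updated as observations arrive; the estimator itself keeps constant size.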
APA, Harvard, Vancouver, ISO and other citation styles
33

Nguyen, Dang Quang. „Multi-Agent Learning in Highly Dynamic and Uncertain Environments“. Thesis, The University of Sydney, 2023. https://hdl.handle.net/2123/30020.

The full text of the source
Annotation:
Over the last decades, machine learning research has contributed considerably to important solutions to numerous decision-making problems. Machine learning systems demonstrate enormous potential to generate decisions beyond the performance of humans or hand-crafted engineering systems, by utilising available feedback from the environment and the decision history. In spite of significant efforts, challenges remain in developing a generic and consistent learning framework for modelling and optimising decision-making policies, especially one applicable to multi-agent environments with frequent agent interactions and uncertain action outcomes. This thesis examines modelling and optimisation of decisions in highly dynamic and uncertain multi-agent environments. Specifically, we develop a general Markov Decision Process (MDP) based learning framework incorporating complex delayed rewards, aimed at optimising adaptive policies in the presence of noise and dynamic agent interactions. In developing methods for this optimisation, we address a number of significant challenges: (a) the presence of long delays in observing the action outcomes; (b) policy optimisation over complex and/or decentralised behaviours spanning multiple time steps; and (c) low learning efficiency due to the large search-space size. Two domains are selected to examine and resolve these challenges: (i) a large-scale agent-based model of the COVID-19 pandemic and response, with the task of optimising the cost-effectiveness of centralised non-pharmaceutical interventions; and (ii) a simulated two-dimensional multi-agent soccer environment (RoboCup Soccer 2D Simulation), with the task of optimising decentralised policies for teams of autonomous soccer agents. Our studies uncover and resolve several interdependencies in modelling and learning action policies for decision-maker(s) in multi-agent environments.
Firstly, we develop a general MDP-based framework capable of modelling action decisions at both global level (centralised actions) and local level (decentralised actions), addressing the question of (i) centralised policies versus decentralised policies. Secondly, we propose methods formulating delayed rewards, including short-term (tactical) and long-term (strategic) outcomes, which are applicable for efficient policy optimisation for both centralised and decentralised action decisions, thus addressing the dichotomy of (ii) short-term versus long-term outcomes of action decisions. Finally, we develop heuristics for preserving modular hierarchical decision-making structure, which narrow the search-space size, thus improving learning efficiency and addressing the dilemma of (iii) learning efficiency versus the size of search space.
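The MDP-based framework described above rests on standard dynamic-programming machinery; a minimal sketch of value iteration on an invented two-state problem shows how a delayed reward propagates back through the discount factor (the states, actions, and rewards here are illustrative, not the thesis' models):

```python
# Value iteration on a tiny MDP. State s1 carries a recurring reward;
# s0 earns nothing directly, so its value comes entirely from the delayed
# payoff reached via the "go" action, discounted by gamma.

def value_iteration(states, actions, P, R, gamma=0.9, iters=100):
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(sum(p * (R[s][a] + gamma * V[s2])
                        for s2, p in P[s][a].items())
                    for a in actions)
             for s in states}
    return V

states = ["s0", "s1"]
actions = ["stay", "go"]
P = {"s0": {"stay": {"s0": 1.0}, "go": {"s1": 1.0}},
     "s1": {"stay": {"s1": 1.0}, "go": {"s0": 1.0}}}
R = {"s0": {"stay": 0.0, "go": 0.0},
     "s1": {"stay": 1.0, "go": 0.0}}
V = value_iteration(states, actions, P, R)
# V["s1"] approaches 1/(1-gamma) = 10; V["s0"] = gamma * V["s1"].
```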
APA, Harvard, Vancouver, ISO and other citation styles
34

Clark, Mark A. „Dynamic Voltage/Frequency Scaling and Power-Gating of Network-on-Chip with Machine Learning“. Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1544105215810566.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
35

Almannaa, Mohammed Hamad. „Optimizing Bike Sharing Systems: Dynamic Prediction Using Machine Learning and Statistical Techniques and Rebalancing“. Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/100737.

The full text of the source
Annotation:
The large increase in on-road vehicles over the years has resulted in cities facing challenges in providing high-quality transportation services. Traffic jams are a clear sign that cities are overwhelmed and that current transportation networks and systems cannot accommodate the current demand without a change in policy, infrastructure, transportation modes, and commuter mode choice. In response to this problem, cities in a number of countries have started putting a threshold on the number of vehicles on the road by deploying a partial or complete ban on cars in the city center. For example, leaders in Oslo have decided to completely ban privately owned cars from the city center by the end of 2019, making it the first European city to totally ban cars in the city center. Instead, public transit and cycling will be supported and encouraged in the car-free zone, and hundreds of parking spaces in the city will be replaced by bike lanes. As a government effort to support bicycling and offer alternative transportation modes, bike-sharing systems (BSSs) have been introduced in over 50 countries. BSSs aim to encourage people to travel by bike by distributing bicycles at stations located across an area of service. Residents and visitors can borrow a bike from any station and then return it to any station near their destination. Bicycles are considered an affordable, easy-to-use, and healthy transportation mode, and BSSs show significant transportation, environmental, and health benefits. As the use of BSSs has grown, imbalances in the system have become an issue and an obstacle to further growth. Imbalance occurs when bikers cannot drop off or pick up a bike because the bike station is either full or empty. This problem has been investigated extensively by many researchers and policy makers, and several solutions have been proposed. There are three major ways to address the rebalancing issue: static, dynamic, and incentivized.
The incentivized approach makes use of the users in the balancing efforts: the operating company incentivizes them to change their destination in favor of keeping the system balanced. The other two approaches, static and dynamic, deal with the movement of bikes between stations either during or at the end of the day to overcome station imbalances. Both assume that the location and number of bike stations are fixed and that only the bikes can be moved. This is a realistic assumption given that current BSSs have only fixed stations. However, cities are dynamic, and their geographical and economic growth affects the distribution of trips and thus constantly changes BSS user behavior. In addition, work-related bike trips cause certain stations to face a high demand level during weekdays, while these same stations are at a low demand level on weekends and thus may be of little use. Moreover, fixed stations fail to accommodate big events such as football games, holidays, or sudden weather changes. This dissertation proposes a new generation of BSSs in which we assume some of the bike stations can be portable. This approach takes advantage of both types of BSSs: dock-based and dock-less. Toward this goal, a BSS optimization framework was developed at both the tactical and operational levels. Specifically, the framework consists of two levels: predicting bike counts at stations using fast, online, and incremental learning approaches, and then balancing the system using portable stations. The goal is to propose a framework to solve the dynamic bike-sharing repositioning problem, aiming at minimizing unmet demand, increasing user satisfaction, and reducing repositioning/rebalancing operations. This dissertation contributes to the field in five ways. First, a multi-objective supervised clustering algorithm was developed to identify the similarity of bike usage with respect to time events.
Second, a dynamic, easy-to-interpret, rapid approach to predict bike counts at stations in a BSS was developed. Third, a univariate inventory model using a Markov chain process that provides an optimal range of bike levels at stations was created. Fourth, the advantages of portable bike stations were investigated using an agent-based simulation approach as a proof of concept. Fifth, mathematical and heuristic approaches were proposed to balance bike stations.
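The univariate Markov-chain inventory idea mentioned above can be illustrated, under simplifying assumptions, as a birth-death chain over bike counts truncated at station capacity; the stationary distribution then shows how often the station sits empty or full (the rates and capacity below are invented):

```python
# Sketch: bikes at one station as a birth-death Markov chain with
# return rate lam (bike arrives) and rental rate mu (bike leaves),
# truncated at the station capacity. For such a chain the stationary
# probability of level k is proportional to (lam/mu)**k.

def stationary_distribution(capacity, lam, mu):
    rho = lam / mu
    raw = [rho ** k for k in range(capacity + 1)]
    total = sum(raw)
    return [r / total for r in raw]

pi = stationary_distribution(capacity=10, lam=1.0, mu=1.0)
# With balanced rates every level 0..10 is equally likely (1/11 each),
# so the station is empty about 9% of the time and full about 9%.
```

An inventory policy would pick a target bike range that keeps `pi[0]` (empty) and `pi[-1]` (full) acceptably small.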
Doctor of Philosophy
APA, Harvard, Vancouver, ISO and other citation styles
36

Rihani, Mohamad-Al-Fadl. „Management of Dynamic Reconfiguration in a Wireless Digital Communication Context“. Thesis, Rennes, INSA, 2018. http://www.theses.fr/2018ISAR0030/document.

The full text of the source
Annotation:
Today, wireless devices generally feature multiple radio access technologies (LTE, WiFi, WiMax, ...) to handle a rich variety of standards or technologies. These devices should be intelligent and autonomous enough either to reach a given level of performance or to automatically select the best available wireless standard. On the hardware side, System on Chip (SoC) devices integrate processors and FPGA logic fabrics on the same chip with fast interconnection. This allows software/hardware systems to be designed and new techniques and methodologies to be implemented that considerably improve the performance of communication systems. In these devices, Dynamic Partial Reconfiguration (DPR) constitutes a well-known technique for reconfiguring only a specific area within the FPGA while other parts continue to operate independently. To evaluate when it is advantageous to perform DPR, adaptive techniques have been proposed. They consist in reconfiguring parts of the system automatically according to specific parameters. In this thesis, an intelligent wireless communication system aiming at implementing an adaptive OFDM-based transmitter is presented. A unified physical layer for WiFi-WiMax networks is also proposed. An intelligent Vertical Handover Algorithm (VHA) based on Neural Networks (NN) is proposed to select the best available wireless standard in a heterogeneous network. The system was implemented and tested on a ZedBoard, which features a Xilinx Zynq-7000 SoC. The performance of the system is described, and simulation results are presented in order to validate the proposed architecture. Real-time power measurements were applied to compute the power overhead of the partial reconfiguration operation. In addition, demonstrations were performed to test and validate the implemented system.
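The vertical handover decision that the thesis learns with a neural network can be illustrated, in a much-simplified form, as a weighted score over per-network attributes; all attribute values and weights below are invented:

```python
# Sketch of a vertical handover decision rule: each candidate network is
# scored as a weighted sum of normalised attributes, and the device hands
# over to the highest-scoring one. A trained neural network would replace
# this fixed linear score with a learned decision function.

def best_network(networks, weights):
    def score(attrs):
        return sum(weights[k] * attrs[k] for k in weights)
    return max(networks, key=lambda name: score(networks[name]))

networks = {
    "WiFi":  {"bandwidth": 0.9, "coverage": 0.3, "power_cost": -0.2},
    "WiMax": {"bandwidth": 0.6, "coverage": 0.8, "power_cost": -0.4},
}
weights = {"bandwidth": 0.5, "coverage": 0.4, "power_cost": 1.0}
choice = best_network(networks, weights)   # WiFi scores 0.37 vs 0.22
```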
APA, Harvard, Vancouver, ISO, and other citation styles
37

Tamascelli, Nicola. „A Machine Learning Approach to Predict Chattering Alarms“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find the full text of the source
Annotation:
The alarm system plays a vital role in ensuring safety and reliability in the process industry. Ideally, an alarm should inform the operator about critical conditions only; during alarm floods, however, the operator may be overwhelmed by several alarms in a short time span, and crucial alarms are more likely to be missed. Poor alarm management is one of the main causes of unintended plant shutdowns, incidents, and near misses in the chemical industry. Most of the alarms triggered during a flood episode are nuisance alarms, i.e. alarms that do not communicate new information to the operator or that do not require an operator action. Chattering alarms (alarms that repeat three or more times within a minute) and redundant alarms (duplicated alarms) are common forms of nuisance. Identifying nuisance alarms is a key step in improving the performance of the alarm system. Advanced techniques for alarm rationalization have been developed, proposing methods to quantify chattering, redundancy, and correlation between alarms. Although very effective, these techniques produce static results. Machine learning appears to be an interesting opportunity to retrieve further knowledge and support these techniques. This knowledge can be used to produce more flexible and dynamic models, as well as to predict alarm behaviour during floods. The aim of this study is to develop a machine learning-based algorithm for real-time alarm classification and rationalization, whose results can be used to support the operator's decision-making. Specifically, efforts have been directed towards chattering prediction during alarm floods. Advanced techniques for chattering, redundancy, and correlation assessment have been applied to a real industrial alarm database. A modified approach has been developed to dynamically assess chattering, and the results have been used to train three different machine learning models, whose performance has been evaluated and discussed.
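The chattering criterion mentioned in the abstract (an alarm repeating three or more times within a minute) can be sketched as a sliding-window check over time-stamped alarm events. The tag names and event list below are invented for illustration; the thesis uses a modified chattering index, not this plain count.

```python
from collections import defaultdict

def chattering_tags(events, window=60.0, repeats=3):
    """Flag alarm tags that occur `repeats`+ times within `window` seconds.

    `events` is a list of (timestamp_seconds, tag) pairs. This is a
    simplified sketch of the chattering criterion, not the thesis's
    dynamic chattering assessment.
    """
    by_tag = defaultdict(list)
    for t, tag in events:
        by_tag[tag].append(t)
    flagged = set()
    for tag, times in by_tag.items():
        times.sort()
        # Sliding window: is any run of `repeats` events inside `window`?
        for i in range(len(times) - repeats + 1):
            if times[i + repeats - 1] - times[i] <= window:
                flagged.add(tag)
                break
    return flagged

# Hypothetical event log: PT-101 fires three times in 45 s (chatters),
# TI-205 fires twice five minutes apart (does not).
events = [(0, "PT-101"), (20, "PT-101"), (45, "PT-101"),
          (0, "TI-205"), (300, "TI-205")]
```

`chattering_tags(events)` would flag only `"PT-101"` here.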
APA, Harvard, Vancouver, ISO, and other citation styles
38

Fang, Chunsheng. „Novel Frameworks for Mining Heterogeneous and Dynamic Networks“. University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1321369978.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
39

Moffett, Jeffrey P. „Applying Causal Models to Dynamic Difficulty Adjustment in Video Games“. Digital WPI, 2010. https://digitalcommons.wpi.edu/etd-theses/320.

The full text of the source
Annotation:
We have developed a causal model of how various aspects of a computer game influence how much a player enjoys the experience, as well as how long the player will play. This model is organized into three layers: a generic layer that applies to any game, a refinement layer for a particular game genre, and an instantiation layer for a specific game. Two experiments using different games were performed to validate the model. The model was then used to design and implement a system and API for Dynamic Difficulty Adjustment (DDA). This DDA system and API use machine learning techniques to make changes to a game in real time, in the hope of improving the user's experience and making them play longer. A final experiment is presented that shows the effectiveness of the designed system.
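As a loose illustration of dynamic difficulty adjustment (not the causal-model approach the thesis develops), a minimal rule that nudges difficulty toward a target success rate might look like this; the target, step size, and clamping range are all assumptions.

```python
def adjust_difficulty(difficulty, recent_outcomes, target=0.5, step=0.1):
    """Nudge difficulty toward a target win rate.

    `recent_outcomes` is a list of 1 (player success) / 0 (failure).
    A simple heuristic sketch: a real DDA system, like the one in the
    thesis, would drive changes from a learned model of enjoyment.
    """
    if not recent_outcomes:
        return difficulty
    win_rate = sum(recent_outcomes) / len(recent_outcomes)
    if win_rate > target:       # winning too often -> make it harder
        difficulty += step
    elif win_rate < target:     # struggling -> make it easier
        difficulty -= step
    return min(1.0, max(0.0, difficulty))
```

The update would run once per level or per fixed window of play events.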
APA, Harvard, Vancouver, ISO, and other citation styles
40

AlShammeri, Mohammed. „Dynamic Committees for Handling Concept Drift in Databases (DCCD)“. Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23498.

The full text of the source
Annotation:
Concept drift refers to a problem caused by a change in the data distribution in data mining. Such a change reduces the accuracy of the current model used to examine the underlying data distribution of the concept to be discovered. A number of techniques have been introduced to address this issue in a supervised learning (or classification) setting, where the target concept (or class) to be learned is known. One of these techniques is “Ensemble learning”, which refers to using multiple trained classifiers in order to get better predictions through some voting scheme. In a traditional ensemble, the underlying base classifiers are all of the same type. Recent research extends the idea of ensemble learning to committees, where a committee consists of diverse classifiers; this is the main difference between regular ensemble classifiers and committee learning algorithms. Committees are able to use diverse learning methods simultaneously and to dynamically take advantage of the most accurate classifiers as the data change. In addition, some committees are able to replace their members when they perform poorly. This thesis presents two new algorithms that address concept drift. The first algorithm has been designed to systematically introduce gradual and sudden concept drift scenarios into datasets. In order to save time and avoid memory consumption, the Concept Drift Introducer (CDI) algorithm divides the drift scenarios into phases. The main advantage of using phases is that it allows us to produce a highly scalable concept drift detector that evaluates each phase, instead of evaluating each individual drift scenario. We further designed a novel algorithm to handle concept drift. Our Dynamic Committee for Concept Drift (DCCD) algorithm uses a voted committee of hypotheses that vote on the best base classifier, based on its predictive accuracy.
The novelty of DCCD lies in the fact that we employ diverse heterogeneous classifiers in one committee in an attempt to maximize diversity. DCCD detects concept drift by using accuracy, and weighs the committee members by adding one point to the most accurate member. The total loss in accuracy for each member is calculated at the end of each point of measurement, or phase. The performance of the committee members is evaluated to decide whether a member needs to be replaced. Moreover, DCCD detects the worst member in the committee and eliminates it using a weighting mechanism. Our experimental evaluation centers on the performance of DCCD on various datasets of different sizes, with different levels of gradual and sudden concept drift. We further compare our algorithm to another state-of-the-art algorithm, namely the MultiScheme approach. The experiments indicate the effectiveness of our DCCD method under a number of diverse circumstances. The DCCD algorithm generally generates high performance results, especially when the number of concept drifts in a dataset is large. For the dataset sizes used, our results showed that DCCD produced a steady improvement in performance when applied to small datasets. Further, on large and medium datasets, our DCCD method has a comparable, and often slightly higher, performance than the MultiScheme technique. The experimental results also show that the DCCD algorithm limits the loss in accuracy over time, regardless of the size of the dataset.
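The accuracy-based weighting and member-replacement ideas can be sketched roughly as follows. The member names, the one-point reward, and the drop rule below are simplified stand-ins for the DCCD bookkeeping described above, not its actual mechanism.

```python
def update_committee(committee, phase_accuracy):
    """One simplified committee bookkeeping step.

    `committee` maps member name -> accumulated weight; `phase_accuracy`
    maps member name -> accuracy on the latest phase. The most accurate
    member gains one point and the worst performer is dropped, loosely
    mimicking the weighting and replacement ideas in DCCD.
    """
    best = max(phase_accuracy, key=phase_accuracy.get)
    worst = min(phase_accuracy, key=phase_accuracy.get)
    committee = dict(committee)        # do not mutate the caller's dict
    committee[best] += 1               # reward the most accurate member
    if len(committee) > 2 and worst != best:
        del committee[worst]           # eliminate the poorest performer
    return committee

# Hypothetical heterogeneous committee and per-phase accuracies.
committee = {"tree": 0, "naive_bayes": 0, "knn": 0}
acc = {"tree": 0.81, "naive_bayes": 0.64, "knn": 0.77}
committee = update_committee(committee, acc)
```

After this phase the sketch keeps `tree` (now weight 1) and `knn`, dropping `naive_bayes`.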
APA, Harvard, Vancouver, ISO, and other citation styles
41

Fent, Thomas. „Using genetics based machine learning to find strategies for product placement in a dynamic market“. SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/694/1/document.pdf.

The full text of the source
Annotation:
In this paper we discuss the necessity of models including complex adaptive systems in order to eliminate the shortcomings of neoclassical models based on equilibrium theory. A simulation model containing artificial adaptive agents is used to explore the dynamics of a market of highly replaceable products. A population consisting of two classes of agents is implemented to observe whether methods provided by modern computational intelligence can help find a meaningful strategy for product placement. Over several simulation runs it turned out that the agents using CI methods outperformed their competitors. (author's abstract)
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
APA, Harvard, Vancouver, ISO, and other citation styles
42

Djebra, Yanis. „Accelerated Dynamic MR Imaging Using Linear And Non-Linear Machine Learning-Based Image Reconstruction Models“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT011.

The full text of the source
Annotation:
Dynamic Magnetic Resonance (MR) imaging is of high value in medical diagnosis thanks to its contrast versatility, high spatial resolution, and high signal-to-noise ratio (SNR), and it allows for non-invasive multi-planar images of the body. It can be particularly useful for imaging the brain, heart, spine, and joints, as well as for detecting abnormalities. In addition, the increasing availability of Positron Emission Tomography (PET)/MR machines enables simultaneous acquisition of PET and MR data for better reconstruction and complementary information. However, a key challenge in dynamic MRI is reconstructing high-dimensional images from sparse k-space data sampled below the Nyquist rate. Many methods have been proposed for accelerated imaging with sparse sampling, including parallel imaging and compressed sensing. The first objective of this thesis is to show the potential and usefulness of the linear subspace model for free-breathing MR imaging. Such a model can in principle capture regular respiratory and cardiac motion. However, when dealing with lengthy scans, irregular motion patterns can occur, such as erratic breathing or bulk motion caused by patient discomfort. A first question thus naturally arises: can such a model capture irregular types of motion and, if so, can it reconstruct images from a dynamic MR scan presenting bulk motion and irregular respiratory motion? We demonstrate in this thesis how the subspace model can efficiently reconstruct artifact-free images from highly undersampled k-space data with various motion patterns. A first application is presented in which we reconstruct high-resolution, high frame-rate dynamic MR images from a PET/MR scanner and use them to correct motion in PET data, capturing complex motion patterns such as irregular respiratory patterns and bulk motion. A second application concerns cardiac T1 mapping: undersampled k-space data were acquired using a free-breathing, ECG-gated inversion recovery sequence, and dynamic 3D MR images of the whole heart were reconstructed leveraging the linear subspace model.
The second objective of this thesis is to understand the limits of the linear subspace model and to develop a novel dynamic MR reconstruction scheme that overcomes these limitations. More specifically, the subspace model assumes that high-dimensional data reside in a low-dimensional linear subspace that captures the spatiotemporal correlations of dynamic MR images. This model relies on a linear dimensionality reduction and does not account for intrinsic non-linear features of the signal, which may show its limits at higher undersampling rates. Manifold learning-based models have therefore been explored for image reconstruction in dynamic MRI; they aim at learning the intrinsic structure of input data embedded in a high-dimensional signal space by solving non-linear dimensionality reduction problems. We present in this thesis an alternative strategy for manifold learning-based MR image reconstruction. The proposed method learns the manifold structure via linear tangent space alignment (LTSA) and can be interpreted as a non-linear generalization of the subspace model. Validation on numerical simulation studies as well as in vivo 2D and 3D cardiac imaging experiments was performed, demonstrating improved performance compared to state-of-the-art techniques. The first two objectives present linear and non-linear models respectively, yet both methods use conventional linear optimization techniques to solve the reconstruction problem. In contrast, using deep neural networks for optimization may provide greater non-linear representation power. Early results on deep learning-based approaches are presented in this thesis, and state-of-the-art techniques are discussed.
The last chapter presents conclusions, discusses the author's contributions, and considers the potential research perspectives opened up by the work presented in this thesis.
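The linear subspace model at the heart of this abstract can be illustrated on synthetic data: stack the dynamic images as a Casorati matrix (voxels × frames) and approximate it with a low-rank factorization. The toy temporal curves and sizes below are assumptions for the sketch; real reconstructions estimate the spatial factor from undersampled k-space rather than truncating an SVD of fully sampled data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_frames, rank = 200, 50, 3

# Synthetic dynamics: three spatial maps modulated by three temporal
# curves (stand-ins for cardiac, respiratory, and static components).
t = np.linspace(0, 1, n_frames)
temporal = np.stack([np.sin(2 * np.pi * 4 * t),    # fast "cardiac" curve
                     np.sin(2 * np.pi * 0.8 * t),  # slow "respiratory" curve
                     np.ones_like(t)])             # static background
spatial = rng.normal(size=(n_vox, rank))
X = spatial @ temporal + 0.01 * rng.normal(size=(n_vox, n_frames))

# Rank-3 subspace approximation of the Casorati matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_lr = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]

rel_err = np.linalg.norm(X - X_lr) / np.linalg.norm(X)
```

The small relative error shows why a handful of temporal basis functions (the rows of `Vt[:rank]`) can represent the whole dynamic series.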
APA, Harvard, Vancouver, ISO, and other citation styles
43

Souriau, Rémi. „machine learning for modeling dynamic stochastic systems : application to adaptive control on deep-brain stimulation“. Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG004.

The full text of the source
Annotation:
The past few years have been marked by the emergence of large databases in many fields, such as healthcare. The creation of these databases paves the way for new applications. The properties of the data are sometimes complex (non-linearity, dynamics, high dimensionality, or absence of labels) and require powerful learning models. Among existing machine learning models, artificial neural networks have achieved great success over the last decades. The success of these models rests on the non-linearity of the neurons, the use of latent variables, and their great flexibility, which lets them adapt to many different problems. The Boltzmann machines presented in this thesis are a family of unsupervised, generative neural networks. Introduced by Hinton in the 1980s, this family of models attracted great interest at the beginning of the 21st century, and new extensions are regularly proposed. This thesis is divided into two parts: an exploratory part on the family of Boltzmann machines, and an applied part. The application studied is the unsupervised learning of intracranial electroencephalogram signals in Parkinsonian rats for the control of the symptoms of Parkinson's disease. Boltzmann machines gave birth to diffusion networks, generative models that rest on learning a stochastic differential equation for dynamic, stochastic data. This network is given particular development in this thesis and a new training algorithm is proposed. Its use is then tested on toy data as well as on a real database.
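Diffusion networks rest on learning a stochastic differential equation. As a toy stand-in with a fixed (not learned) drift, an Ornstein-Uhlenbeck process dX = -θX dt + σ dW can be simulated with the Euler-Maruyama scheme; all parameter values below are illustrative.

```python
import numpy as np

def euler_maruyama(x0, theta, sigma, dt, n_steps, rng):
    """Simulate dX = -theta * X dt + sigma dW with Euler-Maruyama."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))   # Brownian increment
        x[k + 1] = x[k] - theta * x[k] * dt + sigma * dw
    return x

rng = np.random.default_rng(42)
path = euler_maruyama(x0=2.0, theta=1.5, sigma=0.3, dt=0.01,
                      n_steps=1000, rng=rng)
```

A diffusion network replaces the hand-written drift `-theta * x` with a parameterized function trained from data.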
APA, Harvard, Vancouver, ISO, and other citation styles
44

SUMMA, SERENA. „Energy efficiency of buildings: Dynamic simulations and experimental analyses“. Doctoral thesis, Università Politecnica delle Marche, 2022. http://hdl.handle.net/11566/299081.

The full text of the source
Annotation:
The studies reported in this thesis add to the current body of knowledge a contribution concerning both new dynamic hourly calculation models, useful for a reliable assessment of the energy needs of buildings, and innovative construction solutions to improve the energy efficiency of buildings and thus decarbonise the construction sector, currently responsible for about 40% of global climate-changing gas emissions. The new calculation models contained in the recent standards published by CEN are analysed, namely EN ISO 52016-1:2017 "Energy demand for heating and cooling, indoor temperatures and sensible and latent heat loads - Part 1: Calculation procedures" and the related EN ISO 52010-1:2017 "Outdoor climatic conditions - Part 1: Conversion of climate data for energy calculations". These standards offer the possibility to estimate energy requirements and operative temperatures with accuracy similar to that of major simulation software (such as Trnsys or Energy Plus), but in a less onerous way. As both standards are recently published, there are not enough studies in the literature to identify the actual validity of the methods and their fields of application. For this reason, using Trnsys as a basis, a comparative and sensitivity analysis was carried out, the main criticalities were identified, and alternative calculation methods were proposed which, appropriately integrated into the standards, improved their accuracy. At an experimental level, innovative construction solutions were proposed to improve winter and summer energy requirements, respectively with the study of a hyper-insulated building integrated with a solar greenhouse equipped with controlled mechanical ventilation and with the study of three different ventilated facades, also integrated with controlled mechanical ventilation, optimised using machine learning techniques.
Finally, the impact of climate change on current NZEBs in terms of needs and comfort was assessed, according to two scenarios proposed by the IPCC (Intergovernmental Panel on Climate Change): RCP4.5, which foresees a reversal of CO2 emissions by 2070 and a maximum temperature increase of 2°C, and RCP8.5, which uses a "business-as-usual" approach and foresees quadruple CO2 concentrations by 2100, with a temperature increase of more than 4°C.
APA, Harvard, Vancouver, ISO, and other citation styles
45

Liu, Pengyu. „Extracting Rules from Trained Machine Learning Models with Applications in Bioinformatics“. Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/264678.

The full text of the source
Annotation:
Kyoto University
Doctor of Informatics (doctoral program, new degree system)
Degree number: 甲第23397号 (情博第766号); library call number: 新制||情||131(附属図書館)
Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University
Examiners: Prof. Tatsuya Akutsu (chief examiner), Prof. Akihiro Yamamoto, Prof. Hisashi Kashima
Qualified under Article 4, Paragraph 1 of the Degree Regulations
DFAM
APA, Harvard, Vancouver, ISO, and other citation styles
46

Razavian, Narges Sharif. „Continuous Graphical Models for Static and Dynamic Distributions: Application to Structural Biology“. Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/340.

The full text of the source
Annotation:
Generative models of protein structure enable researchers to predict the behavior of proteins under different conditions. Continuous graphical models are powerful and efficient tools for modeling static and dynamic distributions, and they can be used for learning generative models of molecular dynamics. In this thesis, we develop new and improved continuous graphical models to be used in modeling protein structure. We first present von Mises graphical models and develop consistent and efficient algorithms for sparse structure learning, parameter estimation, and inference. We compare our model to the sparse Gaussian graphical model and show that it outperforms GGMs on synthetic and Engrailed-protein molecular dynamics datasets. Next, we develop algorithms to estimate mixtures of von Mises graphical models using Expectation Maximization, and show that these models outperform von Mises, Gaussian, and mixture-of-Gaussian graphical models in terms of prediction accuracy in imputation tests on non-redundant protein structure datasets. We then use non-paranormal and nonparametric graphical models, which have extensive representation power, and compare several state-of-the-art structure learning methods that can be used prior to nonparametric inference in reproducing-kernel-Hilbert-space embedded graphical models. To take advantage of the nonparametric models, we also propose feature-space embedded belief propagation, and use random-Fourier-based feature approximation in our proposed feature belief propagation to scale the inference algorithm to larger datasets. To improve scalability further, we show the integration of a coreset selection algorithm with the nonparametric inference, and show that the combined model scales to large datasets with very small adverse effect on the quality of predictions.
Finally, we present time-varying sparse Gaussian graphical models to learn smoothly varying graphical models of molecular dynamics simulation data, and present results on the CypA protein.
APA, Harvard, Vancouver, ISO, and other citation styles
47

Higson, Edward John. „Bayesian methods and machine learning in astrophysics“. Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289728.

The full text of the source
Annotation:
This thesis is concerned with methods for Bayesian inference and their applications in astrophysics. We principally discuss two related themes: advances in nested sampling (Chapters 3 to 5), and Bayesian sparse reconstruction of signals from noisy data (Chapters 6 and 7). Nested sampling is a popular method for Bayesian computation which is widely used in astrophysics. Following the introduction and background material in Chapters 1 and 2, Chapter 3 analyses the sampling errors in nested sampling parameter estimation and presents a method for estimating them numerically for a single nested sampling calculation. Chapter 4 introduces diagnostic tests for detecting when software has not performed the nested sampling algorithm accurately, for example due to missing a mode in a multimodal posterior. The uncertainty estimates and diagnostics in Chapters 3 and 4 are implemented in the $\texttt{nestcheck}$ software package, and both chapters describe an astronomical application of the techniques introduced. Chapter 5 describes dynamic nested sampling: a generalisation of the nested sampling algorithm which can produce large improvements in computational efficiency compared to standard nested sampling. We have implemented dynamic nested sampling in the $\texttt{dyPolyChord}$ and $\texttt{perfectns}$ software packages. Chapter 6 presents a principled Bayesian framework for signal reconstruction, in which the signal is modelled by basis functions whose number (and form, if required) is determined by the data themselves. This approach is based on a Bayesian interpretation of conventional sparse reconstruction and regularisation techniques, in which sparsity is imposed through priors via Bayesian model selection. We demonstrate our method for noisy 1- and 2-dimensional signals, including examples of processing astronomical images. The numerical implementation uses dynamic nested sampling, and uncertainties are calculated using the methods introduced in Chapters 3 and 4. 
Chapter 7 applies our Bayesian sparse reconstruction framework to artificial neural networks, where it allows the optimum network architecture to be determined by treating the number of nodes and hidden layers as parameters. We conclude by suggesting possible areas of future research in Chapter 8.
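The idea of letting the data determine the number of basis functions can be illustrated with a much simpler selector than the thesis uses: here BIC stands in for nested-sampling Bayesian model selection, and the polynomial basis, noise level, and true order are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 80)
# Synthetic noisy signal whose true polynomial order is 2.
y = 1.0 - 2.0 * x + 0.5 * x**2 + 0.05 * rng.normal(size=x.size)

def bic(order):
    """BIC of a least-squares polynomial fit of the given order."""
    coef = np.polyfit(x, y, order)
    resid = y - np.polyval(coef, x)
    n, k = x.size, order + 1
    return n * np.log(np.mean(resid**2)) + k * np.log(n)

# "Let the data choose": keep the basis size with the lowest BIC.
best_order = min(range(6), key=bic)
```

A full Bayesian treatment replaces BIC with the evidence computed by (dynamic) nested sampling, which also yields parameter posteriors and their uncertainties.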
APA, Harvard, Vancouver, ISO, and other citation styles
48

Kerfs, Jeremy N. „Models for Pedestrian Trajectory Prediction and Navigation in Dynamic Environments“. DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1716.

The full text of the source
Annotation:
Robots are no longer constrained to cages in factories and are increasingly taking on roles alongside humans. Before robots can accomplish their tasks in these dynamic environments, they must be able to navigate while avoiding collisions with pedestrians or other robots. Humans are able to move through crowds by anticipating the movements of other pedestrians and how their actions will influence others; developing a method for predicting pedestrian trajectories is a critical component of a robust robot navigation system. A current state-of-the-art approach for predicting pedestrian trajectories is Social-LSTM, a recurrent neural network that incorporates information about neighboring pedestrians to learn how people move cooperatively around each other. This thesis extends and modifies that model to output parameters for a multimodal distribution, which better captures the uncertainty inherent in pedestrian movements. Additionally, four novel architectures for representing neighboring pedestrians are proposed; these models are more general than current trajectory prediction systems and have fewer hyper-parameters. In both simulations and real-world datasets, the multimodal extension significantly increases the accuracy of trajectory prediction. One of the new neighbor representation architectures achieves state-of-the-art results while reducing the number of both parameters and hyper-parameters compared to existing solutions. Two techniques for incorporating the trajectory predictions into a planning system are also developed and evaluated on a real-world dataset. Both techniques plan routes that include fewer near-collisions than algorithms that do not use trajectory predictions. Finally, a Python library for Agent-Based Modeling and crowd simulation is presented to aid in future research.
APA, Harvard, Vancouver, ISO, and other citation styles
49

Correia, Maria Inês Costa. „Cluster analysis of financial time series“. Master's thesis, Instituto Superior de Economia e Gestão, 2020. http://hdl.handle.net/10400.5/21016.

Full text of the source
Annotation:
Master's in Mathematical Finance
This thesis applies the Signature method as a measure of similarity between two time-series objects, using the Signature properties of order 2, and applies it to Asymmetric Spectral Clustering. The method is compared with a more traditional clustering approach in which similarity is measured using Dynamic Time Warping, which was developed to work with time-series data. The intention is to treat the traditional approach as a benchmark and compare the Signature method against it in terms of computation time, performance, and applications. Both methods are applied to a financial time-series data set of Mutual Exchange Funds from Luxembourg. After the literature review, we introduce the Dynamic Time Warping method and the Signature method. We continue with an explanation of traditional clustering approaches, namely k-Means, and asymmetric clustering techniques, namely the k-Axes algorithm developed by Atev (2011). The last chapter is dedicated to the practical research, where the previous methods are applied to the data set. The results confirm that the Signature method indeed has potential for machine learning and prediction, as suggested by Levin, Lyons, and Ni (2013).
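Dynamic Time Warping, used above as the benchmark similarity measure, aligns two series by dynamic programming over monotone warping paths so that similar shapes match even when they are shifted or stretched in time. A minimal, illustrative implementation of the standard DTW recurrence (not the thesis's code) might look like:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local matching cost
            cost[i, j] = d + min(cost[i - 1, j],      # stretch a
                                 cost[i, j - 1],      # stretch b
                                 cost[i - 1, j - 1])  # step both
    return cost[n, m]
```

For example, `dtw_distance([1, 2, 3, 3], [1, 1, 2, 3])` is 0 because warping absorbs the repeated values, whereas a pointwise Euclidean distance between the same series would not be.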
APA, Harvard, Vancouver, ISO, and other citation styles
50

Wang, Lei. „Personalized Dynamic Hand Gesture Recognition“. Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231345.

Full text of the source
Annotation:
Human gestures, with their spatial-temporal variability, are difficult to recognize with a single generic model or classifier that applies to everyone. To address this problem, this thesis proposes personalized dynamic gesture recognition approaches. Specifically, based on Dynamic Time Warping (DTW), a novel concept called the Subject Relation Network is introduced to describe the similarity of subjects in performing dynamic gestures, which offers a brand-new view of gesture recognition. By clustering or arranging training subjects based on the network, two personalization algorithms are proposed, one for generative models and one for discriminative models. Moreover, three basic recognition methods, DTW-based template matching, the Hidden Markov Model (HMM), and classification with Fisher Vectors, are compared and integrated into the proposed personalized gesture recognition. The proposed approaches are evaluated on DHG14/28, a challenging dynamic hand gesture recognition dataset containing the depth images and skeleton coordinates returned by the Intel RealSense depth camera. Experimental results show that the proposed personalized algorithms significantly improve the performance of the basic generative and discriminative models and achieve a state-of-the-art accuracy of 86.2%.
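The Subject Relation Network is built on DTW similarities between subjects. As an illustration of the underlying idea only (not the thesis's implementation), one could compute a pairwise DTW-based similarity matrix over subjects' 1-D gesture traces and then cluster or rank subjects by its rows; the function names and the exp(-distance) similarity are assumptions for this toy example:

```python
import numpy as np

def dtw(a, b):
    """Plain dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def subject_relation_matrix(subjects):
    """Pairwise DTW-based similarity between subjects' recordings;
    entries lie in (0, 1], with 1.0 on the diagonal (self-similarity)."""
    n = len(subjects)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sim[i, j] = np.exp(-dtw(subjects[i], subjects[j]))
    return sim

# Three toy "gesture" traces: two similar subjects and one outlier.
subjects = [[0, 1, 2, 3], [0, 1, 2, 3.5], [5, 5, 5, 5]]
S = subject_relation_matrix(subjects)
```

Rows of `S` then describe how closely each training subject's gesturing style matches the others, which is the signal the personalization algorithms exploit when grouping subjects.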
APA, Harvard, Vancouver, ISO, and other citation styles