Dissertations / Theses on the topic 'Adaptive machine learning'

Consult the top 50 dissertations / theses for your research on the topic 'Adaptive machine learning.'

1

Jelfs, Beth. "Collaborative adaptive filtering for machine learning." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/5598.

Full text
Abstract:
Quantitative performance criteria for the analysis of machine learning architectures and algorithms have long been established. However, qualitative performance criteria, which identify fundamental signal properties and ensure any processing preserves the desired properties, are still emerging. In many cases, whilst offline statistical tests exist such as assessment of nonlinearity or stochasticity, online tests which not only characterise but also track changes in the nature of the signal are lacking. To that end, by employing recent developments in signal characterisation, criteria are derived for the assessment of the changes in the nature of the processed signal. Through the fusion of the outputs of adaptive filters a single collaborative hybrid filter is produced. By tracking the dynamics of the mixing parameter of this filter, rather than the actual filter performance, a clear indication as to the current nature of the signal is given. Implementations of the proposed method show that it is possible to quantify the degree of nonlinearity within both real- and complex-valued data. This is then extended (in the real domain) from dealing with nonlinearity in general, to a more specific example, namely sparsity. Extensions of adaptive filters from the real to the complex domain are non-trivial and the differences between the statistics in the real and complex domains need to be taken into account. In terms of signal characteristics, nonlinearity can be both split- and fully-complex and complex-valued data can be considered circular or noncircular. Furthermore, by combining the information obtained from hybrid filters of different natures it is possible to use this method to gain a more complete understanding of the nature of the nonlinearity within a signal. This also paves the way for building multidimensional feature spaces and their application in data/information fusion. To produce online tests for sparsity, adaptive filters for sparse environments are investigated and a unifying framework for the derivation of proportionate normalised least mean square (PNLMS) algorithms is presented. This is then extended to derive variants with an adaptive step-size. In order to create an online test for noncircularity, a study of widely linear autoregressive modelling is presented, from which a proof of the convergence of the test for noncircularity can be given. Applications of this method are illustrated on examples such as biomedical signals, speech and wind data.
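For readers unfamiliar with the hybrid-filter construction described above, the snippet below is a minimal, generic sketch (not the thesis implementation): a convex combination of a linear LMS sub-filter and a crude nonlinear sub-filter, where the adapted mixing parameter hints at the degree of nonlinearity in the signal. The tanh tap nonlinearity, step sizes and synthetic data are illustrative assumptions.

```python
# Sketch of a collaborative hybrid filter: two adaptive sub-filters combined by a
# mixing parameter lambda; lambda drifting towards 1 suggests the nonlinear
# sub-filter explains the signal better (i.e. the signal looks nonlinear).
import numpy as np

def hybrid_filter(x, d, n_taps=4, mu=0.01, mu_a=2.0):
    w_lin = np.zeros(n_taps)               # linear LMS sub-filter weights
    w_nl = np.zeros(n_taps)                # nonlinear sub-filter weights
    a = 0.0                                # sigmoid-domain mixing variable
    lambdas = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # input tap vector (newest first)
        y_lin = w_lin @ u
        y_nl = w_nl @ np.tanh(u)           # simple static nonlinearity on the taps
        lam = 1.0 / (1.0 + np.exp(-a))
        e = d[n] - (lam * y_nl + (1.0 - lam) * y_lin)
        w_lin += mu * (d[n] - y_lin) * u            # independent LMS updates
        w_nl += mu * (d[n] - y_nl) * np.tanh(u)
        a += mu_a * e * (y_nl - y_lin) * lam * (1.0 - lam)   # adapt the mixture
        lambdas.append(lam)
    return np.array(lambdas)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    d = np.tanh(0.9 * x) + 0.01 * rng.standard_normal(5000)  # nonlinear teaching signal
    print("final mixing parameter:", round(hybrid_filter(x, d)[-1], 3))
```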
APA, Harvard, Vancouver, ISO, and other styles
2

Miles, Jonathan David. "Machine Learning for Adaptive Computer Game Opponents." The University of Waikato, 2009. http://hdl.handle.net/10289/2779.

Full text
Abstract:
This thesis investigates the use of machine learning techniques in computer games to create a computer player that adapts to its opponent's game-play. This includes first confirming that machine learning algorithms can be integrated into a modern computer game without having a detrimental effect on game performance, then experimenting with different machine learning techniques to maximize the computer player's performance. Experiments use three machine learning techniques: static prediction models, continuous learning, and reinforcement learning. Static models show the highest initial performance but are not able to beat a simple opponent. Continuous learning is able to improve the performance achieved with static models, but the rate of improvement drops over time and the computer player is still unable to beat the opponent. Reinforcement learning methods have the highest rate of improvement but the lowest initial performance. This limits the effectiveness of reinforcement learning because a large number of episodes are required before performance becomes sufficient to match the opponent.
APA, Harvard, Vancouver, ISO, and other styles
3

Long, Shun. "Adaptive Java optimisation using machine learning techniques." Thesis, University of Edinburgh, 2004. http://hdl.handle.net/1842/567.

Full text
Abstract:
There is a continuing demand for higher performance, particularly in the area of scientific and engineering computation. In order to achieve high performance in the context of frequent hardware upgrading, software must be adaptable for portable performance. What is required is an optimising compiler that evolves and adapts itself to environmental change without sacrificing performance. Java has emerged as a dominant programming language widely used in a variety of application areas. However, its architecture-independent design means that it is frequently unable to deliver high performance, especially when compared to other imperative languages such as Fortran and C/C++. This thesis presents a language- and architecture-independent approach to achieve portable high performance. It uses the mapping notation introduced in the Unified Transformation Framework to specify a large optimisation space. A heuristic random search algorithm is introduced to explore this space in a feedback-directed iterative optimisation manner. It is then extended using a machine learning approach which enables the compiler to learn from its previous optimisations and apply the knowledge when necessary. Both the heuristic random search algorithm and the learning optimisation approach are implemented in a prototype Adaptive Optimisation Framework for Java (AOF-Java). The experimental results show that the heuristic random search algorithm can find, within a relatively small number of attempts, good points in the large optimisation space. In addition, the learning optimisation approach is capable of finding good transformations for a given program from its prior experience with other programs.
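As a loose illustration of the feedback-directed iterative optimisation described above (not the AOF-Java code), the sketch below randomly samples transformation sequences and keeps the fastest variant; the transformation names and the `evaluate` callback are placeholders.

```python
# Hedged sketch of heuristic random search over an optimisation space:
# sample a candidate transformation sequence, measure it, keep the best.
import random

TRANSFORMATIONS = ["loop-unroll", "loop-tile", "loop-interchange", "loop-fusion"]

def random_candidate(max_len=3):
    return [random.choice(TRANSFORMATIONS) for _ in range(random.randint(1, max_len))]

def heuristic_random_search(evaluate, budget=50):
    best_seq, best_time = None, float("inf")
    for _ in range(budget):
        seq = random_candidate()
        runtime = evaluate(seq)            # feedback: measured execution time
        if runtime < best_time:
            best_seq, best_time = seq, runtime
    return best_seq, best_time

if __name__ == "__main__":
    # Toy stand-in for compiling and timing a program variant; a real system
    # would apply the transformations and run the generated code.
    def fake_evaluate(seq):
        return 1.0 - 0.1 * seq.count("loop-tile") + random.gauss(0, 0.02)
    print(heuristic_random_search(fake_evaluate))
```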
APA, Harvard, Vancouver, ISO, and other styles
4

Ar, Rosyid Harits. "Adaptive serious educational games using machine learning." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/adaptive-serious-educational-games-using-machine-learning(b5f5024b-c7fd-4660-997c-9fd22e140a8f).html.

Full text
Abstract:
The ultimate goals of adaptive serious educational games (adaptive SEG) are to promote effective learning and to maximise enjoyment for players. Firstly, we develop the SEG by combining a knowledge space (learning materials) with a game content space used to convey the learning materials. We propose a novel approach that minimises experts' involvement in mapping learning materials to the game content space. We categorise both content spaces using known procedures and apply the BIRCH clustering algorithm to group game content by similarity. Then, we map the two content spaces based on their statistical properties and/or the knowledge learning handout. Secondly, we construct a predictive model from data sets built through a survey of public testers who labelled their in-game data with their reported experiences. A Random Forest algorithm non-intrusively predicts experiences from the game data. Lastly, it is not feasible to manually select or adapt the content from both spaces because of the immense number of options available. Therefore, we apply a reinforcement learning technique to generate a series of learning goals that promote efficient learning for the player. Subsequently, a combination of conditional branching and agglomerative hierarchical clustering selects the most appropriate game content for each selected educational material. As a proof of concept, we apply the proposed approach to produce an SEG, named Chem Dungeon, as a case study to demonstrate the effectiveness of our proposed methods.
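As a rough illustration of the clustering step mentioned in the abstract (not the thesis pipeline), the snippet below groups invented game-content feature vectors with scikit-learn's BIRCH implementation; the feature columns are assumptions.

```python
# Group game-content items by similarity with the BIRCH clustering algorithm.
import numpy as np
from sklearn.cluster import Birch

# Each row is one game-content item: [difficulty, duration_s, reward, enemy_count]
content_features = np.array([
    [0.2, 30, 5, 1],
    [0.3, 35, 6, 1],
    [0.7, 90, 20, 4],
    [0.8, 95, 22, 5],
    [0.5, 60, 12, 3],
])

model = Birch(n_clusters=2)                 # threshold/branching factor left at defaults
labels = model.fit_predict(content_features)
print("content cluster labels:", labels)
```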
APA, Harvard, Vancouver, ISO, and other styles
5

Dal, Pozzolo Andrea. "Adaptive Machine Learning for Credit Card Fraud Detection." Doctoral thesis, Universite Libre de Bruxelles, 2015. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/221654.

Full text
Abstract:
Billions of dollars of loss are caused every year by fraudulent credit card transactions. The design of efficient fraud detection algorithms is key for reducing these losses, and more and more algorithms rely on advanced machine learning techniques to assist fraud investigators. The design of fraud detection algorithms is, however, particularly challenging due to the non-stationary distribution of the data, the highly unbalanced class distributions and the availability of few transactions labeled by fraud investigators. At the same time, public data are scarce due to confidentiality issues, leaving many questions about the best strategy unanswered. In this thesis we aim to provide some answers by focusing on crucial issues such as: i) why and how undersampling is useful in the presence of class imbalance (i.e. frauds are a small percentage of the transactions), ii) how to deal with unbalanced and evolving data streams (non-stationarity due to fraud evolution and changes in spending behavior), iii) how to assess performance in a way which is relevant for detection and iv) how to use the feedback provided by investigators on the generated fraud alerts. Finally, we design and assess a prototype Fraud Detection System able to meet real-world working conditions and to integrate investigators' feedback to generate accurate alerts.
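Point i) above can be pictured with a few lines of scikit-learn: keep every fraud and only a random subset of legitimate transactions before training. The data below are synthetic and the 10:1 resampling ratio is an arbitrary assumption, not the strategy evaluated in the thesis.

```python
# Undersampling sketch for a highly imbalanced fraud-detection dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.standard_normal((10000, 5))                     # synthetic transaction features
y = np.zeros(10000, dtype=int)
y[rng.choice(10000, size=25, replace=False)] = 1        # ~0.25% labelled as fraud

fraud_idx = np.where(y == 1)[0]
legit_idx = np.where(y == 0)[0]
keep_legit = rng.choice(legit_idx, size=10 * len(fraud_idx), replace=False)
train_idx = np.concatenate([fraud_idx, keep_legit])     # undersampled training set

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[train_idx], y[train_idx])
print("fraud prior in the undersampled training set:", y[train_idx].mean())
```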
Doctorat en Sciences
APA, Harvard, Vancouver, ISO, and other styles
6

Clement, Benjamin. "Adaptive Personalization of Pedagogical Sequences using Machine Learning." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0373/document.

Full text
Abstract:
Can computers teach people? To answer this question, Intelligent Tutoring Systems are a rapidly expanding field of research in the Information and Communication Technologies for Education community. This subject brings together different issues and researchers from various fields, such as psychology, didactics, neurosciences and, particularly, machine learning. Digital technologies are becoming more and more a part of everyday life with the development of tablets and smartphones. It seems natural to consider using these technologies for educational purposes. This raises several questions, such as how to make user interfaces accessible to everyone, how to make educational content motivating and how to customize it to individual learners. In this PhD, we developed methods, grouped in the HMABITS framework, to adapt pedagogical activity sequences based on learners' performances and preferences to maximize their learning speed and motivation. These methods use computational models of intrinsic motivation and curiosity-driven learning to identify the activities providing the highest learning progress and use Multi-Armed Bandit algorithms to manage the exploration/exploitation trade-off inside the activity space. Activities of optimal interest are thus privileged with the aim of keeping the learner in a state of Flow or in his or her Zone of Proximal Development. Moreover, some of our methods allow the student to make choices about contextual features or pedagogical content, which is a vector of self-determination and motivation. To evaluate the effectiveness and relevance of our algorithms, we carried out several types of experiments. We first evaluated these methods with numerical simulations before applying them to real teaching conditions. To do this, we developed multiple models of learners, since a single model never exactly replicates the behavior of a real learner. The simulation results show the HMABITS framework achieves comparable, and in some cases better, learning results than an optimal solution or an expert sequence. We then developed our own pedagogical scenario and serious game to test our algorithms in classrooms with real students. We developed a game on the theme of number decomposition, through the manipulation of money, for children aged 6 to 8. We then worked with the educational authority and several schools in the Bordeaux school district. Overall, about 1000 students participated in trial lessons using the tablet application. The results of the real-world studies show that the HMABITS framework allows the students to do more diverse and difficult activities, to achieve better learning and to be more motivated than with an expert sequence. The results show that this effect is even greater when the students have the possibility to make choices.
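A toy sketch of the bandit idea summarised above, assuming an epsilon-greedy rule and a learning-progress estimate computed from a sliding window of recent successes; the activity names and simulated learner answers are made up, and this is not the HMABITS implementation.

```python
# Choose the next pedagogical activity by (estimated) learning progress.
import random
from collections import defaultdict, deque

class LearningProgressBandit:
    def __init__(self, activities, window=10, epsilon=0.2):
        self.activities = activities
        self.epsilon = epsilon                         # exploration rate
        self.history = defaultdict(lambda: deque(maxlen=window))

    def progress(self, activity):
        h = list(self.history[activity])
        if len(h) < 4:
            return 0.0
        half = len(h) // 2
        # learning progress ~ change between older and recent success rates
        return abs(sum(h[half:]) / (len(h) - half) - sum(h[:half]) / half)

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.activities)
        return max(self.activities, key=self.progress)

    def update(self, activity, success):
        self.history[activity].append(1.0 if success else 0.0)

bandit = LearningProgressBandit(["count-coins", "make-change", "compare-sums"])
for _ in range(30):
    activity = bandit.choose()
    bandit.update(activity, success=random.random() < 0.6)   # simulated learner
print("next suggested activity:", bandit.choose())
```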
APA, Harvard, Vancouver, ISO, and other styles
7

Wiens, Jenna Marleau. "Machine learning for patient-adaptive ectopic beat classification." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/60823.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 83-85).
Physicians require automated techniques to accurately analyze the vast amount of physiological data collected by continuous monitoring devices. In this thesis, we consider one analysis task in particular, the classification of heartbeats from electrocardiographic recordings (ECG). This problem is made challenging by the inter-patient differences present in ECG morphology and timing characteristics. Supervised classifiers trained on a collection of patients can have unpredictable results when applied to a new patient. To reduce the effect of inter-patient differences, researchers have suggested training patient-adaptive classifiers on labeled data from the test patient. However, patient-adaptive classifiers have not been integrated into practice because they require an impractical amount of patient-specific expert knowledge. We present two approaches based on machine learning for building accurate patient-adaptive beat classifiers that use little or no patient-specific expert knowledge. First, we present a method to transfer and adapt knowledge from a collection of patients to a test patient. This first approach, based on transductive transfer learning, requires no patient-specific labeled data, only labeled data from other patients. Second, we consider the scenario where patient-specific expert knowledge is available, but comes at a high cost. We present a novel algorithm for SVM active learning. By intelligently selecting the training set, we show how one can build highly accurate patient-adaptive classifiers using only a small number of cardiologist-supplied labels. Our results show the gains in performance possible when using patient-adaptive classifiers in place of global classifiers. Furthermore, the effectiveness of our techniques, which use little or no patient-specific expert knowledge, suggests that it is also practical to use patient-adaptive techniques in a clinical setting.
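The SVM active-learning component can be pictured with a small uncertainty-sampling loop like the one below; synthetic features stand in for ECG beat descriptors, and the seed labels and query budget are assumptions rather than the thesis's algorithm.

```python
# Active learning sketch: query the "cardiologist" only for the beats closest
# to the current SVM decision boundary.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(2, 1, (200, 4))])
y_true = np.array([0] * 200 + [1] * 200)         # hidden expert labels

labeled = [0, 1, 2, 200, 201, 202]               # small seed set with both classes
for _ in range(15):                              # 15 expert queries
    clf = SVC(kernel="linear").fit(X[labeled], y_true[labeled])
    margins = np.abs(clf.decision_function(X))   # distance from the boundary
    margins[labeled] = np.inf                    # never re-query a labeled beat
    query = int(np.argmin(margins))              # most uncertain beat
    labeled.append(query)                        # expert supplies its label

print("beats labeled by the expert:", len(labeled))
```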
by Jenna Marleau Wiens.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
8

Drolia, Utsav. "Adaptive Distributed Caching for Scalable Machine Learning Services." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1004.

Full text
Abstract:
Applications for Internet-enabled devices use machine learning to process captured data to make intelligent decisions or provide information to users. Typically, the computation to process the data is executed in cloud-based backends. The devices are used for sensing data, offloading it to the cloud, receiving responses and acting upon them. However, this approach leads to high end-to-end latency due to communication over the Internet. This dissertation proposes reducing this response time by minimizing offloading, and pushing computation close to the source of the data, i.e. to edge servers and devices themselves. To adapt to the resource constrained environment at the edge, it presents an approach that leverages spatiotemporal locality to push subparts of the model to the edge. This approach is embodied in a distributed caching framework, Cachier. Cachier is built upon a novel caching model for recognition, and is distributed across edge servers and devices. The analytical caching model for recognition provides a formulation for expected latency for recognition requests in Cachier. The formulation incorporates the effects of compute time and accuracy. It also incorporates network conditions, thus providing a method to compute expected response times under various conditions. This is utilized as a cost function by Cachier, at edge servers and devices. By analyzing requests at the edge server, Cachier caches relevant parts of the trained model at edge servers, which is used to respond to requests, minimizing the number of requests that go to the cloud. Then, Cachier uses context-aware prediction to prefetch parts of the trained model onto devices. The requests can then be processed on the devices, thus minimizing the number of offloaded requests. Finally, Cachier enables cooperation between nearby devices to allow exchanging prefetched data, reducing the dependence on remote servers even further. The efficacy of Cachier is evaluated by using it with an art recognition application. The application is driven using real world traces gathered at museums. By conducting a large-scale study with different control variables, we show that Cachier can lower latency, increase scalability and decrease infrastructure resource usage, while maintaining high accuracy.
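The expected-latency reasoning can be illustrated with a toy cost function of the kind described; the hit-rate and timing numbers below are invented, and the dissertation's formulation also folds in recognition accuracy.

```python
# Toy expected-latency model for deciding how much of a model to cache at the edge.
def expected_latency(hit_rate, edge_compute_ms, cloud_rtt_ms, cloud_compute_ms):
    miss_penalty = cloud_rtt_ms + cloud_compute_ms
    return hit_rate * edge_compute_ms + (1.0 - hit_rate) * (edge_compute_ms + miss_penalty)

# Sweep cache sizes (fraction of classes cached at the edge) and pick the best.
candidates = [0.1, 0.25, 0.5, 0.75, 1.0]
best = min(
    candidates,
    key=lambda frac: expected_latency(
        hit_rate=0.3 + 0.6 * frac,          # assumed locality-driven hit rate
        edge_compute_ms=20 + 40 * frac,     # larger cached model -> slower edge lookup
        cloud_rtt_ms=120,
        cloud_compute_ms=30,
    ),
)
print("cache fraction minimising expected latency:", best)
```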
APA, Harvard, Vancouver, ISO, and other styles
9

Yin, Wenjie. "Machine Learning for Adaptive Cruise Control Target Selection." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264918.

Full text
Abstract:
Vehicles will become more complex, safe, and intelligent in the future. For instance, with the support of advanced driver assistance systems (ADAS), the safety and comfort of the driver and the passengers can be significantly improved. This degree project proposes data-driven solutions for adaptive cruise control (ACC) target selection that select one of the preceding vehicles as the primary target, in a way similar to the choice of human drivers. This master's degree project was carried out at Scania CV AB. A shared network and a shared-LSTM network were used to select the primary target. In addition, a novel machine learning based target selection model (the compare-target model) was designed, which can consider all neighboring vehicles together by comparing them. A compare-target network and a compare-target XGBoost model were developed based on the compare-target model. In total, four different machine learning methods were adopted to select the primary target for ACC, including a shared network, a shared-LSTM network, a compare-target network, and a compare-target XGBoost model. These methods were compared and analyzed. Fine-tuning was adopted to overcome the data imbalance problem of rare situations. The compare-target XGBoost model achieves 94.85% accuracy on the test set.
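A rough sketch of a pairwise "compare-target" classifier in the spirit of the abstract: given features of two candidate vehicles, predict which should be the primary target. The candidate features and the toy labelling rule below are assumptions standing in for recorded driver choices; this is not Scania's model.

```python
# Pairwise compare-target classifier with XGBoost on synthetic candidates.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
# Candidate features: [longitudinal gap (m), lateral offset (m), relative speed (m/s)]
a = rng.uniform([5, -3, -5], [80, 3, 5], size=(2000, 3))
b = rng.uniform([5, -3, -5], [80, 3, 5], size=(2000, 3))
# Toy rule standing in for human choices: prefer the closer, more centred vehicle.
label = ((a[:, 0] + 10 * np.abs(a[:, 1])) < (b[:, 0] + 10 * np.abs(b[:, 1]))).astype(int)

X = np.hstack([a, b])                        # pairwise feature vector
clf = XGBClassifier(n_estimators=50, max_depth=4)
clf.fit(X, label)                            # label 1 -> candidate "a" is the target
print("training accuracy:", (clf.predict(X) == label).mean())
```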
APA, Harvard, Vancouver, ISO, and other styles
10

Grundtman, Per. "Adaptive Learning." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-61648.

Full text
Abstract:
The purpose of this project is to develop a novel proof-of-concept system in an attempt to measure affective states during learning tasks and investigate whether machine learning models trained with this data have the potential to enhance the learning experience for an individual. By considering biometric signals from a user during a learning session, the affective states anxiety, engagement and boredom are classified using different signal transformation methods and, finally, machine-learning models from the Weka Java API. Data is collected using an Empatica E4 wristband, which gathers skin- and heart-related biometric data that is streamed to an Android application via Bluetooth for processing. Several machine-learning algorithms and features were evaluated for best performance.
APA, Harvard, Vancouver, ISO, and other styles
11

Emani, Murali Krishna. "Adaptive parallelism mapping in dynamic environments using machine learning." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10469.

Full text
Abstract:
Modern-day hardware platforms are parallel and diverse, ranging from mobiles to data centers. Mainstream parallel applications execute in the same system competing for resources. This resource contention may lead to a drastic degradation in a program's performance. In addition, the execution environment, composed of workloads and hardware resources, is dynamic and unpredictable. Efficient matching of program parallelism to machine parallelism under uncertainty is hard. The mapping policies that determine the optimal allocation of work to threads should anticipate these variations. This thesis proposes solutions to the mapping of parallel programs in dynamic environments. It employs predictive modelling techniques to determine the best degree of parallelism. Firstly, this thesis proposes a machine learning-based model to determine the optimal thread number for a target program co-executing with varying workloads. For this purpose, this offline-trained model uses static code features and dynamic runtime information as input. Next, this thesis proposes a novel solution to monitor the proposed offline model and adjust its decisions in response to environment changes. It develops a second predictive model that determines what the future environment should look like if the current thread prediction were optimal. Depending on how close this prediction is to the actual environment, the predicted thread numbers are adjusted. Furthermore, considering the multitude of potential execution scenarios where no single policy is best suited in all cases, this work proposes an approach based on the idea of a mixture of experts. It considers a number of offline experts, or mapping policies, each specialized for a given scenario, and learns online which expert is optimal for the current execution. When evaluated on highly dynamic executions, these solutions are shown to surpass default, state-of-the-art adaptive and analytic approaches.
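The mixture-of-experts idea can be sketched with a simple exponentially weighted scheme over a few hand-written mapping policies; the experts, the toy performance model and the learning rate below are illustrative assumptions, not the thesis's policies.

```python
# Online weighting of mapping-policy "experts" that each propose a thread count.
import math
import random

def expert_all_cores(env):   return env["cores"]
def expert_half_cores(env):  return max(1, env["cores"] // 2)
def expert_load_aware(env):  return max(1, env["cores"] - env["external_load"])

EXPERTS = [expert_all_cores, expert_half_cores, expert_load_aware]
weights = [1.0] * len(EXPERTS)
eta = 0.3                                           # weight-update learning rate

def speedup(threads, env):
    # Toy performance model: contention hurts once threads exceed free cores.
    free = max(1, env["cores"] - env["external_load"])
    return min(threads, free) / (1.0 + 0.2 * max(0, threads - free))

for _ in range(100):                                # scheduling intervals
    env = {"cores": 8, "external_load": random.choice([0, 2, 6])}
    scores = [speedup(e(env), env) for e in EXPERTS]
    weights = [w * math.exp(eta * s) for w, s in zip(weights, scores)]
    total = sum(weights)
    weights = [w / total for w in weights]          # keep weights normalised

best = max(range(len(EXPERTS)), key=lambda i: weights[i])
print("expert favoured online:", EXPERTS[best].__name__)
```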
APA, Harvard, Vancouver, ISO, and other styles
12

Spronck, Pieter Hubert Marie. "Adaptive game AI." Maastricht: Universitaire Pers Maastricht, 2005. http://arno.unimaas.nl/show.cgi?fid=5330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Perumalla, Calvin A. "Machine Learning and Adaptive Signal Processing Methods for Electrocardiography Applications." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6926.

Full text
Abstract:
This dissertation is directed towards improving state-of-the-art cardiac monitoring methods and the automatic diagnosis of cardiac anomalies through modern engineering approaches such as adaptive signal processing and machine learning. The dissertation describes the invention and associated methods of a cardiac rhythm monitor dubbed the Integrated Vectorcardiogram (iVCG). In addition, novel machine learning approaches are discussed to improve the diagnosis and prediction accuracy of cardiac diseases. It is estimated that around 17 million people in the world die from cardiac-related events each year. It has also been shown that many such deaths can be averted with long-term continuous monitoring and actuation. Hence, there is a growing need for better cardiac monitoring solutions. Leveraging the improvements in computational power, communication bandwidth, energy efficiency and electronic chip size in recent years, the Integrated Vectorcardiogram (iVCG) was invented as an answer to this problem. The iVCG is a miniaturized, integrated version of the Vectorcardiogram that was invented in the 1930s. The Vectorcardiogram provides full diagnostic-quality cardiac information equivalent to that of the gold standard, the 12-lead ECG, which is restricted to in-office use due to its bulky, obtrusive form. With the iVCG, it is possible to provide continuous, long-term, full diagnostic-quality information, while being portable and unobtrusive to the patient. Moreover, it is possible to leverage this 'Big Data' and create machine learning algorithms to deliver better patient outcomes in the form of patient-specific machine diagnosis and timely alerts. First, we present a proof-of-concept investigation for a miniaturized vectorcardiogram, the iVCG system for ambulatory on-body applications that continuously monitors the electrical activity of the heart in three dimensions. We investigate the minimum distance between a pair of leads in the X, Y and Z axes such that the signals are distinguishable from the noise. The target dimensions for our prototype iVCG are 3x3x2 cm, and based on our experimental results we show that it is possible to achieve these dimensions. Following this, we present a solution to the problem of transforming the three VCG component signals to the familiar 12-lead ECG for the convenience of cardiologists. The least squares (LS) method is employed on the VCG signals and the reference (training) 12-lead ECG to obtain a 12x3 transformation matrix to generate the real-time ECG signals from the VCG signals. The iVCG is portable and worn on the chest of the patient, and although a physician or trained technician will initially install it in the appropriate position, it is prone to subsequent rotation and displacement errors introduced by the patient's placement of the device. We characterize these errors and present a software solution to correct their effect on the iVCG signals. We also describe the design of machine learning methods to improve automatic diagnosis and prediction of various heart conditions. Methods very similar to the ones described in this dissertation can be used on the long-term, full diagnostic-quality 'Big Data' so that the iVCG will be able to provide further insights into the health of patients. The iVCG system is a potentially breakthrough and disruptive technology allowing long-term and continuous remote monitoring of patients' electrical heart activity. The implications are profound and include 1) providing a less expensive device compared to the 12-lead ECG system (the "gold standard"); 2) providing continuous, remote tele-monitoring of patients; 3) the replacement of the current Holter short-term monitoring system; 4) improved and economical ICU cardiac monitoring; and 5) the ability for patients to be sent home earlier from a hospital, since physicians will have continuous remote monitoring of the patients.
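The least-squares step described above reduces to a standard linear regression from the three VCG leads to the twelve ECG leads; the sketch below uses random arrays in place of the measured VCG and the reference (training) 12-lead ECG.

```python
# Estimate the VCG-to-12-lead-ECG transformation matrix by least squares.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 2000
vcg = rng.standard_normal((n_samples, 3))            # X, Y, Z lead samples
T_true = rng.standard_normal((3, 12))                # unknown "true" mapping (toy)
ecg12 = vcg @ T_true + 0.01 * rng.standard_normal((n_samples, 12))

T_hat, *_ = np.linalg.lstsq(vcg, ecg12, rcond=None)  # shape (3, 12); transpose is 12x3
ecg12_reconstructed = vcg @ T_hat

print("mean absolute reconstruction error:", np.abs(ecg12 - ecg12_reconstructed).mean())
```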
APA, Harvard, Vancouver, ISO, and other styles
14

Holley, Julian. "Oneiric Machine Learning : The Foundations of Dream Inspired Adaptive Systems." Thesis, University of the West of England, Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.495516.

Full text
Abstract:
Artificial adaptive systems inspired or derived from neuro-biological components and processes have shown great promise at several levels. One behaviour required for the continuous functional operation of advanced neuro-biological systems is sleep. A definitive function or purpose for sleep, and for the associated phenomenology such as dreaming, remains elusive. Correspondingly, there remain many unresolved issues within the domain of artificial learning systems. One such aspect that largely remains intractable is the management of experiences once learned and encoded. This is the general problem of developing a persuasive explanation or scalable strategy for the contiguous organisation of internal representation and memory within finite resources; it is from this parallel perspective that this research is set. This research is an exploration into the cognition of sleep and dreaming in humans and animals. Positioned between sleep and dreaming research and the machine learning domain, this thesis reports on an approach to improve the latter by formulating theories emerging from the former. Recent research investigating the responsibility of sleep processes in modifying memory has shown that, for the avian and mammalian brain, sleep plays an important role in long-term cognitive development. A set of observations is created from the current understanding of both the benefits of sleep and the processes involved, including dreaming. From these observations the first contribution of this thesis is presented: several proposals for the cognitive benefits of sleep and dreaming in aspects of perception, consolidation, scalability, generalisation and representational conceptualisation. Previous research has investigated some aspects of sleep and dreaming in relation to machine learning. These have been positioned at two extremes of the machine learning paradigm: the low-level, emergent behaviour of artificial neural networks or the high-level, directed behaviour of symbolic artificial intelligence. This is the first report of direct research into the translation of the benefits, by analogous mechanisms of sleep and dreaming, at a level in between earlier research. This combination is characterised by creating a foundation for a new genre of artificial learning strategies derived directly from sleep and dream phenomenology: Oneiric Machine Learning (oneiric: of or relating to dreams or dreaming; adapted from oneiric behaviour (Jouvet, 1979), used to describe rapid eye movement (REM) sleep re-animation). Anticipatory classifier systems (ACS) represent a niche group of machine learning systems derived from the established machine learning field of learning classifier systems (LCS). ACS are capable of latent learning: learning for the reward of learning and subsequently creating an internal generalised model of the environment. This feature, aligned within the LCS framework, provides an ideal developmental template. A review of the latent learning background and ACS algorithmic detail sets the basis for several applications illustrative of the Oneiric Machine Learning approach. Empirical evidence demonstrates how an adapted ACS system can exploit a dreamlike emergent thread based on an incomplete, generalised model of the environment to reduce the number of real actions required to reach model competency. Conceptual solutions to restrictions limiting the extent to which ACS/LCS systems can represent some aspects advocated by Oneiric Machine Learning are presented. In mitigation of these restrictions, two novel prototype systems are described: the first introduces a method of implicitly managing state generalisation by building concept links into the classifier rule; the second illustrates automatic state-alias-triggered state augmentation and off-line resolution. Although these remain under development, results in these new directions present plausible systems-level architectures that are in part experimentally demonstrated. Novel solutions are presented to structural and procedural problems that promote the future development of cognitive systems within the LCS framework, setting a direction for future studies.
APA, Harvard, Vancouver, ISO, and other styles
15

Zhou, Lily. "Paper Dreams: an adaptive drawing canvas supported by machine learning." Thesis (M. Eng.), Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122990.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 55-58).
Despite numerous recent advances in the field of deep learning for artistic purposes, the integration of these state-of-the-art machine learning tools into applications for drawing and visual expression has been an underexplored field. Bridging this gap has the potential to empower a large subset of the population, from children to the elderly, with a new medium to represent and visualize their ideas. Paper Dreams is a web-based canvas for sketching and storyboarding, with a multimodal user interface integrated with a variety of machine learning models. By using sketch recognition, style transfer, and natural language processing, the system can contextualize what the user is drawing; it then can color the sketch appropriately, suggest related objects for the user to draw, and allow the user to pull from a database of related images to add onto the canvas. Furthermore, the user can influence the output of the models via a serendipity dial that affects how "wacky" the system's outputs are. By processing a variety of multimodal inputs and automating artistic processes, Paper Dreams becomes an efficient tool for quickly generating vibrant and complex artistic scenes.
by Lily Zhou.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
16

Negus, Andra Stefania. "Adaptive Anomaly Detection for Large IoT Datasets with Machine Learning and Transfer Learning." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-426257.

Full text
Abstract:
As more IoT devices enter the market, it becomes increasingly important to develop reliable and adaptive ways of dealing with the data they generate. These must address data quality and reliability. Such solutions could benefit both the device producers and their customers who, as a result, could receive faster and better customer support services. Thus, this project's goal is twofold. First, it is to identify faulty data points generated by such devices. Second, it is to evaluate whether the knowledge gained from available/known sensors and appliances is transferable to other sensors on similar devices. This would make it possible to evaluate the behaviour of new appliances as soon as they are first switched on, rather than after sufficient data from them has been collected. This project uses time series data from three appliances: washing machine, washer & dryer and refrigerator. For these, two solutions are developed and tested: one for categorical and another for numerical variables. Categorical variables are analysed using the Average Value Frequency method and the pure frequency of state transitions. Due to the limited number of possible states, the pure frequency proves to be the better solution, and the knowledge gained is transferred from the source device to the target one, with moderate success. Numerical variables are analysed using a One-class Support Vector Machine pipeline, with very promising results. Further, learning and forgetting mechanisms are developed to allow the pipelines to adapt to changes in appliance behaviour patterns. This includes a decay function for the numerical-variables solution. Interestingly, the different weights for the source and target have little to no impact on the quality of the classification.
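A minimal sketch of a One-class SVM pipeline of the kind described for the numerical variables, trained on synthetic "normal" appliance readings; the features and the nu parameter are assumptions, and the learning/forgetting (decay) mechanisms are omitted.

```python
# Flag anomalous appliance readings with a One-class SVM trained on normal data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
normal = rng.normal(loc=[40.0, 2.0], scale=[2.0, 0.2], size=(500, 2))   # temp, power
faulty = rng.normal(loc=[70.0, 6.0], scale=[2.0, 0.2], size=(10, 2))

pipeline = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, kernel="rbf"))
pipeline.fit(normal)                       # fit only on presumed-normal readings

print("normal flagged as anomaly:", (pipeline.predict(normal) == -1).mean())
print("faulty flagged as anomaly:", (pipeline.predict(faulty) == -1).mean())
```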
APA, Harvard, Vancouver, ISO, and other styles
17

Smith, Adalet Serengül Güven. "Application of machine learning algorithms in adaptive web-based information systems." Thesis, Middlesex University, 1999. http://eprints.mdx.ac.uk/6743/.

Full text
Abstract:
Hypertext users often face the difficulty of identifying pages of information most relevant to their current goals or interests, and are forced to wade through irrelevant pages, even though they know precisely what they are looking for. In order to address this issue, this research has investigated the technical feasibility and also the utility of applying machine learning algorithms to generate personalised adaptation on the basis of browsing history in hypertext. A Web-based information system called MLTutor has been developed to determine the viability of this approach. The MLTutor has been implemented, tested, and evaluated. The design of MLTutor aims to remove the need for pre-defined user profiles and replace them with a dynamic user profile building scheme in order to provide individual adaptation. This is achieved by a combination of conceptual clustering and inductive machine learning algorithms. This integration of two machine learning algorithms is a novel approach in the field of machine learning. In the initial prototype of MLTutor, a simple attribute-based conceptual clustering algorithm and the ID3 algorithm were implemented. An assessment of the initial prototype highlighted the need for an in-depth investigation into the machine learning component of the prototype. This investigation led to the development of a multiple decision learning algorithm named SG-1 and a scheme for attribute encoding within the system. In order to assess these enhancements, a comparative study was conducted with four adaptive variants of MLTutor along with the non-adaptive control. The adaptive variants were developed to allow alternative approaches within the machine learning component of the system to be compared. Two of the variants applied the clustering algorithm dynamically and used two different cluster selection strategies. These strategies were based on the last page visited and a weighting of recently visited pages. The other adaptive variants used pre-clustered data with the same cluster selection strategies. The comparative evaluation undertaken on the variants used a number of established evaluation criteria and also introduced an original cross-analysis scheme to determine how the adaptive component of MLTutor was utilised to complete a set of tasks. This cross-analysis scheme highlights a number of weaknesses related to the evaluation methods commonly used in the field of adaptive hypermedia. The results have also highlighted a technical limitation with the particular clustering algorithm employed, specifically the generation of a heterogeneous cluster that results in poor suggestions in some circumstances. The results of the evaluation show that the MLTutor is a functional and robust system. Although the utility of using machine learning algorithms to analyse browsing activity in a hypertext system is unproven, the technical feasibility has been established.
APA, Harvard, Vancouver, ISO, and other styles
18

Ewert, Kevin. "An Adaptive Machine Learning Approach to Knowledge Discovery in Large Datasets." NSUWorks, 2006. http://nsuworks.nova.edu/gscis_etd/510.

Full text
Abstract:
Large text databases, such as medical records, on-line journals, or the Internet, potentially contain a great wealth of data and knowledge. However, text representation of factual information and knowledge is difficult to process. Analyzing these large text databases often relies upon time-consuming human resources for data mining. Since a textual format is a very flexible way to describe and store various types of information, large amounts of information are often retained and distributed as text. 'The amount of accessible textual data has been increasing rapidly. Such data may potentially contain a great wealth of knowledge. However, analyzing huge amounts of textual data requires a tremendous amount of work in reading all of the text and organizing the content. Thus, the increase in accessible textual data has caused an information flood in spite of hope of becoming knowledgeable about various topics' (Nasukawa and Nagano, 2001). Preliminary research focused on key concepts and techniques derived from clustering methodology, machine learning, and other communities within the arena of data mining. The research was based on a two-stage machine-intelligence system that clustered and filtered large datasets. The overall objective was to optimize response time through parallel processing while attempting to reduce potential errors due to knowledge manipulation. The results generated by the two-stage system were reviewed by domain experts and tested using traditional methods that included multivariable regression analysis and logic testing for accuracy. The two-stage prototype developed a model that was 85 to 90% accurate in determining childhood asthma and disproved existing stereotypes related to sleep breathing disorders. Detailed results will be discussed in the proposed dissertation. While the initial research demonstrated positive results in processing large text datasets, limitations were identified. These limitations included processing delays resulting from the equal distribution of processing in a heterogeneous client environment, and the use of results derived from the second stage as inputs for the first stage. To address these limitations, the proposed doctoral research will investigate the dynamic distribution of processing in a heterogeneous environment and cyclical learning in which the second-stage neural network clients modify the first-stage expert systems.
APA, Harvard, Vancouver, ISO, and other styles
19

Chen, Guoyu. "PAILAC: Power and Inference Latency Adaptive Control for Machine Learning Services." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu160608666572472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Castaño-Candamil, Sebastián, and Michael W. Tangermann (academic supervisor). "Machine learning methods for motor performance decoding in adaptive deep brain stimulation." Freiburg: Universität Freiburg, 2020. http://d-nb.info/1224808762/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Bawaskar, Neerja Pramod. "Analog Implicit Functional Testing using Supervised Machine Learning." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/2099.

Full text
Abstract:
Testing analog circuits is more difficult than testing digital circuits. The reasons for this difficulty include continuous time and amplitude signals, the lack of well-accepted testing techniques, and the time and cost required for test realization. The traditional method for testing analog circuits involves measuring all the performance parameters and comparing the measured parameters with the limits of the data-sheet specifications. Because of the large number of data-sheet specifications, test generation and application require long test times and expensive test equipment. This thesis proposes an implicit functional testing technique for analog circuits that can be easily implemented in BIST circuitry. The proposed technique does not require measuring data-sheet performance parameters. To simplify the testing, only a time domain digital input is required. For each circuit under test (CUT) a cross-covariance signature is computed from the test input and the CUT's output. The proposed method requires a training sample of the CUT to be binned to the data-sheet specifications. The binned CUT sample cross-covariance signatures are mapped with a supervised machine learning classifier. For each bin, the classifiers select unique sub-sets of the cross-covariance signature. The trained classifier is then used to bin newly manufactured copies of the CUT. The proposed technique is evaluated on synthetic data generated from Monte Carlo simulation of the nominal circuit. Results show the machine learning classifier must be chosen to match the imbalanced bin populations common in analog circuit testing. For sample sizes of 700+ and training for individual bins, classifier test escape rates ranged from 1000 DPM to 10,000 DPM.
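The signature-plus-classifier idea can be sketched as follows, with a simulated circuit response standing in for Monte Carlo data and only two bins instead of full data-sheet binning; the gain/lag response model is an assumption for illustration.

```python
# Cross-covariance signatures from a digital stimulus, then a classifier to bin circuits.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
stimulus = rng.choice([-1.0, 1.0], size=256)            # digital test input

def response(gain, lag):
    # Toy circuit-under-test response: scaled, delayed stimulus plus noise.
    return gain * np.roll(stimulus, lag) + 0.05 * rng.standard_normal(stimulus.size)

def signature(out, max_lag=16):
    # Cross-covariance of stimulus and response for lags 0..max_lag-1.
    s = stimulus - stimulus.mean()
    o = out - out.mean()
    return np.array([np.mean(s[:-k or None] * o[k:]) for k in range(max_lag)])

pass_sigs = [signature(response(1.0 + 0.05 * rng.standard_normal(), 2)) for _ in range(100)]
fail_sigs = [signature(response(0.5 + 0.05 * rng.standard_normal(), 5)) for _ in range(100)]
X = np.array(pass_sigs + fail_sigs)
y = np.array([1] * 100 + [0] * 100)                     # 1 = "pass" bin, 0 = "fail" bin

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```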
APA, Harvard, Vancouver, ISO, and other styles
22

Stackhouse, Christian Paul. "An Adaptive Rule-Based System." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276534.

Full text
Abstract:
Adaptive systems are systems whose characteristics evolve over time to improve their performance at a task. A fairly new area of study is that of adaptive rule-based systems. The system studied for this thesis uses meta-knowledge about rules, rulesets, rule performance, and system performance in order to improve its overall performance in a problem domain. An interesting and potentially important phenomenon which emerged is that the performance the system learns while solving a problem appears to be limited by an inherent break-even level of complexity. That is, the cost to the system of acquiring complexity does not exceed its benefit for that problem. If the problem is made more difficult, however, more complexity is required, the benefit of complexity becomes greater than its cost, and the system complexity begins increasing, ultimately to the new break-even point. There is no apparent ultimate limit to the complexity attainable.
APA, Harvard, Vancouver, ISO, and other styles
23

Di, Yuan. "Enhanced System Health Assessment using Adaptive Self-Learning Techniques." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1522420412871182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Olivier. "Adaptive Rules Model : Statistical Learning for Rule-Based Systems." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX037/document.

Full text
Abstract:
Business Rules (BRs) are a commonly used tool in industry for the automation of repetitive decisions. The emerging problem of adapting existing sets of BRs to an ever-changing environment is the motivation for this thesis. Existing Supervised Machine Learning techniques can be used when the adaptation is done knowing in detail the correct decision for each circumstance. However, there is currently no algorithm, theoretical or practical, which can solve this problem when the known information is statistical in nature, as is the case for a bank wishing to control the proportion of loan requests its automated decision service forwards to human experts. We study the specific learning problem where the aim is to adjust the BRs so that the decisions are close to a given average value. To do so, we consider sets of Business Rules as programs. After formalizing some definitions and notations in Chapter 2, the BR programming language defined this way is studied in Chapter 3, which proves that there exists no algorithm to learn Business Rules with a statistical goal in the general case. We then restrict the scope to two common cases where BRs are limited in some way: the Iteration Bounded case, in which no matter the input, the number of rules executed when taking the decision is less than a given bound; and the Linear Iteration Bounded case, in which rules are additionally all written in Linear form. In those two cases, we then produce a learning algorithm based on Mathematical Programming which can solve this problem. We briefly extend this theory and algorithm to other statistical-goal learning problems in Chapter 5, before presenting the experimental results of this thesis in Chapter 6. The latter includes a proof of concept automating the part of the learning algorithm which does not consist in solving a Mathematical Programming problem, as well as some experimental evidence of the computational complexity of the algorithm.
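A toy illustration of the statistical-goal idea (tuning a rule so that the average decision matches a target rate), using bisection on a single threshold rather than the Mathematical Programming formulation developed in the thesis; the risk scores and target rate are invented.

```python
# Tune one business-rule threshold so ~10% of requests are forwarded to experts.
import numpy as np

rng = np.random.default_rng(5)
risk_scores = rng.beta(2, 5, size=10000)        # synthetic per-request risk scores

def forward_rate(threshold):
    # Rule: IF risk_score > threshold THEN forward the request to a human expert.
    return float(np.mean(risk_scores > threshold))

target = 0.10
lo, hi = 0.0, 1.0
for _ in range(40):                             # bisection on the threshold
    mid = (lo + hi) / 2.0
    if forward_rate(mid) > target:
        lo = mid                                # forwarding too much -> raise threshold
    else:
        hi = mid

print("tuned threshold:", round(mid, 4), "forward rate:", forward_rate(mid))
```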
APA, Harvard, Vancouver, ISO, and other styles
25

Vartak, Aniket. "Biosignal Processing Challenges in Emotion Recognition for Adaptive Learning." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2667.

Full text
Abstract:
User-centered computer based learning is an emerging field of interdisciplinary research. Research in diverse areas such as psychology, computer science, neuroscience and signal processing is making contributions that promise to take this field to the next level. Learning systems built using contributions from these fields could be used in actual training and education instead of just as laboratory proofs-of-concept. One of the important advances in this research is the detection and assessment of the cognitive and emotional state of the learner using such systems. This capability moves development beyond the use of traditional user performance metrics to include system intelligence measures that are based on current neuroscience theories. These advances are of paramount importance to the success and widespread use of learning systems that are automated and intelligent. Emotion is considered an important aspect of how learning occurs, and yet estimating it and making adaptive adjustments are not part of most learning systems. In this research we focus on one specific aspect of constructing an adaptive and intelligent learning system, that is, estimation of the emotion of the learner as he/she is using the automated training system. The challenge starts with the definition of emotion and its utility in human life. The next challenge is to measure the co-varying factors of the emotions in a non-invasive way, and to find consistent features from these measures that are valid across a wide population. In this research we use four physiological sensors that are non-invasive, and establish a methodology for utilizing the data from these sensors using different signal processing tools. A validated set of visual stimuli used worldwide in the research of emotion and attention, called the International Affective Picture System (IAPS), is used. A dataset is collected from the sensors in an experiment designed to elicit emotions from these validated visual stimuli. We describe a novel wavelet method to calculate a hemispheric asymmetry metric using electroencephalography data. This method is tested against the typically used power spectral density method. We show an overall improvement in accuracy in classifying specific emotions using the novel method. We also show distinctions between different discrete emotions from the autonomic nervous system activity using electrocardiography, electrodermal activity and pupil diameter changes. Findings from the different features of these sensors are used to give guidelines for using each of the individual sensors in the adaptive learning environment.
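For readers who want to see what the baseline comparison involves, below is a minimal sketch of the conventional power-spectral-density route to a hemispheric asymmetry index from a left/right EEG channel pair; the alpha band, sampling rate and asymmetry formula used here are common illustrative choices, not details taken from the dissertation.
```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band=(8.0, 13.0)):
    """Average PSD of `signal` inside `band` (Hz), via Welch's estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.mean(psd[mask])

def hemispheric_asymmetry(left, right, fs):
    """One common asymmetry index: positive when the right channel carries
    more alpha-band power than the left one."""
    p_left, p_right = band_power(left, fs), band_power(right, fs)
    return (p_right - p_left) / (p_right + p_left)

# Illustrative use on synthetic data (2 s of noise at 256 Hz per channel).
fs = 256
rng = np.random.default_rng(0)
left, right = rng.standard_normal(2 * fs), rng.standard_normal(2 * fs)
print(hemispheric_asymmetry(left, right, fs))
```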
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering PhD
APA, Harvard, Vancouver, ISO, and other styles
26

Kochenderfer, Mykel J. "Adaptive modelling and planning for learning intelligent behaviour." Thesis, University of Edinburgh, 2006. http://hdl.handle.net/1842/1408.

Full text
Abstract:
An intelligent agent must be capable of using its past experience to develop an understanding of how its actions affect the world in which it is situated. Given some objective, the agent must be able to effectively use its understanding of the world to produce a plan that is robust to the uncertainty present in the world. This thesis presents a novel computational framework called the Adaptive Modelling and Planning System (AMPS) that aims to meet these requirements for intelligence. The challenge of the agent is to use its experience in the world to generate a model. In problems with large state and action spaces, the agent can generalise from limited experience by grouping together similar states and actions, effectively partitioning the state and action spaces into finite sets of regions. This process is called abstraction. Several different abstraction approaches have been proposed in the literature, but the existing algorithms have many limitations. They generally only increase resolution, require a large amount of data before changing the abstraction, do not generalise over actions, and are computationally expensive. AMPS aims to solve these problems using a new kind of approach. AMPS splits and merges existing regions in its abstraction according to a set of heuristics. The system introduces splits using a mechanism related to supervised learning and is defined in a general way, allowing AMPS to leverage a wide variety of representations. The system merges existing regions when an analysis of the current plan indicates that doing so could be useful. Because several different regions may require revision at any given time, AMPS prioritises revision to best utilise whatever computational resources are available. Changes in the abstraction lead to changes in the model, requiring changes to the plan. AMPS prioritises the planning process, and when the agent has time, it replans in high-priority regions. This thesis demonstrates the flexibility and strength of this approach in learning intelligent behaviour from limited experience.
APA, Harvard, Vancouver, ISO, and other styles
27

Mena-Yedra, Rafael. "An adaptive, fault-tolerant system for road network traffic prediction using machine learning." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669802.

Full text
Abstract:
This thesis has addressed the design and development of an integrated system for real-time traffic forecasting based on machine learning methods. Although traffic prediction has been the driving motivation for the thesis development, a great part of the proposed ideas and scientific contributions in this thesis are generic enough to be applied to any other problem where, ideally, the definition is that of the flow of information in a graph-like structure. Such application is of special interest in environments susceptible to changes in the underlying data generation process. Moreover, the modular architecture of the proposed solution facilitates the adoption of small changes to the components that allow it to be adapted to a broader range of problems. On the other hand, certain specific parts of this thesis are strongly tied to traffic flow theory. The focus in this thesis is on a macroscopic perspective of the traffic flow, where the individual road traffic flows are correlated to the underlying traffic demand. These short-term forecasts include the road network characterization in terms of the corresponding traffic measurements –traffic flow, density and/or speed–, the traffic state –whether a road is congested or not, and its severity–, and anomalous road conditions –incidents or other non-recurrent events–. The main traffic data used in this thesis comes from detectors installed along the road networks. Nevertheless, other kinds of traffic data sources could be equally suitable with the appropriate preprocessing. This thesis has been developed in the context of Aimsun Live –a simulation-based traffic solution for real-time traffic prediction developed by Aimsun–. The methods proposed here are planned to be linked to it in a mutually beneficial relationship where they cooperate and assist each other. An example is when an incident or non-recurrent event is detected with the methods proposed in this thesis: the simulation-based forecasting module can then simulate different strategies to measure their impact. Part of this thesis has also been developed in the context of the EU research project "SETA" (H2020-ICT-2015). The main motivation that has guided the development of this thesis is enhancing those weak points and limitations previously identified in Aimsun Live, for which the research found in the literature has not been especially extensive. These include: • Autonomy, both in the preparation and real-time stages. • Adaptation, to gradual or abrupt changes in traffic demand or supply. • Informativeness, about anomalous road conditions. • Forecasting accuracy, improved with respect to the previous methodology at Aimsun and a typical forecasting baseline. • Robustness, to deal with faulty or missing data in real-time. • Interpretability, adopting modelling choices towards a more transparent reasoning and understanding of the underlying data-driven decisions. • Scalability, using a modular architecture with emphasis on a parallelizable exploitation of large amounts of data. The result of this thesis is an integrated system –Adarules– for real-time forecasting which is able to make the best of the available historical data, while at the same time it also leverages the theoretically unbounded size of data in a continuously streaming scenario. This is achieved through the online learning and change detection features, along with the automatic finding and maintenance of patterns in the network graph.
In addition to the Adarules system, another result is a probabilistic model that characterizes a set of interpretable latent variables related to the traffic state, based on the traffic data provided by the sensors along with optional prior knowledge provided by the traffic expert, following a Bayesian approach. On top of this traffic state model, a probabilistic spatiotemporal model is built that learns the dynamics of the transition of traffic states in the network, and whose objectives include automatic incident detection.
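The abstract refers to online learning with change detection without naming the detector; as generic background, here is a minimal Page-Hinkley change-detection test (a standard streaming drift detector, not necessarily the one used in Adarules) written in plain Python.
```python
import random

class PageHinkley:
    """Minimal Page-Hinkley test for detecting an increase in the mean of a
    stream (e.g. of prediction errors). `delta` is the tolerated magnitude of
    change and `threshold` the alarm level."""

    def __init__(self, delta=0.005, threshold=50.0):
        self.delta = delta
        self.threshold = threshold
        self.mean = 0.0
        self.n = 0
        self.cum = 0.0       # cumulative deviation m_t
        self.cum_min = 0.0   # running minimum M_t

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return (self.cum - self.cum_min) > self.threshold  # True => drift flagged

# Illustrative use: errors jump from ~0.1 to ~1.0 halfway through the stream.
random.seed(0)
detector = PageHinkley(delta=0.01, threshold=10.0)
for t in range(2000):
    err = random.gauss(0.1 if t < 1000 else 1.0, 0.05)
    if detector.update(err):
        print("drift flagged at step", t)
        break
```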
APA, Harvard, Vancouver, ISO, and other styles
28

Tysk, Carl, and Jonathan Sundell. "Adaptive detection of anomalies in the Saab Gripen fuel tanks using machine learning." Thesis, Uppsala universitet, Signaler och system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-414208.

Full text
Abstract:
Gripen E, a fighter jet developed by Saab, has to fulfill a number of specifications and is therefore tested thoroughly. This project is about detecting anomalies in such tests and thereby improving the automation of the test data evaluation. The methodology during this project was to model the expected deviation between the measured signals and the corresponding signals from a fuel system model using machine learning methods. This methodology was applied to the mass in one of the fuel tanks. The challenge lies in the fact that the expected deviation is unknown and dependent on the operating conditions of the fuel system in the aircraft. Furthermore, two different machine learning approaches to estimate a prediction interval, within which the residual was expected to be, were tested. These were quantile regression and a variance estimation based method. The machine learning models used in this project were LSTM, Ridge Regression, Random Forest Regressor and Gradient Boosting Regressor. One of the problems encountered was imbalanced data, since different operating modes were not equally represented. Also, whether the time dependency of the signals had to be taken into account was investigated. Moreover, choosing which input signals to use for the machine learning methods had a large impact on the result. The concept appears to work well. Known anomalies were detected, and with a low degree of false alarms. The variance estimation based approach seems superior to quantile regression. For data containing anomalies, the target signal drifted away significantly outside the boundaries of the prediction interval. Such test flights were flagged for anomaly. Furthermore, the concept was also successfully verified for another fuel tank, with only minor and obvious adaptations, such as replacing the target signal with the new one.
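As a sketch of the quantile-regression route to a prediction interval mentioned above, the snippet below fits two quantile models with scikit-learn's gradient boosting and flags residuals that fall outside the resulting interval; the synthetic features and the 5%/95% quantile levels are illustrative assumptions, not the thesis's actual signals.
```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for (operating-condition features, model/measurement residual).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 3))
y = X[:, 0] + 0.5 * rng.standard_normal(2000) * (1 + np.abs(X[:, 1]))

# One model per quantile gives a 90 % prediction interval for the residual.
lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

X_new = rng.uniform(-1, 1, size=(5, 3))
lo, hi = lower.predict(X_new), upper.predict(X_new)
residual = rng.standard_normal(5)              # newly observed residuals
anomalous = (residual < lo) | (residual > hi)  # flag points outside the interval
print(anomalous)
```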
APA, Harvard, Vancouver, ISO, and other styles
29

Li, Max Hongming. "Extension on Adaptive MAC Protocol for Space Communications." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1275.

Full text
Abstract:
This work devises a novel approach for mitigating the effects of Catastrophic Forgetting in Deep Reinforcement Learning-based cognitive radio engine implementations employed in space communication applications. Previous implementations of cognitive radio space communication systems utilized a moving window-based online learning method, which discards part of its understanding of the environment each time the window is moved. This act of discarding is called Catastrophic Forgetting. This work investigated ways to control the forgetting process in a more systematic manner, both through a recursive training technique that implements forgetting in a more controlled way and through an ensemble learning technique where each member of the ensemble represents the engine's understanding over a certain period of time. Both of these techniques were integrated into a cognitive radio engine proof-of-concept and were delivered to the SDR platform on the International Space Station. The results were then compared to the results from the original proof-of-concept. Through this comparison, the ensemble learning technique showed promise when comparing performance between training techniques during different communication channel contexts.
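A heavily simplified, supervised stand-in for the ensemble idea described above is sketched below: one frozen member per time period, predictions averaged, so older knowledge is never completely discarded. The real engine is a deep reinforcement learner, so this only illustrates the "understanding over a certain period of time" structure, with hypothetical window data.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class WindowedEnsemble:
    """Keep one frozen member per time window instead of overwriting a single
    model, so knowledge from earlier windows is never fully discarded."""

    def __init__(self, max_members=5):
        self.max_members = max_members
        self.members = []

    def add_window(self, X, y):
        # Each window's data trains one member; members are never retrained.
        self.members.append(LogisticRegression().fit(X, y))
        if len(self.members) > self.max_members:
            self.members.pop(0)  # the oldest understanding is eventually retired

    def predict(self, X):
        votes = np.mean([m.predict_proba(X)[:, 1] for m in self.members], axis=0)
        return (votes >= 0.5).astype(int)

# Toy use: the binary concept changes from one window to the next.
rng = np.random.default_rng(0)
ens = WindowedEnsemble()
for window in range(3):
    X = rng.standard_normal((200, 4))
    y = (X[:, window % 4] > 0).astype(int)
    ens.add_window(X, y)
print(ens.predict(rng.standard_normal((5, 4))))
```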
APA, Harvard, Vancouver, ISO, and other styles
30

Bridle, Robert Angus. "Adaptive User Interfaces for Mobile Computing Devices." The Australian National University. College of Engineering and Computer Sciences, 2008. http://thesis.anu.edu.au./public/adt-ANU20081117.184430.

Full text
Abstract:
This thesis examines the use of adaptive user interface elements on a mobile phone and presents two adaptive user interface approaches. The approaches attempt to increase the efficiency with which a user interacts with a mobile phone, while ensuring the interface remains predictable to a user.
An adaptive user interface approach is presented that predicts the menu item a user will select. When a menu is opened, the predicted menu item is highlighted instead of the top-most menu item. The aim is to maintain the layout of the menu and to save the user from performing scrolling key presses. A machine learning approach is used to accomplish the prediction task. However, learning in the mobile phone environment produces several difficulties. These are limited availability of training examples, concept drift and limited computational resources. A novel learning approach is presented that addresses these difficulties. This learning approach addresses limited training examples and limited computational resources by employing a highly restricted hypothesis space. Furthermore, the approach addresses concept drift by determining the hypothesis that has been consistent for the longest run of training examples into the past. Under certain concept drift restrictions, an analysis of this approach shows it to be superior to approaches that use a fixed window of training examples. An experimental evaluation on data collected from several users interacting with a mobile phone was used to assess this learning approach in practice. The results of this evaluation are reported in terms of the average number of key presses saved. The benefit of menu-item prediction can clearly be seen, with savings of up to three key presses on every menu interaction.
An extension of the menu-item prediction approach is presented that removes the need to manually specify a restricted hypothesis space. The approach uses a decision-tree learner to generate hypotheses online and uses the minimum description length principle to identify the occurrence of concept shifts. The identification of concept shifts is used to guide the hypothesis generation process. The approach is compared with the original menu-item prediction approach in which hypotheses are manually specified. Experimental results using the same datasets are reported.
Another adaptive user interface approach is presented that induces shortcuts on a mobile phone interface. The approach is based on identifying shortcuts in the form of macros, which can automate a sequence of actions. A means of specifying relevant action sequences is presented, together with several learning approaches for predicting which shortcut to present to a user. A small subset of the possible shortcuts on a mobile phone was considered. This subset consisted of shortcuts that automated the actions of making a phone call or sending a text message. The results of an experimental evaluation of the shortcut prediction approaches are presented. The shortcut prediction process was evaluated in terms of predictive accuracy and stability, where stability was defined as the rate at which predicted shortcuts changed over time. The importance of stability is discussed, and is used to question the advantages of using sophisticated learning approaches for achieving adaptive user interfaces on mobile phones. Finally, several methods for combining accuracy and stability measures are presented, and the learning approaches are compared with these methods.
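The "longest consistent run" rule at the core of the first approach can be stated very compactly. The sketch below assumes a small, explicitly enumerated hypothesis space (each hypothesis maps a context to a menu item); the helper names and the toy history are illustrative, not taken from the thesis.
```python
def longest_consistent_run(hypothesis, history):
    """Number of most-recent examples (context, chosen_item) that `hypothesis`
    predicts correctly, counting backwards until its first mistake."""
    run = 0
    for context, chosen in reversed(history):
        if hypothesis(context) != chosen:
            break
        run += 1
    return run

def select_hypothesis(hypotheses, history):
    """Pick the hypothesis consistent with the longest suffix of the history,
    which adapts quickly after a concept shift."""
    return max(hypotheses, key=lambda h: longest_consistent_run(h, history))

# Toy example: predict the menu item from the current screen context.
history = [("inbox", "reply"), ("inbox", "reply"), ("contacts", "call"),
           ("inbox", "forward"), ("inbox", "forward")]
hypotheses = [lambda c: "reply", lambda c: "forward", lambda c: "call"]
best = select_hypothesis(hypotheses, history)
print(best("inbox"))   # -> "forward", consistent with the most recent examples
```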
APA, Harvard, Vancouver, ISO, and other styles
31

Bridle, Robert Angus. "Adaptive user interfaces for mobile computing devices /." View thesis entry in Australian Digital Theses Program, 2008. http://thesis.anu.edu.au/public/adt-ANU20081117.184430/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Lesh, Neal. "Scalable and adaptive goal recognition /." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/6897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Wright, Hamish Michael. "A Homogeneous Hierarchical Scripted Vector Classification Network with Optimisation by Genetic Algorithm." Thesis, University of Canterbury. Electrical and Computer Engineering, 2007. http://hdl.handle.net/10092/1191.

Full text
Abstract:
A simulated learning hierarchical architecture for vector classification is presented. The hierarchy used homogeneous scripted classifiers, maintaining similarity tables, and self-organising maps for the input. The scripted classifiers produced output and guided learning with permutable script instruction tables. A large space of parametrised script instructions was created, from which many different combinations could be implemented. The parameter space for the script instruction tables was tuned using a genetic algorithm with the goal of optimising the network's ability to predict class labels for bit pattern inputs. The classification system, known as Dura, was presented with various visual classification problems, such as: detecting overlapping lines, locating objects, or counting polygons. The network was trained with a random subset from the input space, and was then tested over a uniformly sampled subset. The results showed that Dura could successfully classify these and other problems. The optimal scripts and parameters were analysed, allowing inferences about which scripted operations were important, and what roles they played in the learning classification system. Further investigations were undertaken to determine Dura's performance in the presence of noise, as well as the robustness of the solutions when faced with highly stochastic training sequences. It was also shown that robustness and noise tolerance in solutions could be improved through certain adjustments to the algorithm. These adjustments led to different solutions which could be compared to determine what changes were responsible for the increased robustness or noise immunity. The behaviour of the genetic algorithm tuning the network was also analysed, leading to the development of a super solutions cache, as well as improvements in: convergence, fitness function, and simulation duration. The entire network was simulated using a program written in C++ using FLTK libraries for the graphical user interface.
APA, Harvard, Vancouver, ISO, and other styles
34

Erdogmus, Deniz. "Information theoretic learning: Renyi's entropy and its applications to adaptive system training." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1000122.

Full text
Abstract:
Thesis (Ph. D.)--University of Florida, 2002.
Title from title page of source document. Document formatted into pages; contains xv, 217 p.; also contains graphics. Includes vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
35

Stone, Erik E. "Adaptive temporal difference learning of spatial memory in the water maze task." Diss., Columbia, Mo. : University of Missouri--Columbia, 2009. http://hdl.handle.net/10355/6586.

Full text
Abstract:
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed on January 22, 2010). Thesis advisor: Dr. Marjorie Skubic. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
36

Vong, Chi Man. "Integrated machine learning techniques with application to adaptive decision support system for automotive engineering." Thesis, University of Macau, 2005. http://umaclib3.umac.mo/record=b1637079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Booth, Ash. "Automated algorithmic trading : machine learning and agent-based modelling in complex adaptive financial markets." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/397453/.

Full text
Abstract:
Over the last three decades, most of the world's stock exchanges have transitioned to electronic trading through limit order books, creating a need for a new set of models for understanding these markets. In this thesis, a number of models are described which provide insight into the dynamics of modern financial markets as well as providing a platform for optimising trading and regulatory decisions. The first part of this thesis proposes an autonomous system that uses novel machine learning techniques to predict the price return over well documented seasonal events and uses these predictions to develop a profitable trading strategy. The DAX, FTSE 100 and S&P 500 are explored for the presence of seasonality events before an automated trading system based on performance weighted ensembles of random forests is introduced and shown to improve the profitability and stability of trading such events. The performance of the models is analysed using a large sample of stocks and the results show that the system described in this section produces superior results in terms of both profitability and prediction accuracy compared with other ensemble techniques. The second part of this thesis explores price impact. For many players in financial markets, the price impact of their trading activity represents a large proportion of their transaction costs. This section of the thesis proposes an adaptation of the system introduced in the first part for predicting the price impact of order book events. The system's performance is benchmarked using ensembles of other popular regression algorithms including: linear regression, neural networks and support vector regression, using depth-of-book data from the BATS Chi-X exchange. The results show that recency weighted ensembles of random forests produce over 15% greater prediction accuracy on out-of-sample data, for 5 out of 6 timeframes studied, compared with all benchmarks. Finally, a novel procedure for extracting the directional effects of features is proposed and used to explore the features most dominant in the price formation process. The final part of this thesis addresses the requirement for testing algorithmic trading strategies laid out in the Markets in Financial Instruments Directive (MiFID) II by describing an agent-based simulation. Five types of agent operate in a limit order market, producing a model that is able to reproduce a number of stylised market properties including: clustered volatility, autocorrelation of returns, long memory in order flow, concave price impact and the presence of extreme price events. The model is found to be insensitive to reasonable parameter variations. Finally, the model is used to explore how trading strategy affects the implementation shortfall of trading a large order. A number of execution strategies, with various order types, are evolved and evaluated in the agent-based market. It is shown that the evolved strategies significantly outperform the simple, well known strategies, suggesting that execution strategy plays an important role in determining the implementation shortfall of trading large orders.
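A minimal sketch of a performance-weighted ensemble of random forests in the spirit described above is given below; the batch sizes, the inverse-error weighting rule and the synthetic data are illustrative assumptions rather than the thesis's actual design.
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class WeightedForestEnsemble:
    """Ensemble of forests trained on successive data batches; each member's
    vote is weighted by the inverse of its error on the most recent batch."""

    def __init__(self):
        self.members, self.weights = [], []

    def add_member(self, X, y):
        self.members.append(RandomForestRegressor(n_estimators=50).fit(X, y))
        self.weights.append(1.0)

    def reweight(self, X_recent, y_recent):
        # Recency weighting: members that do well on recent data dominate.
        errors = [np.mean((m.predict(X_recent) - y_recent) ** 2) for m in self.members]
        inv = 1.0 / (np.array(errors) + 1e-12)
        self.weights = list(inv / inv.sum())

    def predict(self, X):
        preds = np.array([m.predict(X) for m in self.members])
        return np.average(preds, axis=0, weights=self.weights)

# Toy use: the target's slope drifts from batch to batch.
rng = np.random.default_rng(0)
ens = WeightedForestEnsemble()
for batch in range(3):
    X = rng.standard_normal((300, 5))
    y = X[:, 0] * (batch + 1) + 0.1 * rng.standard_normal(300)
    ens.add_member(X, y)
X_recent = rng.standard_normal((100, 5))
ens.reweight(X_recent, X_recent[:, 0] * 3)
print(ens.predict(X_recent[:3]))
```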
APA, Harvard, Vancouver, ISO, and other styles
38

Bai, Bing. "A Study of Adaptive Random Features Models in Machine Learning based on Metropolis Sampling." Thesis, KTH, Numerisk analys, NA, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293323.

Full text
Abstract:
An artificial neural network (ANN) is a machine learning approach where parameters, i.e., frequency parameters and amplitude parameters, are learnt during the training process. The random features model is a special case of ANN: its structure is the same as an ANN's, but the parameters' learning processes are different. For the random features model, the amplitude parameters are learnt during the training process, whereas the frequency parameters are sampled from some distribution. If the frequency distribution of the random features model is well chosen, both models can approximate data well. Adaptive random Fourier features with Metropolis sampling is an enhanced random Fourier features model which can select an appropriate frequency distribution adaptively. This thesis studies Rectified Linear Unit and sigmoid features and combines them with the adaptive idea to generate another two adaptive random features models. The results show that, using the particular set of hyper-parameters, the adaptive random Rectified Linear Unit features model can also approximate the data relatively well, though the adaptive random Fourier features model performs slightly better.
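To make the distinction between an ANN and a random features model concrete, here is a minimal random ReLU features regressor: the frequencies are sampled once (here from a fixed Gaussian, not the adaptive Metropolis scheme studied in the thesis) and only the amplitudes are fitted, by least squares. The scale and bias ranges are illustrative choices.
```python
import numpy as np

def fit_random_relu_features(X, y, n_features=200, scale=1.0, seed=0):
    """Sample frequencies (omega, b) once, then solve a linear least-squares
    problem for the amplitudes of phi(x) = relu(X @ omega + b)."""
    rng = np.random.default_rng(seed)
    omega = scale * rng.standard_normal((X.shape[1], n_features))
    b = rng.uniform(-np.pi, np.pi, n_features)
    phi = np.maximum(X @ omega + b, 0.0)              # ReLU features
    amp, *_ = np.linalg.lstsq(phi, y, rcond=None)     # only amplitudes are learnt
    return omega, b, amp

def predict(X, omega, b, amp):
    return np.maximum(X @ omega + b, 0.0) @ amp

# Toy check on a smooth 1-D target.
X = np.linspace(-1, 1, 400).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
params = fit_random_relu_features(X, y)
print(np.mean((predict(X, *params) - y) ** 2))        # small training error
```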
APA, Harvard, Vancouver, ISO, and other styles
39

Reichstaller, Andre [Verfasser], and Alexander [Akademischer Betreuer] Knapp. "Machine Learning-based Test Strategies for Self-Adaptive Systems / Andre Reichstaller ; Betreuer: Alexander Knapp." Augsburg : Universität Augsburg, 2020. http://d-nb.info/1225683254/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lacaze, Sylvain. "Active Machine Learning for Computational Design and Analysis under Uncertainties." Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/556446.

Full text
Abstract:
Computational design has become a predominant element of various engineering tasks. However, the ever increasing complexity of numerical models creates the need for efficient methodologies. Specifically, computational design under uncertainties remains sparsely used in engineering settings due to its computational cost. This dissertation proposes a coherent framework for various branches of computational design under uncertainties, including model update, reliability assessment and reliability-based design optimization. Through the use of machine learning techniques, computationally inexpensive approximations of the constraints, limit states, and objective functions are constructed. Specifically, a novel adaptive sampling strategy allowing for the refinement of any approximation only in relevant regions has been developed, referred to as generalized max-min. This technique presents various computational advantages such as ease of parallelization and applicability to any metamodel. Three approaches tailored for computational design under uncertainties are derived from the previous approximation technique. An algorithm for reliability assessment is proposed and its efficiency is demonstrated for different probabilistic settings including dependent variables using copulas. Additionally, the notion of fidelity map is introduced for model update settings with large number of dependent responses to be matched. Finally, a new reliability-based design optimization method with local refinement has been developed. A derivation of sampling-based probability of failure derivatives is also provided along with a discussion on numerical estimates. This derivation brings additional flexibility to the field of computational design. The knowledge acquired and techniques developed during this Ph.D. have been synthesized in an object-oriented MATLAB toolbox. The help and ergonomics of the toolbox have been designed so as to be accessible by a large audience.
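As background for the max-min idea (the plain version, not the generalized variant developed in the dissertation), the sketch below picks, from a candidate pool, the point that maximizes the minimum distance to the samples already evaluated — the basic mechanism on which restrictions to relevant regions can be layered. The candidate pool and dimensions are illustrative.
```python
import numpy as np

def max_min_candidate(existing, candidates):
    """Return the candidate point farthest, in the max-min sense, from the set
    of already evaluated samples. `existing`: (n, d), `candidates`: (m, d)."""
    dists = np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=2)
    return candidates[np.argmax(dists.min(axis=1))]

# Toy use: choose where to refine a 2-D design of experiments next.
rng = np.random.default_rng(0)
existing = rng.uniform(0, 1, (10, 2))
candidates = rng.uniform(0, 1, (500, 2))
print(max_min_candidate(existing, candidates))
```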
APA, Harvard, Vancouver, ISO, and other styles
41

Pérez, Culubret Adrià 1993. "Learning how to simulate : Applying machine learning methods to improve molecular dynamics simulations." Doctoral thesis, TDX (Tesis Doctorals en Xarxa), 2022. http://hdl.handle.net/10803/673392.

Full text
Abstract:
Characterizing protein dynamics is essential in order to understand the connection between sequence and function. Molecular dynamics simulation is one of the main techniques for studying protein dynamics, owing to its ability to capture dynamic processes computationally across different timescales with atomic resolution. However, there are limitations that prevent molecular dynamics simulation from becoming a surrogate model of real protein dynamics, mainly sampling limitations and the inaccuracy of the force fields used. In this doctoral thesis we address these limitations by means of the latest advances in machine learning. In the first part of the thesis, we develop a new adaptive sampling algorithm inspired by reinforcement learning methods, which we apply to reconstruct the binding between a disordered protein and its binding partner. In the second part of the thesis, we develop TorchMD, a deep learning library for molecular dynamics simulations, which we apply to learn a coarse-grained potential for protein folding simulations.
APA, Harvard, Vancouver, ISO, and other styles
42

Joe-Yen, Stefan. "Performance Envelopes of Adaptive Ensemble Data Stream Classifiers." NSUWorks, 2017. http://nsuworks.nova.edu/gscis_etd/1014.

Full text
Abstract:
This dissertation documents a study of the performance characteristics of algorithms designed to mitigate the effects of concept drift on online machine learning. Several supervised binary classifiers were evaluated on their performance when applied to an input data stream with a non-stationary class distribution. The selected classifiers included ensembles that combine the contributions of their member algorithms to improve overall performance. These ensembles adapt to changing class definitions, known as “concept drift,” often present in real-world situations, by adjusting the relative contributions of their members. Three stream classification algorithms and three adaptive ensemble algorithms were compared to determine the capabilities of each in terms of accuracy and throughput. For each run of the experiment, the percentage of correct classifications was measured using prequential analysis, a well-established methodology in the evaluation of streaming classifiers. Throughput was measured in classifications performed per second as timed by the CPU clock. Two main experimental variables were manipulated to investigate and compare the range of accuracy and throughput exhibited by each algorithm under various conditions. The number of attributes in the instances to be classified and the speed at which the definitions of labeled data drifted were varied across six total combinations of drift speed and dimensionality. The implications of the results are used to recommend improved methods for working with stream-based data sources. The typical approach to counteract concept drift is to update the classification models with new data. In the stream paradigm, classifiers are continuously exposed to new data that may serve as representative examples of the current situation. However, updating the ensemble classifier in order to maintain or improve accuracy can be computationally costly and will negatively impact throughput. In a real-time system, this could lead to an unacceptable slow-down. The results of this research showed that, among several algorithms for reducing the effect of concept drift, adaptive decision trees maintained the highest accuracy without slowing down with respect to the no-drift condition. Adaptive ensemble techniques were also able to maintain reasonable accuracy in the presence of drift without much change in throughput. However, the overall throughput of the adaptive methods is low and may be unacceptable for extremely time-sensitive applications. The performance visualization methodology utilized in this study gives a clear and intuitive visual summary that allows system designers to evaluate candidate algorithms with respect to their performance needs.
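Prequential ("test-then-train") evaluation, the methodology referred to above, reduces to a simple loop; the sketch below uses a scikit-learn incremental classifier as a stand-in for the stream learners studied, with a toy stream containing one abrupt concept change.
```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def prequential_accuracy(stream, classes):
    """`stream` yields (x, y) pairs; each instance is first used for testing,
    then immediately for incremental training."""
    clf = SGDClassifier()
    correct, seen = 0, 0
    for x, y in stream:
        x = np.asarray(x).reshape(1, -1)
        if seen > 0:                          # cannot test before any training
            correct += int(clf.predict(x)[0] == y)
        clf.partial_fit(x, [y], classes=classes)
        seen += 1
    return correct / max(seen - 1, 1)

# Toy stream with an abrupt concept change halfway through.
rng = np.random.default_rng(0)
def stream():
    for t in range(2000):
        x = rng.standard_normal(2)
        y = int(x[0] > 0) if t < 1000 else int(x[0] < 0)   # concept drift
        yield x, y

print(prequential_accuracy(stream(), classes=[0, 1]))
```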
APA, Harvard, Vancouver, ISO, and other styles
43

Jordaan, Edzard Adolf Biermann. "Intelligent elevator control based on adaptive learning and optimisation." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/95999.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: Machine learning techniques have been around for a few decades now and are being established as a predominant feature in most control applications. Elevators create a unique control application where traffic flow is controlled and directed according to certain control philosophies. Machine learning techniques can be implemented to predict and control every possible traffic flow scenario and deliver the best possible solution. Various techniques will be implemented in the elevator application in an attempt to establish a degree of artificial intelligence in the decision making process and to enable increased interaction with the passengers at all times. The primary objective of this thesis is to investigate the potential of machine learning solutions and the relevancy of such technologies in elevator control applications. The aim is to establish how the research field of machine learning, specifically neural network science, can be successfully utilised with the goal of creating an artificial intelligent (AI) controller. The AI controller is to adapt to its existing state and change its control parameters as required without the intervention of the user. The secondary objective of this thesis is to develop an elevator model that represents every aspect of the real-world application. The purpose of the model is to improve the accuracy of existing theoretical and simulated models, by taking previously unknown and complex variables and constraints into account. The aim is to create a complete and fully functional testing platform for developing new elevator control philosophies and testing new elevator control mechanisms. To achieve these objectives, the main focus is directed at how waiting time, probability theory and power consumption predictions can be optimally utilised by means of machine learning solutions. The theoretical background is provided for these concepts and how each subject can potentially influence the decision making process. The reason why this approach has been difficult to implement in the past is possibly mainly the lack of adequate representation for these concepts in an online environment without the continuous feedback from an Expert System. As a result of this thesis, the respective online models for each of these concepts were successfully developed in order to deal with the identified shortcomings. The developed online models for projected waiting times, probability networks and power consumption feedback were then combined to form a new Intelligent Elevator Controller (IEC) structure, as opposed to the Expert System approach mostly used in present computer-based elevator controllers.
APA, Harvard, Vancouver, ISO, and other styles
44

Maus, Rickard, and Mattias Arvidsson. "Predicting Parameters of Adaptive Integrate-and-Fire Models through Machine Learning with Gramian Angular Fields." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301737.

Full text
Abstract:
In the field of neuroscience, simulation of neurons and neuronal networks is often of great interest. Before neuron models can be used, they require tuning of several parameters to properly replicate characteristics of a given neuron type. There are several methods to do this tuning of parameters, but they have one common issue: they are computationally expensive. In an effort to reduce the computational cost, we propose in this study the application of Convolutional Neural Networks with Gramian Angular Fields of voltage trace data to the task of parameter optimization through regression. Training and evaluating the network on simulated data from the AdEx model in NEST, we found that Convolutional Neural Networks in conjunction with Gramian Angular Fields work exceptionally well on synthetic data, being able to predict all but one parameter with almost all reproduced traces within acceptable error ranges. The method shows great promise. However, this study was based purely on synthetic data. Future work on experimental data is therefore necessary to examine the method's full capability.
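The Gramian Angular Field transform has a short closed form; below is a minimal numpy sketch of the summation variant (GASF), with a toy sine wave standing in for the simulated voltage traces.
```python
import numpy as np

def gramian_angular_summation_field(series):
    """Rescale the series to [-1, 1], map each value to an angle via arccos,
    and build the matrix GASF[i, j] = cos(phi_i + phi_j)."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1.0   # rescale to [-1, 1]
    x = np.clip(x, -1.0, 1.0)
    phi = np.arccos(x)
    return np.cos(phi[:, None] + phi[None, :])

# Toy voltage trace: the resulting 100x100 image is the kind of input a CNN sees.
t = np.linspace(0, 1, 100)
image = gramian_angular_summation_field(np.sin(2 * np.pi * 5 * t))
print(image.shape)
```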
APA, Harvard, Vancouver, ISO, and other styles
45

Buttar, Sarpreet Singh. "Applying Machine Learning to Reduce the Adaptation Space in Self-Adaptive Systems : an exploratory work." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-77201.

Full text
Abstract:
Self-adaptive systems are capable of autonomously adjusting their behavior at runtime to accomplish particular adaptation goals. The most common way to realize self-adaptation is using a feedback loop which contains four actions: collect runtime data from the system and its environment, analyze the collected data, decide if an adaptation plan is required, and act according to the adaptation plan to achieve the adaptation goals. Existing approaches achieve the adaptation goals by using formal methods and exhaustively verifying all the available adaptation options, i.e., the adaptation space. However, verifying the entire adaptation space is often not feasible since it requires time and resources. In this thesis, we present an approach which uses machine learning to reduce the adaptation space in self-adaptive systems. The approach integrates with the feedback loop and selects a subset of the adaptation options that are valid in the current situation. The approach is applied to the simulator of a self-adaptive Internet of Things application which is deployed in KU Leuven, Belgium. We compare our results with a formal model based self-adaptation approach called ActivFORMS. The results show that on average the adaptation space is reduced by 81.2% and the adaptation time by 85% compared to ActivFORMS, while achieving the same quality guarantees.
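A minimal sketch of the "filter first, verify the survivors" idea follows: a learned classifier scores each adaptation option for the current situation and only the options predicted to satisfy the goals would be passed on to the expensive verification step. The features, threshold and logged training data are illustrative assumptions, not the approach's actual implementation.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def reduce_adaptation_space(model, situation, options, keep_prob=0.5):
    """Return the subset of adaptation options the model deems likely to meet
    the adaptation goals in the current situation."""
    features = np.array([np.concatenate([situation, opt]) for opt in options])
    prob_valid = model.predict_proba(features)[:, 1]
    return [opt for opt, p in zip(options, prob_valid) if p >= keep_prob]

# Offline training on logged (situation, option, goal-satisfied) triples.
rng = np.random.default_rng(0)
X_log = rng.uniform(size=(1000, 6))          # 3 situation features + 3 option knobs
y_log = (X_log[:, 0] + X_log[:, 3] > 1.0).astype(int)
model = RandomForestClassifier(n_estimators=100).fit(X_log, y_log)

# At runtime, only the surviving options go on to full verification.
options = rng.uniform(size=(50, 3))
print(len(reduce_adaptation_space(model, rng.uniform(size=3), options)))
```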
APA, Harvard, Vancouver, ISO, and other styles
46

Souriau, Rémi. "machine learning for modeling dynamic stochastic systems : application to adaptive control on deep-brain stimulation." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG004.

Full text
Abstract:
Recent years have been marked by the emergence of large databases in many fields, such as health. The creation of these databases paves the way to new applications. The properties of the data are sometimes complex (non-linearity, dynamics, high dimensionality) and call for powerful machine learning models. Among existing machine learning models, artificial neural networks have enjoyed great success over the last decades. The success of these models lies in the non-linear behaviour of neurons, the use of latent units and their flexibility, which allows them to adapt to many different problems. The Boltzmann machines presented in this thesis are a family of generative neural networks. Introduced by Hinton in the 1980s, this family attracted great interest at the beginning of the 21st century and new extensions are regularly proposed. This thesis is divided into two parts: an exploratory part on the family of Boltzmann machines and an applicative part. The application studied is the unsupervised learning of intracranial electroencephalogram signals in rats with Parkinson's disease, for the control of the symptoms of the disease. Boltzmann machines gave birth to diffusion networks, which are also generative models, based on learning a stochastic differential equation for dynamic and stochastic data. This model receives particular attention in this thesis and a new training algorithm is proposed. Its use is then tested on toy data as well as on a real database.
APA, Harvard, Vancouver, ISO, and other styles
47

Hartness, Ken T. N. "Adaptive Planning and Prediction in Agent-Supported Distributed Collaboration." Thesis, University of North Texas, 2004. https://digital.library.unt.edu/ark:/67531/metadc4702/.

Full text
Abstract:
Agents that act as user assistants will become invaluable as the number of information sources continues to proliferate. Such agents can support the work of users by learning to automate time-consuming tasks and filter information to manageable levels. Although considerable advances have been made in this area, it remains a fertile area for further development. One application of agents under careful scrutiny is the automated negotiation of conflicts between different users' needs and desires. Many techniques require explicit user models in order to function. This dissertation explores a technique for dynamically constructing user models and the impact of using them to anticipate the need for negotiation. Negotiation is reduced by including an advising aspect in the agent that can use this anticipation of conflict to adjust user behavior.
APA, Harvard, Vancouver, ISO, and other styles
48

Mancini, Riccardo. "Optimizing cardboard-blank picking in a packaging machine by using Reinforcement Learning algorithms." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
Artificial Intelligence (AI) has been one of the most promising research topics for years, and the world of industrial process control is beginning to approach the possibilities offered by so-called Machine Learning. Problems without a model, with multiple degrees of freedom and difficult to interpret, where traditional control technologies lose efficiency, seem like the ideal benchmark for AI. This thesis focuses on a coffee-capsule packaging machine currently being optimized at the Research and Innovation department of the IMA S.p.A. company. In particular, the analysis concerns the apparatus responsible for picking the cardboard blanks that will be subsequently and progressively formed to envelop the capsules. The success of this first operation depends on various controllable parameters and as many disturbances. The relationship between the former and the latter is not easily identifiable, and its understanding has so far been entrusted to the experiential knowledge of operators called to intervene in the event of incorrect picking cycles. This thesis aims to achieve adaptive control using Machine Learning algorithms, specifically those in the branch of Reinforcement Learning, to identify and autonomously perform the correction of the parameters that best avoids missing or incorrect picking cycles, which affect the productivity of the production system and the quality of the final product. After a discussion of the theoretical foundations and the state of the art of RL, the case study is introduced and adapted to the RL framework. The subsequent choice of the best training modality is then followed by a description of the implementation steps, and the results obtained online during the tests of the controller are finally presented.
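As an illustration of the kind of Reinforcement Learning machinery involved (not the controller actually developed in the thesis), here is a minimal tabular Q-learning agent that chooses discrete corrections of a picking parameter; the states, actions and reward are hypothetical placeholders.
```python
import random
from collections import defaultdict

class QLearningCorrector:
    """Tabular Q-learning: states are discretised machine conditions, actions
    are parameter corrections (e.g. -1, 0, +1 step on a timing offset)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:                 # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Sketch of one interaction: reward +1 for a correct picking cycle, -1 otherwise.
agent = QLearningCorrector(actions=[-1, 0, +1])
state = "low_vacuum"
action = agent.act(state)
agent.update(state, action, reward=-1, next_state="low_vacuum")
```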
APA, Harvard, Vancouver, ISO, and other styles
49

Kaylani, Assem. "AN ADAPTIVE MULTIOBJECTIVE EVOLUTIONARY APPROACH TO OPTIMIZE ARTMAP NEURAL NETWORKS." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2538.

Full text
Abstract:
This dissertation deals with the evolutionary optimization of ART neural network architectures. ART (adaptive resonance theory) was introduced by Grossberg in 1976. In the last 20 years (1987-2007) a number of ART neural network architectures were introduced into the literature (Fuzzy ARTMAP (1992), Gaussian ARTMAP (1996 and 1997) and Ellipsoidal ARTMAP (2001)). In this dissertation, we focus on the evolutionary optimization of ART neural network architectures with the intent of optimizing the size and the generalization performance of the ART neural network. A number of researchers have focused on the evolutionary optimization of neural networks, but no research had been performed on the evolutionary optimization of ART neural networks prior to 2006, when Daraiseh used evolutionary techniques for the optimization of ART structures. This dissertation extends in many ways and expands in different directions the evolution of ART architectures, in that it: (a) uses a multi-objective optimization of ART structures, thus providing the user with multiple solutions (ART networks) with varying degrees of merit, instead of a single solution, (b) uses GA parameters that are adaptively determined throughout the ART evolution, (c) identifies a proper size of the validation set used to calculate the fitness function needed for ART's evolution, thus speeding up the evolutionary process, and (d) produces experimental results that demonstrate the evolved ART's effectiveness (good accuracy and small size) and efficiency (speed) compared with other competitive ART structures, as well as other classifiers (CART (Classification and Regression Trees) and SVM (Support Vector Machines)). The overall methodology to evolve ART using a multi-objective approach, the chromosome representation of an ART neural network, the genetic operators used in ART's evolution, and the automatic adaptation of some of the GA parameters in ART's evolution could also be applied in the evolution of other exemplar-based neural network classifiers such as the probabilistic neural network and the radial basis function neural network.
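The multi-objective selection at the heart of this approach can be illustrated with a short Pareto-dominance filter over (classification error, network size) pairs; this is generic multi-objective machinery, not the dissertation's full adaptive genetic algorithm, and the toy population below is invented for illustration.
```python
def dominates(a, b):
    """True if candidate `a` is at least as good as `b` in every objective and
    strictly better in at least one (both objectives are minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated candidates; each candidate is (error, size)."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Toy population of evolved ART networks described by (error rate, #categories).
population = [(0.10, 40), (0.08, 55), (0.12, 30), (0.08, 60), (0.15, 25)]
print(pareto_front(population))   # the (0.08, 60) network is dominated by (0.08, 55)
```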
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Engineering PhD
APA, Harvard, Vancouver, ISO, and other styles
50

Salem, Maher [Verfasser]. "Adaptive Real-time Anomaly-based Intrusion Detection using Data Mining and Machine Learning Techniques / Maher Salem." Kassel : Universitätsbibliothek Kassel, 2014. http://d-nb.info/1060417847/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
