
Dissertations / Theses on the topic 'Machine Learning Informé'



Consult the top 20 dissertations / theses for your research on the topic 'Machine Learning Informé.'




1

Guimbaud, Jean-Baptiste. "Enhancing Environmental Risk Scores with Informed Machine Learning and Explainable AI." Electronic Thesis or Diss., Lyon 1, 2024. http://www.theses.fr/2024LYO10188.

Full text
Abstract:
From conception onward, environmental factors such as air quality or dietary habits can significantly impact the risk of developing various chronic diseases. Within the epidemiological literature, indicators known as Environmental Risk Scores (ERSs) are used not only to identify individuals at risk but also to study the relationships between environmental factors and health. A limit of most ERSs is that they are expressed as linear combinations of a limited number of factors. This doctoral thesis aims to develop ERS indicators able to investigate nonlinear relationships and interactions across a broad range of exposures while discovering actionable factors to guide preventive measures and interventions, both in adults and children. To achieve this aim, we leverage the predictive abilities of non-parametric machine learning methods, combined with recent Explainable AI tools and existing domain knowledge. In the first part of this thesis, we compute machine learning-based environmental risk scores for mental, cardiometabolic, and respiratory general health for children. On top of identifying nonlinear relationships and exposure-exposure interactions, we identified new predictors of disease in childhood. The scores could explain a significant proportion of variance and their performances were stable across different cohorts. In the second part, we propose SEANN, a new approach integrating expert knowledge in the form of Pooled Effect Sizes (PESs) into the training of deep neural networks for the computation of informed environmental risk scores. SEANN aims to compute more robust ERSs, generalizable to a broader population, and able to capture exposure relationships that are closer to evidence known from the literature.
We experimentally illustrate the approach's benefits using synthetic data, showing improved prediction generalizability in noisy contexts (i.e., observational settings) and improved reliability of interpretations obtained with Explainable Artificial Intelligence (XAI) methods, compared to an agnostic neural network. In the last part of this thesis, we propose a concrete application of SEANN using data from a cohort of Spanish adults. Compared to an agnostic neural network-based ERS, the score obtained with SEANN captures relationships more in line with literature-based associations without deteriorating predictive performance. Moreover, for exposures with poor literature coverage, the captured associations differ significantly from those obtained with the agnostic baseline, with more plausible directions of association. In conclusion, our risk scores demonstrate substantial potential for the data-driven discovery of unknown nonlinear environmental health relationships by leveraging existing knowledge about well-known relationships. Beyond their utility in epidemiological research, our risk indicators are able to capture holistic, individual-level, non-hereditary risk associations that can inform practitioners about actionable factors in high-risk individuals. As prevention in post-genetic personalized medicine will focus more and more on modifiable factors, we believe that such approaches will be instrumental in shaping future healthcare paradigms.
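The knowledge-injection principle behind SEANN can be sketched in a much simpler setting: add to the usual data-fit loss a penalty that anchors the learned effect of each literature-covered exposure to its pooled effect size. The snippet below does this for a plain linear model on invented data; `pes`, `known`, and `lam` are illustrative names, not the thesis's notation, and the actual method applies the idea inside a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented cohort: 3 exposures, outcome driven by known "true" effects
n = 500
X = rng.normal(size=(n, 3))
true_w = np.array([0.8, -0.5, 0.3])
y = X @ true_w + rng.normal(scale=1.0, size=n)

pes = np.array([0.8, -0.5])   # pooled effect sizes reported in the literature
known = np.array([0, 1])      # exposures with literature coverage
lam = 1.0                     # weight of the knowledge-consistency penalty

# Loss = MSE(data) + lam * ||w[known] - pes||^2, minimized by gradient descent
w = np.zeros(3)
lr = 0.01
for _ in range(2000):
    grad = 2 * X.T @ (X @ w - y) / n           # gradient of the data-fit term
    grad[known] += 2 * lam * (w[known] - pes)  # gradient of the knowledge term
    w -= lr * grad
```

With clean, consistent data the penalty barely moves the estimates; its value shows up when data are noisy or confounded, pulling covered exposures toward their pooled effects while leaving uncovered ones data-driven.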
APA, Harvard, Vancouver, ISO, and other styles
2

Mack, Jonas. "Physics Informed Machine Learning of Nonlinear Partial Differential Equations." Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-441275.

3

Leung, Jason W. "Application of machine learning : automated trading informed by event driven data." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105982.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Models of stock price prediction have traditionally used technical indicators alone to generate trading signals. In this paper, we build trading strategies by applying machine-learning techniques to both technical analysis indicators and market sentiment data. The resulting prediction models can be employed as an artificial trader used to trade on any given stock exchange. The performance of the model is evaluated using the S&P 500 index.
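As a toy illustration of the technical-indicator side of such a strategy (market-sentiment features would enter simply as extra model inputs), here is a classic moving-average crossover signal; the data and parameters are invented, and this is not the model from the thesis.

```python
import numpy as np

def sma(prices, window):
    """Trailing simple moving average."""
    return np.convolve(prices, np.ones(window) / window, mode="valid")

def crossover_signal(prices, fast=5, slow=20):
    """+1 (go long) where the fast SMA is above the slow SMA, else -1."""
    fast_avg = sma(prices, fast)
    slow_avg = sma(prices, slow)
    fast_avg = fast_avg[len(fast_avg) - len(slow_avg):]  # align end dates
    return np.where(fast_avg > slow_avg, 1, -1)

rng = np.random.default_rng(1)
prices = np.linspace(100, 120, 60) + rng.normal(0, 0.1, 60)  # rising series
sig = crossover_signal(prices)
print(sig[-1])  # +1 on a sustained uptrend
```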
4

Wu, Jinlong. "Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/85129.

Abstract:
Reynolds-Averaged Navier-Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress needs closure models and the existing models have large model-form uncertainties. Therefore, the RANS simulations are known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessments. Recently, data-driven methods have emerged as a promising alternative to develop the model of Reynolds stress for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of RANS modeled Reynolds stress by leveraging online sparse measurement data with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high-fidelity simulation databases. Numerical results show that the data-driven RANS models have better prediction of Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence for the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations have some unresolved physics to be modeled.
Ph. D.
5

Reichert, Nils. "CORRELATION BETWEEN COMPUTER RECOGNIZED FACIAL EMOTIONS AND INFORMED EMOTIONS DURING A CASINO COMPUTER GAME." Thesis, Fredericton: University of New Brunswick, 2012. http://hdl.handle.net/1882/44596.

Abstract:
Emotions play an important role in everyday communication. Different methods allow computers to recognize emotions. Most are trained with acted emotions, and it is unknown whether such a model would work for recognizing naturally occurring emotions. An experiment was set up to estimate the recognition accuracy of the emotion recognition software SHORE, which could detect the emotions angry, happy, sad, and surprised. Subjects played a casino game while being recorded. The software's recognition was correlated with the recognition of ten human observers. The results showed strong recognition for happy, medium recognition for surprised, and weak recognition for sad and angry faces. In addition, questionnaires containing self-informed emotions were compared with the computer recognition, but only weak correlations were found. SHORE was able to recognize emotions almost as well as humans did, but when humans had trouble recognizing an emotion, the software's accuracy was much lower.
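The agreement analysis described here reduces to correlating two rating vectors, one from the software and one from the human observers. A minimal sketch, with invented per-clip recognition rates:

```python
import numpy as np

# Hypothetical per-clip recognition rates: software vs. mean of ten observers
software = np.array([0.9, 0.7, 0.4, 0.8, 0.3, 0.6])
humans = np.array([0.95, 0.75, 0.5, 0.85, 0.35, 0.55])

# Pearson correlation coefficient between the two raters
r = np.corrcoef(software, humans)[0, 1]
print(round(r, 3))
```

A high r indicates the software tracks human judgments clip by clip; the study's weak software-questionnaire correlations would show up as r near zero in the same computation.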
6

Wang, Jianxun. "Physics-Informed, Data-Driven Framework for Model-Form Uncertainty Estimation and Reduction in RANS Simulations." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/77035.

Abstract:
Computational fluid dynamics (CFD) has been widely used to simulate turbulent flows. Although an increased availability of computational resources has enabled high-fidelity simulations (e.g. large eddy simulation and direct numerical simulation) of turbulent flows, the Reynolds-Averaged Navier-Stokes (RANS) equations based models are still the dominant tools for industrial applications. However, the predictive capability of RANS models is limited by potential inaccuracies driven by hypotheses in the Reynolds stress closure. With the ever-increasing use of RANS simulations in mission-critical applications, the estimation and reduction of model-form uncertainties in RANS models have attracted attention in the turbulence modeling community. In this work, I focus on estimating uncertainties stemming from the RANS turbulence closure and calibrating discrepancies in the modeled Reynolds stresses to improve the predictive capability of RANS models. Both on-line and off-line data are utilized to achieve this goal. The main contributions of this dissertation can be summarized as follows: First, a physics-based, data-driven Bayesian framework is developed for estimating and reducing model-form uncertainties in RANS simulations. An iterative ensemble Kalman method is employed to assimilate sparse on-line measurement data and empirical prior knowledge for a full-field inversion. The merits of incorporating prior knowledge and physical constraints in calibrating RANS model discrepancies are demonstrated and discussed. Second, a random matrix theoretic framework is proposed for estimating model-form uncertainties in RANS simulations. Maximum entropy principle is employed to identify the probability distribution that satisfies given constraints but without introducing artificial information. Objective prior perturbations of RANS-predicted Reynolds stresses in physical projections are provided based on comparisons between physics-based and random matrix theoretic approaches. 
Finally, a physics-informed machine learning framework for predictive RANS turbulence modeling is proposed. The functional forms of the model discrepancies with respect to mean flow features are extracted from an offline database of closely related flows using machine learning algorithms. The RANS-modeled Reynolds stresses of prediction flows can be significantly improved by the trained discrepancy function, which is an important step towards predictive turbulence modeling.
Ph. D.
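The last step above, learning a discrepancy function of mean-flow features from an offline database, can be sketched with any regressor. Below, a small numpy-only k-nearest-neighbour regressor stands in for the learning algorithm; the features, the target function, and the zero baseline are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "mean-flow features" standing in for strain/rotation invariants
X_train = rng.uniform(-1, 1, size=(400, 3))
# Hypothetical Reynolds-stress discrepancy extracted from high-fidelity data
y_train = np.sin(X_train[:, 0]) + 0.5 * X_train[:, 1] ** 2

def knn_predict(X_new, k=10):
    """Predict discrepancy as the mean over the k nearest training points."""
    d = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

# Correct a baseline RANS prediction on a new flow
X_new = rng.uniform(-1, 1, size=(50, 3))
correction = knn_predict(X_new)
baseline = np.zeros(50)            # placeholder RANS-modeled stress
corrected = baseline + correction  # discrepancy-corrected stress
```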
7

Cedergren, Linnéa. "Physics-informed Neural Networks for Biopharma Applications." Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185423.

Abstract:
Physics-Informed Neural Networks (PINNs) are hybrid models that incorporate differential equations into the training of neural networks, with the aim of bringing the best of both worlds. This project used a mathematical model describing a Continuous Stirred-Tank Reactor (CSTR) to test two possible applications of PINNs. The first type of PINN was trained to predict an unknown reaction rate law, based only on the differential equation and a time series of the reactor state. The resulting model was used inside a multi-step solver to simulate the system state over time. The results showed that the PINN could accurately model the behaviour of the missing physics, even for new initial conditions. However, the model suffered from extrapolation error when tested on a larger reactor with a much lower reaction rate. Comparisons between using a numerical derivative and automatic differentiation in the loss equation indicated that the latter is more robust to noise, and thus likely the better choice for real applications. A second type of PINN was trained to forecast the system state one step ahead based on previous states and other known model parameters. An ordinary feed-forward neural network with an equal architecture was used as a baseline. The second type of PINN did not outperform the baseline network. Further studies are needed to conclude if or when a physics-informed loss should be used in autoregressive applications.
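The hybrid loss at the heart of a PINN, a data-fit term plus a physics-residual term evaluated at collocation points, can be shown without a neural network at all. The sketch below fits the two parameters of an exponential trial solution a·e^(bt) to a first-order reaction dC/dt = -kC by brute-force grid search; every number is invented, and only the composite loss is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 0.7                        # known rate constant in the governing equation
t_data = np.array([0.0, 0.5])  # only two noisy concentration measurements
C_data = np.exp(-k * t_data) + rng.normal(0, 0.02, 2)
t_col = np.linspace(0, 3, 50)  # collocation points for the physics residual

def loss(a, b):
    """Data-fit loss plus physics-residual loss for the trial a*exp(b*t)."""
    data = np.mean((a * np.exp(b * t_data) - C_data) ** 2)
    C_col = a * np.exp(b * t_col)
    residual = b * C_col + k * C_col   # dC/dt + k*C for the trial solution
    return data + np.mean(residual ** 2)

grid = np.linspace(-1.5, 1.5, 61)
_, a_hat, b_hat = min((loss(a, b), a, b) for a in grid for b in grid)
```

The physics term pins the decay rate (b near -k) even though only two data points exist, while the data term pins the amplitude; this division of labour is exactly what lets PINNs work with sparse measurements.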
8

Emerson, Guy Edward Toh. "Functional distributional semantics : learning linguistically informed representations from a precisely annotated corpus." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/284882.

Abstract:
The aim of distributional semantics is to design computational techniques that can automatically learn the meanings of words from a body of text. The twin challenges are: how do we represent meaning, and how do we learn these representations? The current state of the art is to represent meanings as vectors - but vectors do not correspond to any traditional notion of meaning. In particular, there is no way to talk about 'truth', a crucial concept in logic and formal semantics. In this thesis, I develop a framework for distributional semantics which answers this challenge. The meaning of a word is not represented as a vector, but as a 'function', mapping entities (objects in the world) to probabilities of truth (the probability that the word is true of the entity). Such a function can be interpreted both in the machine learning sense of a classifier, and in the formal semantic sense of a truth-conditional function. This simultaneously allows both the use of machine learning techniques to exploit large datasets, and also the use of formal semantic techniques to manipulate the learnt representations. I define a probabilistic graphical model, which incorporates a probabilistic generalisation of model theory (allowing a strong connection with formal semantics), and which generates semantic dependency graphs (allowing it to be trained on a corpus). This graphical model provides a natural way to model logical inference, semantic composition, and context-dependent meanings, where Bayesian inference plays a crucial role. I demonstrate the feasibility of this approach by training a model on WikiWoods, a parsed version of the English Wikipedia, and evaluating it on three tasks. The results indicate that the model can learn information not captured by vector space models.
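In this framework the meaning of a word is a function from entities to probabilities of truth, interpretable as a classifier. A minimal sketch, with invented entity features and weights, of such a truth-conditional function for the word 'dog':

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each entity is a feature vector (e.g. has_fur, barks, has_wheels);
# both the features and the weights below are invented for illustration
entities = {
    "rex": np.array([1.0, 1.0, 0.0]),
    "bike": np.array([0.0, 0.0, 1.0]),
}
dog_weights, dog_bias = np.array([2.0, 3.0, -4.0]), -2.5

def dog(entity):
    """P(the word 'dog' is true of the entity): a classifier as meaning."""
    return sigmoid(entities[entity] @ dog_weights + dog_bias)

print(round(dog("rex"), 2), round(dog("bike"), 2))
```

Because the output is a probability of truth rather than a point in vector space, logical notions such as truth and entailment become directly expressible, which is the thesis's central claim.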
9

Giuliani, Luca. "Extending the Moving Targets Method for Injecting Constraints in Machine Learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23885/.

Abstract:
Informed Machine Learning is an umbrella term for a set of methodologies in which domain knowledge is injected into a data-driven system in order to improve its accuracy, satisfy some external constraint, and in general serve the purposes of explainability and reliability. The topic has been widely explored in the literature by means of many different techniques. Moving Targets is one such technique, particularly focused on constraint satisfaction: it is based on decomposition and bi-level optimization and proceeds by iteratively refining the target labels through a master step, which is in charge of enforcing the constraints, while the training phase is delegated to a learner. In this work, we extend the algorithm to deal with semi-supervised learning and soft constraints. In particular, we focus our empirical evaluation on regression and classification tasks involving monotonicity shape constraints. We demonstrate that our method is robust with respect to its hyperparameters, and that it generalizes very well while reducing the number of violations of the enforced constraints. Additionally, the method can even outperform, in terms of both accuracy and constraint satisfaction, other state-of-the-art techniques such as Lattice Models and Semantic-based Regularization with a Lagrangian Dual approach for automatic hyperparameter tuning.
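The master/learner decomposition can be sketched in a few lines of numpy for a monotonicity constraint: the learner fits the current targets, and the master blends data and predictions before projecting onto the constraint set with pool-adjacent-violators. This is a simplification of the actual Moving Targets algorithm; the polynomial learner, the squared-loss master step, and `beta` are illustrative choices, not the thesis's formulation.

```python
import numpy as np

def project_monotone(y):
    """Pool-adjacent-violators: closest non-decreasing sequence to y."""
    y = np.asarray(y, dtype=float)
    blocks = [[i] for i in range(len(y))]
    vals = list(y)
    i = 0
    while i < len(vals) - 1:
        if vals[i] > vals[i + 1]:        # violation: merge adjacent blocks
            blocks[i] += blocks.pop(i + 1)
            vals.pop(i + 1)
            vals[i] = float(np.mean(y[blocks[i]]))
            i = max(i - 1, 0)            # merged block may violate backwards
        else:
            i += 1
    out = np.empty(len(y))
    for b, v in zip(blocks, vals):
        out[b] = v
    return out

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = x + rng.normal(0, 0.05, 40)
y[20:24] -= 0.3        # local dip violating the monotonicity constraint

beta = 1.0             # trade-off between fitting data and predictions
z = y.copy()
for _ in range(10):
    coef = np.polyfit(x, z, deg=5)   # learner step: fit current targets
    pred = np.polyval(coef, x)
    # master step: blend data and predictions, then enforce monotonicity
    z = project_monotone((y + beta * pred) / (1 + beta))
```

After a few iterations the refined targets `z` are exactly monotone and the learner's predictions track them closely, illustrating how constraint enforcement is decoupled from training.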
10

Lefèvre, Augustin. "Méthodes d'apprentissage appliquées à la séparation de sources mono-canal." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00764546.

Abstract:
Given a mixture of several source signals, for example a piece of music with several instruments, or a radio interview with several speakers, single-channel source separation consists in estimating each of the source signals from a recording made with a single microphone. Since there are fewer sensors than sources, there is a priori an infinity of solutions unrelated to the original sources; the question is then what additional information makes the problem well posed. Over the last ten years, non-negative matrix factorization (NMF) has become a major component of source separation systems. In lay terms, NMF describes a set of audio signals as combinations of simple sound elements (atoms) that together form a dictionary. Source separation systems then rely on the ability to find atoms that can be unambiguously assigned to each sound source; in other words, the atoms must be interpretable. This thesis makes three main contributions to dictionary learning methods. The first is a group-sparsity criterion adapted to NMF when the chosen distortion measure is the Itakura-Saito divergence. Most music signals contain long intervals in which only one source is active (solos). The proposed group-sparsity criterion automatically finds such segments and learns a dictionary adapted to each source. These dictionaries are then used to perform separation in the intervals where the sources are mixed. Both tasks, identification and separation, are carried out simultaneously in a single pass of the proposed algorithm.
The second contribution is an online algorithm for learning the dictionary at large scale, on signals several hours long, which was previously impossible. The memory footprint of an online NMF estimate is constant, whereas in the standard version it grows linearly with the size of the input signals, which is impractical for signals longer than an hour. The third contribution concerns interaction with the user. For short signals, blind learning is particularly difficult, and information specific to the signal at hand is indispensable. This contribution is similar to inpainting and makes it possible to take time-frequency annotations into account. It rests on the observation that almost the whole spectrogram can be divided into regions specifically assigned to each source. We describe an extension of NMF that takes this information into account and discuss the possibility of inferring it automatically with simple statistical learning tools.
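The NMF building block itself is compact. The sketch below runs multiplicative updates for the Euclidean objective ||V - WH||² on a synthetic nonnegative "spectrogram"; the thesis works with the Itakura-Saito divergence and real audio, so treat this only as the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# V: magnitude spectrogram (frequencies x frames), synthetic here
V = np.abs(rng.normal(size=(30, 100))) + 1e-6
k = 4                                  # number of dictionary atoms

W = np.abs(rng.normal(size=(30, k)))   # dictionary of spectral atoms
H = np.abs(rng.normal(size=(k, 100)))  # activations of each atom over time

# Multiplicative updates for the Euclidean NMF objective ||V - WH||^2;
# they preserve nonnegativity and decrease the objective monotonically
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Separation then amounts to partitioning the atoms (columns of W) among sources and reconstructing each source from its own atoms and activations.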
11

Toure, Carine. "Capitalisation pérenne de connaissances industrielles : Vers des méthodes de conception incrémentales et itératives centrées sur l’activité." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI095/document.

Abstract:
In this research, we are interested in the question of the sustainability of the use of knowledge management systems (KMS) in companies. KMS are the IT environments that companies set up so that collaborators can share and build common expertise. Findings show that, despite the rigour employed by companies in implementing these KMS, the risk of knowledge management initiatives failing, particularly as regards user acceptance and the continuous, durable use of these environments, remains prevalent. The persistence of this observation in companies motivated our interest in contributing to this general research question. As responses to this problem, we have 1) identified from the state of the art four facets that are required to promote the sustained use of a knowledge management platform; 2) proposed a theoretical model of mixed regulation that unifies tools for stimulating self-regulation with tools supporting change management, and allows the continuous implementation of the various factors that stimulate the sustained use of KMS; 3) proposed a design methodology, adapted to this model and based on Agile concepts, which incorporates a mixed method for evaluating satisfaction and effective use, as well as HCI tools for carrying out the different iterations of our methodology; 4) implemented the methodology in a real context, at the Société du Canal de Provence, which allowed us to test its feasibility and propose generic adjustments/recommendations to designers for its application in context. The tool resulting from our implementation was positively received by the users in terms of satisfaction and usage.
12

SIMONETTA, FEDERICO. "MUSIC INTERPRETATION ANALYSIS. A MULTIMODAL APPROACH TO SCORE-INFORMED RESYNTHESIS OF PIANO RECORDINGS." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/918909.

Abstract:
This Thesis discusses the development of technologies for the automatic resynthesis of music recordings using digital synthesizers. First, the main issue is identified in the understanding of how Music Information Processing (MIP) methods can take into consideration the influence of the acoustic context on the music performance. For this, a novel conceptual and mathematical framework named “Music Interpretation Analysis” (MIA) is presented. In the proposed framework, a distinction is made between the “performance” – the physical action of playing – and the “interpretation” – the action that the performer wishes to achieve. Second, the Thesis describes further works aiming at the democratization of music production tools via automatic resynthesis: 1) it elaborates software and file formats for historical music archiving and multimodal machine-learning datasets; 2) it explores and extends MIP technologies; 3) it presents the mathematical foundations of the MIA framework and shows preliminary evaluations to demonstrate the effectiveness of the approach.
13

Bonantini, Andrea. "Analisi di dati e sviluppo di modelli predittivi per sistemi di saldatura." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24664/.

Abstract:
This thesis aims to predict the length of the electric arc that forms in the MIG/MAG welding process when a consumable electrode (the wire) is brought to a suitable distance from the component to be welded. In this specific case, the wire is made of an aluminium-magnesium alloy. In particular, this work presents the impact of physical quantities such as voltage, current, and the feed speed of the wire being melted during the welding process, and how they influence the arc length. More precisely, predictive models were built to estimate the arc length from these quantities, following two distinct approaches: black-box and knowledge-driven. Specifically, chapter one gives an overview of the state of the art of MIG/MAG welding, introducing the Cebora Group, the data acquisition procedure, and the physical model currently used to compute the arc length. Chapter two presents the data analysis and explains the experimental decisions taken to handle and understand the data; it also assesses the accuracy of Cebora's model by comparing its predictions with real data. Chapter three is more operational and presents the first neural networks built, which follow a black-box approach with some manipulations of the current signal. Chapter four shifts the focus to the role of voltage, with new networks built under a different, knowledge-driven approach. Chapter five draws the conclusions of this work, examining the strengths and weaknesses of the best models obtained.
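A minimal knowledge-driven baseline of the kind described, resting on the common assumption that arc voltage is roughly affine in arc length and current, can be sketched as a least-squares inversion on invented data (the coefficients 14.0, 1.2, and 0.03 below are made up and are not Cebora's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented welding samples: current (A) and ground-truth arc length (mm)
n = 200
current = rng.uniform(80, 200, n)
arc_len = rng.uniform(2.0, 8.0, n)
# Assumed affine voltage law, plus measurement noise
voltage = 14.0 + 1.2 * arc_len + 0.03 * current + rng.normal(0, 0.2, n)

# Knowledge-driven baseline: invert the assumed relation by least squares
A = np.column_stack([voltage, current, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, arc_len, rcond=None)
pred = A @ coef
mae = np.mean(np.abs(pred - arc_len))
```

A black-box model would instead feed the same raw signals into a neural network with no assumed functional form, which is exactly the comparison the thesis sets up.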
APA, Harvard, Vancouver, ISO, and other styles
14

Santos, Jadson da Silva. "Estudo comparativo de diferentes classificadores baseados em aprendizagem de máquina para o processo de Reconhecimento de Entidades Nomeadas." Universidade Estadual de Feira de Santana, 2016. http://localhost:8080/tede/handle/tede/554.

Full text
Abstract:
The Named Entity Recognition (NER) process is the task of identifying relevant terms in texts and assigning them a label. Such words can reference names of people, organizations, and places. The variety of techniques that can be used in the named entity recognition process is large. The techniques can be classified into three distinct approaches: rule-based, machine learning, and hybrid. Concerning the machine learning approaches, several factors may influence their accuracy, including the selected classifier, the set of features extracted from the terms, the characteristics of the textual bases, and the number of entity labels. In this work, we compared classifiers that use machine learning applied to the NER task. The comparative study includes classifiers based on CRF (Conditional Random Fields), MEMM (Maximum Entropy Markov Model), and HMM (Hidden Markov Model), which are compared on two corpora in Portuguese, derived from WikiNer and HAREM, and two corpora in English, derived from CoNLL-03 and WikiNer. The comparison of the classifiers shows that the CRF is superior to the other classifiers, with both Portuguese and English texts. This study also includes the comparison of the individual and joint contribution of features, including contextual features, as well as the comparison of NER per named entity label, between classifiers and corpora.
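As a hedged illustration of the simplest of the three compared sequence labelers, the core of an HMM tagger is Viterbi decoding; the following minimal sketch uses a hypothetical two-tag toy model (the tag set and all probabilities are illustrative, not taken from the thesis):

```python
# Minimal Viterbi decoding for an HMM sequence labeler: given per-tag
# transition and emission probabilities, recover the most likely tag path.

def viterbi(obs, tags, start_p, trans_p, emit_p):
    # v[t][tag] = (probability of the best path ending in `tag` at step t, backpointer)
    v = [{tag: (start_p[tag] * emit_p[tag].get(obs[0], 1e-12), None) for tag in tags}]
    for t in range(1, len(obs)):
        row = {}
        for tag in tags:
            best_prev, best_p = None, 0.0
            for prev in tags:
                p = v[t - 1][prev][0] * trans_p[prev][tag] * emit_p[tag].get(obs[t], 1e-12)
                if p >= best_p:
                    best_prev, best_p = prev, p
            row[tag] = (best_p, best_prev)
        v.append(row)
    # Backtrack from the best final state.
    last = max(tags, key=lambda tag: v[-1][tag][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        last = v[t][last][1]
        path.append(last)
    return path[::-1]

# Toy model: "O" = outside any entity, "PER" = person name (illustrative numbers).
tags = ["O", "PER"]
start_p = {"O": 0.8, "PER": 0.2}
trans_p = {"O": {"O": 0.7, "PER": 0.3}, "PER": {"O": 0.6, "PER": 0.4}}
emit_p = {"O": {"works": 0.4, "at": 0.4}, "PER": {"Jadson": 0.5}}

print(viterbi(["Jadson", "works", "at"], tags, start_p, trans_p, emit_p))
# → ['PER', 'O', 'O']
```

CRF and MEMM models replace these generative probabilities with discriminative feature weights, but decoding follows the same dynamic-programming pattern.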
APA, Harvard, Vancouver, ISO, and other styles
15

Muriithi, Paul Mutuanyingi. "A case for memory enhancement : ethical, social, legal, and policy implications for enhancing the memory." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/a-case-for-memory-enhancement-ethical-social-legal-and-policy-implications-for-enhancing-the-memory(bf11d09d-6326-49d2-8ef3-a40340471acf).html.

Full text
Abstract:
The desire to enhance and make ourselves better is not a new one, and it has continued to intrigue throughout the ages. Individuals have continued to seek ways to improve and enhance their well-being, for example through nutrition, physical exercise, education and so on. Crucial to this improvement of their well-being is improving their ability to remember. Hence, people interested in improving their well-being are often interested in memory as well, the rationale being that memory is crucial to our well-being. The desire to improve one's memory, then, is almost certainly as old as the desire to improve one's well-being. Traditionally, people have used different means in an attempt to enhance their memories: in learning, through storytelling, studying, and apprenticeship; in remembering, through practices like mnemonics, repetition, singing, and drumming; in retaining, storing and consolidating memories, through nutrition and stimulants like coffee to help keep awake, and through external aids like notepads and computers; and in forgetting, through rituals and rites. Recent scientific advances in biotechnology, nanotechnology, molecular biology, neuroscience, and information technologies present a wide variety of technologies to enhance many different aspects of human functioning. Thus, some commentators have identified human enhancement as a central and one of the most fascinating subjects in bioethics in the last two decades. Within this period, most commentators have addressed the Ethical, Social, Legal and Policy (ESLP) issues in human enhancements as a whole, as opposed to specific enhancements. However, this is problematic: various commentators have recently found this approach deficient and called for a contextualized, case-by-case analysis of human enhancements, for example genetic enhancement, moral enhancement, and, in my case, memory enhancement (ME).
The rationale is that the reasons for accepting or rejecting a particular enhancement vary depending on the enhancement itself. Given this enormous variation, moral and legal generalizations about all enhancement processes and technologies are unwise, and they should instead be evaluated individually. Taking this as a point of departure, this research focuses specifically on making a case for ME and, in doing so, assessing the ESLP implications arising from ME. My analysis will draw on the already existing literature for and against enhancement, especially in part two of this thesis, but it will be novel in providing a much more in-depth analysis of ME. From this perspective, I will contribute to the ME debate through two reviews that address the question of how we enhance memory, and through four original papers discussed in part three of this thesis, where I critically examine and evaluate specific ESLP issues that arise with the use of ME. In the conclusion, I will amalgamate all my contributions to the ME debate and suggest the future direction for the ME debate.
APA, Harvard, Vancouver, ISO, and other styles
16

Ayo, Brenda. "Integrating openstreetmap data and sentinel-2 Imagery for classifying and monitoring informal settlements." Master's thesis, 2020. http://hdl.handle.net/10362/93641.

Full text
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies
The identification and monitoring of informal settlements in urban areas is an important step in developing and implementing pro-poor urban policies. Understanding when, where and who lives inside informal settlements is critical to efforts to improve their resilience. This study integrates OpenStreetMap (OSM) data and Sentinel-2 imagery to map informal areas in Kampala (Uganda) and Dar es Salaam (Tanzania) and to monitor their growth in Kampala. Three building-feature characteristics, size, shape and distance to the nearest neighbour, were derived and used to cluster and classify informal areas by applying hot spot cluster analysis and a machine learning approach to OSM building data. The resulting informal regions in Kampala were then used with Sentinel-2 image tiles to investigate the spatiotemporal changes in informal areas using Convolutional Neural Networks (CNNs). Results from Optimized Hot Spot Analysis and Random Forest classification show that informal regions can be mapped based on building outline characteristics. An accuracy of 90.3% was achieved when an optimally trained CNN was executed on a test set of 2019 satellite image tiles. Predictions of informality from new datasets for the years 2016 and 2017 provided promising results on combining different open-source geospatial datasets to identify, classify and monitor informal settlements.
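The three footprint descriptors named above (size, shape, nearest-neighbour distance) can be sketched for polygon outlines; the compactness definition below (isoperimetric quotient) is an assumed illustration, not necessarily the shape metric used in the thesis:

```python
import math

# Per-building descriptors from a footprint polygon given as (x, y) vertices:
# size = area (shoelace formula), shape = isoperimetric compactness in (0, 1],
# and distance from a building's centroid to its nearest neighbour's centroid.

def area(poly):
    # Shoelace formula for a simple (non-self-intersecting) polygon.
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1])))

def perimeter(poly):
    return sum(math.dist(p, q) for p, q in zip(poly, poly[1:] + poly[:1]))

def compactness(poly):
    # 4*pi*A / P^2: equals 1 for a circle, smaller for elongated shapes.
    return 4 * math.pi * area(poly) / perimeter(poly) ** 2

def centroid(poly):
    xs, ys = zip(*poly)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def nearest_neighbour_distance(polys, i):
    c = centroid(polys[i])
    return min(math.dist(c, centroid(p)) for j, p in enumerate(polys) if j != i)

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(area(square), compactness(square))  # 100.0 and 4*pi*100/1600 ≈ 0.785
```

Feature vectors built this way per building can then feed a clustering or Random Forest step, as the abstract describes.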
APA, Harvard, Vancouver, ISO, and other styles
17

Schumacher, Johannes. "Time Series Analysis informed by Dynamical Systems Theory." Doctoral thesis, 2015. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2015061113245.

Full text
Abstract:
This thesis investigates time series analysis tools for prediction, as well as detection and characterization of dependencies, informed by dynamical systems theory. Emphasis is placed on the role of delays with respect to information processing in dynamical systems, as well as with respect to their effect on causal interactions between systems. The three main features that characterize this work are, first, the assumption that time series are measurements of complex deterministic systems. As a result, functional mappings for statistical models in all methods are justified by concepts from dynamical systems theory. To bridge the gap between dynamical systems theory and data, differential topology is employed in the analysis. Second, the Bayesian paradigm of statistical inference is used to formalize uncertainty by means of a consistent theoretical apparatus with axiomatic foundation. Third, the statistical models are strongly informed by modern nonlinear concepts from machine learning and nonparametric modeling approaches, such as Gaussian process theory. Consequently, unbiased approximations of the functional mappings implied by the prior system-level analysis can be achieved. Applications are considered foremost with respect to computational neuroscience but extend to generic time series measurements.
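The role of delays in reconstructing a deterministic system's state from a scalar measurement is conventionally captured by a Takens-style delay embedding; a minimal sketch (the dimension and lag values are illustrative assumptions) is:

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Return the delay-coordinate matrix of a scalar series `x`:
    row t is (x[t], x[t + lag], ..., x[t + (dim - 1) * lag])."""
    n = len(x) - (dim - 1) * lag
    if n <= 0:
        raise ValueError("series too short for this dimension/lag")
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# Illustrative use: embed a short series with dimension 3 and lag 2.
x = np.arange(10.0)
emb = delay_embed(x, dim=3, lag=2)
print(emb.shape)  # (6, 3); first row is [0., 2., 4.]
```

A statistical model such as a Gaussian process regressor can then be fit on these delay vectors to approximate the functional mapping the embedding theorems guarantee.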
APA, Harvard, Vancouver, ISO, and other styles
18

Rautela, Mahindra Singh. "Hybrid Physics-Data Driven Models for the Solution of Mechanics Based Inverse Problems." Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6123.

Full text
Abstract:
Inverse problems pose a significant challenge, as they aim to estimate the causal factors that result in a measured response. However, the responses are often truncated, partially available, and corrupted by measurement noise, rendering the problems ill-posed; they may have multiple solutions or none. Solving such problems using regularization transforms them into a family of well-posed problems. While physics-based models are interpretable, they operate under approximations and assumptions. Data-driven models such as machine learning and deep learning have shown promise in solving mechanics-based inverse problems, but they lack robustness, convergence, and generalization when operating under partial information, compromising the interpretability and explainability of their predictions. To overcome these challenges, hybrid physics-data-driven models can be formulated by integrating prior knowledge of physical laws, expert knowledge, spatial invariances, empirically validated rules, etc., which acts as a regularizing agent to select a more feasible solution space. This approach improves the prediction accuracy, robustness, generalization, interpretability, and explainability of data-driven models. In this dissertation, we propose various physics-data-driven models to solve inverse problems related to engineering mechanics by integrating prior knowledge and its representation into a data-driven pipeline at different stages. We have used these hybrid models to solve six different inverse problems: leakage estimation for a pressurized habitat, estimating the dispersion relations of a waveguide, structural damage identification, filtering temperature effects in guided waves, material property prediction, and guided wave generation and material design. The dissertation presents a detailed overview of inverse problems, definitions of the six inverse problems, and the motivation behind using hybrid models for their solution.
Six different hybrid models, such as adaptive model calibration, physics-informed neural networks, inverse deep surrogate, deep latent variable, and unsupervised representation learning models, are formulated and arranged on different levels of a pyramid, showing the trade-off between autonomy and explainability. All these new methods are designed with practical implementation in mind. The first model uses an adaptive real-time calibration framework to estimate the severity of leaks in a pressurized deep space habitat before they become a threat to the crew and habitat. The second model utilizes a physics-informed neural network to estimate the speed of wave propagation in a waveguide from limited experimental observations. The third model uses deep surrogate models to solve structural damage identification and material property prediction problems. The fourth model proposes a domain-knowledge-based data augmentation scheme for ultrasonic guided waves-based damage identification. The fifth model uses unsupervised feature learning to solve guided waves-based structural anomaly detection and to filter temperature effects on guided waves. The final model employs a deep latent variable model for structural anomaly detection, guided wave generation, and material design problems. Overall, the thesis demonstrates the effectiveness of hybrid models that combine prior knowledge with machine learning techniques to address a wide range of inverse problems. These models offer faster, more accurate, and more automated solutions to these problems than traditional methods.
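The regularizing role that prior physical knowledge plays in such hybrid models can be sketched in one dimension: fit a coefficient to noisy data while penalizing departure from a physics-derived estimate. All numbers and the quadratic penalty below are illustrative assumptions, not the thesis's formulations:

```python
# Hybrid fit of y ≈ a*x: least squares on data, plus a quadratic penalty
# pulling `a` toward a physics-based prior estimate a_phys.
# Minimizing sum_i (y_i - a*x_i)^2 + lam * (a - a_phys)^2 has the closed form:
#   a* = (sum x_i*y_i + lam*a_phys) / (sum x_i^2 + lam)

def hybrid_fit(xs, ys, a_phys, lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return (sxy + lam * a_phys) / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]  # roughly y = 2x with noise

print(hybrid_fit(xs, ys, a_phys=2.0, lam=0.0))  # pure data fit
print(hybrid_fit(xs, ys, a_phys=2.0, lam=1e6))  # physics prior dominates, ≈ 2.0
```

The penalty weight plays the same role as the regularizing prior knowledge in the dissertation's models: it shrinks the feasible solution space toward physically plausible values when the data alone are noisy or incomplete.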
APA, Harvard, Vancouver, ISO, and other styles
19

Coimbra, Ana Cecília Sousa Rocha. "Improving clinical problem list with evidence based medicine, patient oriented medical records and intelligence." Doctoral thesis, 2021. http://hdl.handle.net/1822/75189.

Full text
Abstract:
Doctoral Programme in Biomedical Engineering
The clinical problems list is very important in the provision of health care, mainly in terms of accuracy. This thesis is based on a set of studies carried out at the Centro Hospitalar Universitário do Porto, whose main objective is to improve clinical problem lists. The first study focuses on the initial steps of developing an innovative clinical record system that uses openEHR and SNOMED CT terminology. This system will allow the creation of structured records through the use of archetypes and will also have defined protocols based on the HL7 version 3 guidelines. The second and third studies focus on the codification of discharge reports. The codification of discharge reports allows for a better grouping of episodes into Homogeneous Diagnostic Groups, hence the importance of making this process as efficient as possible and with a minimum of errors. To this end, a platform was developed so that doctors can easily code these episodes, with management processes in the background to assist the workflow of the entire coding process. The fourth and final study refers to the development of a platform capable of providing personalized informed consent, where doctors can adapt the consent to the different types of cases they encounter. The methodology adopted is Design Science Research (DSR), supported by a pragmatic philosophy. Throughout the development of the project, a set of focus groups will contribute to the continuous evaluation of the system.
APA, Harvard, Vancouver, ISO, and other styles
20

Yadav, Sangeeta. "Data Driven Stabilization Schemes for Singularly Perturbed Differential Equations." Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6095.

Full text
Abstract:
This thesis presents a novel way of leveraging Artificial Neural Networks (ANNs) to aid conventional numerical techniques for solving Singularly Perturbed Differential Equations (SPDEs). SPDEs are challenging to solve with conventional numerical techniques such as Finite Element Methods (FEM) due to the presence of boundary and interior layers. Often the standard numerical solution shows spurious oscillations in the vicinity of these layers. Stabilization techniques are often employed to eliminate these spurious oscillations in the numerical solution. The accuracy of the stabilization technique depends on a user-chosen stabilization parameter whose optimal value is challenging to find. A few formulas for the stabilization parameter exist in the literature, but none extends well to high-dimensional and complex problems. To address this challenge, we have developed the following ANN-based techniques for predicting this stabilization parameter: 1) SPDE-Net: As a proof of concept, we have developed an ANN called SPDE-Net for one-dimensional SPDEs. In the proposed method, we predict the stabilization parameter for the Streamline Upwind Petrov-Galerkin (SUPG) stabilization technique. The prediction task is modelled as a regression problem using equation coefficients and domain parameters as inputs to the neural network. Three training strategies are proposed: supervised learning, L2-error minimization (global) and L2-error minimization (local). The proposed method outperforms existing state-of-the-art ANN-based partial differential equation (PDE) solvers, such as Physics Informed Neural Networks (PINNs). 2) AI-stab FEM: With the aim of extending SPDE-Net to two-dimensional problems, we have also developed an optimization scheme using another neural network, called AI-stab FEM, and shown its utility in solving higher-dimensional problems.
Unlike SPDE-Net, it minimizes the equation residual along with the crosswind derivative term and can be classified as an unsupervised method. We have shown that the proposed approach yields stable solutions for several two-dimensional benchmark problems while being more accurate than other contemporary ANN-based PDE solvers, such as PINNs and Variational Neural Networks for the Solution of Partial Differential Equations (VarNet). 3) SPDE-ConvNet: In the last phase of the thesis, we attempt to predict a cell-wise stabilization parameter to treat the interior/boundary layer regions adequately by developing an oscillations-aware neural network. We present SPDE-ConvNet, a Convolutional Neural Network (CNN), for predicting the local (cell-wise) stabilization parameter. For the network training, we feed the gradient of the Galerkin solution, an indirect metric for representing oscillations in the numerical solution, along with the equation coefficients, to the network. It obtains a cell-wise stabilization parameter while sharing the network parameters among all the cells for an equation. Similar to AI-stab FEM, this technique outperforms PINNs and VarNet. We conclude the thesis with suggestions for future work that can leverage our current understanding of data-driven stabilization schemes for SPDEs to develop and improve the next generation of neural-network-based numerical solvers for SPDEs.
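The "user-chosen stabilization parameter" mentioned above has a classical closed-form choice in one dimension; the textbook SUPG formula based on the element Péclet number is sketched below as an assumed baseline, the kind of formula the thesis's networks learn to replace for harder problems:

```python
import math

# Classical 1D SUPG stabilization parameter for advection-diffusion
# -eps*u'' + b*u' = f on elements of size h:
#   Pe  = |b| * h / (2 * eps)               (element Peclet number)
#   tau = h / (2|b|) * (coth(Pe) - 1/Pe)    (optimal in 1D for linear elements)

def supg_tau(b, eps, h):
    pe = abs(b) * h / (2.0 * eps)
    if pe < 1e-8:  # diffusion-dominated limit: tau -> h^2 / (12 * eps)
        return h * h / (12.0 * eps)
    return h / (2.0 * abs(b)) * (1.0 / math.tanh(pe) - 1.0 / pe)

# Advection-dominated case: Pe is large, coth(Pe) -> 1, so tau -> h / (2|b|).
print(supg_tau(b=1.0, eps=1e-6, h=0.1))  # ≈ 0.05
```

The formula degrades for high-dimensional and complex problems, which is precisely the gap the learned predictors (SPDE-Net, AI-stab FEM, SPDE-ConvNet) aim to fill.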
APA, Harvard, Vancouver, ISO, and other styles