Dissertations / Theses on the topic 'Machine learning approches'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 43 dissertations / theses for your research on the topic 'Machine learning approches.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Arman, Molood. "Machine Learning Approaches for Sub-surface Geological Heterogeneous Sources." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG014.
Full text
In oil and gas exploration and production, understanding subsurface geological structures, such as well logs and rock samples, is essential to provide predictive and decision support tools. Gathering and using data from a variety of sources, both structured and unstructured, such as relational databases and digitized reports on the subsurface geology, are critical. The main challenge for the structured data is the lack of a global schema to cross-reference all attributes from different sources. The challenges are different for unstructured data: most subsurface geological reports are scanned versions of documents. Our dissertation aims to provide a structured representation of the different data sources and to build domain-specific language models for learning named entities related to subsurface geology.
Peyrache, Jean-Philippe. "Nouvelles approches itératives avec garanties théoriques pour l'adaptation de domaine non supervisée." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4023/document.
Full text
During the past few years, an increasing interest in Machine Learning has emerged in various domains, such as image recognition or medical data analysis. However, a limitation of the classical PAC framework has recently been highlighted, leading to the emergence of a new research axis: Domain Adaptation (DA), in which the learning data are considered as coming from a distribution (the source) different from the one (the target) from which the test data are generated. The first theoretical works concluded that a good performance on the target domain can be obtained by simultaneously minimizing the source error and a divergence term between the two distributions. Three main categories of approaches are derived from this idea: reweighting, reprojection and self-labeling. In this thesis, we propose two contributions. The first one is a reprojection approach based on boosting theory and designed for numerical data. It offers interesting theoretical guarantees and also seems able to obtain good generalization performance. Our second contribution consists, on the one hand, of a framework that fills the gap left by the lack of theoretical results for self-labeling methods, by introducing necessary conditions ensuring the good behavior of this kind of algorithm. On the other hand, we propose within this framework a new approach, using the theory of (epsilon, gamma, tau)-good similarity functions to get around the limitations due to the use of kernel theory in the specific context of structured data.
Cherif, Aymen. "Réseaux de neurones, SVM et approches locales pour la prévision de séries temporelles." Thesis, Tours, 2013. http://www.theses.fr/2013TOUR4003/document.
Full text
Time series forecasting has been a widely discussed issue for many years. Researchers from various disciplines have addressed it in several application areas: finance, medicine, transportation, etc. In this thesis, we focus on machine learning methods: neural networks and SVM. We are also interested in meta-methods that push up predictor performance, and more specifically in local models. In a divide-and-conquer strategy, local models perform a clustering of the data set before a different predictor is assigned to each obtained subset. We present in this thesis a new algorithm that lets recurrent neural networks be used as local predictors. We also propose two novel clustering techniques suitable for local models: the first is based on Kohonen maps, and the second on binary trees.
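As a rough illustration of the local-models strategy described in this abstract, the sketch below clusters delay-embedded windows of a toy series with k-means and fits one predictor per cluster; the thesis itself uses recurrent neural networks and dedicated clustering techniques, so the choice of Ridge regression, k-means and all numbers here are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

# Toy series and its delay embedding: each row is a window of 12 values,
# the target is the next value after the window.
rng = np.random.default_rng(0)
series = np.sin(np.arange(2000) / 8) + 0.1 * rng.normal(size=2000)
window = 12
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# Divide: cluster the windows; conquer: one local predictor per cluster.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
models = {c: Ridge().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(4)}

# Predict: route each new window to its cluster's local predictor.
x_new = X[-1:].copy()
pred = models[km.predict(x_new)[0]].predict(x_new)
```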
Hollocou, Alexandre. "Nouvelles approches pour le partitionnement de grands graphes." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE063.
Full text
Graphs are ubiquitous in many fields of research ranging from sociology to biology. A graph is a very simple mathematical structure that consists of a set of elements, called nodes, connected to each other by edges. It is yet able to represent complex systems such as protein-protein interactions or scientific collaborations. Graph clustering is a central problem in the analysis of graphs whose objective is to identify dense groups of nodes that are sparsely connected to the rest of the graph. These groups of nodes, called clusters, are fundamental to an in-depth understanding of graph structures. There is no universal definition of what a good cluster is, and different approaches might be best suited for different applications. Whereas most classic methods focus on finding node partitions, i.e. on coloring graph nodes so that each node has one and only one color, more elaborate approaches are often necessary to model the complex structure of real-life graphs and to address sophisticated applications. In particular, in many cases, we must consider that a given node can belong to more than one cluster. Besides, many real-world systems exhibit multi-scale structures, and one must seek hierarchies of clusters rather than flat clusterings. Furthermore, graphs often evolve over time and are too massive to be handled in one batch, so that one must be able to process streams of edges. Finally, in many applications, processing entire graphs is irrelevant or expensive, and it can be more appropriate to recover local clusters in the neighborhood of nodes of interest rather than color all graph nodes. In this work, we study alternative approaches and design novel algorithms to tackle these different problems. The novel methods that we propose are mostly inspired by variants of modularity, a classic measure that assesses the quality of a node partition, and by random walks, stochastic processes whose properties are closely related to the graph structure. We provide analyses that give theoretical guarantees for the different proposed techniques, and endeavour to evaluate these algorithms on real-world datasets and use cases.
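For readers unfamiliar with the modularity measure this abstract builds on, here is a minimal sketch using networkx; the toy graph and the greedy baseline are illustrative assumptions, not the algorithms proposed in the thesis.

```python
import networkx as nx
from networkx.algorithms import community

# Toy graph with two dense groups joined by a single bridge edge.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2),      # group 1
                  (3, 4), (3, 5), (4, 5),      # group 2
                  (2, 3)])                      # sparse bridge

# Modularity assesses the quality of a node partition: it is high when
# edges fall mostly within clusters rather than between them.
partition = [{0, 1, 2}, {3, 4, 5}]
print(community.modularity(G, partition))       # ~0.36 for this split

# A standard baseline detector driven by the same measure:
print(list(community.greedy_modularity_communities(G)))
```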
Godet, Pierre. "Approches par apprentissage pour l’estimation de mouvement multiframe en vidéo." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG005.
Full text
This work concerns the use of temporal information from a sequence of more than two images for optical flow estimation. Optical flow is defined as the dense field (at every pixel) of the apparent movements in the image plane. We study, on the one hand, the use of a basis of temporal models, learned by principal component analysis from the studied data, to model the temporal dependence of the movement. This first study focuses on the context of particle image velocimetry in fluid mechanics. On the other hand, the new state of the art in optical flow estimation having recently been established by methods based on deep learning, we train convolutional neural networks to estimate optical flow by taking advantage of temporal continuity, in the case of natural image sequences. We then propose STaRFlow, a convolutional neural network exploiting a memory of information from the past through a temporal recurrence. By repeated application of the same recurrent cell, the same learned parameters are used for the different time steps and for the different levels of a multiscale process. This architecture is lighter than competing networks while giving STaRFlow state-of-the-art performance. In the course of our work, we highlight several cases where the use of temporal information improves the quality of the estimation, in particular in the presence of occlusions, when the image quality is degraded (blur, noise), or in the case of thin objects.
Delecraz, Sébastien. "Approches jointes texte/image pour la compréhension multimodale de documents." Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0634/document.
Full text
The human faculties of understanding are essentially multimodal. To understand the world around them, human beings fuse the information coming from all of their sensory receptors. Most of the documents used in automatic information processing contain multimodal information, for example text and image in textual documents or image and sound in video documents; however, the processings used are most often monomodal. The aim of this thesis is to propose joint processes applying mainly to text and image for the processing of multimodal documents, through two studies: one on multimodal fusion for speaker role recognition in television broadcasts, the other on the complementarity of modalities for a linguistic analysis task on corpora of images with captions. In the first part of this study, we are interested in the analysis of audiovisual documents from news television channels. We propose an approach that uses, in particular, deep neural networks for the representation and fusion of modalities. In the second part of this thesis, we are interested in approaches that use several sources of multimodal information for a monomodal natural language processing task, in order to study their complementarity. We propose a complete system for the correction of prepositional attachments using visual information, trained on a multimodal corpus of images with captions.
Akerma, Mahdjouba. "Impact énergétique de l’effacement dans un entrepôt frigorifique : analyse des approches systémiques : boîte noire / boîte blanche." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS187.
Full text
Refrigerated warehouses and cold rooms, mainly used for food conservation, constitute available storage cells; they can be considered as a network of "thermal batteries" ready to be used and one of the best existing solutions to store and delay electricity consumption. However, the risk related to temperature fluctuations of products due to periods of demand response (DR) and the risk of energy overconsumption limit the use of this strategy by industry in food refrigeration. The present PhD thesis aims to characterize the electrical DR of warehouses and cold rooms by examining the thermal behavior of those systems, in terms of temperature fluctuation and electrical consumption. An experimental set-up was developed to study several DR scenarios (duration, frequency and operating conditions) and to propose new indicators to characterize the impact of DR periods on the thermal and energy behavior of refrigeration systems. This study has highlighted the importance of the presence of a load to limit the temperature rise and thus to reduce the impact on stored products. The potential for DR application in the case of a cold store and a cold room was assessed, based on the development of two modeling approaches: "black box" (machine learning by artificial neural networks using deep learning models) and "white box" (physics). A possibility of interaction between these two approaches has been proposed, based on the use of black box models for prediction and the use of the white box model to generate input and output data.
Pinault, Florian. "Apprentissage par renforcement pour la généralisation des approches automatiques dans la conception des systèmes de dialogue oral." PhD thesis, Université d'Avignon, 2011. http://tel.archives-ouvertes.fr/tel-00933937.
Full textPereira, Cécile. "Nouvelles approches bioinformatiques pour l'étude à grande échelle de l'évolution des activités enzymatiques." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112065/document.
Full text
This thesis aims to propose new methods for studying the evolution of metabolism. For that purpose, we chose to deal with the problem of comparing the metabolism of a hundred microorganisms. To compare the metabolism of various species, it is first necessary to know the metabolism of each of these species. We work with the proteomes of microorganisms coming from various databases, sequenced and annotated by different teams via different methods. The functional annotation can thus be of heterogeneous quality, which is why a standardized functional annotation of these proteomes is necessary. The annotation of protein sequences can be realized by the transfer of annotations between orthologous sequences. There are more than 39 databases listing orthologs predicted by various methods, and it is known that these methods lead to partially different predictions. To take current predictions into account while adding relevant information, we developed the meta-approach MARIO. It combines the intersections of the results of several methods for detecting groups of orthologs and adds sequences to these groups using HMM profiles. We show that our meta-approach predicts a larger number of orthologs while improving the functional similarity of the pairs of predicted orthologs. It allowed us to predict the enzymatic repertoire of 178 proteomes of microorganisms (among which 174 fungi). Secondly, we analyze these enzymatic repertoires in order to study the evolution of metabolism. To this end, we look for combinations of presence/absence of enzymatic activities that characterize a taxonomic group. It then becomes possible to deduce whether the emergence of a particular taxonomic group can be explained by (or led to) the appearance of specificities in its metabolism. For that purpose, we applied interpretable machine learning methods (rules and decision trees) to the enzymatic profiles, using the enzymatic activities as attributes, the taxonomic groups as classes and the fungi as examples. The results, coherent with our current knowledge of these species, show that applying machine learning methods is effective for extracting information from phylogenetic profiles. The metabolism thus keeps track of the evolution of the species. Furthermore, when the learned classifiers present a low number of errors, this approach can highlight the existence of likely horizontal transfers. This is the case, for example, of the transfer of the gene coding for EC 3.1.6.6 from an ancestor of Pezizomycotina towards an ancestor of Ustilago maydis.
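The kind of interpretable classifier this abstract mentions can be sketched in a few lines; the presence/absence profiles, taxon labels and EC column below are synthetic placeholders, not the thesis data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical presence/absence profiles: rows are fungal species,
# columns are enzymatic activities, classes are taxonomic groups.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(60, 12))        # 1 = activity present
X[:30, 4] = 1                                 # taxon A retains EC_4 ...
X[30:, 4] = 0                                 # ... taxon B lost it
y = np.array(["taxonA"] * 30 + ["taxonB"] * 30)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# An extracted rule such as "EC_4 present -> taxonA" is exactly the kind
# of taxon-characterizing combination searched for here.
print(export_text(tree, feature_names=[f"EC_{i}" for i in range(12)]))
```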
Morvant, Emilie. "Apprentissage de vote de majorité pour la classification supervisée et l'adaptation de domaine : approches PAC-Bayésiennes et combinaison de similarités." PhD thesis, Aix-Marseille Université, 2013. http://tel.archives-ouvertes.fr/tel-00879072.
Full textDelecraz, Sébastien. "Approches jointes texte/image pour la compréhension multimodale de documents." Electronic Thesis or Diss., Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0634.
Full textThe human faculties of understanding are essentially multimodal. To understand the world around them, human beings fuse the information coming from all of their sensory receptors. Most of the documents used in automatic information processing contain multimodal information, for example text and image in textual documents or image and sound in video documents, however the processings used are most often monomodal. The aim of this thesis is to propose joint processes applying mainly to text and image for the processing of multimodal documents through two studies: one on multimodal fusion for the speaker role recognition in television broadcasts, the other on the complementarity of modalities for a task of linguistic analysis on corpora of images with captions. In the first part of this study, we interested in audiovisual documents analysis from news television channels. We propose an approach that uses in particular deep neural networks for representation and fusion of modalities. In the second part of this thesis, we are interested in approaches allowing to use several sources of multimodal information for a monomodal task of natural language processing in order to study their complementarity. We propose a complete system of correction of prepositional attachments using visual information, trained on a multimodal corpus of images with captions
Khemiri, Abdelhak. "Approches pour la vérification et la validation des modèles de production : application à une usine de fabrication de semi-conducteurs." Electronic Thesis or Diss., Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0157.
Full text
Today, business processes have a central place in organizations, and the question of their reliability has attracted increasing attention from both industry and the scientific community. Indeed, an error or malfunction in production models can seriously weaken companies, a risk amplified by the growing importance of process automation and computerization. This thesis thus focuses on issues related to the verification and validation of the processes and production models of a manufacturing plant. To meet these needs, two approaches are proposed. The first aims at improving the informational model of production processes through an approach based on machine learning that discovers the rules corresponding to the right configuration of the informational model. An industrialization phase is carried out in a semiconductor manufacturing plant and the results obtained are presented. The second contribution concerns the impact of data on the functional perspective of a business process, which limits the use of traditional verification methods. We therefore propose an approach that combines discrete-event simulation and model checking: simulation takes advantage of experts' knowledge in order to identify a subset of states where a given property is more likely to be unsatisfied, allowing model checking to focus on this subset. The approach is tested and validated on a network-on-chip model.
Azouz, Nesrine. "Approches intelligentes pour le pilotage adaptatif des systèmes en flux tirés dans le contexte de l'industrie 4.0." Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC028/document.
Full text
Today, many production systems are managed with "pull" control and use "card-based" methods such as Kanban, ConWIP, COBACABANA, etc. Despite their simplicity and efficiency, these methods are not suitable when production is not stable and customer demand varies. In such cases, production systems must adapt the "tightness" of their production flow throughout the manufacturing process. To do this, we must determine how to dynamically adjust the number of cards (or e-cards) depending on the context. Unfortunately, these decisions are complex and difficult to make in real time. In addition, in some cases, changing the number of kanban cards too often can disrupt production and cause a nervousness problem. The opportunities offered by Industry 4.0 can be exploited to define smart flow control strategies that dynamically adapt this number of kanban cards. In this thesis, we first propose an adaptive approach based on simulation and a multi-objective optimization technique, able to take the nervousness problem into account and to decide autonomously (or to help managers decide) when and where to add or remove Kanban cards. We then propose a new adaptive and intelligent approach based on a neural network whose learning is first realized offline using a digital twin model (simulation) exploited by a multi-objective optimization method. The neural network is then able to decide in real time when and at which manufacturing stage it is relevant to change the number of kanban cards. Comparisons with the best methods published in the literature show better results with less frequent changes.
Pomot, Lucas. "Métamatériaux sismiques : transformation géométrique, homogénéisation et approche expérimentale." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0406.
Full text
First, a general process is proposed to experimentally design anisotropic inhomogeneous metamaterials obtained through a change of coordinates in the Helmholtz equation. The method is applied to the case of a cylindrical transformation that makes cloaking possible. To approximate such complex metamaterials, we apply results from the theory of homogenization and combine them with a genetic algorithm. To illustrate the power of our approach, we design three types of cloaks composed of isotropic concentric layers structured with three types of perforations: curved rectangles, split rings and crosses. These cloaks have parameters compatible with existing technology and mimic the behavior of the transformed material. We then focus on elastic waves, especially plate waves. Controlling elastic waves in plates is a major challenge previously addressed without energy considerations. We propose an energy approach to the design of plate cloaks, which prevents any unphysical features. Within this framework, it is shown that the Kirchhoff-Love equation for anisotropic heterogeneous plates is form invariant for a class of transformations with a vanishing Hessian. This formalism is detailed and numerically validated with three-dimensional simulations in the time domain. Finally, we performed a lab-scale experiment studying the interaction between surface waves propagating in rock and a network of resonators made of aluminium rods. Using high-precision measurement methods, we managed to give new insights into this type of interaction.
Lesieur, Thibault. "Factorisation matricielle et tensorielle par une approche issue de la physique statistique." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS345/document.
Full text
In this thesis we present results on low-rank matrix and tensor factorization. Matrices being such ubiquitous mathematical objects, a lot of machine learning problems can be mapped to low-rank matrix factorization. It is, for example, one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. The results presented in this thesis have been included in previous work [LKZ 201]. The problem of low-rank matrix estimation becomes harder once one adds constraints, such as the positivity of one of the factors of the factorization. We present a framework to study constrained low-rank matrix estimation for a general prior on the factors and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models, presenting a unifying way to study a number of problems previously considered in separate statistical physics works. We present a number of applications of the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model, and vector (XY, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results, we study in detail the phase diagrams and phase transitions for Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to the performance of algorithms such as Low-RAMP or commonly used spectral methods.
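As a point of reference for the spectral methods mentioned at the end of the abstract, the sketch below estimates a hidden vector in a rank-one spiked-matrix toy model from the leading eigenvector; message-passing algorithms such as Low-RAMP refine this kind of estimate using prior information, and everything below is an illustrative assumption rather than the thesis's derivation.

```python
import numpy as np

# Rank-one spike plus symmetric Gaussian noise: Y = (lam/n) x x^T + noise.
rng = np.random.default_rng(0)
n, lam = 500, 3.0
x = rng.choice([-1.0, 1.0], size=n)              # hidden +/-1 signal
noise = rng.normal(size=(n, n)) / np.sqrt(n)
Y = lam / n * np.outer(x, x) + (noise + noise.T) / np.sqrt(2)

vals, vecs = np.linalg.eigh(Y)                   # ascending eigenvalues
x_hat = np.sign(vecs[:, -1])                     # leading eigenvector
print(abs(x_hat @ x) / n)                        # overlap with the signal
```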
Gordaliza, Pastor Paula. "Fair learning : une approche basée sur le transport optimale." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30084.
Full text
The aim of this thesis is two-fold. On the one hand, optimal transportation methods are studied for statistical inference purposes. On the other hand, the recent problem of fair learning is addressed through the prism of optimal transport theory. The generalization of applications based on machine learning models in everyday life and the professional world has been accompanied by concerns about the ethical issues that may arise from the adoption of these technologies. In the first part of the thesis, we motivate the fairness problem by presenting some comprehensive results from the study of the statistical parity criterion through the analysis of the disparate impact index on the real and well-known Adult Income dataset. Importantly, we show that trying to make machine learning models fair may be a particularly challenging task, especially when the training observations contain bias. Then a review of mathematics for fairness in machine learning is given in a general setting, with some novel contributions in the analysis of the price for fairness in regression and classification. In the latter, we finish this first part by recasting the links between fairness and predictability in terms of probability metrics. We analyze repair methods based on mapping conditional distributions to the Wasserstein barycenter. Finally, we propose a random repair which yields a tradeoff between minimal information loss and a certain amount of fairness. The second part is devoted to the asymptotic theory of the empirical transportation cost. We provide a Central Limit Theorem for the Monge-Kantorovich distance between two empirical distributions with different sizes n and m, W_p(P_n, Q_m), p >= 1, for observations on R. In the case p > 1, our assumptions are sharp in terms of moments and smoothness. We prove results dealing with the choice of centering constants, and provide a consistent estimate of the asymptotic variance which enables the construction of two-sample tests and confidence intervals to certify the similarity between two distributions. These are then used to assess a new criterion of data set fairness in classification. Additionally, we provide a moderate deviation principle for the empirical transportation cost in general dimension. Finally, Wasserstein barycenters and a variance-like criterion using the Wasserstein distance are used in many problems to analyze the homogeneity of collections of distributions and structural relationships between the observations. We propose the estimation of the quantiles of the empirical process of the Wasserstein variation using a bootstrap procedure. We then use these results for statistical inference on a distribution registration model for general deformation functions. The tests are based on the variance of the distributions with respect to their Wasserstein barycenters, for which we prove central limit theorems, including bootstrap versions.
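The two quantities at the heart of this abstract are easy to compute on empirical samples; the scores and the 0.5 threshold below are hypothetical stand-ins for a real classifier's outputs.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical classifier scores for two protected groups.
rng = np.random.default_rng(0)
s0 = rng.normal(0.45, 0.10, 400)   # group 0
s1 = rng.normal(0.55, 0.10, 600)   # group 1

# 1-Wasserstein distance between two empirical distributions of
# different sizes: the statistic whose fluctuations the CLT addresses.
print(wasserstein_distance(s0, s1))

# Disparate impact at a 0.5 decision threshold: the ratio of
# positive-decision rates (the "80% rule" flags values below 0.8).
print((s0 > 0.5).mean() / (s1 > 0.5).mean())
```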
Jarry, Gabriel. "Analyse et détection des trajectoires d'approches atypiques des aéronefs à l'aide de l'analyse de données fonctionnelles et de l'apprentissage automatique." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30284.
Full text
Improving aviation safety generally involves identifying, detecting and managing undesirable events that can lead to final events with fatalities. Previous studies conducted by the French National Supervisory Authority have led to the identification of non-compliant approaches, presenting deviations from standard procedures, as undesirable events. This thesis aims to explore functional data analysis and machine learning techniques in order to provide algorithms for the detection and analysis of atypical approach trajectories from the ground side. Four research directions are investigated. The first axis aims to develop a post-operational analysis algorithm based on functional data analysis techniques and unsupervised learning for the detection of atypical behaviours in approach. The model is confronted with the analysis of airline flight safety offices, and is applied in the particular context of the COVID-19 crisis to illustrate its potential use while the global ATM system is facing a standstill. The second axis of research addresses the generation and extraction of information from radar data using new techniques such as machine learning. These methodologies improve the understanding and the analysis of trajectories, for example in the case of the estimation of on-board parameters from radar parameters. The third axis proposes novel data manipulation and generation techniques using the functional data analysis framework. Finally, the fourth axis focuses on extending the post-operational algorithm to real time with the use of optimal control techniques, giving directions for new situation-awareness alerting systems.
Bardolle, Frédéric. "Modélisation des hydrosystèmes par approche systémique." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAH006/document.
Full text
In the light of current knowledge, hydrosystems cannot be modelled as a whole since the underlying physical principles are not totally understood. Systemic models simplify hydrosystem representation by considering only water flows. The aim of this work is to provide a systemic modelling tool giving information about the physical behavior of hydrosystems while being simple and parsimonious. This model, called HMSA (for Hydrosystem Modelling with a Systemic Approach), is based on parametric transfer functions chosen for their low parametrization, their general nature and their physical interpretation. It is versatile, since its architecture is modular, and the user can choose the number of inputs, outputs and transfer functions. Inversion is done with a recent family of machine learning heuristics based on swarm intelligence, called PSO (Particle Swarm Optimization). The model and its inversion algorithms are tested first with a textbook case, and then with a real-world case.
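A bare-bones version of the PSO heuristic used for the inversion can be written in a few lines; the inertia and acceleration constants, the box bounds and the toy objective are assumptions for illustration, not the HMSA calibration setup.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over a box with a basic PSO scheme."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()           # swarm best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy "calibration": recover two transfer-function parameters by
# minimizing the misfit to a synthetic target.
target = np.array([1.2, -0.4])
best, err = pso(lambda p: float(np.sum((p - target) ** 2)), dim=2)
```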
Darmet, Ludovic. "Vers une approche basée modèle-image flexible et adaptative en criminalistique des images." Thesis, Université Grenoble Alpes, 2020. https://tel.archives-ouvertes.fr/tel-03086427.
Full text
Images are nowadays a standard and mature medium of communication. They appear in our day-to-day life and are therefore subject to concerns about security. In this work, we study different methods to assess the integrity of images. Because of a context of high volume and versatility of tampering techniques and image sources, our work is driven by the necessity to develop flexible methods that adapt to the diversity of images. We first focus on manipulation detection through statistical modeling of the images. Manipulations are elementary operations such as blurring, noise addition, or compression. In this context, we are more precisely interested in the effects of pre-processing. Because of storage limitations or other reasons, images can be resized or compressed just after their capture, so a manipulation may be applied to an already pre-processed image. We show that pre-resizing the test data induces a drop in performance for detectors trained on full-sized images. Based on these observations, we introduce two methods to counterbalance this performance loss for a classification pipeline based on Gaussian Mixture Models. This pipeline models the local statistics, on patches, of natural images, and allows us to propose adaptations of the models driven by the changes in local statistics. Our first method of adaptation is fully unsupervised, while the second, requiring only a few labels, is weakly supervised. Our methods are thus flexible enough to adapt to the versatility of image sources. We then move to falsification detection, and more precisely to copy-move identification. Copy-move is one of the most common image tampering techniques: a source area is copied into a target area within the same image. The vast majority of existing detectors identify the two zones (source and target) indifferently. In an operational scenario, only the target area represents a tampered area and is thus an area of interest. Accordingly, we propose a method to disentangle the two zones. Our method takes advantage of local modeling of statistics in natural images with Gaussian Mixture Models. The procedure is specific to each image, avoiding the need for a large training dataset and increasing flexibility. Results for all the techniques described above are illustrated on public benchmarks and compared to state-of-the-art methods. We show that the classical pipeline for manipulation detection with Gaussian Mixture Models and an adaptation procedure can surpass the results of fine-tuned, recent deep-learning methods. Our method for source/target disentangling in copy-move also matches or even surpasses the performance of the latest deep-learning methods. We explain the good results of these classical methods against deep learning by their additional flexibility and adaptation abilities. Finally, this thesis took place in the special context of a contest jointly organized by the French National Research Agency and the General Directorate of Armament. We describe in the appendix the different stages of the contest and the methods we developed, as well as the lessons we learned from this experience in moving the image forensics domain into the wild.
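The patch-based Gaussian mixture pipeline this abstract refers to can be sketched as follows; the random arrays stand in for real images, and the 5th-percentile flagging rule is an illustrative assumption, not the detector studied in the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_patches(img, size=8):
    """Collect non-overlapping size x size patches as flat vectors."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

# Fit the "natural image" model on patches from pristine material ...
rng = np.random.default_rng(0)
pristine = rng.normal(size=(128, 128))        # stand-in for real images
gmm = GaussianMixture(n_components=8, covariance_type="full",
                      random_state=0).fit(extract_patches(pristine))

# ... then score a test image: patches with low log-likelihood have
# local statistics inconsistent with the learned model.
test = rng.normal(size=(128, 128))
scores = gmm.score_samples(extract_patches(test))
suspicious = scores < np.percentile(scores, 5)
```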
Sayadi, Karim. "Classification du texte numérique et numérisé. Approche fondée sur les algorithmes d'apprentissage automatique." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066079/document.
Full text
Different disciplines in the humanities, such as philology or palaeography, face complex and time-consuming tasks whenever it comes to examining their data sources. The introduction of computational approaches in the humanities makes it possible to address issues such as semantic analysis and systematic archiving. The conceptual models developed are based on algorithms that are later hard-coded in order to automate these tedious tasks. In the first part of the thesis, we propose a novel method to build a semantic space based on topic modeling. In the second part, in order to classify historical documents according to their script, we propose a novel representation learning method based on stacked convolutional auto-encoders. The goal is to automatically learn representations of the script or the written language.
Trullo, Ramirez Roger. "Approche basées sur l'apprentissage en profondeur pour la segmentation des organes à risques dans les tomodensitométries thoraciques." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMR063.
Full text
Radiotherapy is one of the treatment options currently available for patients affected by cancer, one of the leading causes of death worldwide. Before radiotherapy, organs at risk (OAR) located near the target tumor, such as the heart, the lungs or the esophagus in thoracic cancer, must be outlined in order to minimize the quantity of irradiation they receive during treatment. Today, segmentation of the OAR is performed mainly manually by clinicians on Computed Tomography (CT) images, despite some partial software support. It is a tedious task, prone to intra- and inter-observer variability. In this work, we present several frameworks using deep learning techniques to automatically segment the heart, trachea, aorta and esophagus. The esophagus is notably challenging to segment, due to its lack of surrounding contrast and its shape variability across patients. As deep networks, and in particular fully convolutional networks, now offer state-of-the-art performance for semantic segmentation, we first show how a specific type of architecture based on skip connections can improve the accuracy of the results. As a second contribution, we demonstrate that context information can be of vital importance in the segmentation task, for which we propose the use of two collaborative networks. Third, we propose a different, distance-aware representation of the data, which is then used in conjunction with adversarial networks as another way to constrain the anatomical context. All the proposed methods have been tested on 60 patients with 3D CT scans, showing good performance compared with other methods.
Castro, Márcio. "Optimisation de la performance des applications de mémoire transactionnelle sur des plates-formes multicoeurs : une approche basée sur l'apprentissage automatique." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM074/document.
Full text
Multicore processors are now a mainstream approach to deliver higher performance to parallel applications. In order to develop efficient parallel applications for those platforms, developers must take care of several aspects, ranging from the architectural to the application level. In this context, Transactional Memory (TM) appears as a programmer-friendly alternative to traditional lock-based concurrency for those platforms. It allows programmers to write parallel code as transactions, which are guaranteed to execute atomically and in isolation regardless of possible data races. At runtime, transactions are executed speculatively and conflicts are solved by re-executing conflicting transactions. Although TM intends to simplify concurrent programming, the best performance can only be obtained if the underlying runtime system matches the application and platform characteristics. The contributions of this thesis concern the analysis and improvement of the performance of TM applications based on Software Transactional Memory (STM) on multicore platforms. Firstly, we show that the TM model makes the performance analysis of TM applications a daunting task. To tackle this problem, we propose a generic and portable tracing mechanism that gathers specific TM events, allowing us to better understand the performance obtained. The traced data can be used, for instance, to discover whether the TM application presents points of contention or whether the contention is spread out over the whole execution. Our tracing mechanism can be used with different TM applications and STM systems without any changes to their original source code. Secondly, we address the performance improvement of TM applications on multicores. We point out that thread mapping is very important for TM applications and can considerably improve the global performance achieved. To deal with the large diversity of TM applications, STM systems and multicore platforms, we propose an approach based on machine learning to automatically predict suitable thread mapping strategies for TM applications. During a prior learning phase, we profile several TM applications running on different STM systems to construct a predictor. We then use the predictor to perform static or dynamic thread mapping in a state-of-the-art STM system, making it transparent to the users. Finally, we perform an experimental evaluation and show that the static approach is fairly accurate and can improve the performance of a set of TM applications by up to 18%. Concerning the dynamic approach, we show that it can detect different phase changes during the execution of TM applications composed of diverse workloads, predicting thread mappings adapted to each phase. On those applications, we achieve performance improvements of up to 31% in comparison to the best static strategy.
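The learning phase described above amounts to fitting a classifier from profiling features to the best-performing mapping; the features, labels and random forest below are assumptions sketching the idea, not the thesis's actual predictor.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical profiling features per run: abort ratio, mean transaction
# length, contention level, last-level-cache miss ratio.
rng = np.random.default_rng(1)
X = rng.random((200, 4))
# Hypothetical "best mapping" labels (0=default, 1=compact, 2=scatter,
# 3=round-robin), here synthetically tied to contention and cache misses.
y = (X[:, 2] > 0.5).astype(int) + 2 * (X[:, 3] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # predictor accuracy
# At runtime, the predicted label would select the thread mapping
# strategy, statically before execution or dynamically per phase.
```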
Piette, Eric. "Une nouvelle approche au General Game Playing dirigée par les contraintes." Thesis, Artois, 2016. http://www.theses.fr/2016ARTO0401/document.
Full text
The ability of a computer program to effectively play any strategic game, often referred to as General Game Playing (GGP), is a key challenge in AI. The GGP competitions, where any game is represented according to a set of logical rules in the Game Description Language (GDL), have led researchers to compare various approaches, including Monte Carlo methods, automatic construction of evaluation functions, logic programming, and answer set programming, through several general game players. In this thesis, we offer a new approach driven by stochastic constraints. We first focus on a translation process from GDL to stochastic constraint networks (SCSP) in order to provide compact representations of strategic games and to model strategies. In a second part, we exploit a fragment of SCSP through an algorithm called MAC-UCB, coupling the MAC (Maintaining Arc Consistency) algorithm, used to solve each stage of the SCSP in turn, with the UCB (Upper Confidence Bound) policy for approximating the values of the strategies obtained at the last stage in the sequence. The efficiency of this technique over the other GGP approaches is confirmed by WoodStock, which implements MAC-UCB and is the current leader of the GGP Continuous Tournament. Finally, in the last part, we propose an alternative approach to symmetry detection in stochastic games, inspired by constraint programming techniques. We demonstrate experimentally that MAC-UCB, coupled with our constraint-based symmetry detection approach, significantly outperforms the best approaches and made WoodStock the 2016 GGP champion.
Boisbunon, Aurélie. "Sélection de modèle : une approche décisionnelle." PhD thesis, Université de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00793898.
Full textRibault, Alnour. "Optimisation de la consommation d’énergie d’un entrepôt frigorifique : double approche par la recherche opérationnelle et l’apprentissage automatique." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSE2008.
Full text
Cold stores in Europe consume large amounts of energy to maintain cold rooms at low temperatures. The cold production control method most commonly used in cold stores does not account for variations in the price of electricity caused by the fluctuating needs of the electrical network. The thermal inertia of the cold rooms, as well as the coolant tank, could be used as energy storage. Moreover, the compressors are often used at suboptimal production levels. These practices lead to extra energy costs. In the present research work, two approaches are proposed to improve the control of cold stores. The first approach is based on the mathematical modelling of the cold stores and on the application of optimisation algorithms to those models in order to generate energy consumption schedules with minimal cost. The second approach, based on machine learning techniques, aims at establishing the best production decision in a given context by predicting the future cost generated by each possible production decision. These two approaches are compared to the most common control method for cold stores.
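The first, optimisation-based approach can be illustrated with a small linear program that shifts a day's cooling energy into cheap hours; the prices, capacity and daily need below are made-up numbers, not the thesis's model.

```python
import numpy as np
from scipy.optimize import linprog

# 24 hourly slots, hypothetical electricity prices (EUR/kWh) and a daily
# cooling-energy requirement; thermal inertia lets production be shifted.
price = np.array([0.10] * 7 + [0.25] * 12 + [0.10] * 5)   # cheap nights
daily_need = 120.0                                         # kWh of cooling
max_per_hour = 10.0                                        # compressor cap

# Minimize sum(price_t * e_t) s.t. sum(e_t) = daily_need, 0 <= e_t <= cap.
res = linprog(c=price,
              A_eq=np.ones((1, 24)), b_eq=[daily_need],
              bounds=[(0, max_per_hour)] * 24, method="highs")
schedule = res.x   # consumption is pushed into the cheap hours
```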
Niaf, Émilie. "Aide au diagnostic du cancer de la prostate par IRM multi-paramétrique : une approche par classification supervisée." Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10271/document.
Full text
Prostate cancer is one of the leading causes of death in France. Multi-parametric MRI is considered the most promising technique for cancer visualisation, opening the way to focal treatments as an alternative to prostatectomy. Nevertheless, its interpretation remains difficult and subject to inter- and intra-observer variability, which motivates the development of expert systems to assist radiologists in making their diagnosis. We propose an original computer-aided diagnosis system returning a malignancy score for any suspicious region outlined on MR images, which can be used as a second view by radiologists. The CAD performance is evaluated on a clinical database of 30 patients, exhaustively and reliably annotated thanks to the histological ground truth obtained via prostatectomy. Finally, we demonstrate the influence of this system in clinical conditions through a ROC analysis involving 12 radiologists, and show a significant increase in diagnostic accuracy and rating confidence, and a decrease in inter-expert variability. Building an anatomo-radiological correlation database is a complex and tedious task, so numerous studies base their evaluation on the expertise of one experienced radiologist, which is thus doomed to contain uncertainties. We propose a new classification scheme, based on the support vector machine (SVM) algorithm, which is able to account for uncertain data during the learning step. The results obtained, both on toy examples and on our clinical database, demonstrate the potential of this new approach, which can be extended to any machine learning problem relying on a probabilistically labelled dataset.
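One simple way to let an SVM account for label uncertainty, sketched below, is to weight each training example by the confidence of its label; the thesis proposes a dedicated scheme, so this is only an illustration of the idea on synthetic data.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 2-D features with probabilistic labels p in [0, 1]
# (e.g. an estimated probability that a region is malignant).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
p = 1 / (1 + np.exp(-X[:, 0] - X[:, 1]))        # synthetic label beliefs
y = (p > 0.5).astype(int)                        # hard labels
w = np.abs(2 * p - 1)                            # confidence as weight

# Down-weighting uncertain examples lets a standard SVM pay less
# attention to unreliable annotations during learning.
clf = SVC(kernel="rbf", C=1.0).fit(X, y, sample_weight=w)
```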
Jabaian, Bassam. "Systèmes de compréhension et de traduction de la parole : vers une approche unifiée dans le cadre de la portabilité multilingue des systèmes de dialogue." PhD thesis, Université d'Avignon, 2012. http://tel.archives-ouvertes.fr/tel-00818970.
Full textMaaloul, Mohamed. "Approche hybride pour le résumé automatique de textes : Application à la langue arabe." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4778.
Full text
This thesis falls within the framework of Natural Language Processing. The problem of automatic summarization of Arabic documents addressed in this thesis revolves around two points. The first concerns the criteria used to determine the essential content to extract. The second focuses on the means of expressing the extracted essential content in the form of a text targeting the user's potential needs. In order to show the feasibility of our approach, we developed the "L.A.E" system, based on a hybrid approach which combines symbolic analysis with numerical processing. The evaluation results are encouraging and prove the performance of the proposed hybrid approach. They showed, initially, the applicability of the approach in the context of single documents without restriction as to their topics (education, sport, science, politics, interaction, etc.), their content and their volume. They also showed the importance of machine learning in the phase of classification and selection of the sentences forming the final extract.
Qamar, Ali Mustafa. "Mesures de similarité et cosinus généralisé : une approche d'apprentissage supervisé fondée sur les k plus proches voisins." PhD thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM083.
Full text
Almost all machine learning problems depend heavily on the metric used. Many works have shown that it is a far better approach to learn the metric structure from the data than to assume a simple geometry based on the identity matrix. This has paved the way for a new research theme called metric learning. Most of the works in this domain have based their approaches on distance learning only. However, some other works have shown that similarity should be preferred over distance metrics when dealing with textual as well as non-textual datasets. Being able to efficiently learn appropriate similarity measures, as opposed to distances, is thus of high importance for various collections. While several works have partially addressed this problem for different applications, no previous work is known which has fully addressed it in the context of learning similarity metrics for kNN classification. This is exactly the focus of the current study. In the case of information filtering systems, where the aim is to filter an incoming stream of documents into a set of predefined topics with little supervision, cosine-based, category-specific thresholds can be learned. Learning such thresholds can be seen as a first step towards learning a complete similarity measure. This strategy was used to develop online and batch algorithms for information filtering during the INFILE (Information Filtering) track of the CLEF (Cross Language Evaluation Forum) campaign in 2008 and 2009. However, provided enough supervised information is available, as is the case in classification settings, it is usually beneficial to learn a complete metric as opposed to learning thresholds. To this end, we developed numerous algorithms for learning complete similarity metrics for kNN classification. An unconstrained similarity learning algorithm called SiLA is developed, in which the normalization is independent of the similarity matrix. SiLA encompasses, among others, the standard cosine measure, as well as the Dice and Jaccard coefficients. SiLA is an extension of the voted perceptron algorithm and allows learning different types of similarity functions (based on diagonal, symmetric or asymmetric matrices). We then compare SiLA with RELIEF, a well-known feature re-weighting algorithm. It has recently been suggested by Sun and Wu that RELIEF can be seen as a distance metric learning algorithm optimizing a cost function which is an approximation of the 0-1 loss. We show here that this approximation is loose, and propose a stricter version closer to the 0-1 loss, leading to a new, and better, RELIEF-based algorithm for classification. We then focus on a direct extension of the cosine similarity measure, defined as a normalized scalar product in a projected space. The associated algorithm is called the generalized Cosine simiLarity Algorithm (gCosLA). All of the algorithms are tested on many different datasets. A statistical test, the s-test, is employed to assess whether the results are significantly different. gCosLA performed statistically much better than SiLA on many of the datasets. Furthermore, SiLA and gCosLA were compared with many state-of-the-art algorithms, illustrating their well-foundedness.
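In the spirit of SiLA's perceptron-style updates, a similarity matrix A defining sim(x, y) = x^T A y can be nudged whenever a same-class example is not more similar to x than a different-class one; the update rule, the zero margin and the toy data below are reconstructions for illustration, not the published algorithm (which also includes normalization and voting).

```python
import numpy as np

def sila_like_update(A, x, same, diff, lr=0.1):
    """One perceptron-style step: if x is not more similar to a same-class
    example than to a different-class one, nudge A to correct that."""
    if x @ A @ same <= x @ A @ diff:              # margin violated
        A += lr * (np.outer(x, same) - np.outer(x, diff))
    return A

# Toy usage: learn a full (possibly asymmetric) similarity matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)
A = np.eye(5)
for i in range(len(X)):
    same = X[rng.choice(np.flatnonzero(y == y[i]))]
    diff = X[rng.choice(np.flatnonzero(y != y[i]))]
    A = sila_like_update(A, X[i], same, diff)
```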
Kosowska-Stamirowska, Zuzanna. "Évolution et robustesse du réseau maritime mondial : une approche par les systèmes complexes." Thesis, Paris 1, 2020. http://www.theses.fr/2020PA01H022.
Full text
Over 70% of the total value of international trade is carried by sea, accounting for 80% of all cargo in terms of volume. In 2016, the UN Secretary General drew attention to the role of maritime transport, describing it as "the backbone of global trade and of the global economy". Maritime trade flows impact not only the economic development of the regions concerned, but also their ecosystems: moving ships are an important vector of spread for bioinvasions. Shipping routes are constantly evolving and likely to be affected by the consequences of climate change, while at the same time ships are a considerable source of air pollution, with CO2 emissions at a level comparable to Germany, and NOx and SOx emissions comparable to the United States. With the development of Arctic shipping becoming a reality, the need to understand the behavior of this system and to forecast future maritime trade flows reasserts itself. Despite their scope and crucial importance, studies of maritime trade flows on a global scale, based on data and formal methods, are scarce, and even fewer studies address the question of their evolution. In this thesis we use a unique database on the daily movements of the world fleet between 1977 and 2008, provided by the maritime insurer Lloyd's, to build a complex network of maritime trade flows where ports stand for nodes and links are created by ship voyages. We perform a data-driven analysis of the maritime trade network, using tools from Complexity Science and Machine Learning applied to network data to study the network's properties and to develop models for predicting the opening of new shipping lines and for forecasting future trade volumes on links. Applying Machine Learning to analyse networked trade flows appears to be a new approach with respect to the state of the art, and required careful selection and customization of existing Machine Learning tools to make them fit networked data on physical flows. The results of the thesis suggest a hypothesis of trade following a random walk on the underlying network structure. [...] Thanks to a natural experiment involving traffic redirection from the port of Kobe after the 1995 earthquake, we find that the traffic was redirected preferentially to ports which had the highest number of Common Neighbors with Kobe before the cataclysm. Then, by simulating targeted attacks on the maritime trade network, we analyse the criteria which may serve to maximize the harm done to the network, and the overall robustness of the network to different types of attacks. All these results hint that maritime trade flows follow a form of random walk on the network of sea connections, which provides evidence for a novel view of the nature of trade flows.
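The Common Neighbors criterion highlighted by the Kobe natural experiment is straightforward to compute; the toy port network below is invented for illustration.

```python
import networkx as nx

# Toy port network; edges stand for existing shipping lines.
G = nx.Graph([("Kobe", "Busan"), ("Kobe", "Shanghai"),
              ("Busan", "Yokohama"), ("Shanghai", "Yokohama"),
              ("Shanghai", "Singapore")])

# Rank candidate new links from Kobe by the number of common neighbors,
# the criterion the natural experiment above points to.
candidates = [n for n in G if n != "Kobe" and not G.has_edge("Kobe", n)]
scores = {n: len(list(nx.common_neighbors(G, "Kobe", n)))
          for n in candidates}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
# Yokohama shares Busan and Shanghai with Kobe -> score 2
```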
Mazac, Sébastien. "Approche décentralisée de l'apprentissage constructiviste et modélisation multi-agent du problème d'amorçage de l'apprentissage sensorimoteur en environnement continu : application à l'intelligence ambiante." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10147/document.
Full text
The theory of cognitive development of Jean Piaget (1923) is a constructivist perspective on learning that has substantially influenced the cognitive science domain. Within AI, many works have tried to take inspiration from this paradigm since the beginning of the discipline. Indeed, it seems that constructivism is a possible path to overcome the limitations of classical techniques stemming from cognitivism or connectionism, and to create autonomous agents endowed with strong adaptation abilities within their environment, modelled on biological organisms. Potential applications concern intelligent agents in interaction with a complex environment, with objectives that cannot be predefined. Like robotics, Ambient Intelligence (AmI) is a rich and ambitious paradigm that represents a high-complexity challenge for AI. In particular, as part of constructivist theory, the agent has to build a representation of the world relying only on the learning of sensorimotor patterns from its own experience. This step is difficult to set up for systems in continuous environments, using raw sensor data without a priori modelling. Using multi-agent systems, we investigate the development of new techniques in order to adapt the constructivist approach of learning to actual cases. We therefore use ambient intelligence as a reference domain for the application of our approach.
Dubois-Chevalier, Julie. "Chimiothèque : vers une approche rationnelle pour la sélection de sous-chimiothèques." PhD thesis, Université d'Orléans, 2011. http://tel.archives-ouvertes.fr/tel-00675250.
Full textBanus, Cobo Jaume. "Coeur & Cerveau. Lien entre les pathologies cardiovasculaires et la neurodégénérescence par une approche combinée biophysique et statistique." Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4030.
Full textClinical studies have identified several cardiovascular risk factors associated to dementia and cardiac pathologies, but their pathological interaction remains poorly understood. Classically, the investigation of the heart-brain relationship is mostly carried out through statistical analysis exploring the association between cardiac indicators and cognitive biomarkers. This kind of investigations are usually performed in large-scale epidemiological datasets, for which joint measurements of both brain and heart are available. For this reason, most of these analyses are performed on cohorts representing the general population. Therefore, the generalisation of these findings to dementia studies is generally difficult, since extensive assessments of cardiac and cardiovascular function in currently available dementia dataset is usually lacking. Another limiting factor of current studies is the limited interpretability of the complex pathophysiological relations between heart and brain allowed by standard correlation analyses. Improving our understanding of the implications of cardiovascular function in dementia ultimately requires the development of more refined mechanistic models of cardiac physiology, as well as the development of novel approaches allowing to integrate these models with image-based brain biomarkers. To address these challenges, in this thesis we developed new computational tools based on the integration of mechanistic models within a statistical learning framework. First, we studied the association between non-observable physiological indicators, such as cardiac contractility, and brain-derived imaging features. To this end, the parameter-space of a mechanistic model of the cardiac function was constrained during the personalisation stage based on the relationships between the parameters of the cardiac model and brain information. This allows to tackle the ill-posedness of the inverse problem associated to model personalisation, and obtain patient-specific solutions that are comparable population-wise.Second, we developed a probabilistic imputation model that allows to impute missing cardiac information in datasets with limited data. The imputation leverages on the cardiac-brain dynamics learned in a large-scale population analysis, and uses this knowledge to obtain plausible solutions in datasets with partial data. The generative nature of the approach allows to simulate the evolution of cardiac model parameters as brain features change. The framework is based on a conditional variational autoencoder (CVAE) combined with Gaussian process (GP) regression. Third, we analysed the potential role of cardiac model parameters as early biomarkers for dementia, which could help to identify individuals at risk. To this end, we imputed missing cardiac information in an Alzheimer's disease (AD) longitudinal cohort. Next, via disease progression modelling we estimated the disease stage for each individual based on the evolution of biomarkers. This allowed to obtain a model of the disease evolution, to analyse the role of cardiac function in AD, and to identify cardiac model parameters as potential early-stage biomarkers of dementia. These results demonstrate the importance of the developed tools by providing clinically plausible associations between cardiac model parameters and brain imaging features in an epidemiological dataset, as well as highlighting insights about the physiological relationship between cardiac function and dementia biomarkers. 
The obtained results open new research directions, such as the use of more complex mechanistic models to better characterise the heart-brain relationship, or the use of biophysical cardiac models to derive in-silico biomarkers for identifying individuals at risk of dementia in clinical routine and/or for their inclusion in neuroprotective trials.
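The imputation framework described above pairs a CVAE with GP regression. As a rough illustration of the CVAE component only, the sketch below learns to reconstruct cardiac parameters conditioned on brain features, so that decoding from the prior yields plausible cardiac values for subjects with brain data alone. All dimensions, variable names, and the toy training loop are our own assumptions, not the thesis implementation.

```python
# Minimal CVAE sketch: impute cardiac model parameters (x) conditioned on
# brain imaging features (c). Purely illustrative; dimensions and training
# details are assumptions, not those of the thesis.
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, x_dim=4, c_dim=8, z_dim=2, h_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar

model = CVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 4)   # toy cardiac parameters (e.g. contractility)
c = torch.randn(256, 8)   # toy brain features (e.g. volumes, lesion load)
for _ in range(200):
    x_hat, mu, logvar = model(x, c)
    recon = ((x_hat - x) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
    loss = recon + kl  # standard ELBO with Gaussian likelihood
    opt.zero_grad(); loss.backward(); opt.step()

# Imputation: for a subject with brain features only, decode from the prior.
c_new = torch.randn(1, 8)
z = torch.randn(1, 2)
x_imputed = model.dec(torch.cat([z, c_new], dim=1))
```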
Castellano, Aloïs. "Étude des effets de la température sur les combustibles nucléaires par une approche ab initio." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS062.
Full text
To ensure the security of nuclear electricity production, an understanding of the behavior of nuclear fuel materials is necessary. This work aims to contribute to the study of the effects of temperature on nuclear fuels, using an ab initio approach through density functional theory and ab initio molecular dynamics (AIMD). To explicitly take temperature into account, a non-perturbative lattice dynamics method is formalised, allowing the study of the evolution of phonons and thermodynamic properties with temperature. In order to reduce the high computational cost of AIMD, a machine-learning-based sampling method is developed, which accelerates the simulation of materials at finite temperature. These methods are applied to describe the stabilisation of the uranium-molybdenum alloy at high temperature, as well as the lattice dynamics of uranium and plutonium dioxides.
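The abstract does not detail the sampling scheme, but the general idea behind machine-learning-accelerated AIMD can be sketched: fit a cheap surrogate from a handful of expensive ab initio snapshots, then use it to screen new configurations. Everything below (the kernel ridge surrogate, the descriptor, the stand-in energy function) is a hypothetical toy, not the method developed in the thesis.

```python
# Toy surrogate for machine-learning-accelerated sampling: learn an
# energy model from a few "ab initio" (here synthetic) snapshots, then
# evaluate candidate configurations cheaply. Illustrative only.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

def descriptors(positions):
    """Crude descriptor: sorted pairwise distances of a configuration."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return np.sort(d[np.triu_indices(len(positions), k=1)])

def expensive_energy(positions):
    """Stand-in for a DFT/AIMD energy call (a simple pair potential here)."""
    r = descriptors(positions)
    return np.sum((1.0 / r) ** 12 - 2 * (1.0 / r) ** 6)

# A few "expensive" training snapshots (8 atoms, random displacements).
base = np.arange(24).reshape(8, 3) * 0.3
train_pos = [base + rng.normal(scale=0.1, size=(8, 3)) for _ in range(50)]
X = np.array([descriptors(p) for p in train_pos])
y = np.array([expensive_energy(p) for p in train_pos])

surrogate = KernelRidge(kernel="rbf", alpha=1e-6, gamma=1.0).fit(X, y)

# Cheap screening of many candidate configurations with the surrogate.
candidates = [base + rng.normal(scale=0.1, size=(8, 3)) for _ in range(1000)]
pred = surrogate.predict(np.array([descriptors(p) for p in candidates]))
print("lowest predicted energy:", pred.min())
```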
Loisel, Julie. "Détection des ruptures de la chaîne du froid par une approche d'apprentissage automatique." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASB014.
Full text
The cold chain is essential to ensure food safety and avoid food waste. Wireless sensors are increasingly used to monitor air temperature through the cold chain; however, the exploitation of these measurements is still limited. This thesis explores how machine learning can be used to predict the temperature of different types of food products from the air temperature measured in a pallet, and to detect cold chain breaks. We first introduced a definition of a cold chain break based on two main product categories: products that must be preserved at a regulated temperature, such as meat and fish, and products for which a temperature is recommended, such as fruits and vegetables. A cold chain break leads to food poisoning for the first category and to organoleptic quality degradation for the second. For temperature-regulated products, it is crucial to predict the product temperature to ensure that it does not exceed the regulatory temperature. Although several studies have demonstrated the effectiveness of neural networks for this prediction, none has compared synthetic and experimental data for training them. In this thesis, we compared these two types of data in order to provide guidelines for the development of neural networks. In practice, products and packaging are diverse, and experiments for each application are impossible due to the complexity of implementation. By comparing synthetic and experimental data, we were able to determine best practices for developing neural networks that predict product temperature and help maintain the cold chain. Once a cold chain break is detected for temperature-regulated products, they are no longer consumable and must be discarded. For temperature-recommended products, we compared three approaches to detect cold chain breaks and trigger corrective actions: a) a method based on a temperature threshold, b) a method based on a classifier that determines whether the products will be delivered with the expected qualities, and c) a method also based on a classifier but which integrates the cost of the corrective measure into the decision-making process. The performance of the three methods is discussed and prospects for improvement are proposed.
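Approaches b) and c) differ only in how the classifier's output is turned into a decision: b) acts on the predicted label, while c) compares the expected loss of not acting against the cost of the corrective measure. A hedged sketch of that difference follows; the features, cost figures, and classifier choice are all invented for illustration.

```python
# Sketch of classifier-based cold chain break detection (approaches b and c).
# Features, costs, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Toy features per shipment: mean temp, max temp, hours above 8 degrees C.
X = rng.normal(loc=[5.0, 7.0, 1.0], scale=[1.0, 2.0, 1.5], size=(500, 3)).clip(min=0)
# Toy label: 1 if quality at delivery is degraded.
y = ((X[:, 1] > 9.0) & (X[:, 2] > 2.0)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

shipment = np.array([[5.5, 10.0, 2.5]])
p_break = clf.predict_proba(shipment)[0, 1]

# Approach b): act on the predicted class alone.
act_b = p_break > 0.5

# Approach c): act only if the expected loss without action exceeds the
# cost of the corrective measure (illustrative monetary values).
cost_corrective = 50.0     # e.g. rerouting or early sale
loss_if_degraded = 400.0   # value lost if degraded products are delivered
act_c = p_break * loss_if_degraded > cost_corrective

print(f"P(break)={p_break:.2f}, act_b={act_b}, act_c={act_c}")
```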
Shahzad, Atif. "Une Approche Hybride de Simulation-Optimisation Basée sur la fouille de Données pour les problèmes d'ordonnancement." Phd thesis, Université de Nantes, 2011. http://tel.archives-ouvertes.fr/tel-00647353.
Full text
Raimondo, Federico. "Normalisation et automatisation du diagnostic des patients atteints de troubles de la conscience : une approche par apprentissage automatique appliquée aux signaux électrophysiologiques du cerveau et du corps." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS321.
Full text
Advances in modern medicine have led to an increase in patients diagnosed with disorders of consciousness (DOC). In these conditions, patients are awake but show no behavioural signs of awareness. An accurate evaluation of DOC patients has medico-ethical and societal implications and is of crucial importance because it typically informs prognosis. Misdiagnosis, however, is a major concern in clinics due to the intrinsic limitations of behavioural tools. One accessible assistive methodology for clinicians is electroencephalography (EEG). In a previous study, we introduced the use of EEG-extracted markers and machine learning as a tool for the diagnosis of DOC patients. In this work, we developed an automated analysis tool and analysed the applicability and limitations of this method. Additionally, we proposed two approaches to enhance its accuracy: (1) the use of multiple stimulation modalities to include neural correlates of multisensory integration, and (2) the analysis of consciousness-mediated modulations of cardiac activity. Our results exceed the current state of knowledge in two dimensions. Clinically, we found that the method can be used in heterogeneous contexts, confirming the utility of machine learning as an automated tool for clinical diagnosis. Scientifically, our results highlight that brain-body interactions might be the fundamental mechanism supporting the fusion of multiple senses into a unique percept, leading to the emergence of consciousness. Taken together, this work illustrates the importance of machine learning for individualised clinical assessment and paves the way for including bodily functions when quantifying global states of consciousness.
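The pipeline described (EEG-extracted markers fed to a machine-learning classifier) can be sketched generically. The band definitions, feature choice, and classifier below are placeholder assumptions of ours, not the markers or model used in the thesis.

```python
# Generic sketch of EEG-marker-based classification for DOC diagnosis.
# Band powers as features and a random forest are placeholder choices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
fs = 250                                 # sampling rate (Hz), assumed
n_pat, n_ch, n_samp = 60, 16, fs * 10    # toy dataset: 60 patients, 10 s each

def band_powers(epoch, fs, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """Mean spectral power per frequency band, averaged over channels."""
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    return np.array([psd[:, (freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands])

eeg = rng.normal(size=(n_pat, n_ch, n_samp))   # toy EEG epochs
labels = rng.integers(0, 2, size=n_pat)        # toy diagnosis labels
features = np.array([band_powers(e, fs) for e in eeg])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, features, labels, cv=5).mean())
```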
Deleforge, Antoine. "Projection d'espaces acoustiques: une approche par apprentissage automatisé de la séparation et de la localisation." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00913965.
Full text
Berkane, Yassamina. "Time organization for urban mobility congestion : an interdisciplinary approach for Bureaux des Temps." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG071.
Full text
Population growth and urbanization are major challenges for the management of cities and their resources. Urban mobility is one of these challenges because of its impact on quality of life, economic productivity, and environmental sustainability. In this thesis, the concept of time management is integrated into a new approach to reducing urban mobility congestion. Traditional methods primarily focus on spatial aspects, neglecting the temporal dimension. We concentrate on integrating concepts from the Social and Human Sciences (SHS), particularly sociological concepts, into Sciences and Technologies of Information and Communication (STIC) to propose a decision-support solution for the time bureaus (Bureaux des Temps), which aim to modify temporalities in order to reduce urban congestion. The proposed approach consists of three phases: identifying mobility profiles, analyzing traffic congestion, and making temporal decisions. Mobility profiles are predicted using learning techniques that take sociological criteria into account. Traffic congestion analysis relies on Waze data. Finally, time series models allow us to predict congestion levels and propose optimized departure times that avoid urban congestion. The proposed solution has the potential to integrate heterogeneous data into congestion management for more harmonious cities.
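The final phase, forecasting congestion to suggest departure times, can be illustrated with a simple lag-feature forecaster. The 15-minute sampling step, the lag count, and the regressor below are placeholder assumptions; the thesis does not specify its models here.

```python
# Toy congestion forecaster: predict the congestion level of each slot
# from recent history, then pick the calmest departure slot in a window.
# Sampling step, lags, and model are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
# Synthetic congestion index, one value per 15-minute slot over 60 days,
# with a daily rush-hour pattern plus noise.
slots_per_day = 96
t = np.arange(60 * slots_per_day)
congestion = (np.sin(2 * np.pi * t / slots_per_day) ** 2
              + 0.1 * rng.normal(size=t.size))

n_lags = 8  # two hours of history as features
X = np.array([congestion[i - n_lags:i] for i in range(n_lags, t.size)])
y = congestion[n_lags:]

model = GradientBoostingRegressor(random_state=0).fit(X[:-96], y[:-96])

# Forecast the held-out last day and pick the calmest departure slot
# within a 7:00-10:00 window (slots 28 to 39).
pred = model.predict(X[-96:])
best = 28 + int(np.argmin(pred[28:40]))
print(f"suggested departure slot: {best} (={best // 4:02d}:{(best % 4) * 15:02d})")
```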
Silva Bernardes, Juliana. "Evolution et apprentissage automatique pour l'annotation fonctionnelle et la classification des homologies lointains en protéines." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00684155.
Full text
Zowid, Fauzi Mohammed. "Development and performance evaluation of multi-criteria inventory classification methods." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0331.
Full text
This thesis deals with inventory classification within supply chains. More specifically, it aims to provide new alternative classification methods to address the multi-criteria inventory classification (MCIC) problem. The ABC inventory classification technique is widely used to streamline inventory systems composed of thousands of stock-keeping units (SKUs). Single-criterion inventory classification (SCIC) methods are often used in practice, and recently MCIC techniques have also attracted researchers and practitioners. With regard to MCIC techniques, a large number of studies have been developed, belonging to three main approaches: (1) machine learning (ML), (2) mathematical programming (MP), and (3) multi-criteria decision making (MCDM). In the ML approach, many supervised methods have been proposed, as well as a number of hybrid methods; to the best of our knowledge, however, very few studies have considered the unsupervised type. In the MP approach, a number of methods have been developed using linear and non-linear programming, such as the Ng and ZF methods, yet most of them would still benefit from further attention to address their shortcomings. In the MCDM approach, several methods have been proposed to provide ABC classifications, including TOPSIS (technique for order preference by similarity to ideal solution), well known for its wide attractiveness and use, as well as some hybrid TOPSIS methods. It is worth noting that most published studies have focused on ranking the SKUs of an inventory system without addressing the original and most important goal of this exercise: achieving combined service-cost inventory performance, i.e. maximizing service levels while minimizing inventory costs. Moreover, most existing studies have not considered large, real-life datasets when recommending MCIC techniques for real-life implementation. This thesis therefore first evaluates the inventory performance (cost and service) of existing MCIC methods and then provides alternative classification methods that lead to higher service and cost performance. More specifically, three unsupervised machine learning methods are proposed and analyzed: agglomerative hierarchical clustering, the Gaussian mixture model, and K-means. In addition, other hybrid methods within the MP and MCDM approaches are developed, hybridizing the TOPSIS and Ng methods with the triangular distribution, simple additive weighting (SAW), and the multi-objective optimization method by ratio analysis (MOORA). To conduct our research, the thesis empirically analyzes the performance of the proposed methods by means of two datasets containing more than nine thousand SKUs in total. The first is a benchmark dataset of 47 SKUs originating from a Hospital Respiratory Therapy Unit, often used in the literature dealing with MCIC methods. The second consists of 9,086 SKUs from a retailer in the Netherlands. The performance of the proposed methods is compared to that of existing MCIC classification methods in the literature. The empirical results reveal that the proposed methods achieve promising performance, leading to higher combined service-cost efficiency.
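To make the unsupervised route concrete, here is a minimal sketch of K-means-based ABC classification: SKUs are clustered on standardized criteria and clusters are relabelled A/B/C by descending mean annual dollar usage. The criteria names and the cluster-to-class rule are our own assumptions, not the thesis procedure.

```python
# Minimal K-means ABC classification sketch. Criteria and the cluster-to-
# class mapping rule are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_sku = 47
# Toy MCIC criteria: annual dollar usage, average unit cost, lead time.
criteria = np.column_stack([
    rng.lognormal(mean=7, sigma=1, size=n_sku),    # annual dollar usage
    rng.lognormal(mean=3, sigma=0.5, size=n_sku),  # unit cost
    rng.integers(1, 15, size=n_sku),               # lead time (weeks)
]).astype(float)

X = StandardScaler().fit_transform(criteria)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Rank clusters by mean annual dollar usage: highest -> class A.
order = np.argsort([-criteria[km.labels_ == k, 0].mean() for k in range(3)])
classes = np.array(["A", "B", "C"])[np.argsort(order)][km.labels_]
for cls in "ABC":
    print(cls, (classes == cls).sum(), "SKUs")
```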
Lavallée, Jean-François. "Moranapho : apprentissage non supervisé de la morphologie d'une langue par généralisation de relations analogiques." Thèse, 2010. http://hdl.handle.net/1866/4524.
Full text
Recently, we have witnessed growing interest in applying the concept of formal analogy to unsupervised morphology acquisition. The attractiveness of this concept lies in its parallels with the mental process involved in creating new words from morphological relations existing in the language. However, the use of formal analogy remains marginal, partly due to its high computational cost. In this document, we present Moranapho, a graph-based system founded on formal analogy. Our participation in the 2009 Morpho Challenge (Kurimo:10) and our subsequent experiments demonstrate that the performance of Moranapho compares favorably to the state of the art. We also studied the influence of some of its components on the quality of the morphological analysis produced. Finally, we discuss our findings in light of well-established linguistic theories, which allows us to make some predictions on the successes and failures of our system when applied to languages other than those tested in our experiments.
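A formal analogy [a : b :: c : d] holds when b relates to a as d relates to c at the string level, e.g. walker : walk :: singer : sing. A crude illustration of the kind of relation such systems exploit follows; this suffix-based test is a deliberate simplification for illustration, not Moranapho's actual definition, which is more general.

```python
# Crude proportional-analogy test via shared suffix rewriting.
# A deliberate simplification of formal analogy, for illustration only.
def affix_rule(a, b):
    """Return (strip, add) such that b is a with suffix `strip`
    replaced by suffix `add`."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[i:], b[i:]

def analogy_holds(a, b, c, d):
    """True if the suffix rewrite turning a into b also turns c into d."""
    strip, add = affix_rule(a, b)
    return c.endswith(strip) and c[:len(c) - len(strip)] + add == d

print(analogy_holds("walker", "walk", "singer", "sing"))   # True
print(analogy_holds("walker", "walk", "singer", "song"))   # False
```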
Alameda-Pineda, Xavier. "Analyse Égocentrique de Scènes Audio-Visuelles. Une approche par Apprentissage Automatique et Traitement du Signal." Phd thesis, 2013. http://tel.archives-ouvertes.fr/tel-00880117.
Full text