Doctoral dissertations on the topic "Intelligence artificielle – Applications financières"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 doctoral dissertations on the topic "Intelligence artificielle – Applications financières".
Bertrand, Astrid. "Misplaced trust in AI : the explanation paradox and the human-centric path. A characterisation of the cognitive challenges to appropriately trust algorithmic decisions and applications in the financial sector". Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT012.
As AI becomes more widespread in our everyday lives, concerns have been raised about how these opaque structures operate. In response, the research field of explainability (XAI) has developed considerably in recent years. However, little work has studied regulators' needs for explainability or considered the effects of explanations on users in light of legal requirements for explanations. This thesis focuses on understanding the role of AI explanations in enabling regulatory compliance of AI-enhanced systems in financial applications. The first part reviews the challenge of taking human cognitive biases into account in explanations of AI systems. The analysis provides several directions to better align explainability solutions with people's cognitive processes, including designing more interactive explanations. It then presents a taxonomy of the different ways of interacting with explainability solutions. The second part focuses on specific financial contexts. One study takes place in the domain of online recommender systems for life insurance contracts. It highlights that feature-based explanations do not significantly improve non-expert users' understanding of the recommendation, nor lead to more appropriate reliance, compared to having no explanation at all. Another study analyzes regulators' needs for explainability in anti-money laundering and countering the financing of terrorism. It finds that supervisors need explanations to establish the reprehensibility of sampled failure cases, or to verify and challenge banks' correct understanding of the AI.
Séguillon, Michel. "Simulation et intelligence artificielle dans le cadre des stratégies financières complexes". Aix-Marseille 3, 1992. http://www.theses.fr/1992AIX32014.
In this thesis, the problem of defining the probability laws of complex financial strategies is studied. First, an overview of the classical elementary financial operations is given. Mathematical theorems are proposed which make it possible to define the resulting probabilities. The numerical aspect of the problem is then considered and an efficient numerical method is proposed; a number of interesting examples are given which cover the main aspects of the problem. Directions for future research are given at the end of the work.
Cathelain, Guillaume. "Ballistocardiographie et applications". Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLP029.
Globally, healthcare systems face increasing costs and a growing number of hospitalizations. Telehealth brings the hospital into the home and provides health structures with new opportunities to improve the patient care pathway. Physiological monitoring is a prerequisite for efficient telehealth systems and is performed by connected medical devices that are not fully automated: patients need to use them actively on a day-to-day basis, which leads either to patient disengagement or to additional caregiver support. Passive contactless vital-sign monitors, such as ballistocardiogram sleep trackers that measure motor, respiratory, and cardiac activities, can address this inefficiency. Moreover, they are more comfortable and safer for patients than traditional monitors, which is crucial for neonatal neurological development or in cases of mental degeneration, though they are currently less accurate. How can physiological monitoring accuracy in ballistocardiography be improved to increase telehealth efficiency? In this thesis, materials are provided by a self-designed accelerometer-based instrumentation, dedicated software, a heartbeat simulator, and measurement campaigns for raw ballistocardiogram databases. Novel analog amplification and digital filtering methods are investigated to improve ballistocardiography accuracy. The ballistocardiographic force, originating in the deformation of the aortic arch during ventricular systole and measured at the bedside, is indeed modulated by respiratory and motor activities and polluted by mechanical artifacts from the environment. Furthermore, ballistocardiography is unstandardized, and ballistocardiograms have high inter- and intra-variability, depending on the bedding, the position in bed, and the morphology and physiology of the patient.
Analog amplification is studied from two perspectives: the mechanical amplification of ballistocardiograms from the patient to the sensor, and the electronic amplification of the analog acceleration signal. First, concerning mechanical amplification, a novel waveguide bedding, a cotton tape encircling the mattress, was invented to concentrate the strain energy of the ballistocardiographic force in one direction, from the thorax straight to the attached sensor. Second, concerning electronic amplification, a mixed-signal front end was conceived to optimize the trade-off between the electronic amplifier gain and the saturation time after a movement. The conditioning circuit measures the unamplified sensor output, passes it through a digital filter with a sharp transition bandwidth and proper initialization, and analogically amplifies the difference between this unwanted synthesized signal and the unamplified sensor output using a low-noise instrumentation amplifier. Digital filtering methods aim at separating signal sources, removing artifacts, and then detecting vital signs. Three original algorithms have been designed to efficiently recognize heartbeats in ballistocardiograms. The first is dynamic time warping template matching, where a heartbeat template is matched against the signal using a warping distance. The second models ballistocardiograms with periodic hidden Markov models. The third, the U-Net neural network, is supervised and segments heartbeats in ballistocardiograms. Finally, ballistocardiograms are mechanically and electronically amplified by 12 dB and 21 dB respectively, without saturation time, and the digital filtering algorithms reach 97% precision and 96% recall for heartbeat detection. Soon, the designed ballistocardiograph will be clinically evaluated in a pediatric intensive care unit and in telemedicine against other ballistocardiographs and gold-standard methods.
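The first of these algorithms, dynamic time warping template matching, can be illustrated with a short sketch. This is a generic DTW matcher, not the authors' implementation; the window step and the distance threshold are placeholder parameters:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def match_heartbeats(signal, template, step, threshold):
    """Slide the heartbeat template over the signal; windows whose DTW
    distance to the template falls below the threshold are flagged."""
    w = len(template)
    hits = []
    for start in range(0, len(signal) - w + 1, step):
        if dtw_distance(signal[start:start + w], template) < threshold:
            hits.append(start)
    return hits
```

In a real ballistocardiogram pipeline the template would itself be learned from annotated beats, and the threshold tuned per patient.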
Do, Si Hoàng. "Informatique répartie et applications à la domotique". Paris 8, 2004. http://www.theses.fr/2004PA083710.
Home networking covers the set of services and techniques aimed at integrating, into individual dwellings or apartment buildings, functions specific to the habitat; in some cases these functions collaborate with one another over communication networks. Application fields include, for example, energy management, lighting management, and security. With the advent of information highways, it also covers entertainment, e-learning, videoconferencing, and telephony over the Internet. The objective of the thesis is to design, implement, test, and validate a home-services system by combining mobile-agent technology with the solutions provided by the standard home-services framework OSGi.
Querrec, Ronan. "Les Systèmes Multi-Agents pour les Environnements Virtuels de Formation : Application à la sécurité civile". Brest, 2003. http://www.theses.fr/2002BRES2037.
This study concerns virtual environments for training in operational conditions. The principal idea developed is that these environments are heterogeneous and open multi-agent systems. The MASCARET model is proposed to organize the interactions between the agents and to give them reactive, cognitive, and social abilities to simulate the physical and social environment. The physical environment represents, in a realistic way, the phenomena that learners and teachers have to take into account. The social environment is simulated by agents executing collaborative and adaptive tasks. They carry out, as a team, procedures that they have to adapt to the environment. Users participate in the training environment through their avatars. To validate the model, the SecuRevi application for firefighter training was developed.
Duprat, Jean. "LAIOS : un réseau multiprocesseur orienté vers des applications d'intelligence artificielle". Phd thesis, Grenoble INPG, 1988. http://tel.archives-ouvertes.fr/tel-00329699.
Meyer, Cédric. "Théorie de vision dynamique et applications à la robotique mobile". Paris 6, 2013. http://www.theses.fr/2013PA066137.
The recognition of objects is a key challenge in increasing the autonomy and performance of robots. Although many object recognition techniques have been developed in the field of frame-based vision, none of them stands comparison with human perception in terms of performance, weight, and power consumption. Neuromorphic engineering models biological components in artificial chips; it recently produced an event-based camera inspired by the biological retina. The sparse, asynchronous, scene-driven visual data generated by these sensors allow the development of computationally efficient bio-inspired artificial vision. This thesis focuses on studying how event-based acquisition and its accurate temporal precision can change object recognition by adding precise timing to the process. It first introduces a frame-based object detection and recognition algorithm used for semantic mapping. It then studies quantitatively, using mutual information, the advantages of event-based acquisition. It then inquires into low-level event-based spatiotemporal features in the context of dynamic scenes, introducing an implementation of real-time multi-kernel feature tracking using Gabor filters or any other kernel. Finally, a fully asynchronous time-oriented object recognition architecture mimicking the V1 visual cortex is presented. It extends the state-of-the-art HMAX model into a purely temporal implementation of object recognition.
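The multi-kernel feature tracker mentioned above relies on Gabor filters. A minimal construction of the kind of 2-D Gabor kernel such a tracker would correlate events against might look as follows (illustrative only, not the thesis code; the parameter values below are arbitrary):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=1.0):
    """Real part of a 2-D Gabor filter: a sinusoidal plane wave windowed
    by a Gaussian. theta is the orientation, wavelength the carrier period,
    sigma the Gaussian width, gamma the spatial aspect ratio."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate image coordinates into the filter's frame
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier
```

A bank of such kernels over several orientations and wavelengths gives the oriented features that event-based trackers typically match over time.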
Liu, Ziming. "Méthodes hybrides d'intelligence artificielle pour les applications de navigation autonome". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4004.
Autonomous driving is a challenging task with a wide range of real-world applications. Autonomous driving systems can be deployed on different platforms, such as cars, drones, and robots; they will reduce human labor and improve the efficiency of the current transportation system. Some autonomous systems are already used in real scenarios, such as delivery robots and service robots. In the real world, autonomous systems need to build environment representations and localize themselves in order to interact with the environment. Different sensors can be used for these objectives; among them, the camera offers the best trade-off between cost and reliability. Currently, visual autonomous driving has achieved significant improvement with deep learning. Deep learning methods have advantages for environment perception, but they are not robust for visual localization, where model-based methods give more reliable results. To exploit the advantages of both data-based and model-based methods, a hybrid visual odometry method is explored in this thesis. Firstly, efficient optimization methods are critical for both model-based and data-based approaches, which share the same optimization theory. Currently, most deep learning networks are still trained with inefficient first-order optimizers. This thesis therefore proposes to extend efficient model-based optimization methods to the training of deep learning networks: Gauss-Newton and efficient second-order methods are applied to deep learning optimization. Secondly, since the model-based visual odometry method relies on prior depth information, robust and accurate depth estimation is critical for the performance of the visual odometry module. Following traditional computer vision theory, stereo vision can compute depth with the correct scale, which is more reliable than monocular solutions.
However, current two-stage 2D-3D stereo networks suffer from the need for depth annotations and from the disparity domain gap. Correspondingly, a pose-supervised stereo network and an adaptive stereo network are investigated. The performance of two-stage networks is nevertheless limited by the quality of the 2D features that build the stereo-matching cost volume; instead, a new one-stage 3D stereo network is proposed to learn features and stereo matching implicitly in a single stage. Thirdly, to remain robust, the stereo network and a dense direct visual odometry module are combined to build a stereo hybrid dense direct visual odometry (HDVO). Dense direct visual odometry is more reliable than feature-based methods because it is optimized with global image information. HDVO is optimized with a photometric minimization loss. However, this loss suffers from noise in occluded areas, homogeneous texture areas, and on dynamic objects. This thesis explores removing noisy loss values with binary masks; moreover, to reduce the effects of dynamic objects, semantic segmentation results are used to improve these masks. Finally, to generalize to new data domains, a test-time training method for visual odometry is explored. The proposed methods have been evaluated on public autonomous driving benchmarks and show state-of-the-art performance.
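The masked photometric loss described above can be sketched in a few lines. This is a simplified stand-in: a real pipeline warps the source image with the estimated depth and pose before comparing, which is omitted here, and the mask would come from occlusion checks and semantic segmentation:

```python
import numpy as np

def masked_photometric_loss(ref, warped, mask):
    """Mean absolute photometric error between a reference image and a
    warped source image, restricted to pixels where mask == 1.
    In an HDVO-style pipeline the mask discards occluded regions,
    textureless areas, and dynamic objects."""
    valid = mask.astype(bool)
    if not valid.any():
        return 0.0  # nothing valid to compare
    return float(np.abs(ref[valid] - warped[valid]).mean())
```

Minimizing this loss over the camera pose, with the mask zeroing out unreliable pixels, is the core of dense direct visual odometry.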
Page, Michel. "Systèmes experts à base de connaissances profondes : application à un poste de travail intelligent pour le comptable". Phd thesis, Grenoble INPG, 1990. http://tel.archives-ouvertes.fr/tel-00338750.
Labbi, Abderrahim. "Sur l'approximation et les systèmes dynamiques dans les réseaux neuronaux : applications en intelligence artificielle". Grenoble INPG, 1993. http://www.theses.fr/1993INPG0199.
Crémilleux, Bruno. "Induction automatique : aspects théoriques, le système ARBRE, applications en médecine". S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00339492.
Abernot, Madeleine. "Digital oscillatory neural network implementation on FPGA for edge artificial intelligence applications and learning". Electronic Thesis or Diss., Université de Montpellier (2022-....), 2023. http://www.theses.fr/2023UMONS074.
In recent decades, the multiplication of edge devices in many industrial domains has drastically increased the amount of data to process and the complexity of the tasks to solve, motivating the emergence of probabilistic machine learning algorithms with artificial intelligence (AI) and artificial neural networks (ANNs). However, classical edge hardware systems based on the von Neumann architecture cannot efficiently handle this large amount of data. Thus, novel neuromorphic computing paradigms with distributed memory are explored, mimicking the structure and data representation of biological neural networks. Lately, most neuromorphic research has focused on spiking neural networks (SNNs), taking inspiration from signal transmission through spikes in biological networks. In SNNs, information is transmitted through spikes using the time domain to provide natural, low-energy, continuous data computation. Recently, oscillatory neural networks (ONNs) appeared as an alternative neuromorphic paradigm for low-power, fast, and efficient time-domain computation. ONNs are networks of coupled oscillators emulating the collective computational properties of brain areas through oscillations. Recent ONN implementations, combined with the emergence of low-power compact devices, have drawn new attention to ONNs for edge computing. The state-of-the-art ONN is configured as an oscillatory Hopfield network (OHN) with fully coupled recurrent connections, performing pattern recognition with limited accuracy. However, the large number of OHN synapses limits the scalability of ONN implementations and the scope of ONN applications. The focus of this thesis is to study whether and how ONNs can solve meaningful edge AI applications, using a proof of concept of the ONN paradigm with a digital implementation on FPGA. First, it explores novel learning algorithms for OHNs, unsupervised and supervised, to improve accuracy and to provide continual on-chip learning.
Then, it studies novel ONN architectures, taking inspiration from state-of-the-art layered ANN models, to create cascaded OHNs and multi-layer ONNs. The novel learning algorithms and architectures are demonstrated with the digital design on edge AI applications, from image processing with pattern recognition, image edge detection, feature extraction, and image classification, to robotics applications with obstacle avoidance.
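An oscillatory Hopfield network computes like a classical Hopfield associative memory, with phase relations between oscillators playing the role of binary states. The retrieval principle can be illustrated with a conventional (non-oscillatory) Hopfield sketch using Hebbian weights. This is an analogy only, not the FPGA design:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian learning: W is the sum of outer products of the stored
    +/-1 patterns, normalized, with zero diagonal (no self-coupling)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates until the state settles; the network
    converges toward the stored pattern closest to the initial probe."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state
```

In the oscillatory version, the same coupling matrix is realized by coupling strengths between oscillators, and convergence is read out from the settled phase pattern.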
Mouton, Rémy. "Outils intelligents pour les musicologues". Le Mans, 1992. http://www.theses.fr/1995LEMA1014.
Jiao, Yang. "Applications of artificial intelligence in e-commerce and finance". Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0002/document.
Artificial intelligence has penetrated every aspect of our lives in this era of Big Data. It has brought revolutionary changes to various sectors, including e-commerce and finance. In this thesis, we present four applications of AI that improve existing goods and services, enable automation, and greatly increase the efficiency of many tasks in both domains. Firstly, we improve the product search service offered by most e-commerce sites by using a novel term weighting scheme to better assess term importance within a search query. Then we build a predictive model of daily sales using a time series forecasting approach and leverage the predicted results to rank product search results in order to maximize a company's revenue. Next, we present the product categorization challenge we held online and analyze the winning solutions, based on state-of-the-art classification algorithms, on our real dataset. Finally, we combine the skills acquired from time-series-based sales prediction and classification to predict one of the most difficult but also most attractive time series: stocks. We perform an extensive study on every single stock of the S&P 500 index using four state-of-the-art classification algorithms and report very promising results.
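As a baseline illustration of query term weighting, here is a plain IDF scheme that favors rare, discriminative terms over generic ones. The thesis proposes its own, more refined scheme; this shows only the underlying idea:

```python
import math

def idf_weights(query_terms, documents):
    """Weight each query term by inverse document frequency: rare terms
    (e.g. a brand name) get more weight than generic ones (e.g. "case").
    documents is a list of sets of terms."""
    n_docs = len(documents)
    weights = {}
    for term in query_terms:
        df = sum(1 for doc in documents if term in doc)  # document frequency
        # smoothed IDF, always positive
        weights[term] = math.log((1 + n_docs) / (1 + df)) + 1.0
    return weights
```

A ranker would then score each product by the weighted overlap between its description and the query, so matches on rare terms dominate.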
Mokhtari, Myriam. "Réseau neuronal aléatoire : applications à l'apprentissage et à la reconnaissance d'images". Paris 5, 1994. http://www.theses.fr/1994PA05S019.
Clere, Pascal. "Etude de l'architecture du processeur d'une machine pour les applications temps réel en intelligence artificielle : maia". Paris 11, 1989. http://www.theses.fr/1989PA112179.
MAIA is a joint project between the Centre National d'Etudes des Télécommunications (CNET) at Lannion and the Compagnie Générale d'Electricité at the Laboratoires de Marcoussis. MAIA is both a workstation for software development and for executing applications that need powerful symbolic computation and real-time support. As with many specialized workstations, MAIA is a language machine, both a Lisp machine and a Prolog machine, with microprogrammed support for list manipulation and memory management, and hardware for dynamic data-type checking, garbage-collector assistance, and Lisp stack heads. The software consists of an integrated environment based on Lisp. It includes a Lisp compiler and interpreter as well as a Prolog compiler and interpreter. The kernel system includes real-time multiprocessing based on SCEPTRE, garbage collection based on Moon's algorithm, and virtual memory management.
De Siqueira, José. "Contrôle de la démonstration automatique de théorèmes, construction de contre-modèles et applications en intelligence artificielle". Paris 11, 1992. http://www.theses.fr/1992PA112425.
Shminke, Boris. "Applications de l'IA à l'étude des structures algébriques finies et à la démonstration automatique de théorèmes". Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4058.
This thesis contributes to finite model search and automated theorem proving, focusing primarily, but not exclusively, on artificial intelligence methods. In the first part, we solve an open research question from abstract algebra using automated massively parallel finite model search, employing the Isabelle proof assistant. Namely, we establish the independence of some abstract distributivity laws in residuated binars in the general case. As a by-product of this finding, we contribute a Python client to the Isabelle server. The client has already found applications in the work of other researchers and in higher education. In the second part, we propose a generative neural network architecture for producing finite models of algebraic structures belonging to a given variety, in a way inspired by image generation models such as generative adversarial networks (GANs) and autoencoders. We also contribute a Python package for generating finite semigroups of small size as a reference implementation of the proposed method. In the third part, we design a general architecture for guiding saturation provers with reinforcement learning algorithms. We contribute an OpenAI Gym-compatible collection of environments for directing Vampire and iProver and demonstrate its viability on selected problems from the Thousands of Problems for Theorem Provers (TPTP) library. We also contribute a containerised version of an existing ast2vec model and show its applicability to embedding logical formulae written in clausal normal form. We argue that the proposed modular approach can significantly speed up experimentation with different logic formula representations and synthetic proof generation schemes in the future, thus addressing the data scarcity problem that notoriously limits progress in applying machine learning techniques to automated theorem proving.
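The core validity test when generating finite semigroups, as a package like the one described must perform, is checking a candidate Cayley table for associativity. A straightforward brute-force sketch (not the package's code):

```python
from itertools import product

def is_semigroup(table):
    """Check that a Cayley table (list of lists over elements 0..n-1)
    defines a semigroup, i.e. the operation a*b = table[a][b] is
    associative: (a*b)*c == a*(b*c) for all triples."""
    n = len(table)
    return all(
        table[table[a][b]][c] == table[a][table[b][c]]
        for a, b, c in product(range(n), repeat=3)
    )
```

Generative models for algebraic structures emit candidate tables; a check like this one filters the outputs down to genuine semigroups.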
Ye, Xiaoyan. "Applications of Artificial Intelligence to Control and Analyze the Performance of Fiber-Optic Transmission Systems". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT048.
The surging demand for internet traffic has necessitated continuous expansion of the capacity of optical fiber communication systems, the cornerstone of global communication networks. This thesis delves into innovative solutions addressing the challenges posed by ultra-wideband (UWB) amplification and precise noise estimation in optical transmission systems. Optical fiber communication systems have undergone significant evolution to meet escalating capacity requirements, progressing from optical amplifiers and coherent detection to advanced modulation formats and digital signal processing (DSP) algorithms. To meet higher traffic demands in optical networks, integrating UWB schemes and implementing low-margin network designs have become primordial. This work explores fundamental aspects of UWB amplification. Accurate prediction of Raman gain profiles and the design of optimal pump configurations are paramount, yet conventional methods prove computationally intensive. Here, machine learning (ML) emerges as a powerful tool, reducing complexity and enhancing accuracy in these scenarios. Additionally, the thesis addresses the challenge of designing low-margin systems by developing a reliable Quality of Transmission (QoT) tool. Optical fiber transmission systems contend with diverse impairments such as fiber attenuation, ASE noise, laser phase noise, and nonlinear interference (NLI). While linear impairments can be effectively mitigated and characterized, traditional methods may falter in estimating some major nonlinear impairments, posing challenges in accuracy and complexity. Consequently, this work delves into data-driven approaches, including ML frameworks, to provide effective estimation of Kerr nonlinear impairments and electronically enhanced phase noise (EEPN). In summary, this thesis leverages ML and data-driven methods to enhance the performance of optical transmission systems. These advancements are poised to shape the future of optical communication systems, facilitating higher capacities and more reliable transmission in our rapidly evolving digital environment.
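The data-driven prediction of gain profiles from pump configurations can be caricatured with a toy regression: a synthetic linear pump-to-gain map fitted by closed-form ridge regression. All data, dimensions, and the linearity assumption are invented for illustration; the actual models and measurements in such work are far more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: assume (for illustration only) that the gain at each
# of 8 wavelength channels is a linear function of 4 pump powers plus noise.
n_samples, n_pumps, n_channels = 200, 4, 8
true_map = rng.normal(size=(n_pumps, n_channels))
pumps = rng.uniform(0.1, 0.5, size=(n_samples, n_pumps))   # pump powers
gain = pumps @ true_map + 0.01 * rng.normal(size=(n_samples, n_channels))

# Ridge regression: closed-form fit of the pump -> gain-profile map.
lam = 1e-3
W = np.linalg.solve(pumps.T @ pumps + lam * np.eye(n_pumps), pumps.T @ gain)

pred = pumps @ W
rmse = float(np.sqrt(((pred - gain) ** 2).mean()))
```

Once trained, such a surrogate replaces the expensive physical solver inside the pump-design optimization loop, which is precisely where ML cuts the computational cost.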
Orchampt, Patrick. "Apports des techniques d'intelligence artificielle en conception assistée par ordinateur : applications à la schématique". Lyon 1, 1987. http://www.theses.fr/1987LYO19017.
Yim, Pascal. "Résolution dans les systèmes formels abstraits : applications à la programmation en logique, aux systèmes de réécriture et aux grammaires formelles". Lyon, INSA, 1989. http://www.theses.fr/1989ISAL0041.
Jacquelinet, Christian. "Modélisation du langage naturel et représentation des connaissances par graphes conceptuels : applications au domaine de l'information médicale". Rennes 1, 2002. http://www.theses.fr/2002REN1B060.
Seinturier, Julien. "Fusion de connaissances : applications aux relevés photogrammétriques de fouilles archéologiques sous-marines". Phd thesis, Toulon, 2007. https://theses.hal.science/tel-00838699.
This work proposes a study of knowledge fusion and its application to the survey of underwater archaeological excavations. Our framework is knowledge-based measurement, which can be represented as a synthesis between theoretical models designed by domain experts and sets of observations performed on the surveyed objects. During the study of an archaeological site, surveys can be made by different operators and at different times. This multiplication of observations can lead to inconsistencies when the partial results are aggregated. Building a final result requires a fusion process piloted by the leader of the study. Such a process must be automated, while leaving to the operator the choice of the methods used to ensure the final consistency. This work is divided into three parts: a theoretical study of known methods for knowledge fusion, the implementation of fusion methods as part of knowledge-based measurement, and the experimentation of the proposed solutions during full-scale surveys. In the first part, we present a theoretical study of known belief fusion methods and propose a new framework that expresses these methods at the semantic and syntactic levels while adding reversibility. This framework is based on polynomial weights that represent priorities between beliefs and the history of changes to these priorities. In the second part, we present knowledge-based measurement through a representation formalism based on the notion of entity. We then propose fusion techniques adapted to this representation, based on first-order logic. Fusion algorithms are proposed and studied. In the last part, we present experiments with the proposed fusion techniques. We describe the tools developed as part of the European VENUS project (http://www.venus-project.eu) as well as their extensions to building archaeology and underwater biology.
Durand, Audrey. "Déclinaisons de bandits et leurs applications". Doctoral thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/28250.
This thesis deals with various variants of the bandit problem, which corresponds to a simplified instance of an RL problem with emphasis on the exploration-exploitation trade-off. More specifically, the focus is on three variants: contextual, structured, and multi-objective bandits. In the first, an agent searches for the optimal action depending on a given context. In the second, an agent searches for the optimal action in a potentially large space characterized by a similarity metric. In the third, an agent searches for the optimal trade-off on a Pareto front according to a non-observable preference function. The thesis introduces algorithms adapted to each of these variants, whose performance is supported by theoretical guarantees and/or empirical experiments. These bandit variants provide a framework for two real-world applications with high potential impact: 1) adaptive treatment allocation for the discovery of personalized cancer treatment strategies; and 2) online optimization of microscopy imaging parameters for the efficient acquisition of useful images. The thesis therefore offers algorithmic, theoretical, and applicative contributions. An adaptation of the BESA algorithm, GP BESA, is proposed for the contextual bandit problem. Its potential is highlighted by simulation experiments, which motivated the deployment of the strategy in a wet-lab experiment on real animals. Promising results show that GP BESA is able to extend the longevity of mice with cancer and thus significantly increase the amount of data collected on subjects. An adaptation of the TS algorithm, Kernel TS, is proposed for the problem of structured bandits in RKHS. A theoretical analysis yields convergence guarantees on the cumulative pseudo-regret.
Concentration results for regression with variable regularization, as well as a procedure for adaptive tuning of the regularization based on empirical estimation of the noise variance, are also introduced. These contributions make it possible to lift the typical assumption of a priori knowledge of the noise variance in streaming kernel regression. Numerical results illustrate the potential of these tools. Empirical experiments also illustrate the performance of Kernel TS and raise interesting questions about the optimality of theoretical intuitions. A new variant of multi-objective bandits, generalizing the literature, is also proposed. More specifically, the new framework considers that the preference articulation between objectives comes from a non-observable function, typically a user (expert), and suggests integrating this expert into the learning loop. The concept of preference radius is then introduced to evaluate the robustness of the expert's preference function to errors in the estimation of the environment. A variant of the TS algorithm, TS-MVN, is introduced and analyzed. Empirical experiments support the theoretical results and provide a preliminary investigation of questions about the presence of an expert in the learning loop. Put together, structured and multi-objective bandit approaches are then used to tackle the problem of online STED imaging parameter optimization. Experimental results on a real microscopy setup and with real neural samples show that the proposed technique makes it possible to significantly accelerate the parameter characterization process and facilitate the acquisition of images relevant to experts in neuroscience.
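The Thompson sampling principle underlying variants such as Kernel TS and TS-MVN is easiest to see in the classical Bernoulli setting with Beta posteriors. A textbook sketch, not the thesis's kernelized or multivariate variants:

```python
import random

def thompson_sampling(arm_probs, horizon, seed=0):
    """Bernoulli Thompson sampling with Beta(1, 1) priors: at each round,
    sample a mean from each arm's posterior and pull the arm with the
    largest sample. Returns the pull counts per arm."""
    rng = random.Random(seed)
    k = len(arm_probs)
    alpha = [1] * k  # posterior successes + 1
    beta = [1] * k   # posterior failures + 1
    pulls = [0] * k
    for _ in range(horizon):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < arm_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls
```

Posterior sampling makes exploration self-tuning: as an arm's posterior concentrates, it is pulled only in proportion to its remaining chance of being optimal, which is the mechanism the thesis's regret analyses quantify.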
Jiao, Yang. "Applications of artificial intelligence in e-commerce and finance". Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0002.
Pełny tekst źródłaArtificial Intelligence has penetrated into every aspect of our lives in this era of Big Data. It has brought revolutionary changes upon various sectors including e-commerce and finance. In this thesis, we present four applications of AI which improve existing goods and services, enable automation, and greatly increase the efficiency of many tasks in both domains. Firstly, we improve the product search service offered by most e-commerce sites by using a novel term weighting scheme to better assess term importance within a search query. Then we build a predictive model of daily sales using a time series forecasting approach and leverage the predicted results to rank product search results in order to maximize the revenue of a company. Next, we present the product categorization challenge we held online and analyze the winning solutions, consisting of state-of-the-art classification algorithms, on our real dataset. Finally, we combine the skills acquired previously from time-series-based sales prediction and classification to predict one of the most difficult but also most attractive time series: stocks. We perform an extensive study on every stock of the S&P 500 index using four state-of-the-art classification algorithms and report very promising results.
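The classical TF-IDF weighting that a novel term-weighting scheme would improve upon can be sketched as follows. This is a generic baseline, not the scheme proposed in the thesis; the toy corpus is an assumption for illustration:

```python
import math
from collections import Counter

def tfidf_weights(query, corpus):
    """Weight each query term: frequency in the query times inverse document frequency."""
    n_docs = len(corpus)
    df = Counter()                 # in how many documents does each term occur?
    for doc in corpus:
        df.update(set(doc))
    return {
        t: query.count(t) * math.log((1 + n_docs) / (1 + df[t]))
        for t in set(query)
    }

corpus = [["red", "shoe"], ["red", "hat"], ["blue", "shoe"]]
weights = tfidf_weights(["red", "blue"], corpus)
```

Here "blue", rare in the corpus, receives a higher weight than the common "red", which is the kind of importance signal a product-search ranker relies on.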
La, Barbera Giammarco. "Learning anatomical digital twins in pediatric 3D imaging for renal cancer surgery". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT040.
Pełny tekst źródłaPediatric renal cancers account for 9% of pediatric cancers, with a 9/10 survival rate at the expense of the loss of a kidney. Nephron-sparing surgery (NSS, partial removal of the kidney) is possible if the cancer meets specific criteria (regarding volume, location and extent of the lesion). Indication for NSS relies on preoperative imaging, in particular X-ray Computerized Tomography (CT). While assessing all criteria in 2D images is not always easy, or even feasible, 3D patient-specific models offer a promising solution. Building 3D models of the renal tumor anatomy based on segmentation is widely developed in adults but not in children. There is a need for dedicated image processing methods for pediatric patients due to the specificities of the images with respect to adults and to heterogeneity in the pose and size of the structures (subjects ranging from a few days to 16 years of age). Moreover, in CT images, injection of contrast agent (contrast-enhanced CT, ceCT) is often used to facilitate the identification of the interface between different tissues and structures, but this might lead to heterogeneity in the contrast and brightness of some anatomical structures, even among patients of the same medical database (i.e., same acquisition procedure). This can complicate subsequent analyses, such as segmentation. The first objective of this thesis is to perform organ/tumor segmentation from abdominal-visceral ceCT images. An individual 3D patient model is then derived. Transfer learning approaches (from adult data to children's images) are proposed to improve on state-of-the-art performance. The first question we want to answer is whether such methods are feasible, despite the obvious structural difference between the datasets, thanks to geometric domain adaptation.
A second question is whether the standard techniques of data augmentation can be replaced by data homogenization techniques using Spatial Transformer Networks (STN), improving training time, memory requirements and performance. In order to deal with variability in contrast medium diffusion, a second objective is to perform a cross-domain CT image translation from ceCT to contrast-free CT (CT) and vice versa, using a Cycle Generative Adversarial Network (CycleGAN). In fact, the combined use of ceCT and CT images can improve the segmentation performance on certain anatomical structures in ceCT, but at the cost of a double radiation exposure. To limit the radiation dose, generative models could be used to synthesize one modality, instead of acquiring it. We present an extension of CycleGAN to generate such images, from unpaired databases. Anatomical constraints are introduced by automatically selecting the region of interest and by using the score of a Self-Supervised Body Regressor, improving the selection of anatomically-paired images between the two domains (CT and ceCT) and enforcing anatomical consistency. A third objective of this work is to complete the 3D model of patients affected by renal tumors, including also arteries, veins and the collecting system (i.e., ureters). An extensive study and benchmarking of the literature on anatomic tubular structure segmentation is presented. Modifications to state-of-the-art methods for our specific application are also proposed. Moreover, we present for the first time the use of the so-called vesselness function as a loss function for training a segmentation network. We demonstrate that combining eigenvalue information with the structural and voxel-wise information of other loss functions results in an improvement in performance. Finally, a tool developed for using the proposed methods in a real clinical setting is shown, as well as a clinical study to further evaluate the benefits of using 3D models in pre-operative planning.
The intent of this research is to demonstrate, through a retrospective evaluation by experts, that criteria for NSS are more likely to be found in 3D than in 2D images. This study is still ongoing.
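The cycle-consistency term at the heart of CycleGAN, used above for ceCT-to-CT translation, can be illustrated with toy scalar "generators"; the adversarial and anatomical-constraint terms of the actual model are omitted, and the linear functions below are purely illustrative:

```python
def cycle_consistency_loss(G, F, xs, ys):
    """L1 cycle loss: translating to the other domain and back should be the identity."""
    forward = sum(abs(F(G(x)) - x) for x in xs)    # x -> G(x) -> F(G(x)) ~ x
    backward = sum(abs(G(F(y)) - y) for y in ys)   # y -> F(y) -> G(F(y)) ~ y
    return (forward + backward) / (len(xs) + len(ys))

# Toy generators that are exact inverses: the cycle loss vanishes.
G = lambda x: 2.0 * x      # stand-in for "ceCT -> CT"
F = lambda y: y / 2.0      # stand-in for "CT -> ceCT"
loss = cycle_consistency_loss(G, F, [1.0, 2.0, 3.0], [2.0, 4.0])
```

When the two generators fail to invert each other, the loss becomes strictly positive, which is the signal that pushes unpaired translation toward consistent mappings.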
Ali, Sadaqat. "Energy management of multi-source DC microgrid systems for residential applications". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0159.
Pełny tekst źródłaCompared to the alternating current (AC) electrical grid, the direct current (DC) electrical grid has demonstrated numerous advantages, such as its natural interface with renewable energy sources (RES), energy storage systems, and DC loads. It offers superior efficiency with fewer conversion steps and simpler control, with no skin effect or reactive power considerations. DC microgrids remain a relatively new technology, and their network architectures, control strategies, and stabilization techniques require significant research efforts. In this context, this thesis focuses on energy management issues in a multi-source DC electrical grid dedicated to residential applications. The DC electrical grid consists of distributed generators (solar panels), a hybrid energy storage system (HESS) with batteries and a supercapacitor (SC), and DC loads interconnected via DC/DC power converters. The primary objective of this research is to develop an advanced energy management strategy (EMS) to enhance the operational efficiency of the system while improving its reliability and sustainability. A hierarchical simulation platform of the DC electrical grid has been developed using MATLAB/Simulink. It comprises two layers with different time scales: a local control layer (time scale of a few seconds to minutes due to converter switching behavior) for controlling local components, and a system-level control layer (time scale of a few days to months with accelerated testing) for long-term validation and performance evaluation of the EMS. In the local control layer, the solar panels, batteries, and supercapacitor have been modeled and controlled separately. Various control modes, such as current control, voltage control, and maximum power point tracking (MPPT), have been implemented. A low-pass filter (LPF) has been applied to divide the total HESS power into low and high frequencies for the batteries and supercapacitor.
Different LPF cutoff frequencies for power sharing have also been studied. A combined hybrid bi-level EMS and automatic sizing have been proposed and validated. It mainly covers five operational scenarios, including solar panel production reduction, load reduction, and three scenarios combining HESS control with supercapacitor state-of-charge (SOC) retention control. An objective function that considers both capital expenditure (CAPEX) and operating costs (OPEX) has been designed for EMS performance evaluation. The interaction between the HESS and the EMS has been jointly studied based on an open dataset of residential electrical consumption profiles covering both summer and winter seasons. Finally, an experimental platform of a multi-source DC electrical grid has been developed to validate the EMS in real time. It comprises four lithium-ion batteries, a supercapacitor, a programmable DC power supply, a programmable DC load, corresponding DC/DC converters, and a real-time controller (dSPACE MicroLabBox). Accelerated tests have been conducted to verify the proposed EMS in different operational scenarios by integrating real solar panel and load consumption profiles. The hierarchical simulation and experimental DC electrical grid platforms can be used generally to verify and evaluate various EMSs.
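The low-pass-filter power split between batteries and supercapacitor described above can be sketched as a first-order discrete filter. The time constant, time step, and power profile below are illustrative assumptions, mirroring the thesis's study of different cutoff frequencies:

```python
def split_power(p_total, dt, tau):
    """Split a power demand profile: slow component to batteries, fast residual to SC."""
    alpha = dt / (tau + dt)          # first-order low-pass filter coefficient
    low = p_total[0]                 # filter state, initialized at the first sample
    p_batt, p_sc = [], []
    for p in p_total:
        low += alpha * (p - low)     # exponential smoothing = discrete LPF
        p_batt.append(low)           # low-frequency share -> batteries
        p_sc.append(p - low)         # high-frequency residual -> supercapacitor
    return p_batt, p_sc

profile = [100.0, 100.0, 300.0, 100.0, 100.0]   # a short power spike (W)
batt, sc = split_power(profile, dt=1.0, tau=10.0)
```

By construction the two shares sum to the demand at every step, while the spike is absorbed mostly by the supercapacitor, sparing the batteries from fast cycling.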
Fenet, Serge. "Vers un paradigme de programmation pour les applications distribuées basé sur le comportement des insectes sociaux : application à la sécurité des réseaux". Lyon 1, 2001. http://www.theses.fr/2001LYO10261.
Pełny tekst źródłaLechevallier, Antoine. "Physics Informed Deep Learning : Applications to well opening and closing events". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS062.
Pełny tekst źródłaThe reduction of CO2 emission into the atmosphere is mandatory to achieve the ecological transition. CO2 geological storage is an essential instrument for efficient Carbon Capture and Storage (CCS) policies. Numerical simulations provide the solution to the multi-phase flow equations that model the behavior of the CO2 injection site. They are an important tool to decide whether or not to exploit a potential carbon storage site and to monitor the operations (potential gas leakage, optimal positioning of CO2 injection wells, etc.). However, numerical simulations of fluid flow in porous media are computationally demanding: it can take up to several hours on an HPC cluster to simulate one injection scenario for a large CO2 reservoir if we want to accurately model the complex physical processes involved. More specifically, well events (opening and closure) cause important numerical difficulties due to their instant impact on the system. This often forces a drastic reduction of the time step size to be able to solve the non-linear system of equations resulting from the discretization of the continuous mathematical model. However, these specific well events in a simulation are relatively similar across space and time: the degree of similarity between two well events depends on a few parameters such as the injection condition, the state of the reservoir at the time of the event, the boundary conditions, or the porous media parameters (permeability and porosity) around each well. Recent interest in machine learning applied to the prediction of physical processes has fueled the development of "Physics Informed Deep Learning", where machine learning models either replace or complement traditional numerical algorithms while preserving the inherent constraints from the physical model.
Therefore, the objective of this thesis is to adapt recent advances in physics informed deep learning in order to alleviate the impact of well events in the numerical simulation of multiphase flow in porous media. Our main contributions are divided into three parts. In the first part, we replace the traditional numerical solver with a machine-learning model. We demonstrate the feasibility of learning parameter-to-solution operators for partial differential equation problems. However, when utilizing the machine-learning model for time iteration, we observe that the predicted solution diverges from the true solution. Consequently, in the second part, we use a hybrid approach that complements the traditional non-linear solver with a machine-learning model while preserving numerical guarantees. In practice, we utilize and tailor to our purpose the hybrid Newton methodology, which involves predicting a global initialization for Newton's method closer to the solution than the standard one. We use the state-of-the-art Fourier Neural Operator machine-learning model as the predictive model. Our methodology is applied to two test cases and exhibits promising results, reducing the number of Newton iterations by up to 54% compared to a reference method. In the last part, we apply the hybrid Newton methodology to predict an initialization in the near-well region, where the main variations of CO2 saturation occur. We investigate the impact of the local domain size and then demonstrate, for a 1D case, that it is possible to learn a local initialization for any well location. Then, we apply this local approach to a 2D case and compare the performance of the hybrid Newton strategy against a Domain Decomposition-inspired strategy. We speed up the handling of well events by around 45% in terms of Newton iterations.
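The principle of the hybrid Newton methodology, a learned initialization placed closer to the solution, can be illustrated on a scalar equation. The "predicted" initial guess below stands in for the Fourier Neural Operator output, which is not reproduced here; the residual function is a toy assumption:

```python
def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Plain Newton iteration, returning the root and the iteration count."""
    x, n = x0, 0
    while abs(f(x)) > tol and n < max_iter:
        x -= f(x) / df(x)
        n += 1
    return x, n

f = lambda x: x * x - 2.0        # toy "residual" whose root is sqrt(2)
df = lambda x: 2.0 * x
x_std, n_std = newton(f, df, x0=10.0)   # standard, far-away initialization
x_ml, n_ml = newton(f, df, x0=1.42)     # "learned" initialization near the root
```

Both runs converge to the same root, but the better initialization does so in fewer iterations, which is exactly the saving the thesis reports at well events.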
Denize, Julien. "Self-supervised representation learning and applications to image and video analysis". Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMIR37.
Pełny tekst źródłaIn this thesis, we develop approaches to perform self-supervised learning for image and video analysis. Self-supervised representation learning makes it possible to pretrain neural networks to learn general concepts without labels before specializing in downstream tasks faster and with fewer annotations. We present three contributions to self-supervised image and video representation learning. First, we introduce the theoretical paradigm of soft contrastive learning and its practical implementation called Similarity Contrastive Estimation (SCE), connecting contrastive and relational learning for image representation. Second, SCE is extended to global temporal video representation learning. Lastly, we propose COMEDIAN, a pipeline for local-temporal video representation learning for transformers. These contributions achieved state-of-the-art results on multiple benchmarks and led to several published academic and technical contributions.
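The standard contrastive (InfoNCE) objective that soft contrastive learning relaxes can be sketched for a single anchor; SCE's relational soft targets are not reproduced, and the similarity values and temperature below are illustrative:

```python
import math

def info_nce(pos_sim, neg_sims, temperature=0.1):
    """Cross-entropy of picking the positive among one positive and the negatives."""
    logits = [pos_sim / temperature] + [s / temperature for s in neg_sims]
    m = max(logits)                                   # stabilize the log-sum-exp
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[0]                      # = -log softmax(positive)

sharp = info_nce(0.9, [0.1, 0.2])    # positive clearly more similar than negatives
blurry = info_nce(0.3, [0.1, 0.2])   # positive barely more similar
```

The loss is always positive and shrinks as the positive pair becomes more similar relative to the negatives, which is the pressure that shapes the learned representation.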
Talbot, Jean-Marc. "Contraintes ensemblistes définies et co-définies : extensions et applications". Lille 1, 1998. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/1998/50376-1998-367.pdf.
Pełny tekst źródłaRose, Cédric. "Modélisation stochastique pour le raisonnement médical et ses applications à la télémédecine". Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00598564.
Pełny tekst źródłaAversano, Gianmarco. "Development of physics-based reduced-order models for reacting flow applications". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC095/document.
Pełny tekst źródłaWith the final objective being to develop reduced-order models for combustion applications, unsupervised and supervised machine learning techniques were tested and combined in the work of the present Thesis for feature extraction and the construction of reduced-order models. Thus, the application of data-driven techniques for the detection of features from turbulent combustion data sets (direct numerical simulation) was investigated on two H2/CO flames: a spatially-evolving jet (DNS1) and a temporally-evolving jet (DNS2). Methods such as Principal Component Analysis (PCA), Local Principal Component Analysis (LPCA), Non-negative Matrix Factorization (NMF) and Autoencoders were explored for this purpose. It was shown that various factors could affect the performance of these methods, such as the criteria employed for the centering and the scaling of the original data or the choice of the number of dimensions in the low-rank approximations. A set of guidelines was presented that can aid the process of identifying meaningful physical features from turbulent reactive flows data. Data compression methods such as Principal Component Analysis (PCA) and variations were combined with interpolation methods such as Kriging, for the construction of computationally affordable reduced-order models for the prediction of the state of a combustion system for unseen operating conditions or combinations of model input parameter values. The methodology was first tested for the prediction of 1D flames with an increasing number of input parameters (equivalence ratio, fuel composition and inlet temperature), with variations of the classic PCA approach, namely constrained PCA and local PCA, being applied to combustion cases for the first time in combination with an interpolation technique. The positive outcome of the study led to the application of the proposed methodology to 2D flames with two input parameters, namely fuel composition and inlet velocity, which produced satisfactory results.
Alternatives to the chosen unsupervised and supervised methods were also tested on the same 2D data. The use of non-negative matrix factorization (NMF) for low-rank approximation was investigated because of the ability of the method to represent positive-valued data, which helps the non-violation of important physical laws such as positivity of chemical species mass fractions, and compared to PCA. As alternative supervised methods, the combination of polynomial chaos expansion (PCE) and Kriging and the use of artificial neural networks (ANNs) were tested. Results from the mentioned work paved the way for the development of a digital twin of a combustion furnace from a set of 3D simulations. The combination of PCA and Kriging was also employed in the context of uncertainty quantification (UQ), specifically in the bound-to-bound data collaboration framework (B2B-DC), which led to the introduction of the reduced-order B2B-DC procedure, as for the first time the B2B-DC was developed in terms of latent variables and not in terms of the original physical variables.
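The PCA-plus-interpolation construction described above can be sketched with a generic SVD-based implementation. Linear interpolation over a single parameter stands in for Kriging, and the toy snapshot data are an assumption; none of the thesis's combustion specifics are reproduced:

```python
import numpy as np

def build_rom(snapshots, params, n_modes=1):
    """Compress snapshots with PCA, then interpolate latent scores over the parameter."""
    mean = snapshots.mean(axis=0)
    centered = snapshots - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                  # leading principal directions
    scores = centered @ modes.T           # latent coordinates of each snapshot

    def predict(p):
        # Interpolate each latent coordinate at the unseen parameter value...
        z = np.array([np.interp(p, params, scores[:, j]) for j in range(n_modes)])
        return mean + z @ modes           # ...and decode back to physical space
    return predict

# Snapshots of a toy "state" that varies linearly with one input parameter.
snapshots = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
rom = build_rom(snapshots, params=[0.0, 1.0, 2.0])
state = rom(1.5)   # prediction at an unseen parameter value
```

Because the toy data vary linearly with the parameter, a single mode reconstructs the unseen state exactly; real combustion states need more modes and a richer interpolant such as Kriging.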
Trabelsi, Ahmed. "Modulation des niveaux de résistance dans une mémoire PCM pour des applications neuromorphiques". Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALT027.
Pełny tekst źródłaThe exponential growth of data in recent years has led to a significant increase in energy consumption, creating a pressing need for innovative memory technologies to overcome the limitations of conventional solutions. This data deluge has resulted in a forecasted consumption surge in data centers, with an expected fourfold increase in data by 2025 compared to the present volume. To address this challenge, emerging memory technologies such as RRAM (Resistive RAM), PCM (Phase-Change Memory), and MRAM (Magnetoresistive RAM) are being developed to offer high density, fast access times, and non-volatility, thereby revolutionizing storage and memory solutions (Molas & Nowak, 2021). One promising technique to address the need for innovative memory technologies is the use of frequency modulation to modulate resistance in PCM, which is a crucial aspect of its use in neuromorphic computing. PCM is a non-volatile memory technology based on the reversible phase transition between amorphous and crystalline phases of certain materials. The ability to alter conductance levels makes PCM well-suited for synaptic realizations in neuromorphic computing. The progressive crystallization of the phase-change material and the subsequent increase in device conductance enable PCM to be used in neuromorphic applications. Additionally, PCM-based memristor neural networks have been developed, and the resistance drift effect in PCM has been quantified, opening up new paths for the development of PCM-based memristor neuromorphic accelerators. Furthermore, frequency modulation has been identified as a promising technique to modulate resistance in PCM. This approach can be applied to PCM as well as RRAM, and it is expected to yield improved learning effects in more complex networks using multi-level cells (Wang et al., 2011).
The primary aim of this thesis is to explore innovative methods for controlling resistance levels in PCM devices with a focus on their application in neuromorphic systems. The research involves a comprehensive understanding of the mechanisms underlying PCM devices and an identification of parameters that may influence the reliability of these devices. Additionally, the thesis aims to propose a novel approach to effectively modulate resistance levels in PCM devices, contributing to advancements in this field
Dimitracopoulou, Angélique. "Le tutorat dans les systèmes informatisés d'apprentissage : étude de la conception et réalisation d'un tutoriel d'aide à la représentation physique des situations étudiées par la mécanique". Paris 7, 1995. http://www.theses.fr/1995PA070089.
Pełny tekst źródłaThe modelling of the tutoring process, a crucial question in the field of computational didactics, is the main question of our research into the design and realization of the prototype ARPIA: an interactive tutorial, adapted to the learner, that helps with the physical representation of situations in mechanics. Our target is to help students (between 17 and 19 years old) who face difficulties in developing a physical representation of situations (diagrams of forces and movements). The theoretical framework of the design of this tutorial is based on a precise didactic analysis: epistemological and cognitive analysis of content, analysis of students' difficulties, and clarification of learning hypotheses. The communication interface has been designed to allow the student to construct his representations directly (for instance by drawing the vectors). The cognitive diagnosis process identifies and interprets the elements of the student's representations. The ARPIA prototype currently has eight types of action: for instance, advice on the use of a technique, a procedure for correcting errors, and an explanation process.
Sadeghsa, Shohre. "Prédiction, réseau de neurones et optimisation : applications aux domaines des agro-matériaux et de la télécommunication". Electronic Thesis or Diss., Amiens, 2021. http://www.theses.fr/2021AMIE0091.
Pełny tekst źródłaIn the context of global changes in the world, the use of vegetal resources in composite materials is an alternative to the exploitation of fossil resources. However, the development of the vegetal composite requires taking into account the temporal and spatial availability of bio-sourced raw materials, their interchangeability, and their consequences on the resulting functional characteristics. Optimization of the vegetal composite faces a variety of available data, the complexity of establishing cause-and-effect relationships, and the mandatory handling of unexpected events (such as disasters, crises, breakdowns, etc.). Thus, a reliable and sustainable system is required in order to produce the vegetal composite with constant efficiency. Artificial intelligence methods should make it possible to improve the understanding and control of the production of the concerned materials. Considering the uncertainty and changeability of the data related to the vegetal materials, we applied machine learning and artificial intelligence methods to predict the parameters of the experiments dynamically. A model can adapt itself based on the training data. Predictions are dynamic, and the results are data-oriented. Herein, the vegetal composite is studied from three aspects: predicting the compressive strength of the composite, predicting the flexural strength of the composite, and a simulation model to predict the parameters of the composite compressive strength test. To overcome the mentioned problems, artificial intelligence and machine learning methods are suggested as a solution that learns from past data in order to converge towards better local optima. The development of the vegetal composite requires taking into account the specific parameters of the bio-sourced raw materials, such as temporal availability, interchangeability, and the consequences on the resulting functional properties.
Optimization of the vegetal composites can be viewed as a complex problem related to different domains such as biology, physico-chemistry, and process engineering. The sustainable optimization and production of the vegetal materials also require the localization, centralization, and consolidation of the supply chain sites. The supply chain problem consists of localizing the production sites; routing, scheduling, and storage of raw materials; and final distribution and marketing. Regrouping, consolidating, or clustering refer to the act of merging two or more sites, that is, reducing the number of existing centers. In order to maintain continuous efficiency in the supply and production chains, each site has to provide all the services previously served by the replaced sites. The regrouping problem can arise in any part of this chain. The k-clustering problem can be defined between any two parts of the supply chain that are in direct relation. This thesis aims to deal with the complexity encountered in optimizing the vegetal composite and ensuring the sustainability of the supply, production, and marketing sites. The former is achieved, in the first part, by optimizing the characteristics of the composite material using artificial intelligence methods. The latter is presented in the second part of this study. The proposed method merges the supply chain site(s) using the k-clustering problem. Different optimization solution methods are proposed. The applied transversal approach allowing the coupling of skills is presented within the research unit EPROAD. The proposed methods are from the fields of artificial intelligence, combinatorial optimization, discrete modeling resulting from applied mathematics, sensitivity analysis, and process engineering for the development of intelligent cooperative methods.
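A minimal Lloyd-style k-means, with a deterministic initialization chosen for reproducibility, sketches the clustering-based consolidation of supply-chain sites. The site coordinates and the value of k below are illustrative assumptions, not data from the thesis:

```python
import math

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: merge many sites into k consolidated centers."""
    centers = list(points[:k])                     # deterministic init: first k sites
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                           # assign each site to nearest center
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        centers = [                                # move each center to its cluster mean
            tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

# Six hypothetical site locations forming two regional groups.
sites = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = kmeans(sites, k=2)
```

Each returned center is a candidate consolidated site placed at the barycenter of the group it replaces.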
Sow, Cheickh Tidiane. "Utilisation du calcul formel dans la mécanique de Hamilton : modélisation et applications". Paris 9, 1987. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1987PA090011.
Pełny tekst źródłaGaguet, Laurent. "Attitudes mentales et planification en intelligence artificielle : modélisation d'un agent rationnel dans un environnement multi-agents". Clermont-Ferrand 2, 2000. http://www.theses.fr/2000CLF20023.
Pełny tekst źródłaVinurel, Jean-Jacques. "Une application de l'intelligence artificielle à l'enseignement assisté par ordinateur". Paris 6, 1986. http://www.theses.fr/1986PA066379.
Pełny tekst źródłaAdjali, Omar. "Dynamic architecture for multimodal applications to reinforce robot-environment interaction". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV100.
Pełny tekst źródłaKnowledge Representation and Reasoning is at the heart of the great challenge of Artificial Intelligence. More specifically, in the context of robotic applications, knowledge representation and reasoning approaches are necessary to solve decision problems that autonomous robots face when evolving in uncertain, dynamic and complex environments, or to ensure natural interaction in human environments. In a robotic interaction system, information has to be represented and processed at various levels of abstraction, from sensor data up to actions and plans. Thus, knowledge representation provides the means to describe the environment at different abstraction levels, which allows appropriate decisions to be made. In this thesis we propose a methodology to solve the problem of multimodal interaction by describing a semantic interaction architecture based on a framework that demonstrates an approach for representing and reasoning with an environment knowledge representation language (EKRL), to enhance interaction between robots and their environment. This framework is used to manage the interaction process by representing the knowledge involved in the interaction with EKRL and reasoning on it to make inferences. The interaction process includes fusion of values from different sensors to interpret and understand what is happening in the environment, and fission, which proposes a detailed set of actions for implementation. Before being implemented by actuators, these actions are first evaluated in a virtual environment which mimics the real-world environment, to assess the feasibility of the action implementation in the real world. During these processes, reasoning abilities are necessary to guarantee the global execution of a given interaction scenario.
Thus, we equipped the EKRL framework with reasoning techniques: deterministic inference based on unification algorithms, and probabilistic inference to manage uncertain knowledge by combining statistical relational models, through the Markov Logic Networks (MLN) framework, with EKRL. The proposed work is validated through scenarios that demonstrate the usability and the performance of our framework in real-world applications.
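The deterministic side, unification of structured terms, can be sketched in a generic form: variables are "?"-prefixed strings and compound terms are tuples. This is a textbook unifier without an occurs-check, not the actual EKRL syntax or engine:

```python
def walk(term, subst):
    """Follow variable bindings until reaching a non-variable or an unbound variable."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None if they do not unify."""
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Two hypothetical action descriptions unify by binding their variables.
binding = unify(("grasp", "?obj", "table"), ("grasp", "cup", "?loc"))
```

Matching a query pattern against stored knowledge in this way is what lets a rule fire with concrete values filled in.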
Hadjeres, Gaëtan. "Modèles génératifs profonds pour la génération interactive de musique symbolique". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS027.
Pełny tekst źródłaThis thesis discusses the use of deep generative models for symbolic music generation. We focus on devising interactive generative models which are able to create new creative processes through a fruitful dialogue between a human composer and a computer. Recent advances in artificial intelligence led to the development of powerful generative models able to generate musical content without the need for human intervention. I believe that this practice cannot thrive in the future, since human experience and appreciation are at the crux of artistic production. However, the need for flexible and expressive tools that enhance content creators' creativity is patent; the development and the potential of such novel A.I.-augmented computer music tools are promising. In this manuscript, I propose novel architectures that put artists back in the loop. The proposed models share the common characteristic that they are devised so that a user can control the generated musical content in a creative way. In order to create a user-friendly interaction with these interactive deep generative models, user interfaces were developed. I believe that new compositional paradigms will emerge from the possibilities offered by these enhanced controls. This thesis ends with the presentation of genuine musical projects, such as concerts featuring these new creative tools.
Fabbri, André. "Dynamique d'apprentissage pour Monte Carlo Tree Search : applications aux jeux de Go et du Clobber solitaire impartial". Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10183/document.
Pełny tekst źródłaMonte Carlo Tree Search (MCTS) was initially introduced for the game of Go but has since been applied successfully to other games and opens the way to a range of new methods such as Multiple-MCTS or Nested Monte Carlo. MCTS evaluates game states through thousands of random simulations. As the simulations are carried out, the program guides the search towards the most promising moves. MCTS achieves impressive results through this dynamic, without an extensive need for prior knowledge. In this thesis, we choose to tackle MCTS as a full learning system. As a consequence, each random simulation turns into a simulated experience and its outcome corresponds to the resulting reinforcement observed. Following this perspective, the learning of the system results from the complex interaction of two processes: the incremental acquisition of new representations and their exploitation in the subsequent simulations. From this point of view, we propose two different approaches to enhance both processes. The first approach gathers complementary representations in order to enhance the relevance of the simulations. The second approach focuses the search on local sub-goals in order to improve the quality of the representations acquired. The methods presented in this work have been applied to the games of Go and Impartial Solitaire Clobber. The results obtained in our experiments highlight the significance of these processes in the learning dynamic and open new perspectives for enhancing learning systems such as MCTS.
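The selection rule at the core of MCTS, UCB1 applied at the root (the UCT idea), can be sketched as follows. Rewards here come from a toy playout model with assumed win probabilities, rather than actual Go simulations, and the tree is collapsed to a single level:

```python
import math
import random

def ucb1(wins, visits, parent_visits, c=1.4):
    """Optimistic move value: exploitation term plus exploration bonus."""
    if visits == 0:
        return float("inf")                 # try every move at least once
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def mcts_root(win_probs, n_sims, seed=0):
    """Run n_sims playouts from the root, guided by UCB1 move selection."""
    rng = random.Random(seed)
    k = len(win_probs)
    wins, visits = [0.0] * k, [0] * k
    for t in range(1, n_sims + 1):
        a = max(range(k), key=lambda i: ucb1(wins[i], visits[i], t))
        reward = 1.0 if rng.random() < win_probs[a] else 0.0   # random playout outcome
        wins[a] += reward
        visits[a] += 1
    return visits

visits = mcts_root([0.2, 0.7, 0.4], n_sims=3000)
```

The visit counts concentrate on the strongest move, which is why MCTS ultimately recommends the most-visited child rather than the one with the best raw win rate.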
Crémilleux, Bruno. "Induction automatique : aspects théoriques, le système ARBRE, applications en médecine". Phd thesis, Grenoble 1, 1991. http://tel.archives-ouvertes.fr/tel-00339492.
Darwesh, Aso. "Diagnostic cognitif en EIAH : le système PépiMep". Paris 6, 2010. http://www.theses.fr/2010PA066397.
Berkouk, Nicolas. "Persistence and Sheaves : from Theory to Applications". Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX032.
Topological data analysis is a recent field of research aiming at using techniques coming from algebraic topology to define descriptors of datasets. To be useful in practice, these descriptors must be computable and must come with a notion of metric, in order to express their stability with respect to the noise that always comes with real-world data. Persistence theory was elaborated in the early 2000s as a first theoretical setting to define such descriptors: the now famous barcodes. However well suited it is to computer implementation, persistence theory has certain limitations. In this manuscript, we establish explicit links between the theory of derived sheaves equipped with the convolution distance (after Kashiwara-Schapira) and persistence theory. We start by proving a derived isometry theorem for constructible sheaves over R: we express the convolution distance between two sheaves as a matching distance between their graded barcodes. This enables us to conclude that the convolution distance is closed, and that the collection of constructible sheaves over R equipped with the convolution distance is locally path-connected. Then, we observe that the collection of zig-zag/level-set persistence modules associated to a real-valued function carries extra structure, which we call Mayer-Vietoris systems. We classify all Mayer-Vietoris systems under finiteness assumptions. This allows us to establish a functorial isometric correspondence between the derived category of constructible sheaves over R equipped with the convolution distance and the category of strongly pfd Mayer-Vietoris systems endowed with the interleaving distance. From this result we deduce a way to compute barcodes of sheaves using already existing software. Finally, we give a purely sheaf-theoretic definition of the notion of ephemeral persistence module. We prove that the observable category of persistence modules (the quotient category of persistence modules by the subcategory of ephemeral ones) is equivalent to the well-known category of γ-sheaves
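For reference, the interleaving distance mentioned in the abstract admits a standard definition on persistence modules (this is the textbook formulation, not material specific to this thesis); writing $M(\varepsilon)$ for the module shifted by $\varepsilon$:

```latex
\[
  d_I(M, N) \;=\; \inf \Bigl\{ \varepsilon \ge 0 \;:\;
  \exists\, \varphi : M \to N(\varepsilon),\;
  \psi : N \to M(\varepsilon) \text{ with }
  \psi(\varepsilon) \circ \varphi = \eta_M^{2\varepsilon},\;
  \varphi(\varepsilon) \circ \psi = \eta_N^{2\varepsilon} \Bigr\},
\]
where $\eta_M^{2\varepsilon} : M \to M(2\varepsilon)$ is the canonical shift morphism.
```

An ephemeral module is precisely one at interleaving distance zero from the zero module, which is what makes the quotient by ephemeral modules natural for distance comparisons.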
Sorici, Alexandru. "Un Intergiciel de Gestion du Contexte basé Multi-Agent pour les Applications d'Intelligence Ambiante". Thesis, Saint-Etienne, EMSE, 2015. http://www.theses.fr/2015EMSE0790/document.
The complexity and magnitude of Ambient Intelligence scenarios imply that attributes such as modeling expressiveness, flexibility of representation and deployment, and ease of configuration and development become central features of context management systems. However, existing works in the literature explore these development-oriented attributes only to a limited degree. Our goal is to create a flexible and easily configurable context management middleware able to respond to different scenarios. To this end, our solution is built on principles and techniques from the Semantic Web and Multi-Agent Systems. We use the Semantic Web to provide a new context meta-model, allowing an expressive and extensible modeling of content, meta-properties (e.g. temporal validity, quality parameters) and dependencies (e.g. integrity constraints). In addition, we develop a middleware architecture that relies on Multi-Agent Systems and a service-component-based design. Each agent of the system encapsulates a functional aspect of the context provisioning process (acquisition, coordination, distribution, use). We introduce a new way to structure the deployment of agents depending on the multi-dimensional aspects of the application's context model. Furthermore, we develop declarative policies governing the adaptation behavior of the agents managing the provisioning of context information. Simulations of an intelligent-university scenario show that appropriate tooling built around our middleware can provide significant advantages in the engineering of context-aware applications
Putina, Andrian. "Unsupervised anomaly detection : methods and applications". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT012.
An anomaly (also known as an outlier) is an instance that deviates significantly from the rest of the input data, defined by Hawkins as 'an observation, which deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism'. Anomaly detection (also known as outlier or novelty detection) is thus the machine learning and data mining field whose purpose is to identify those instances whose features appear inconsistent with the remainder of the dataset. In many applications, correctly distinguishing the set of anomalous data points (outliers) from the set of normal ones (inliers) proves very important. A first application is data cleaning, i.e., identifying noisy or fallacious measurements in a dataset before applying learning algorithms. However, with the explosive growth of data volumes collectable from various sources, e.g., card transactions, internet connections, temperature measurements, anomaly detection has become a crucial stand-alone task for the continuous monitoring of systems. In this context, anomaly detection can be used to detect ongoing intrusion attacks, faulty sensor networks or cancerous masses. The thesis first proposes a batch tree-based approach for unsupervised anomaly detection, called 'Random Histogram Forest (RHF)'. The algorithm addresses the curse of dimensionality by using the fourth central moment (a.k.a. kurtosis) in the model construction, while boasting linear running time. A stream-based anomaly detection engine called 'ODS', which leverages DenStream, an unsupervised clustering technique, is presented subsequently. Finally, an Automated Anomaly Detection engine, which alleviates the human effort required when dealing with several algorithms and hyper-parameters, is presented as the last contribution
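The core idea the abstract attributes to RHF — weighting the choice of split attribute by the feature's kurtosis, then scoring points by how rare their leaf is — can be sketched as follows. This is a toy simplification under assumed details (the `log1p` weighting, depth limit and leaf-size score are illustrative choices, not the thesis's exact RHF algorithm):

```python
import numpy as np

def kurtosis(x):
    # fourth standardized central moment; large values flag heavy-tailed features,
    # which are the ones most likely to isolate outliers
    mu, sigma = x.mean(), x.std()
    if sigma == 0:
        return 0.0
    return float(np.mean(((x - mu) / sigma) ** 4))

def build_tree(X, depth, max_depth, rng):
    n = len(X)
    if depth >= max_depth or n <= 1:
        return ('leaf', n)
    # kurtosis-weighted choice of the split attribute
    ks = np.array([np.log1p(kurtosis(X[:, j])) for j in range(X.shape[1])])
    if ks.sum() == 0:
        return ('leaf', n)
    j = int(rng.choice(X.shape[1], p=ks / ks.sum()))
    lo, hi = X[:, j].min(), X[:, j].max()
    if lo == hi:
        return ('leaf', n)
    s = rng.uniform(lo, hi)  # random split value within the feature's range
    left, right = X[X[:, j] < s], X[X[:, j] >= s]
    return ('split', j, s,
            build_tree(left, depth + 1, max_depth, rng),
            build_tree(right, depth + 1, max_depth, rng))

def leaf_size(tree, x):
    while tree[0] == 'split':
        _, j, s, left, right = tree
        tree = left if x[j] < s else right
    return tree[1]

def score(forest, x, n_total):
    # anomaly score as average information content -log P(leaf):
    # points falling in rare (small) leaves get higher scores
    return float(np.mean([-np.log((leaf_size(t, x) + 1e-12) / n_total)
                          for t in forest]))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
X = np.vstack([X, [[8.0, 8.0]]])  # one obvious outlier
forest = [build_tree(X, 0, 5, np.random.default_rng(seed)) for seed in range(20)]
s_out = score(forest, np.array([8.0, 8.0]), len(X))
s_in = score(forest, np.array([0.0, 0.0]), len(X))
```

Because the split attribute is sampled in proportion to kurtosis rather than chosen by an O(n log n) criterion, each tree is built in a single linear pass over the node's data, which is the source of the linear running time mentioned above.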
Mainsant, Marion. "Apprentissage continu sous divers scénarios d'arrivée de données : vers des applications robustes et éthiques de l'apprentissage profond". Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALS045.
The human brain continuously receives information from external stimuli. It then has the ability to adapt to new knowledge while retaining past events. Nowadays, more and more artificial intelligence algorithms aim to learn knowledge in the same way as a human being. They therefore have to be able to adapt to a large variety of data arriving sequentially and available over a limited period of time. However, when a deep learning algorithm learns new data, the knowledge contained in the neural network overwrites the old, and the majority of past information is lost, a phenomenon referred to in the literature as catastrophic forgetting. Numerous methods have been proposed to overcome this issue, but as they focused on delivering the best performance, studies have moved away from real-life applications, where algorithms must adapt to changing environments and perform regardless of the type of data arrival. In addition, most of the best state-of-the-art methods are replay methods, which retain a small memory of the past and consequently do not preserve data privacy. In this thesis, we propose to explore the data arrival scenarios existing in the literature, with the aim of applying them to facial emotion recognition, which is essential for human-robot interaction. To this end, we present Dream Net - Data-Free, a privacy-preserving algorithm able to adapt to a large number of data arrival scenarios without storing any past samples. After demonstrating the robustness of this algorithm compared with existing state-of-the-art methods on standard computer vision databases (MNIST, CIFAR-10, CIFAR-100 and ImageNet-100), we show that it can also adapt to more complex facial emotion recognition databases. We then embed the algorithm on an Nvidia Jetson Nano board, creating a demonstrator able to learn and predict emotions in real time. Finally, we discuss the relevance of our approach for bias mitigation in artificial intelligence, opening up perspectives towards a more ethical AI