Dissertations on the topic "Intelligence artificielle (IA) embarquée"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 18 dissertations for research on the topic "Intelligence artificielle (IA) embarquée".
Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic entry for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.
Browse dissertations from a wide range of disciplines and put together your bibliography correctly.
Mainsant, Marion. „Apprentissage continu sous divers scénarios d'arrivée de données : vers des applications robustes et éthiques de l'apprentissage profond“. Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALS045.
The human brain continuously receives information from external stimuli. It is then able to adapt to new knowledge while retaining past events. Nowadays, more and more artificial intelligence algorithms aim to learn knowledge in the same way as a human being. They therefore have to be able to adapt to a large variety of data arriving sequentially and available over a limited period of time. However, when a deep learning algorithm learns new data, the knowledge contained in the neural network overwrites the old, and the majority of past information is lost, a phenomenon referred to in the literature as catastrophic forgetting. Numerous methods have been proposed to overcome this issue, but because they focused on achieving the best performance, studies have moved away from real-life applications, where algorithms need to adapt to changing environments and perform regardless of the type of data arrival. In addition, most of the best state-of-the-art methods are replay methods, which retain a small memory of the past and consequently do not preserve data privacy. In this thesis, we propose to explore the data arrival scenarios existing in the literature, with the aim of applying them to facial emotion recognition, which is essential for human-robot interaction. To this end, we present Dream Net - Data-Free, a privacy-preserving algorithm able to adapt to a large number of data arrival scenarios without storing any past samples. After demonstrating the robustness of this algorithm compared to existing state-of-the-art methods on standard computer vision databases (MNIST, CIFAR-10, CIFAR-100 and ImageNet-100), we show that it can also adapt to more complex facial emotion recognition databases. We then embed the algorithm on an Nvidia Jetson Nano board, creating a demonstrator able to learn and predict emotions in real time.
Finally, we discuss the relevance of our approach to bias mitigation in artificial intelligence, opening up perspectives towards a more ethical AI.
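The catastrophic forgetting this abstract centres on is easy to reproduce in a few lines: a classifier trained sequentially on two disjoint groups of classes loses most of its accuracy on the first group once it has been trained on the second. A minimal sketch with scikit-learn (illustrative only; the model and dataset choices are assumptions, and this is not the Dream Net - Data-Free method):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier

# Two sequential "tasks": digits 0-4, then digits 5-9.
X, y = load_digits(return_X_y=True)
task_a = y < 5
task_b = ~task_a

clf = SGDClassifier(random_state=0)
classes = np.unique(y)  # partial_fit needs the full label set up front

# Task A: learn the first five digits over a few passes.
for _ in range(5):
    clf.partial_fit(X[task_a], y[task_a], classes=classes)
acc_a_before = clf.score(X[task_a], y[task_a])

# Task B: continue training on the last five digits only.
for _ in range(5):
    clf.partial_fit(X[task_b], y[task_b])
acc_a_after = clf.score(X[task_a], y[task_a])

# Accuracy on task A collapses after training on task B alone.
print(f"task A accuracy before: {acc_a_before:.2f}, after task B: {acc_a_after:.2f}")
```

Replay methods mitigate this drop by mixing stored past samples into the task B updates, which is exactly the memory the thesis's data-free approach avoids keeping.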
Blachon, David. „Reconnaissance de scènes multimodale embarquée“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM001/document.
Context: This PhD takes place in the context of Ambient Intelligence and (Mobile) Context/Scene Awareness. Historically, the project comes from the company ST-Ericsson. It was framed as the need to develop and embed a "context server" on the smartphone that would obtain context information and provide it to applications requiring it. One use case was given for illustration: when someone involved in a meeting receives a call, then, thanks to its understanding of the current scene (meeting at work), the smartphone can automatically switch to vibrate mode so as not to disturb the meeting. The main problems consist of i) proposing a definition of what a scene is and which examples of scenes would suit the use case, ii) acquiring a corpus of data to be exploited with machine-learning-based approaches, and iii) proposing algorithmic solutions to the problem of scene recognition. Data collection: After a review of existing databases, it appeared that none met the criteria I had set (long continuous recordings; multi-source synchronized recordings necessarily including audio; relevant labels). Hence, I developed an Android application for collecting data. The application is called RecordMe and has been successfully tested on 10+ devices running Android OS versions 2.3 and 4.0. It has been used for 3 different campaigns, including the one for scenes. This resulted in 500+ hours of recordings involving 25+ volunteers, mostly in the Grenoble area but also abroad (Dublin, Singapore, Budapest).
The application and the collection protocol both include features for protecting the volunteers' privacy: for instance, raw audio is not saved; instead, MFCCs are saved, and sensitive strings (GPS coordinates, device ids) are hashed on the phone. Scene definition: The study of existing work related to the task of scene recognition, along with the analysis of the annotations provided by the volunteers during data collection, allowed me to propose a definition of a scene. It is defined as a generalisation of a situation, composed of a place and an action performed by one person (the smartphone owner). Examples of scenes include taking transport, being involved in a work meeting, and walking in the street. This composition makes it possible to provide different kinds of information about the current scene. However, the definition is still too generic, and I think it could be completed with additional information, integrated as new elements of the composition. Algorithmics: I have performed experiments involving both supervised and unsupervised machine learning techniques. The supervised part concerns classification. The method is quite standard: find relevant descriptors of the data through an attribute selection method, then train and test several classifiers (in my case, J48 and Random Forest trees; GMMs; HMMs; and DNNs). I have also tried a 2-stage system composed of a first layer of classifiers trained to identify intermediate concepts, whose predictions are merged in order to estimate the most likely scene. The unsupervised part of the work aimed at extracting information from the data in an unsupervised way. For this purpose, I applied bottom-up hierarchical clustering based on the EM algorithm to acceleration and audio data, taken separately and together. One of the results is the separation of acceleration data into groups based on the amount of agitation.
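The on-phone anonymisation of sensitive strings that the abstract mentions can be sketched in a few lines. The hash function and the salting are assumptions here (the abstract does not specify RecordMe's actual scheme); the point is only that digests are stable and one-way:

```python
import hashlib

def anonymise(value: str, salt: str = "per-device-salt") -> str:
    """One-way hash of a sensitive string (GPS fix, device id).

    Salting is an assumption: without it, identical inputs across
    devices would hash to identical, linkable digests.
    """
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

record = {"device_id": "imei-356938035643809", "gps": "45.1885,5.7245"}
anonymised = {k: anonymise(v) for k, v in record.items()}

# Stable (same input gives the same digest) but not reversible.
assert anonymised["gps"] == anonymise("45.1885,5.7245")
assert anonymised["gps"] != record["gps"]
print(anonymised["device_id"][:16], "...")
```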
Chamberland, Simon. „Deux investigations en IA : contrôler les déplacements d'un robot mobile et coordonner les décisions d'une IA pour les jeux“. Mémoire, Université de Sherbrooke, 2013. http://savoirs.usherbrooke.ca/handle/11143/45.
Nabholtz, Franz-Olivier. „Problématisation prospective des stratégies de la singularité“. Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCB019.
From the world of the past to globalization, from modernity to postmodernity, from the human to the transhuman: the digital and technological revolution raises issues that permeate our daily lives beyond even what common sense can imagine. The massification of data, analyzed as the result of hyper-connectivity and of a convergence between big data and artificial intelligence, raises the question of its fair use and distribution between highly proactive private actors (GAFA) and public institutions that are, to say the least, outpaced with respect to the principles of rational efficiency that characterize data. This predictive character corresponds to a vital need of states. A human society with specific knowledge of its situation could make rational choices based on predictive scenarios; it would no longer behave, or normalize itself, in the same way. While we reject transhumanism in its ideological dimension, we take for granted the conceptual dimensions of the theory of the singularity, which we problematize in this work through an analysis of information specific to an economic-intelligence approach, beyond the common thinking and consensus inherited from a deductive school of thought that has been affirmed by demonstration and imposed by a form of ideology present everywhere, if not in the social sciences. Inductive thinking, whose primary characteristic is predictive correlation, would see the development of probabilistic, multidisciplinary, bold and distinctive political-science scenarios, whose main idea would be to detect and anticipate, as predictive medicine does (this is what the singularity tells us), major societal and political trends. The nature of this work, however, requires that it remain fully independent.
The process of exploiting big data by means of algorithms, outside the traditional processes of scientific validation, will be based on a new model in which the proof of cause will, in the near future, undoubtedly take on a quantum or synaptic dimension, analyzed here as singular.
Ayats, H. Ambre. „Construction de graphes de connaissances à partir de textes avec une intelligence artificielle explicable et centrée-utilisateur·ice“. Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. http://www.theses.fr/2023URENS095.
With recent advances in artificial intelligence, the question of human control has become central. Today, this involves both research into explainability and designs centered on interaction with the user. What is more, with the expansion of the semantic web and of automatic natural language processing methods, the task of constructing knowledge graphs from texts has become an important issue. This thesis presents a user-centered system for the construction of knowledge graphs from texts, with several contributions. First, we introduce a user-centered workflow for the aforementioned task, which has the property of progressively automating the user's actions while leaving them fine-grained control over the outcome. Next, we present our contributions in the field of formal concept analysis, used to design an explainable instance-based learning module for relation classification. Finally, we present our contributions in the field of relation extraction and how these fit into the presented workflow.
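Formal concept analysis, on which the explainable classification module above builds, rests on two derivation operators: the extent of a set of attributes (all objects that have them all) and the intent of a set of objects (all attributes they share); a pair closed under both is a formal concept. A small sketch on an invented toy context (the objects and attributes are placeholders, not the thesis's actual data):

```python
# Toy formal context: objects (sentences) x attributes (extracted features).
context = {
    "s1": {"person", "born_in", "city"},
    "s2": {"person", "born_in", "country"},
    "s3": {"company", "located_in", "city"},
}

def extent(attrs: set) -> set:
    """Objects possessing every attribute in `attrs`."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs: set) -> set:
    """Attributes shared by every object in `objs`."""
    if not objs:
        return set().union(*context.values())
    return set.intersection(*(context[o] for o in objs))

# Closing {"born_in"} under both operators yields a formal concept.
e = extent({"born_in"})
concept = (e, intent(e))
print(concept)
```

Applying extent to the intent again returns the same object set, which is the closure property that makes (extent, intent) pairs the nodes of a concept lattice.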
Robert, Gabriel. „MHiCS, une architecture de sélection de l'action motivationnelle et hiérarchique à systèmes de classeurs pour personnages non joueurs adaptatifs“. Paris 6, 2005. http://www.theses.fr/2005PA066165.
Afchar, Darius. „Interpretable Music Recommender Systems“. Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS608.
‘‘Why do they keep recommending me this music track?’’ ‘‘Why did our system recommend these tracks to users?’’ Nowadays, streaming platforms are the most common way to listen to recorded music. Still, music recommendations — at the heart of these platforms — are not an easy feat. Sometimes, both users and engineers may be equally puzzled about the behaviour of a music recommendation system (MRS). MRSs have been successfully employed to help explore catalogues that may be as large as tens of millions of music tracks. Built and optimised for accuracy, real-world MRSs often end up being quite complex. They may further rely on a range of interconnected modules that, for instance, analyse audio signals, retrieve metadata about albums and artists, collect and aggregate user feedback on the music service, and compute item similarities with collaborative filtering. All this complexity hinders the ability to explain recommendations and, more broadly, to explain the system. Yet explanations are essential for users, to foster long-term engagement with a system that they can understand (and forgive), and for system owners, to rationalise failures and improve said system. Interpretability may also be needed to check the fairness of a decision, or can be framed as a means to better control the recommendations. Moreover, we can also ask recursively: Why does an explanation method explain in a certain way? Is this explanation relevant? What could be a better explanation? All these questions relate to the interpretability of MRSs. In the first half of this thesis, we explore the many flavours that interpretability can take in various recommendation tasks.
Indeed, since there is not just one recommendation task but many (e.g., sequential recommendation, playlist continuation, artist similarity), as well as many angles through which music may be represented and processed (e.g., metadata, audio signals, embeddings computed from listening patterns), there are as many settings that require specific adjustments to make explanations relevant. A topic like this one can never be exhaustively addressed. This study was guided along some of the mentioned modalities of musical objects: interpreting implicit user logs, item features, audio signals and similarity embeddings. Our contribution includes several novel methods for eXplainable Artificial Intelligence (XAI) and several theoretical results, shedding new light on our understanding of past methods. Nevertheless, similar to how recommendations may not be interpretable, explanations about them may themselves lack interpretability and justifications. Therefore, in the second half of this thesis, we found it essential to take a step back from the rationale of ML and try to address a (perhaps surprisingly) understudied question in XAI: ‘‘What is interpretability?’’ Introducing concepts from philosophy and social sciences, we stress that there is a misalignment in the way explanations from XAI are generated and unfold versus how humans actually explain. We highlight that current research tends to rely too much on intuitions or hasty reduction of complex realities into convenient mathematical terms, which leads to the canonisation of assumptions into questionable standards (e.g., sparsity entails interpretability). We have treated this part as a comprehensive tutorial addressed to ML researchers to better ground their knowledge of explanations with a precise vocabulary and a broader perspective. We provide practical advice and highlight less popular branches of XAI better aligned with human cognition. 
Of course, we also reflect back on and recontextualise the methods proposed in the previous part. Overall, this enables us to formulate some perspectives for our field of XAI as a whole, including its more critical and promising next steps as well as the shortcomings to overcome.
Dubus, Georges. „Transformation de programmes logiques : application à la personnalisation et à la personnification d’agents“. Thesis, Supélec, 2014. http://www.theses.fr/2014SUPL0017/document.
This thesis deals with the personalization and personification of rational agents within the framework of web applications. Personalization and personification techniques are increasingly used to answer the needs of users. Most of these techniques are based on reasoning tools that come from the artificial intelligence field. However, they are usually applied in an ad-hoc way for each application. The approach of this thesis is to consider personalization and personification as two instances of behaviour alteration, and to study the alteration of the behaviour of rational agents. The main contributions are WAIG, a formalism for the expression of web applications based on the agent programming language Golog, and PAGE, a formal framework for the manipulation and alteration of Golog agent programs, which allows an agent to be transformed automatically according to a given criterion. These contributions are illustrated by concrete scenarios from the fields of personalization and personification.
Wang, Olivier. „Adaptive Rules Model : Statistical Learning for Rule-Based Systems“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX037/document.
Business Rules (BRs) are a commonly used tool in industry for the automation of repetitive decisions. The emerging problem of adapting existing sets of BRs to an ever-changing environment is the motivation for this thesis. Existing supervised machine learning techniques can be used when the adaptation is done knowing in detail the correct decision for each circumstance. However, there is currently no algorithm, theoretical or practical, which can solve this problem when the known information is statistical in nature, as is the case for a bank wishing to control the proportion of loan requests its automated decision service forwards to human experts. We study the specific learning problem where the aim is to adjust the BRs so that the decisions are close to a given average value. To do so, we consider sets of Business Rules as programs. After formalizing some definitions and notations in Chapter 2, the BR programming language defined this way is studied in Chapter 3, which proves that there exists no algorithm to learn Business Rules with a statistical goal in the general case. We then restrict the scope to two common cases where BRs are limited in some way: the Iteration Bounded case, in which, no matter the input, the number of rules executed when taking the decision is less than a given bound; and the Linear Iteration Bounded case, in which all rules are additionally written in linear form. For those two cases, we then produce a learning algorithm based on Mathematical Programming which can solve the problem. We briefly extend this theory and algorithm to other statistical-goal learning problems in Chapter 5, before presenting the experimental results of this thesis in Chapter 6. The latter includes a proof of concept automating the part of the learning algorithm that does not consist in solving a Mathematical Programming problem, as well as some experimental evidence of the computational complexity of the algorithm.
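The bank example — tuning rules so that a target proportion of loan requests is forwarded to human experts — can be illustrated with a toy rule whose single numeric threshold is adjusted against the statistical goal. This is a deliberately simplified stand-in for the thesis's Mathematical Programming formulation (the rule shape, the bisection approach and all names are illustrative):

```python
import random

random.seed(0)
# Synthetic loan requests: each carries a risk score in [0, 1].
requests = [random.random() for _ in range(10_000)]

def forward_rate(threshold: float) -> float:
    """Rule: forward a request to a human expert if its risk exceeds the threshold."""
    return sum(r > threshold for r in requests) / len(requests)

def tune_threshold(target: float, lo: float = 0.0, hi: float = 1.0) -> float:
    """Bisect the threshold so the forwarding rate approaches the target average.

    The rate is monotone decreasing in the threshold, which makes bisection valid.
    """
    for _ in range(40):
        mid = (lo + hi) / 2
        if forward_rate(mid) > target:
            lo = mid  # forwarding too many requests: raise the threshold
        else:
            hi = mid
    return (lo + hi) / 2

t = tune_threshold(target=0.15)
print(f"threshold={t:.3f}, rate={forward_rate(t):.3f}")
```

With a single threshold the statistical goal reduces to a quantile search; the thesis's setting is harder because whole rule sets with bounded iteration must be adjusted jointly.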
Marroquín, Cortez Roberto Enrique. „Context-aware intelligent video analysis for the management of smart buildings“. Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCK040/document.
To date, computer vision systems are limited to extracting digital data from what the cameras "see". However, the meaning of what they observe could be greatly enhanced by knowledge of the environment and of human skills. In this work, we propose a new approach to cross-fertilize computer vision with contextual information, based on semantic modelling defined by an expert. This approach extracts knowledge from images and uses it to perform real-time reasoning according to the contextual information, events of interest and logic rules. Reasoning with image knowledge makes it possible to overcome some problems of computer vision, such as occlusions and missed detections, and to offer services such as people guidance and people counting. The proposed approach is the first step towards developing an "all-seeing" smart building that can automatically react according to its evolving information, i.e., a context-aware smart building. The proposed framework, named WiseNET, is an artificial intelligence (AI) in charge of taking decisions in a smart building (which can be extended to a group of buildings or even a smart city). This AI enables communication between the building itself and its users through a language understandable by humans.
Chuquimia, Orlando. „Smart Vision Chip pour l’exploration du côlon“. Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS192.pdf.
Colorectal cancer (CRC) is the second highest cause of death by cancer worldwide, with 880,792 deaths in 2018 and a mortality rate of 47.6%. 95% of CRC cases begin with the presence of a growth on the inner lining of the colon or the rectum, called a polyp. The endoscopic capsule was invented by Paul Swain in 1990. It is a pill incorporating a camera and a radio communication system; the patient swallows it, and it transmits images from the gastrointestinal tract through the body to a workstation. Once all images are transmitted, a gastroenterologist downloads them to perform a visual analysis and detect abnormalities and tumors. Using this device, doctors can detect polyps of at least 5 mm with a sensitivity of 68.8% and a specificity of 81.3%. This endoscopic capsule presents some limitations and weaknesses related to the spatial and temporal resolution of the images, its energy autonomy, and the number of transmitted images the gastroenterologist must analyze. We studied the design of an embedded system containing a processing chain capable of detecting polyps, to be integrated into an endoscopic capsule, creating a new medical device: an intelligent endoscopic capsule. To realize this device, we took into account all the non-functional constraints related to integration into an endoscopic capsule. This device is intended as a new tool for the early detection of precancerous colorectal lesions: polyps.
Heurteau, Alexandre. „Etude bioinformatique intégrative : déterminants et dynamique des interactions chromosomiques à longue distance“. Electronic Thesis or Diss., Toulouse 3, 2019. http://www.theses.fr/2019TOU30343.
Insulator Binding Proteins (IBPs) could be involved in the three-dimensional folding of genomes into topological domains (or "TADs"). In particular, TADs are thought to help separate the inactive/heterochromatin and active/euchromatin compartments. IBPs are also able to block specific contacts between the activator or enhancer elements of one TAD and target gene promoters present in another TAD. Thus, insulators may influence gene expression according to several regulatory modes that have yet to be characterized at the genome level. The results obtained in the first part of my thesis show how IBPs influence gene expression according to a new regulatory mechanism, demonstrated at the scale of the Drosophila genome. Our bioinformatics analyses show that IBPs regulate the spread of repressive heterochromatin (H3K27me3) both in cis and in trans. Trans regulation involves chromatin loops between insulators positioned at the heterochromatin boundary and distant insulators positioned at the edges of euchromatic genes. Trans spreading leads to the formation of "micro-domains" of heterochromatin, thereby repressing distant genes. In particular, an insulator mutant that prevents loop formation significantly reduces the establishment of micro-domains. In addition, these micro-domains appear to be formed during development, suggesting a new insulator-dependent mechanism for gene regulation. Furthermore, we were able to uncover a novel function of cohesin, a key regulator of 3D loops in humans, in regulating non-coding RNAs (ncRNAs), including "PROMoter uPstream Transcripts" (PROMPTs) and enhancer RNAs (eRNAs). The MTR4 helicase is essential to the control of coding and non-coding RNA stability by the human nuclear-exosome targeting (NEXT) complex and the pA-tail exosome targeting (PAXT) complex. Remarkably, ncRNAs could be detected upon depletion of the MTR4 helicase of the human NEXT complex.
Moreover, depletion of the additional subunits ZFC3H1 and ZCCHC8 (Z1 and Z8) also uncovered ncRNAs, often produced from the same loci as upon MTR4 depletion. Curiously, however, mapping of MTR4 binding sites highlighted that MTR4 binds to sites that are distant from PROMPTs. Rather than acting in cis, our data suggest that the regulation of PROMPTs could involve specific long-distance contacts between these distant MTR4 binding sites and promoters bound by Z1/Z8. As such, the integration of Hi-C data together with the detection of PROMPTs upon MTR4, Z1 or Z8 depletion highlights a possible role of long-range interactions in regulating PROMPTs from distant MTR4-bound sites. This work may establish a new relationship between the 3D structure of genomes and the regulation of ncRNAs.
Condevaux, Charles. „Méthodes d'apprentissage automatique pour l'analyse de corpus jurisprudentiels“. Thesis, Nîmes, 2021. http://www.theses.fr/2021NIME0008.
Judicial decisions contain deterministic information (whose content recurs from one decision to another) and random (probabilistic) information. Both types of information come into play in a judge's decision-making process. The former can reinforce the decision insofar as deterministic information is a recurring and well-known element of case law (i.e., the outcomes of past cases). The latter, related to rare or exceptional characteristics, can make decision-making difficult, since it can modify the case law. The purpose of this thesis is to propose a deep learning model that would highlight these two types of information and study their impact (contribution) on the judge's decision-making process. The objective is to analyze similar decisions in order to highlight random and deterministic information in a body of decisions and quantify their importance in the judgment process.
Patel, Namrata. „Mise en œuvre des préférences dans des problèmes de décision“. Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT286/document.
Intelligent ‘services’ are increasingly used on e-commerce platforms to provide assistance to customers. In this context, preferences have gained rapid interest for their utility in solving problems related to decision making. Research on preferences in AI has shed light on various ways of tackling this problem, ranging from the acquisition of preferences to their formal representation and, eventually, their proper manipulation. Following a recent trend of stepping back and looking at decision-support systems from the user's point of view, i.e., designing them on the basis of psychological, linguistic and personal considerations, we take up the task of developing an "intelligent" tool which uses comparative preference statements for personalised decision support. We tackle and contribute to different branches of research on preferences in AI: (1) their acquisition, (2) their formal representation and manipulation, and (3) their implementation. We first address a bottleneck in preference acquisition by proposing a method of acquiring user preferences, expressed in natural language (NL), which favours their formal representation and further manipulation. We then focus on the theoretical aspects of handling comparative preference statements for decision support. We finally describe our tool for product recommendation, which uses: (1) a review-based analysis to generate a product database, (2) an interactive preference elicitation unit to guide users in expressing their preferences, and (3) a reasoning engine that manipulates comparative preference statements to generate a preference-based ordering on outcomes as recommendations.
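Comparative preference statements of the kind such a reasoning engine manipulates ("I prefer A to B") induce a partial order on outcomes, and one standard way to turn an acyclic set of statements into a ranking is a topological sort. A minimal sketch with the Python standard library (the statements are invented, and the thesis's actual engine is far richer than this):

```python
from graphlib import TopologicalSorter

# Comparative preference statements: (preferred, less_preferred).
statements = [
    ("window seat", "aisle seat"),
    ("aisle seat", "middle seat"),
    ("direct flight", "one stop"),
]

# Map each outcome to the set of outcomes preferred over it.
graph: dict[str, set[str]] = {}
for better, worse in statements:
    graph.setdefault(worse, set()).add(better)
    graph.setdefault(better, set())

# Topological order: more-preferred outcomes come before less-preferred ones.
ranking = list(TopologicalSorter(graph).static_order())
print(ranking)
```

Incomparable outcomes (here the seat chain versus the flight chain) can be interleaved arbitrarily, which is exactly why richer semantics for comparative statements are needed in practice.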
Shahab, Amin. „Using electroencephalograms to interpret and monitor the emotions“. Thèse, 2017. http://hdl.handle.net/1866/21285.
Dakoure, Caroline. „Study and experimentation of cognitive decline measurements in a virtual reality environment“. Thesis, 2020. http://hdl.handle.net/1866/24311.
At a time when digital technology has become an integral part of our daily lives, we can ask ourselves how our well-being is evolving. Highly immersive virtual reality allows the development of environments that promote relaxation and can improve the cognitive abilities and quality of life of many people. The first aim of this study is to reduce the negative emotions and improve the cognitive abilities of people suffering from subjective cognitive decline (SCD). To this end, we have developed a virtual reality environment called Savannah VR, where participants followed an avatar across a savannah. We recruited nineteen people with SCD to participate in the virtual savannah experience. The Emotiv Epoc headset captured their emotions for the entire virtual experience. The results show that immersion in the virtual savannah reduced the negative emotions of the participants and that the positive effects continued afterward. Participants also improved their cognitive performance. Confusion often occurs during learning when students do not understand new knowledge. It is a state that is also very present in people with dementia because of the decline in their cognitive abilities. Detecting and overcoming confusion could thus improve the well-being and cognitive performance of people with cognitive impairment. The second objective of this paper is, therefore, to develop a tool to detect confusion. We conducted two experiments and obtained a machine learning model based on brain signals to recognize four levels of confusion (90% accuracy). In addition, we created another model to recognize the cognitive function related to the confusion (82% accuracy).
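The abstract describes the confusion-detection model only at a high level (brain-signal features, four confusion levels, 90% accuracy). The general recipe of fitting a multiclass classifier on labelled feature vectors can be sketched on synthetic data; the features, the model choice and the figures below are placeholders, not the thesis's actual pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for EEG-derived feature vectors, labelled with
# four confusion levels (0 = none ... 3 = high).
X, y = make_classification(n_samples=600, n_features=16, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a multiclass classifier and score it on held-out samples.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```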
So, Florence. „Modelling causality in law = Modélisation de la causalité en droit“. Thesis, 2020. http://hdl.handle.net/1866/25170.
The machine learning community's interest in causality has significantly increased in recent years. This trend has not yet become popular in AI & Law. It should, because the current associative ML approach reveals certain limitations that causal analysis may overcome. This research aims to discover whether formal causal frameworks can be used in AI & Law. We proceed with a brief account of scholarship on reasoning and causality in science and in law. Traditionally, the normative frameworks for reasoning have been logic and rationality, but dual-process theory has shown that human decision-making depends on many factors that defy rationality. As such, statistics and probability were called upon to improve the prediction of decisional outcomes. In law, causal frameworks have been defined by landmark decisions, but most AI & Law models today do not involve causal analysis. We provide a brief summary of these models and then attempt to apply Judea Pearl's structural language and the Halpern-Pearl definitions of actual causality to model a few Canadian legal decisions that involve causality. The results suggest that it is not only possible to use formal causal models to describe legal decisions, but also useful, because a uniform schema eliminates ambiguity. Moreover, causal frameworks are helpful in promoting accountability and minimizing biases.
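The gap between simple but-for causation and the Halpern-Pearl definitions the abstract invokes can be shown on the classic two-cause (overdetermination) example. A minimal structural-model sketch (the scenario and function names are illustrative; this implements only the naive but-for test, not the full HP definitions):

```python
# Structural causal model: the fire occurs if either cause is present.
def fire(match: bool, lightning: bool) -> bool:
    return match or lightning

def but_for(cause_name: str, actual: dict) -> bool:
    """But-for test: would the outcome change if this cause alone were flipped?"""
    flipped = dict(actual, **{cause_name: not actual[cause_name]})
    return fire(**actual) != fire(**flipped)

actual = {"match": True, "lightning": True}

# Under overdetermination, neither cause passes the but-for test,
# which is precisely the gap the Halpern-Pearl definitions address
# by also considering interventions under contingencies.
print(but_for("match", actual), but_for("lightning", actual))  # False False
```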