Dissertations / Theses on the topic 'Intelligence artificielle – Information'
Consult the top 50 dissertations / theses for your research on the topic 'Intelligence artificielle – Information.'
You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Vittaut, Jean-Noël. "LeJoueur : un programme de General Game Playing pour les jeux à information incomplète et-ou imparfaite." Thesis, Paris 8, 2017. http://www.theses.fr/2017PA080102.
This thesis is a contribution to General Game Playing (GGP), a subfield of Artificial Intelligence (AI) that aims to develop autonomous agents able to play a wide variety of games called General Games. GGP differs from search algorithms designed to play specific games at a high level, and opens the possibility of evaluating the efficiency of AI methods without prior knowledge from experts. An important aspect of our work lies in the use of an implicit representation of the game tree as a set of logic rules, an explicit representation being too large to store in memory. In this context, we have proposed an efficient method of rule instantiation allowing the computation of a logic circuit. Parallelizing the evaluation of this circuit allowed us to significantly accelerate the exploration of the game tree. We have also proposed an adaptation of Monte-Carlo Tree Search for GGP, together with a method using RAVE (Rapid Action Value Estimation) at the beginning of the exploration, when few estimates are available.
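As a note for readers unfamiliar with RAVE: its core idea is to blend a move's scarce Monte-Carlo statistics with its more abundant AMAF (all-moves-as-first) statistics early in the search, letting the AMAF term fade as real visits accumulate. A minimal sketch, not taken from the thesis (the schedule constant `k` and the function name are illustrative assumptions):

```python
import math

def rave_value(q_mc, n_mc, q_amaf, k=1000.0):
    """Blend a move's Monte-Carlo mean value with its AMAF estimate.
    The AMAF weight beta decays as genuine visits n_mc accumulate."""
    beta = math.sqrt(k / (3 * n_mc + k))
    return beta * q_amaf + (1 - beta) * q_mc

# With few visits the AMAF statistic dominates; with many it fades out.
early = rave_value(q_mc=0.2, n_mc=1, q_amaf=0.8)
late = rave_value(q_mc=0.2, n_mc=100000, q_amaf=0.8)
```

Early in the search the blended value is close to the AMAF estimate (0.8); after many visits it is close to the true Monte-Carlo mean (0.2).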
Renoux, Jennifer. "Contribution to multiagent planning for active information gathering." Caen, 2015. https://hal.archives-ouvertes.fr/tel-01206920.
In this thesis, we address the problem of event exploration, which we define as the process of exploring a topologically known environment to gather information about the dynamic events occurring in it. Multiagent systems are commonly used for information-gathering applications, but they raise important challenges such as coordination and communication. This thesis proposes a new, fully decentralized model of multiagent planning for information gathering. In this model, called MAPING, the agents use an extended belief state that contains not only their own beliefs but also approximations of the other agents' beliefs. With this extended belief state, they can quantify the relevance of a piece of information for themselves and also for others, and then decide whether to explore a specific area or to communicate a specific piece of information, according to which action brings the most information to the system as a whole. The major drawback of this model is its complexity: the size of the belief-state space increases exponentially with the number of agents and the size of the environment. To overcome this issue, we also propose a solving algorithm that uses the commonly adopted assumption of variable independence. Finally, since event exploration is usually an open-ended problem, the agents need to re-check their beliefs even after reaching a good belief state; we therefore propose a smoothing function that lets the agents gradually forget old observations that may have become obsolete.
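To give a flavour of how an agent might score the value of communicating a belief to a teammate, here is a toy entropy-based sketch under our own assumptions; it is not the MAPING relevance measure itself, only an illustration of the underlying idea of comparing uncertainties:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli belief about one event."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def relevance(own_belief, other_belief):
    """How much more uncertain the other agent is than we are: a crude
    proxy for the value of communicating what we believe."""
    return entropy(other_belief) - entropy(own_belief)

# We are fairly sure an event occurred (p = 0.95); a teammate is not (p = 0.5):
gain = relevance(0.95, 0.5)
```

A positive gain suggests the communication action is worthwhile; an exploration action would be scored the same way against the expected reduction in the agent's own uncertainty.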
Ebadat, Ali Reza. "Toward robust information extraction models for multimedia documents." Rennes, INSA, 2012. http://www.theses.fr/2012ISAR0022.
Due to the huge amounts of multimedia documents being generated, researchers have studied approaches to managing them. Our goal is to facilitate this process by extracting information from any text related to such documents. Moreover, we want techniques robust enough to handle noisy and small data, so we use simple, knowledge-light techniques as a guarantee of robustness: statistical analysis of text and techniques inspired by Information Retrieval. In this thesis, we show experimentally that simple techniques without a priori knowledge can effectively extract information from text. In our case, such results were achieved by choosing a representation suited to the data rather than requiring complex processing.
Aboudib, Ala. "Neuro-inspired Architectures for the Acquisition and Processing of Visual Information." Thesis, Télécom Bretagne, 2016. http://www.theses.fr/2016TELB0419/document.
Computer vision and machine learning are two active research topics that have witnessed major breakthroughs in recent years, and much of the progress in these domains is the fruit of many years of research on the visual cortex and brain function. In this thesis, we focus on designing neuro-inspired architectures for processing information along three different stages of the visual cortex. At the lowest stage, we propose a neural model for the acquisition of visual signals, adapted to emulating eye movements and closely inspired by the function and architecture of the retina and the early layers of the ventral stream. At the highest stage, we address the memory problem, focusing on an existing neuro-inspired associative memory model called the Sparse Clustered Network. We propose a new information-retrieval algorithm that offers more flexibility and better performance than existing ones, and we suggest a generic formulation within which all existing retrieval algorithms fit and which can guide the design of new retrieval approaches in a modular fashion. At the intermediate stage, we propose a new way of dealing with the image-feature correspondence problem using a neural network model. This model deploys the structure of Sparse Clustered Networks, offers a gain in matching performance over the state of the art, and provides useful insight into how neuro-inspired architectures can serve as a substrate for implementing various vision tasks.
Roussel, Stéphanie. "Apports de la logique mathématique pour la modélisation de l'information échangée dans des systèmes multiagents interactifs." Toulouse, ISAE, 2010. http://www.theses.fr/2010ESAE0012.
Rybnik, Mariusz. "Contribution to the modelling and the exploitation of hybrid multiple neural networks systems : application to intelligent processing of information." Paris 12, 2004. https://athena.u-pec.fr/primo-explore/search?query=any,exact,990003948290204611&vid=upec.
For a great number of commonly encountered problems (modelling of complex processes, pattern recognition, medical diagnosis support, fault detection), data is presented in the form of a database, and is then transformed and processed. This work concentrates on the development of semi-automatic data-processing structures. The proposed approach is based on iterative decomposition of the initial problem: the main idea is to decompose initially complex problems in order to obtain simplification simultaneously at the structural level and at the processing level. The principal idea of this work is thus connected to the task-decomposition technique known as "divide and conquer". A key point of our approach is the integration of complexity-estimation techniques.
Rybnik, Mariusz Madani Kurosh. "Contribution to the modelling and the exploitation of hybrid multiple neural networks systems application to intelligent processing of information /." Créteil : Université de Paris-Val-de-Marne, 2007. http://doxa.scd.univ-paris12.fr:8080/theses-npd/th0394829.htm.
Electronic version only available within Université Paris 12 (intranet). Title taken from the title screen. Bibliography: 116 references.
Yameogo, Relwende Aristide. "Risques et perspectives du big data et de l'intelligence artificielle : approche éthique et épistémologique." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMLH10.
In the 21st century, the use of big data and AI in the field of health has gradually expanded, although it is accompanied by problems linked to the emergence of practices based on the use of digital traces. The aim of this thesis is to evaluate the use of big data and AI in medical practice, to uncover the processes generated by digital tools in the field of health, and to highlight the ethical problems they pose. The use of ICTs in medical practice is mainly based on EHRs, prescription software and connected objects. These uses raise many problems for physicians, who are aware of the risks involved in protecting patients' health data. In this work, we implement a method for designing CDSS, the temporal fuzzy vector space, which allows us to model a new clinical diagnostic score for pulmonary embolism. Through the "Human-trace" paradigm, our research allows us not only to measure the limitations of ICT use, but also to highlight the interpretative biases due to the disconnect between the individual, in all their complexity as a "Human-trace", and the data circulating about them via digital traces. While big data coupled with AI can play a major role in the implementation of CDSS, it cannot be limited to this field; we also study how to set up big-data and AI development processes that respect the deontological and medical-ethics rules associated with the appropriation of ICTs by the actors of the health system.
Amo, Sandra De. "Contraintes dynamiques et schémas transactionnels." Paris 13, 1995. http://www.theses.fr/1995PA132002.
Bayatani, Mohsen. "De la cybernétique aux sciences de la cognition." Lyon 3, 2007. http://www.theses.fr/2007LYO31005.
Contrary to what has often been said, cognitive science is only secondarily linked to the recent boom in computer technology. It first appeared in the United States along with the cybernetics movement of the 1940s, whose main objective was to create a general science of the brain and to build a neurology of the mind; this goal was finally reached in 1956 with the creation of the cognitive sciences, the new sciences of the brain. In this work we highlight the fact that in living systems, the "relational life" is associated with the "emotional life", which allows the living being to identify and recognize real objects through the feelings they generate, hence the adaptive advantages that conscious cognition brings to the organism. Therefore, contrary to the symbolic representation used in computers, conscious representation has an emotional dimension: human thought is not a simple calculation, and the mind cannot be reduced to a small number of logical operations. We then examine the philosophical implications of the cybernetic analysis of the living system, in which the living being is considered as a system of communication in its relationship with the environment and with other organisms. Vital phenomena, such as thought and intelligence, are thereby explained cybernetically, through the transmission of sensory information in neural circuits. A new conception of the living and of life is thus taking hold, in opposition to that of traditional philosophers.
Izza, Yacine. "Informatique ubiquitaire : techniques de curage d'informations perverties On the extraction of one maximal information subset that does not conflit with multiple contexts Extraction d'un sous-ensemble maximal qui soit cohérent avec des contextes mutuellement contradictoires On computing one max-inclusion consensus On admissible consensuses Boosting MCSes enumeration." Thesis, Artois, 2018. http://www.theses.fr/2018ARTO0405.
This thesis studies a possible artificial-intelligence approach for detecting and filtering inconsistent information in the knowledge bases of intelligent objects and components in ubiquitous computing. The approach is addressed from a practical point of view in the SAT framework: it is about implementing techniques for filtering inconsistencies in contradictory bases. Several contributions are made in this thesis. First, we have worked on the extraction of one maximal information set that must be satisfiable with multiple assumptive contexts, and have proposed an incremental approach for computing such a set (AC-MSS). Second, we were interested in the enumeration of the maximal satisfiable sets (MSS), or their complementary minimal correction sets (MCS), of an unsatisfiable CNF instance. In this contribution, a technique is introduced that boosts the currently most efficient practical approaches to enumerating MCSes: it implements a model-rotation paradigm that allows the set of MCSes to be computed in a heuristically efficient way. Finally, we have studied a notion of consensus for reconciling several sources of information. This form of consensus can obey various preference criteria, including maximality; we have developed an incremental algorithm for computing one maximal consensus with respect to set-theoretic inclusion, and have also introduced and studied the concept of admissible consensus, which refines the initial concept.
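For intuition, an MCS can be defined without any SAT machinery as a minimal set of clauses whose removal restores satisfiability. The following brute-force sketch (toy scale only, nothing like the boosted enumeration the thesis proposes) makes the definition concrete, with clauses written as lists of signed integer literals:

```python
from itertools import combinations, product

def satisfiable(clauses):
    """Naive SAT check by full enumeration (toy scale only)."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def mcses(clauses):
    """Enumerate Minimal Correction Sets: minimal sets of clause indices
    whose removal restores satisfiability. Iterating by size lets a simple
    superset filter guarantee minimality."""
    out = []
    for k in range(len(clauses) + 1):
        for subset in combinations(range(len(clauses)), k):
            if any(set(m) <= set(subset) for m in out):
                continue  # a strict subset is already an MCS
            rest = [c for i, c in enumerate(clauses) if i not in subset]
            if satisfiable(rest):
                out.append(subset)
    return out

# x1 and not-x1 conflict; removing either singleton restores satisfiability.
cnf = [[1], [-1], [2]]
result = mcses(cnf)
```

On this instance the two MCSes are the two conflicting unit clauses; their complements are the MSSes.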
Dias, Gaël. "Information Digestion." Habilitation à diriger des recherches, Université d'Orléans, 2010. http://tel.archives-ouvertes.fr/tel-00669780.
Murphy, Killian. "Predictive maintenance of network equipment using machine learning methods." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAS013.
With the improvement in the computation power available for advanced applications of Machine Learning (ML), Network Fault Prediction (NFP) is experiencing renewed scientific interest. The ability to predict network equipment failure is increasingly identified as an effective means of improving network reliability; this predictive capability can be used to mitigate incoming network failures or to carry out predictive maintenance, which could contribute to establishing zero-failure networks and allow safety-critical applications to run over larger, heterogeneous networks. In this PhD thesis, we contribute to the NFP field by focusing on network alarm prediction. First, we present a comprehensive survey of NFP using ML methods entirely dedicated to telecommunication networks, and identify new directions for research in the field. Second, we propose and study a set of ML performance metrics (maintenance-cost reduction and Quality of Service improvement) adapted to NFP in the context of network maintenance. Third, we describe the complete data-processing architecture, including the network and software infrastructure and the data pre-processing pipeline implemented at SPIE ICS, a networks and systems integrator, and give a precise model of the alarm- and failure-prediction problem. Fourth, we establish a benchmark of different ML solutions applied to our dataset, considering decision-tree-based methods, Multi-Layer Perceptrons and Support Vector Machines, and test both the generalization of prediction performance across equipment types and the usual ML generalization of the proposed models and parameters. We then successfully apply sequential ML architectures, Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks, to our sequential SNMP dataset. Finally, we study the impact of the definition of the prediction horizon (and the associated arbitrary timeframes) on the prediction performance of the ML models.
Budnyk, Ivan. "Contribution to the Study and Implementation of Intelligent Modular Self-organizing Systems." Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00481367.
Bayatani, Mohsen Parrochia Daniel. "De la cybernétique aux sciences de la cognition." [S.l.] : [s.n.], 2007. http://thesesbrain.univ-lyon3.fr/sdx/theses/lyon3/2006/bayatani_m.
Ruuska, Boquist Philip. "Utveckling av artificiell intelligens med genetiska tekniker och artificiella neurala nätverk." Thesis, University of Skövde, School of Humanities and Informatics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-3082.
Using artificial neural networks in computer games is becoming an increasingly popular way to control computer-driven agents, since it gives the agents more human-like behaviour and the ability to generalize, face new situations, and handle them in a way that other types of artificial intelligence cannot always manage. The difficulty with this technique is training the network, which often requires a long learning period and many different training cases. By using genetic algorithms to train the networks, much of this time-consuming and computationally demanding work can be avoided. This report investigates the possibility of using genetic techniques to train artificial neural networks in an environment adapted to, and focused on, games. Using genetic techniques to train artificial neural networks is a good learning technique for problems where a suitable fitness function can easily be created and where other learning techniques may be difficult to apply. However, it is not a technique that entirely removes the work from developers; rather, it shifts the effort towards designing the fitness function and tuning its variables.
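The approach described above, evolving network weights with a genetic algorithm instead of backpropagation, can be sketched in a few lines. This is a toy illustration with invented parameters (population size, mutation scale, network shape), not the dissertation's implementation:

```python
import math
import random

def forward(w, x):
    """A tiny 2-2-1 network with tanh hidden units; w is a flat list of 9 weights."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h1 + w[7] * h2 + w[8]

def fitness(w, cases):
    """Negated squared error over the training cases (higher is better)."""
    return -sum((forward(w, x) - y) ** 2 for x, y in cases)

def evolve(cases, pop=60, gens=200, sigma=0.4, seed=1):
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: fitness(w, cases), reverse=True)
        parents = population[: pop // 4]            # truncation selection
        population = parents + [                    # elitism + Gaussian mutation
            [w + rng.gauss(0, sigma) for w in rng.choice(parents)]
            for _ in range(pop - len(parents))
        ]
    return max(population, key=lambda w: fitness(w, cases))

# XOR: the classic problem a single-layer network cannot learn.
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
best = evolve(xor)
```

Here the fitness function is trivial to state; as the report notes, in a game setting the developer's effort moves into designing that function well.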
Dal Col, Laura. "On distributed control analysis and design for Multi-Agent systems subject to limited information." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0034/document.
Multi-agent systems are dynamical systems composed of multiple interacting elements known as agents. Each agent is a dynamical system with two characteristics: first, it is capable of autonomous action, that is, it is able to evolve according to a self-organised behavior that is not influenced by the external environment; second, it is able to exchange information with other agents in order to accomplish complex tasks such as coordination, cooperation and conflict resolution. One commonly studied problem in multi-agent systems is synchronization: the agents are synchronized when their time evolutions converge to a common trajectory. Many real-world applications, such as flocking and formation control, can be cast as synchronization problems, and agent synchronization can be achieved using different approaches. In this thesis, we propose distributed and centralized control paradigms for the synchronization of multi-agent systems. We develop necessary and sufficient conditions for the synchronization of multi-agent systems composed of identical linear time-invariant agents, using a Lyapunov-based approach, and use these conditions to design distributed synchronization controllers. We then extend this result to multi-agent systems subject to external disturbances, enforcing disturbance rejection with H∞ control techniques, and further extend the analysis to multi-agent systems with actuator constraints using LMI-based anti-windup techniques. We test the proposed control-design strategies on simulated examples, two of which are inspired by real-world applications: in the first, we study airplane formation control as a synchronization problem; in the second, we analyze the delivery of video streams as a synchronization problem and compare the results to existing controllers.
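The synchronization problem itself can be illustrated, independently of the thesis's H∞ and LMI machinery, with the elementary linear consensus protocol in discrete time; the gain `eps` and the ring topology below are arbitrary illustrative choices:

```python
def consensus_step(states, neighbors, eps=0.2):
    """One step of the linear consensus protocol: each agent nudges its
    state toward the states of its neighbors."""
    return [
        x + eps * sum(states[j] - x for j in neighbors[i])
        for i, x in enumerate(states)
    ]

# Four agents on an undirected ring, arbitrary initial states:
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
states = [0.0, 1.0, 4.0, -2.0]
for _ in range(100):
    states = consensus_step(states, neighbors)
# On a connected undirected graph with a small enough gain, all states
# converge to the average of the initial states (here 0.75).
```

The agents' trajectories converge to a common value even though each agent only sees its two neighbors, which is the essence of distributed synchronization.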
Blet, Loïc. "Configuration automatique d’un solveur générique intégrant des techniques de décomposition arborescente pour la résolution de problèmes de satisfaction de contraintes." Thesis, Lyon, INSA, 2015. http://www.theses.fr/2015ISAL0085/document.
Constraint programming integrates generic solving algorithms within declarative languages based on constraints: these languages allow us to describe combinatorial problems as a set of variables which have to take their values in domains while satisfying constraints. Numerous real-life problems can be modelled in this way, for instance planning and scheduling problems; these problems are NP-complete in the general case of finite domains. We introduce in this work a generic solving algorithm parameterized by: a strategy for exploring the search space, chosen among six (chronological backtracking, conflict-directed backjumping, conflict-directed backjumping with reordering, dynamic backtracking, decision repair, and backtracking with tree decomposition); a variable-ordering heuristic, chosen among two (min-domain/ddeg and min-domain/wdeg); and a constraint-propagation technique, chosen among two (forward checking and maintaining arc consistency). This algorithm thus leads to 24 different configurations, some corresponding to already known algorithms and others being new. These 24 configurations have been compared experimentally on a benchmark of more than a thousand instances, each configuration being executed several times to account for the non-determinism of the executions, and a statistical test has been used to compare performances. This experimental evaluation allowed us to better understand the complementarity of the different solving mechanisms, with a focus on the ability to exploit the structure of the instances to speed up the solving process. We identify 13 complementary configurations such that every instance of our benchmark is well solved by at least one of them.
A second contribution of this work is a selector able to choose automatically the best configuration of our generic solver for each new instance to be solved: we describe each instance by a set of features and use machine-learning techniques to build a model that chooses a configuration based on these features. Since learning is generally harder when there are many configurations to choose from, we state the problem of choosing a subset of configurations as a set-covering problem and compare two criteria: the first aims to maximize the number of instances solved by at least one configuration, the second to maximize the number of instances for which a good configuration is available. We show experimentally that the second strategy generally obtains better results and that the selector performs better than each of the 24 initial configurations.
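Choosing a small set of configurations that together cover all solvable instances is the classic set-covering problem, for which the greedy heuristic is the textbook approach. An illustrative sketch with invented configuration names and instance sets, not the thesis's actual portfolio:

```python
def greedy_cover(instances, solved_by):
    """Greedy set cover: repeatedly pick the configuration that solves the
    most not-yet-covered instances (the classic ln(n)-approximation)."""
    uncovered = set(instances)
    chosen = []
    while uncovered:
        best = max(solved_by, key=lambda c: len(solved_by[c] & uncovered))
        gained = solved_by[best] & uncovered
        if not gained:
            break  # remaining instances are solved by no configuration
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

# Hypothetical configurations and the benchmark instances each one solves:
solved_by = {
    "cbj+wdeg": {1, 2, 3, 4},
    "dbt+ddeg": {4, 5},
    "btd+wdeg": {5, 6, 7},
}
chosen, left = greedy_cover(range(1, 8), solved_by)
```

Two configurations suffice here; the thesis's second criterion would additionally weight how well, not just whether, each instance is handled.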
Kotevska, Olivera. "Learning based event model for knowledge extraction and prediction system in the context of Smart City." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM005/document.
Billions of "things" connected to the Internet constitute symbiotic networks of communication devices (e.g., phones, tablets and laptops), smart appliances (e.g., fridges, coffee makers and so forth) and networks of people (e.g., social networks). The concept of traditional networks (e.g., computer networks) is thus expanding and will in the future go further still, including more entities and information. These networks and devices are constantly sensing, monitoring and generating a vast amount of data on all aspects of human life. One of the main challenges in this area is that the network consists of "things" which are heterogeneous in many ways; another is that the state of the interconnected objects changes over time; and there are so many entities in the network that it is crucial to identify their interdependencies in order to better monitor and predict the network's behavior. In this research, we address these problems by combining the theory and algorithms of event processing with machine learning. Our goal is to propose a possible solution for better using the information generated by these networks, helping to create systems that detect and respond promptly to situations occurring in urban life, so that smart decisions can be made for citizens, organizations, companies and city administrations. Social media is treated as a source of information about situations and facts related to users and their social environment. First, we tackle the problem of identifying public opinion over a given period (year, month) to get a better understanding of city dynamics; to solve this problem, we propose a new algorithm for analyzing complex and noisy textual data such as Twitter messages (tweets), which performs automatic categorization and similarity identification between event topics using clustering techniques.
The second challenge is combining network data with various properties and characteristics into a common format that facilitates data sharing among services. To solve it, we created a common event model that reduces representation complexity while keeping the maximum amount of information. This model has two major additions, semantics and scalability: it is underlain by an upper-level ontology that adds interoperability capabilities, and its structure is flexible in admitting new entries and features. We validated this model using complex event patterns and predictive-analytics techniques. To deal with the dynamic environment and unexpected changes, we created a dynamic, resilient network model that always chooses the optimal model for analytics and automatically adapts to changes by selecting the next best model. We used a qualitative and quantitative approach for scalable event-stream selection, which narrows down the solution for link analysis and for the optimal and alternative best models; it also identifies efficient relationship analysis between data streams, such as correlation, causality and similarity, to find relevant data sources that can act as an alternative data source or complement the analytics process.
Ferré, Arnaud. "Représentations vectorielles et apprentissage automatique pour l’alignement d’entités textuelles et de concepts d’ontologie : application à la biologie." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS117/document.
The impressive increase in the quantity of textual data makes it difficult today to analyze it without the assistance of tools. However, a text written in natural language is unstructured data, i.e. it cannot be interpreted by a specialized computer program, without which the information in the text remains largely under-exploited. Among the tools for automatic extraction of information from text, we are interested in automatic text-interpretation methods for the entity-normalization task, which consists in automatically matching entity mentions in text to concepts in a reference terminology. To accomplish this task, we propose a new approach that aligns two types of vector representations of entities, each capturing part of their meaning: word embeddings for textual mentions and concept embeddings for concepts, the latter designed specifically for this work. The alignment between the two is learned through supervised training. The developed methods have been evaluated on a reference dataset from the biological domain, on which they now represent the state of the art, and they are integrated into a natural-language-processing software suite whose code is freely shared.
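The alignment idea, learning a map from mention-embedding space to concept-embedding space by supervised regression, can be sketched with a deliberately tiny linear version in pure Python; the vectors and dimensions are made up, and the thesis's actual models are of course richer:

```python
def fit_linear_map(pairs, dim_in, dim_out, lr=0.1, steps=500):
    """Learn a matrix W so that W applied to a mention vector is close to
    the paired concept vector, by gradient descent on squared error."""
    W = [[0.0] * dim_in for _ in range(dim_out)]
    for _ in range(steps):
        for x, y in pairs:
            pred = [sum(W[i][j] * x[j] for j in range(dim_in))
                    for i in range(dim_out)]
            for i in range(dim_out):
                err = pred[i] - y[i]
                for j in range(dim_in):
                    W[i][j] -= lr * err * x[j]  # gradient of squared error
    return W

def predict(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Made-up 2-d "mention" vectors paired with target "concept" vectors:
pairs = [([1.0, 0.0], [0.5, 1.0]), ([0.0, 1.0], [1.0, -0.5])]
W = fit_linear_map(pairs, dim_in=2, dim_out=2)
pred1 = predict(W, [1.0, 0.0])
```

At prediction time, a new mention would be mapped through W and matched to its nearest concept embedding in the terminology.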
Francis, Fanch. "De la prédiction à la détection d’évènements : L’analyse des mégadonnées au service du renseignement de sources ouvertes." Thesis, Lille 3, 2019. https://pepite-depot.univ-lille.fr/RESTREINT/EDSHS/2019/2019LIL3H046.pdf.
Understanding the dynamics of a conflict in order to anticipate its evolution is of major interest for open-source military intelligence (OSINT) and police intelligence, particularly in the context of intelligence-led policing. While the ambition to predict the events of a conflict is not realistic, the ambition to detect them is an important and achievable objective. The human and social sciences, particularly the information and communication sciences combined with the science of data and documents, make it possible to exploit digital social networks in such a way as to make event detection and monitoring a more appropriate objective and method than standard "protest event analysis" in the context of modern wars and the connected society; at the same time, this requires a renewed intelligence cycle. Based on data from the social network Twitter, collected during the Ukrainian crisis, this thesis shows the relevance of conflict detection and monitoring using our DETEVEN method. This method not only identifies relevant events in a conflict but also facilitates their monitoring and interpretation; it is based on the detection of statistical anomalies and the adaptation of protest event analysis to social media. Our method is particularly effective on what we define as connected theatres of operation (CTOs), characteristic of new hybrid-warfare contexts, and on misinformation or influence operations. The detected events were analytically exploited using a platform designed for an analyst, allowing effective data visualization. In a crisis situation, especially in a "social movement war" where each user becomes a de facto social sensor, information control is a strategic issue; this thesis therefore shows how information literacy is an important issue for individuals and groups.
Mouaddib, Noureddine. "Gestion des informations nuancées : une proposition de modèle et de méthode pour l'identification nuancée d'un phénomène." Nancy 1, 1989. http://www.theses.fr/1989NAN10475.
Feutry, Clément. "Two sides of relevant information : anonymized representation through deep learning and predictor monitoring." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS479.
The work presented here lies, in its first part, at the intersection of deep learning and anonymization. A full framework was developed to identify and remove, to a certain extent and in an automated manner, the features linked to an identity in the context of image data. Two different kinds of data processing were explored; both share the same Y-shaped network architecture, although the components of the network vary according to the final purpose. The first was about building from the ground up an anonymized representation that allows a trade-off between keeping relevant features and tampering with private features; this framework led to a new loss function. The second kind of data processing specifies no relevant information about the data, only private information, meaning that everything not related to the private features is assumed relevant; the anonymized representation therefore shares the same nature as the initial data (e.g. an image is transformed into an anonymized image). This task led to another type of architecture (still Y-shaped) and produced results strongly dependent on the type of data. The second part of the work concerns another kind of relevant information: the monitoring of predictor behavior. In the context of black-box analysis, we only have access to the probabilities output by the predictor (without any knowledge of the structure or architecture producing them), and this monitoring aims to detect abnormal behavior indicating a potential mismatch between the data statistics and the model statistics. Two methods are presented, using different tools: the first is based on comparing the empirical cumulative distributions of known data and of the data to be tested; the second introduces two tools, one relying on the classifier's uncertainty and the other on the confusion matrix. Both methods produce conclusive results.
Déguernel, Ken. "Apprentissage de structures musicales en contexte d'improvisation." Electronic Thesis or Diss., Université de Lorraine, 2018. http://www.theses.fr/2018LORR0011.
Full textCurrent musical improvisation systems are able to generate unidimensional musical sequences by recombining their musical contents. However, considering several dimensions (melody, harmony...) and several temporal levels is a difficult issue. In this thesis, we propose to combine probabilistic approaches with formal language theory in order to better assess the complexity of a musical discourse, from both a multidimensional and a multi-level point of view, in the context of improvisation where the amount of data is limited. First, we present a system able to follow the contextual logic of an improvisation modelled by a factor oracle whilst enriching its musical discourse with multidimensional knowledge represented by interpolated probabilistic models. Then, this work is extended to create another system using a belief propagation algorithm to represent the interaction between several musicians, or between several dimensions, in order to generate multidimensional improvisations. Finally, we propose a system able to improvise on a temporal scenario with multi-level information modelled by a hierarchical grammar. We also propose a learning method for the automatic analysis of hierarchical temporal structures. Every system was evaluated by professional musicians and improvisers during listening sessions
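The interpolated probabilistic models mentioned above can be illustrated by a linear interpolation of per-dimension event probabilities, a standard smoothing device when each single model is trained on little data. The weights, event names and dictionary-based toy models below are assumptions for illustration, not Déguernel's actual formulation:

```python
def interpolated_prob(event, models, weights):
    """Linear interpolation of probabilistic models over several musical
    dimensions (e.g. melody, harmony). Each model maps an event to a
    probability; weights must sum to one."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * m.get(event, 0.0) for m, w in zip(models, weights))

# Toy per-dimension distributions over the next note.
melody = {"C4": 0.5, "E4": 0.5}
harmony = {"C4": 0.2, "G4": 0.8}
p = interpolated_prob("C4", [melody, harmony], [0.6, 0.4])  # 0.6*0.5 + 0.4*0.2
```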
Moreau, Alain. "Contribution au traitement des informations subjectives dans les systèmes experts." Valenciennes, 1987. https://ged.uphf.fr/nuxeo/site/esupversions/78bf2889-467a-43cb-81db-54626699a4c5.
Full textRavi, Mondi. "Confiance et incertitude dans les environnements distribués : application à la gestion des données et de la qualité des sources de données dans les systèmes M2M (Machine to Machine)." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM090/document.
Full textTrust and uncertainty are two important aspects of many distributed systems. For example, multiple sources may be available for the same type of information. This poses the problem of selecting the source that produces the most certain information and of resolving incoherence among the available information. Managing trust and uncertainty together is a complex problem, and this thesis develops a solution to it. Trust and uncertainty have an intrinsic relationship: trust is primarily related to the sources of information, while uncertainty is a characteristic of the information itself. In the absence of trust and uncertainty measures, a system generally suffers from problems such as incoherence and uncertainty. To improve on this, we hypothesize that sources with higher trust levels produce more certain information than those with lower trust values. We then use the trust measures of the information sources to quantify uncertainty in the information and thereby infer high-level conclusions with greater certainty. A general trend in modern distributed systems is to embed reasoning capabilities in the end devices to make them smart and autonomous. We model these end devices as agents of a multi-agent system. The major sources of beliefs for such agents are external information sources that can have varying trust levels. Moreover, the incoming information and beliefs are associated with a degree of uncertainty. Hence, the agents face the twofold problem of managing trust in sources and handling uncertainty in the information. We illustrate this with three application domains: (i) the intelligent community, (ii) smart city garbage collection, and (iii) FIWARE, a European project about the Future Internet that motivated the research on this topic. 
Our solution models the devices (or entities) of these domains as intelligent agents that comprise a trust management module, an inference engine and a belief revision system. We show that this set of components helps agents manage trust in other sources, quantify uncertainty in the information, and then use this to infer more certain high-level conclusions. We finally assess our approach using simulated and real data pertaining to the different application domains
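The hypothesis that higher-trust sources yield more certain conclusions can be sketched as a trust-weighted vote over conflicting reports. This simple fusion rule is an illustrative stand-in for the thesis's trust management and belief revision modules, not the actual method:

```python
def fuse_reports(reports):
    """Trust-weighted fusion of conflicting boolean reports.

    `reports` is a list of (value, trust) pairs with trust in [0, 1].
    Returns the fused value and a certainty degree for it.
    """
    weight_true = sum(t for v, t in reports if v)
    weight_false = sum(t for v, t in reports if not v)
    total = weight_true + weight_false
    if total == 0:
        return None, 0.0
    if weight_true >= weight_false:
        return True, weight_true / total
    return False, weight_false / total

# Two weakly trusted sources disagree with one highly trusted source:
# the highly trusted report wins, with a quantified certainty.
value, certainty = fuse_reports([(True, 0.9), (False, 0.3), (False, 0.2)])
```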
Li, Junkang. "Games with incomplete information : complexity, algorithmics, reasoning." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMC270.
Full textIn this dissertation, we study games with incomplete information. We begin by establishing a complete landscape of the complexity of computing optimal pure strategies for different subclasses of games when games are given explicitly as input. We then study the complexity when games are represented compactly (e.g. by their game rules); for this, we design two formalisms for such compact representations. We then concentrate on games with incomplete information, first proposing a new formalism called combinatorial games with incomplete information, which encompasses games with no chance (apart from a random initial drawing) and with only public actions. For such games, this new formalism captures the notion of information and knowledge of the players better than the extensive form. Next, we study algorithms and their optimisations for solving combinatorial games with incomplete information; some of these algorithms are applicable beyond these games. In the last part, we present a work in progress concerning the modelling of recursive reasoning and of different types of knowledge about the behaviour of the opponents in games with incomplete information
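For intuition on the explicit-input case, the classic backward-induction computation of an optimal pure-strategy value in a perfect-information game tree can be sketched as follows; the thesis's complexity results cover far richer subclasses, including incomplete information, which this toy sketch does not:

```python
def minimax(node, maximizing=True):
    """Backward induction on an explicitly given two-player game tree.

    A leaf is a number (payoff for the maximizing player); an internal
    node is a list of children, with players alternating by depth.
    """
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The root (max player) chooses between two min nodes:
# min([3, 5]) = 3 and min([2, 9]) = 2, so the optimal value is 3.
tree = [[3, 5], [2, 9]]
best = minimax(tree)
```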
Hörnsten, Jessica, Niclas Lindgren, and Simon Skidmore. "Utopi eller dystopi? : Föreställningar kring artificiell intelligens." Thesis, Umeå universitet, Institutionen för informatik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-159462.
Full textFuchs, Béatrice. "Représentation des connaissances pour le raisonnement à partir de cas : le système ROCADE." Saint-Etienne, 1997. http://www.theses.fr/1997STET4017.
Full textJiu, Mingyuan. "Spatial information and end-to-end learning for visual recognition." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0038/document.
Full textIn this thesis, we present our research on visual recognition and machine learning. Two types of visual recognition problems are investigated: action recognition and human body part segmentation. Our objective is to combine spatial information, such as label configuration in feature space or the spatial layout of labels, into an end-to-end framework to improve recognition performance. For human action recognition, we apply the bag-of-words model and reformulate it as a neural network for end-to-end learning. We propose two algorithms that make use of label configuration in feature space to optimize the codebook. One is based on classical error backpropagation: the codewords are adjusted by gradient descent. The other is based on cluster reassignments, where the cluster labels are reassigned for all the feature vectors in a Voronoi diagram. As a result, the codebook is learned in a supervised way. We demonstrate the effectiveness of the proposed algorithms on the standard KTH human action dataset. For human body part segmentation, we treat segmentation as a classification problem, where a classifier acts on each pixel. Two machine learning frameworks are adopted: randomized decision forests and convolutional neural networks. We integrate a priori information on the spatial part layout, in terms of pairs of labels or pairs of pixels, into both frameworks during training to make the classifier more discriminative, while pixelwise classification is still performed at test time. Three algorithms are proposed: (i) spatial part layout is integrated into the randomized decision forest training procedure; (ii) spatial pre-training is proposed for feature learning in the ConvNets; (iii) spatial learning is proposed in the logistic regression (LR) or multilayer perceptron (MLP) used for classification
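The bag-of-words stage that the thesis reformulates as a neural network can be sketched as a hard assignment of local features to their nearest codewords, followed by a normalized histogram. The toy codebook and features below are assumptions for illustration:

```python
import numpy as np

def bow_histogram(features, codebook):
    """Encode a set of local feature vectors as a bag-of-words histogram
    by hard-assigning each vector to its nearest codeword. This is the
    classical pipeline stage that end-to-end learning replaces with a
    differentiable layer so the codebook itself can be optimized."""
    # Pairwise squared distances between features and codewords.
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assignments = dists.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
features = np.array([[0.1, 0.2], [9.8, 10.1], [10.2, 9.9]])
hist = bow_histogram(features, codebook)  # one feature near codeword 0, two near codeword 1
```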
Guastella, Davide Andrea. "Dynamic learning of the environment for eco-citizen behavior." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30160.
Full textThe development of sustainable smart cities requires the deployment of Information and Communication Technology (ICT) to ensure better services and information available at any time and everywhere. Although IoT devices are becoming more powerful and less costly, implementing an extensive sensor network in an urban context remains expensive. This thesis proposes a technique for estimating missing environmental information in large-scale environments. Our technique provides information for areas of the environment not covered by any sensing device. The contribution of our proposal is summarized in the following points: * limiting the number of sensing devices to be deployed in an urban environment; * the exploitation of heterogeneous data acquired from intermittent devices; * real-time processing of information; * self-calibration of the system. Our proposal uses the Adaptive Multi-Agent System (AMAS) approach to solve the problem of information unavailability. In this approach, an exception is considered as a Non-Cooperative Situation (NCS) that has to be solved locally and cooperatively. HybridIoT exploits both homogeneous information (information of the same type) and heterogeneous information (information of different types or units), acquired from the available sensing devices, to provide accurate estimates at the points of the environment where no sensing device is available. 
The proposed technique estimates accurate environmental information under the conditions of uncertainty arising from the urban application context in which the project is situated, conditions that have not been explored by state-of-the-art solutions: * openness: sensors can enter or leave the system at any time without any reconfiguration; * large scale: the system can be deployed in a large urban context and ensure correct operation with a significant number of devices; * heterogeneity: the system handles different types of information without any a priori configuration. Our proposal does not require any input parameters or reconfiguration. The system can operate in open, dynamic environments such as cities, where a large number of sensing devices can appear or disappear at any time without prior notification. We carried out different experiments comparing the obtained results to various standard techniques to assess the validity of our proposal. We also developed a pipeline of standard techniques to produce baseline results compared against those obtained by our multi-agent proposal
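As a baseline for the estimation task HybridIoT addresses, a missing environmental reading can be interpolated from nearby available sensors, for instance by inverse-distance weighting. This is a standard sketch under assumed positions and readings, not the cooperative multi-agent method of the thesis:

```python
def estimate_missing(target_pos, sensors):
    """Inverse-distance-weighted estimate of an environmental value at a
    point with no sensing device. `sensors` maps (x, y) positions to
    readings; closer sensors contribute more to the estimate."""
    num, den = 0.0, 0.0
    for (x, y), value in sensors.items():
        d2 = (x - target_pos[0]) ** 2 + (y - target_pos[1]) ** 2
        if d2 == 0:
            return value  # a sensor sits exactly at the target point
        w = 1.0 / d2
        num += w * value
        den += w
    return num / den

# Temperature at (1, 0), estimated from sensors at distance 1 and 3:
# weights 1 and 1/9, so the estimate stays close to the nearer reading.
est = estimate_missing((1, 0), {(0, 0): 20.0, (4, 0): 24.0})
```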
Marroquín, Cortez Roberto Enrique. "Context-aware intelligent video analysis for the management of smart buildings." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCK040/document.
Full textTo date, computer vision systems are limited to extracting digital data from what the cameras "see". However, the meaning of what they observe could be greatly enhanced by knowledge of the environment and of human skills. In this work, we propose a new approach that cross-fertilizes computer vision with contextual information, based on a semantic model defined by an expert. This approach extracts knowledge from images and uses it to perform real-time reasoning according to the contextual information, events of interest and logic rules. Reasoning with image knowledge makes it possible to overcome some problems of computer vision, such as occlusions and missed detections, and to offer services such as people guidance and people counting. The proposed approach is the first step towards an "all-seeing" smart building that can automatically react according to its evolving information, i.e., a context-aware smart building. The proposed framework, named WiseNET, is an artificial intelligence (AI) in charge of taking decisions in a smart building (which can be extended to a group of buildings or even a smart city). This AI enables communication between the building itself and its users in a language understandable by humans
Bannour, Sondes. "Apprentissage interactif de règles d'extraction d'information textuelle." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCD113/document.
Full textNot provided
Al-Hakawati, al-Dakkak Oumayma. "Extraction automatique de paramètres formantiques guidée par le contexte et élaboration de règles de synthèse." Grenoble INPG, 1988. http://www.theses.fr/1988INPG0056.
Full textSassatelli, Lucile. "Codes LDPC multi-binaires hybrides et méthodes de décodage itératif." Phd thesis, Université de Cergy Pontoise, 2008. http://tel.archives-ouvertes.fr/tel-00819413.
Full textChafik, Sanaa. "Machine learning techniques for content-based information retrieval." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL008.
Full textThe amount of media data is growing at high speed with the fast growth of the Internet and of media resources. Performing an efficient similarity (nearest neighbor) search in such large collections of data is a very challenging problem that the scientific community has been attempting to tackle. One of the most promising solutions to this fundamental problem is Content-Based Media Retrieval (CBMR) systems: search systems that perform the retrieval task in large media databases based on the content of the data. CBMR systems consist essentially of three major units: a data representation unit for feature representation learning, a multidimensional indexing unit for structuring the resulting feature space, and a nearest neighbor search unit to perform efficient search. Media data (image, text, audio, video, etc.) can be represented by meaningful numeric information (a multidimensional vector), called a feature descriptor, describing the overall content of the input data. The task of the second unit is to structure the resulting feature descriptor space into an index structure, in which the third unit performs effective nearest neighbor search. In this work, we address the nearest neighbor search problem by proposing three Content-Based Media Retrieval approaches. All three are unsupervised, and can thus adapt to both labeled and unlabeled real-world datasets. They are based on a hashing indexing scheme to perform effective high-dimensional nearest neighbor search. Unlike most recent hashing approaches, which favor indexing in Hamming space, our proposed methods provide index structures adapted to a real-space mapping. Although Hamming-based hashing methods achieve a good accuracy-speed tradeoff, their accuracy drops owing to information loss during the binarization process. 
By contrast, real-space hashing approaches provide a more accurate approximation in the mapped real space, as they avoid hard binary approximations. Our proposed approaches can be classified into shallow and deep approaches. In the former category, we propose two shallow hashing-based approaches, namely "Symmetries of the Cube Locality Sensitive Hashing" (SC-LSH) and "Cluster-based Data Oriented Hashing" (CDOH), based respectively on randomized hashing and on shallow learning-to-hash schemes. The SC-LSH method addresses the storage problem faced by most randomized hashing approaches: it is a semi-random scheme that partially reduces the randomness, and thus the memory footprint, of randomized hashing, while maintaining its efficiency in structuring heterogeneous spaces. The CDOH approach eliminates the randomness altogether by combining machine learning techniques with the hashing concept, and outperforms the randomized hashing approaches in terms of computation time, memory space and search accuracy. The third approach is a deep-learning-based hashing scheme, named "Unsupervised Deep Neuron-per-Neuron Hashing" (UDN2H). It indexes the output of each neuron of the top layer of a deep unsupervised model, namely a deep autoencoder, individually, with the aim of capturing the high-level structure of each neuron's output. Our three approaches, SC-LSH, CDOH and UDN2H, were proposed sequentially as the thesis progressed, with an increasing level of complexity in the models developed and in the effectiveness and performance obtained on large real-world datasets
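The hashing idea underlying these indexing schemes can be illustrated with plain random-projection LSH, where each random hyperplane contributes one sign bit so that nearby vectors tend to share hash codes. This generic sketch is not SC-LSH, CDOH or UDN2H themselves:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_hash(vectors, planes):
    """Random-projection LSH: one sign bit per hyperplane. Vectors on
    the same side of every hyperplane get identical codes."""
    return (vectors @ planes.T > 0).astype(int)

planes = rng.standard_normal((8, 4))   # 8 random hyperplanes in R^4
a = np.array([1.0, 0.5, -0.2, 0.3])
b = a + 0.01                            # near neighbor of a
c = -a                                  # antipode of a: every sign bit flips
bits_a, bits_b, bits_c = lsh_hash(np.stack([a, b, c]), planes)
```

The Hamming distance between the codes of `a` and its near neighbor `b` never exceeds that between `a` and its antipode `c`, which differs in every bit.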
Agarwal, Rachit. "Towards enhancing information dissemination in wireless networks." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2013. http://www.theses.fr/2013TELE0020.
Full textIn public warning message systems, information dissemination across the network is a critical aspect that has to be addressed. Warning messages should reach as many nodes in the network as possible in a short time. In communication networks, especially those based on device-to-device interactions, the dissemination of information has lately attracted a lot of interest, and the need for self-organization of the network has been brought up. Self-organization leads to local behaviors and interactions that have global effects and helps address scaling issues. Self-organized features allow autonomous behavior with low memory usage. Some examples of self-organization phenomena observed in nature are lateral inhibition and flocking. To provide self-organized features to communication networks, insights from such naturally occurring phenomena are used. Achieving small-world properties is an attractive way to enhance information dissemination across the network. In the small-world model, rewiring of links in the network is performed by altering the length and the direction of the existing links. In an autonomous wireless environment, such organization can be achieved using self-organization phenomena like lateral inhibition and flocking, together with beamforming (a communication concept). Towards this, we first use lateral inhibition, an analogy to flocking behavior, and beamforming to show how the dissemination of information can be enhanced. Lateral inhibition is used to create virtual regions in the network; then, using the analogy of flocking rules, the beam properties of the nodes in the regions are set. We then prove that small-world properties are achieved, using the average path length metric. However, the proposed algorithm is applicable to static networks, and the flocking and lateral-inhibition concepts, if used in a mobile scenario, would be highly complex in terms of computation and memory. 
In a mobile scenario such as human-mobility-aided networks, the network structure changes frequently. In such conditions, the dissemination of information is highly impacted, as new connections are made and old ones are broken. We thus use a stability concept in mobile networks, together with beamforming, to show how the information dissemination process can be enhanced. The algorithm first predicts the stability of a node in the mobile network using locally available information and then uses it to identify beamforming nodes: low-stability nodes are allowed to beamform towards high-stability nodes, the distinction between the two being based on a threshold value. The algorithm does not require any global knowledge about the network and works using only local information. The results are validated in terms of how quickly a larger number of nodes receive the information, and against different state-of-the-art algorithms. We also show the effect of various parameters, such as the number of sources, the number of packets, mobility parameters and antenna parameters, on the information dissemination process in the network. In realistic scenarios, however, the dynamicity of the network is not only related to mobility; dynamic conditions also arise from changes in the density of nodes over time. To address the effect of such scenarios on the dissemination of public safety information in a metapopulation, we use the concepts of an epidemic model, beamforming, and the countrywide mobility pattern extracted from the D4D dataset. Here, we also propose the addition of three latent states to the existing epidemic (SIR) model. We study the transient states of the evolution of the number of devices having the information, and evaluate the results by comparing the number of informed devices across different cases. 
Through the results we show that enhancements in the dissemination process can be achieved in the addressed scenario
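The epidemic model underlying the last study can be illustrated with one discrete-time step of the classic SIR compartment dynamics; the three latent states the thesis adds, and the beamforming and mobility components, are omitted in this sketch, and the rates below are arbitrary:

```python
def sir_step(s, i, r, beta, gamma):
    """One discrete step of the SIR compartment model.

    s, i, r are the susceptible, infected (informed) and recovered
    population fractions; beta is the contact rate, gamma the
    recovery rate. Total population is conserved at every step."""
    new_infections = beta * s * i
    recoveries = gamma * i
    return s - new_infections, i + new_infections - recoveries, r + recoveries

# Start with 1% of devices holding the warning message and iterate.
s, i, r = 0.99, 0.01, 0.0
for _ in range(50):
    s, i, r = sir_step(s, i, r, beta=0.5, gamma=0.1)
```

Over the iterations the susceptible fraction only decreases while the three fractions keep summing to one.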
Almer, Jasmine, and Julia Ivert. "Artificiell Intelligens framtidsutsikter inom sjukvården : En studie om studerande sjuksköterskors attityder gällande Artificiell Intelligens inom sjukvården." Thesis, Uppsala universitet, Institutionen för informatik och media, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413725.
Full textArtificial intelligence is a field that has developed radically in recent years and that continues to evolve across several industries. This thesis presents a qualitative case study of nursing students' attitudes towards artificial intelligence in healthcare and its future use. Through interviews with nursing students at Uppsala University, the empirical material is analyzed with the help of the Technology Acceptance Model (TAM) in order to arrive at a result regarding the future use of artificial intelligence in healthcare. The analysis yielded two clear areas concerning the use of AI in healthcare: decision-making AI and non-decision-making AI, between which the interviewees' attitudes diverged. The nursing students' attitudes towards decision-making AI were rather negative, partly because of the shortcomings identified regarding accountability, and partly because of the reduced patient contact the system may bring. Attitudes towards non-decision-making AI were, in contrast, very positive, partly because of the efficiency gains that may follow from using AI technology as an aid or complement, and partly because of the improvements to the professional role. One example of such an improvement was creating more time for care, which the nursing students argue is what the profession is actually for. Finally, the results of the analysis are discussed, raising interesting questions about ethics and morality, the professional role, and further research.
Sjöblom, Sebastian, and Martin Eriksson. "Läkarstudenters attityder till artificiell intelligens inom sjukvården." Thesis, Uppsala universitet, Institutionen för informatik och media, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413812.
Full textArtificial intelligence (AI) is today a much-debated technology with many postulated uses across a wide range of scientific disciplines. In medicine, computers have long helped to analyze data and assist medical practitioners. Today, the issue is more relevant than ever, as medical AI is being launched in clinical settings and many practitioners believe the technology will have a major impact in the future. Against this background, we investigate Swedish medical students' attitudes to AI as a tool in healthcare. This was done through a quantitative survey based on a modification of the Technology Acceptance Model 2 (TAM2) to measure attitudes to this technology. The survey results were analyzed using t-tests and regression analysis to answer our research questions. The analysis shows, among other things, that Swedish medical students' attitude to AI in healthcare is positive and that they want to use the technology.
Pesquerel, Fabien. "Information per unit of interaction in stochastic sequential decision making." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. https://pepite-depot.univ-lille.fr/LIBRE/EDMADIS/2023/2023ULILB048.pdf.
Full textIn this thesis, we study the rate at which one can solve an unknown stochastic problem. To this purpose, we introduce two research fields known as bandits and reinforcement learning. In both settings, a learner must sequentially make decisions that affect a reward signal the learner receives. The learner does not know the environment with which it is interacting, yet wishes to maximize its average reward in the long run. More specifically, we study stochastic decision problems under the average-reward criterion, in which a learning algorithm interacts sequentially with a dynamical system, without any reset, in a single and infinite sequence of observations, actions and rewards, while trying to maximize its total accumulated reward over time. We first introduce bandits, in which the set of decisions is constant, and define what is meant by solving the problem. Amongst learners, some are better than all the others and are called optimal. We first focus on making the most of each interaction with the system by revisiting an optimal algorithm and reducing its numerical complexity; the information extracted from each sample, per time step, is therefore larger, since optimality is preserved. We then study an interesting structured problem in which one can exploit the structure without estimating it. Afterwards, we introduce reinforcement learning, in which the decisions a learner can make depend on a notion of state. Each time the learner makes a decision, it receives a reward, and the state changes according to a transition law on the set of states. In the ergodic setting, an optimal rate of solving is known, and we introduce a new algorithm that we prove to be optimal and show to be numerically efficient. In a final chapter, we make a step towards removing the ergodicity assumption by considering the a priori simpler problem in which the transitions are known. Yet correctly understanding the rate at which information can be acquired about an optimal solution is already not easy
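The bandit setting described above can be illustrated with the standard UCB1 index policy on Bernoulli arms. This is a textbook baseline, not the asymptotically optimal algorithm the thesis revisits; the arm means and horizon below are arbitrary:

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """UCB1 on Bernoulli arms: at each step, pull the arm maximizing an
    optimistic index (empirical mean plus exploration bonus). Returns
    how often each arm was pulled over the horizon."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1  # initialization: pull each arm once
        else:
            a = max(range(k), key=lambda j: sums[j] / counts[j]
                    + math.sqrt(2 * math.log(t) / counts[j]))
        reward = 1.0 if rng.random() < arm_means[a] else 0.0
        counts[a] += 1
        sums[a] += reward
    return counts

# With a large gap between the arms, the better arm dominates the pulls.
counts = ucb1([0.2, 0.8], horizon=2000)
```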
Neunreuther, Éric. "Contribution à la modélisation des systèmes intégrés de production à intelligence distribuée : application à la distribution du contrôle et de la gestion technique sur les équipements de terrain." Nancy 1, 1998. http://www.theses.fr/1998NAN10183.
Full textAbiza, Yamina. "Personnalisation de la distribution et de la présentation des informations des bases de vidéo interactive diffusées." Nancy 1, 1997. http://www.theses.fr/1997NAN10249.
Full textIn this thesis we deal with the design and personalization of data-oriented interactive video applications (i.e. multimedia/hypermedia document applications with a predominance of audio and video data) in the emerging residential multimedia information services (interactive television and second-generation telematics). More specifically, we are concerned with server-push information services (distributed, dynamic and broadcast information sources) to be deployed in heterogeneous environments with shared resources, intended for users with different information needs and preferences. In this context, there are many possible aspects to personalize. Here we focus on two related aspects: structure-based information filtering and the personalization of content presentation modalities. The techniques to achieve these personalization aspects are tightly related to the design of a given information source. Our approach to personalization is based on the definition of a conceptual data model, HB_Model, composed of: 1) a base model to represent document organization and internal structure in interactive video sources, 2) a versioning model, HB_Versions, to represent document contents with multiple alternative representation forms or modalities, and 3) a model for view definition, HB_Views, to represent relatively stable user information needs. Personalized information delivery from a given server, based on structural criteria, is achieved by materializing individual users' view specifications using newly available information on that server. 
Personalization of content representation modalities is achieved by the intentional specification of document content configuration in the form of a CSP (Constraint Satisfaction Problem), which reflects the constraints that the characteristics of the interaction and presentation contexts place on the choice of the presentation modality for each content, and which guarantees the coherence of presentation modality combinations. Finally, we show how our propositions articulate and fit into the architecture of a personalized, server-push interactive video information service
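The modality-selection step formulated as a CSP can be sketched with a tiny backtracking solver. The contents, modality domains and compatibility rule below are hypothetical examples, not part of HB_Model:

```python
def solve_modalities(contents, domains, compatible):
    """Tiny backtracking CSP solver: choose one presentation modality
    per content so that every pair of chosen modalities is compatible.
    Returns a complete assignment, or None if the CSP is unsatisfiable."""
    assignment = {}

    def backtrack(i):
        if i == len(contents):
            return dict(assignment)
        c = contents[i]
        for modality in domains[c]:
            if all(compatible(assignment[o], modality) for o in assignment):
                assignment[c] = modality
                result = backtrack(i + 1)
                if result is not None:
                    return result
                del assignment[c]
        return None

    return backtrack(0)

# Toy constraint: an audio and a video narration must not be mixed.
domains = {"news": ["video", "text"], "weather": ["audio", "text"]}
ok = lambda a, b: not ((a == "video" and b == "audio")
                       or (a == "audio" and b == "video"))
plan = solve_modalities(["news", "weather"], domains, ok)
```

The solver keeps "video" for the news item and falls back to "text" for the weather item, since "audio" would violate the pairwise constraint.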
Despres, Sylvie. "Un apport à la conception de systèmes à base de connaissances : les opérations de déduction floues." Paris 6, 1988. http://www.theses.fr/1988PA066197.
Full textHindawi, Mohammed. "Sélection de variables pour l’analyse des données semi-supervisées dans les systèmes d’Information décisionnels." Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0015/document.
Full textFeature selection is an important task in data mining and machine learning processes. This task is well understood in both the supervised and unsupervised contexts, but semi-supervised feature selection is still under development and far from mature. In general, machine learning has been well developed to deal with partially labeled data, so feature selection has gained special importance in the semi-supervised context, where it is well suited to real-world applications in which labels are costly to obtain. In this thesis, we present a literature review of semi-supervised feature selection, with regard to the supervised and unsupervised contexts. The goal is to show the importance of combining the structure present in the unlabeled part of the data with the background information from its labeled part. In particular, we are interested in the so-called «small labeled-sample problem», where the size difference between the two parts of the data is very large. To deal with semi-supervised feature selection, we propose two groups of approaches. The first group is of the «filter» type, in which we propose algorithms that evaluate the relevance of features by a scoring function; in our case, this function is based on spectral graph theory and on the integration of pairwise constraints that can be extracted from the data at hand. The second group of methods is of the «embedded» type, where feature selection becomes an internal function integrated in the learning process. To realize embedded feature selection, we propose algorithms based on feature weighting. The proposed methods rely on constrained clustering. In this sense, we propose two visions: (1) a global vision, based on relaxed satisfaction of pairwise constraints, achieved by integrating the constraints into the objective function of the proposed clustering model; and (2) a local vision, based on strict control of constraint violation. 
Both approaches evaluate the relevance of features by weights learned during the construction of the clustering model. In addition to the main task of feature selection, we are interested in redundancy elimination. To tackle this problem, we propose a novel algorithm combining mutual information with a maximum-spanning-tree algorithm. We construct this tree from the relevant features in order to optimize the number of features finally selected. Finally, all methods proposed in this thesis are analyzed, their complexities are studied, and they are validated on high-dimensional data against other representative methods from the literature
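The redundancy-elimination step combining mutual information with a maximum spanning tree can be sketched with Prim's algorithm over a pairwise-MI matrix. The MI values below are made up for illustration, and the thesis's subsequent pruning of redundant features is omitted:

```python
def maximum_spanning_tree(weights):
    """Prim's algorithm for a maximum spanning tree over a dense
    symmetric weight matrix (here: pairwise mutual information between
    relevant features). Returns the tree as (i, j, weight) edges."""
    n = len(weights)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Greedily attach the heaviest edge leaving the current tree.
        w, i, j = max((weights[i][j], i, j)
                      for i in in_tree for j in range(n) if j not in in_tree)
        edges.append((i, j, w))
        in_tree.add(j)
    return edges

# Features 0 and 1 are highly redundant (MI 0.9); feature 2 less so.
mi = [[0.0, 0.9, 0.2],
      [0.9, 0.0, 0.4],
      [0.2, 0.4, 0.0]]
tree = maximum_spanning_tree(mi)  # keeps edges (0, 1, 0.9) and (1, 2, 0.4)
```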
Neverova, Natalia. "Deep learning for human motion analysis." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI029/document.
Full text
The goal of this work is to develop learning methods that advance the automatic analysis and interpretation of human motion from different perspectives and from various sources of information, such as images, video, depth, mocap data, audio and inertial sensors. For this purpose, we propose several deep neural models and associated training algorithms for supervised classification and semi-supervised feature learning, as well as for modelling temporal dependencies, and we show their efficiency on a set of fundamental tasks including detection, classification, parameter estimation and user verification. First, we present a method for spotting and classifying human actions and gestures based on multi-scale and multi-modal deep learning from visual signals (such as video, depth and mocap data). Key to our technique is a training strategy that exploits, first, careful initialization of the individual modalities and, second, gradual fusion involving random dropping of separate channels (dubbed ModDrop) for learning cross-modality correlations while preserving the uniqueness of each modality-specific representation. Moving from one-to-N mappings to continuous estimation of gesture parameters, we then address the problem of hand pose estimation and present a new method for regression on depth images, based on semi-supervised learning with convolutional deep neural networks, in which raw depth data is fused with an intermediate representation in the form of a segmentation of the hand into parts. In separate but related work, we explore convolutional temporal models for authenticating users from their motion patterns, with data captured by the inertial sensors (accelerometers and gyroscopes) built into mobile devices. We propose an optimized shift-invariant dense convolutional mechanism and incorporate the discriminatively trained dynamic features into a probabilistic generative framework that takes temporal characteristics into account.
Our results demonstrate that human kinematics convey important information about user identity and can serve as a valuable component of multi-modal authentication systems.
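The ModDrop idea described in this abstract, zeroing whole modality channels at random during training so that fusion layers learn cross-modal correlations that survive missing inputs, can be sketched in a few lines. This is a schematic illustration under our own assumptions (plain NumPy feature vectors and a single drop probability shared by all modalities), not the thesis implementation.

```python
import numpy as np

def moddrop(modalities, p_drop, rng):
    # ModDrop-style fusion input: during training, each modality's whole
    # feature vector is zeroed independently with probability `p_drop`,
    # forcing downstream fusion layers to cope with any subset of
    # modalities being absent. At test time, call with p_drop=0.
    kept = []
    for feats in modalities:
        mask = 0.0 if rng.random() < p_drop else 1.0
        kept.append(mask * np.asarray(feats, dtype=float))
    return np.concatenate(kept)
```

Because entire channels (not individual units) are dropped, the network cannot rely on any single modality, which is the property the abstract credits for robust cross-modality correlations.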
Boucher, Alain. "Une approche décentralisée et adaptative de la gestion d'informations en vision ; application à l'interprétation d'images de cellules en mouvement." Phd thesis, Université Joseph Fourier (Grenoble), 1999. http://tel.archives-ouvertes.fr/tel-00004805.
Full text
Aimé, Xavier. "Gradients de prototypicalité, mesures de similarité et de proximité sémantique : une contribution à l'Ingénierie des Ontologies." Phd thesis, Université de Nantes, 2011. http://tel.archives-ouvertes.fr/tel-00660916.
Full text
Olsson, Per, and Andreas Backman. "Etik & artificiell intelligens inom svensk banksektor : En kvalitativ granskning av storbankernas etik." Thesis, Uppsala universitet, Institutionen för informatik och media, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-357786.
Full text
Åström, Emil. "AI-motor : Artificiell intelligens för spel." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-22234.
Full text