Theses on the topic "Données artificielles"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Données artificielles".
Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Amadou, Kountché Djibrilla. "Localisation dans les bâtiments des personnes handicapées et classification automatique de données par fourmis artificielles". Thesis, Tours, 2013. http://www.theses.fr/2013TOUR4021/document.
The concept of "smart" pervades more and more of our daily life. A typical example is the smartphone, which has become over the years an essential device. Soon it is the city, the car and the home that will become "smart". This intelligence is manifested by the environment's ability to interact and take decisions in its relationships with users and with other environments. This requires information on the state changes occurring on both sides. Sensor networks allow these data to be collected, pre-processed and transmitted. Through some of their characteristics, sensor networks are close to swarm intelligence, in the sense that small entities with reduced capabilities can cooperate automatically, in an unattended, decentralised and distributed manner, in order to accomplish complex tasks. These bio-inspired methods have served as a basis for solving many problems, mostly in optimization, which inspired us to apply them to problems met in Ambient Assisted Living (AAL) and to the data clustering problem. AAL is a sub-field of context-aware services whose goal is to facilitate the everyday life of elderly and disabled people. These systems determine the context and then propose different kinds of services. We have used two important elements of the context: position and disability. Although positioning achieves very good precision outdoors, it faces many challenges in indoor environments, due to electromagnetic wave propagation in harsh conditions, the cost of systems, interoperability, etc. Our work has addressed positioning disabled people in indoor environments by using wireless sensor networks to determine characteristics of the electromagnetic wave (signal strength, time, angle) and estimate the position by geometric methods (triangulation, lateration), fingerprinting methods (k-nearest neighbours) and Bayesian filters (Kalman filter). The application is to offer AAL services such as navigation.
We therefore extend the definition of a sensor node to take into account any device in the environment capable of emitting and receiving a signal. We have also studied the possibility of using Pachycondyla apicalis for data clustering and for indoor localization, by casting this last problem as a data clustering problem. Finally, we have proposed a system based on a middleware architecture.
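The fingerprinting approach mentioned in the abstract (k-nearest neighbours over signal measurements) can be sketched in a few lines. The anchor positions and RSSI values below are invented for illustration, not taken from the thesis:

```python
import math

# Hypothetical fingerprint database: known position -> RSSI readings
# (dBm) from three anchors. All values are illustrative.
FINGERPRINTS = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -42, -75],
    (0.0, 5.0): [-72, -78, -45],
    (5.0, 5.0): [-80, -68, -48],
}

def knn_locate(rssi, k=2):
    """Estimate a position by averaging the k fingerprints closest
    to the observed RSSI vector in signal space."""
    ranked = sorted(FINGERPRINTS.items(),
                    key=lambda item: math.dist(item[1], rssi))[:k]
    x = sum(pos[0] for pos, _ in ranked) / k
    y = sum(pos[1] for pos, _ in ranked) / k
    return (x, y)

print(knn_locate([-45, -68, -72], k=1))
```

In a real deployment the fingerprint map is surveyed offline, and the estimate is typically smoothed afterwards, e.g. by the Kalman filter the abstract cites.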
Lavergne, Julien. "Algorithme de fourmis artificielles pour la construction incrémentale et la visualisation interactive de grands graphes de voisinage". Thesis, Tours, 2008. http://www.theses.fr/2008TOUR4049.
We present in this work a new incremental algorithm for building proximity graphs for large data sets in order to solve a clustering problem. It is inspired by the self-assembly behavior observed in real ants, where ants progressively attach themselves to an existing support and then successively to other attached ants. Each artificial ant represents one data item. The way ants move and build the graph depends on the similarity between the data. A graph built with our method is well suited to visualization and interactive exploration according to the needs of the domain expert, who can visualize the global shape of the graph and locally explore the neighborhood relations with content-based navigation. Finally, we present different applications of our work, such as interactive clustering, automatic graph construction over documents, and immersion in a virtual reality environment for discovering knowledge in data.
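The incremental attachment principle described above can be sketched briefly: each new item attaches to the most similar already-placed item, and an edge is created only when the similarity is high enough. The similarity function and threshold below are illustrative assumptions, not the thesis's actual ant behavior rules:

```python
def build_graph(data, similarity, threshold=0.5):
    """Incrementally attach each item to its most similar
    already-placed item, in the spirit of ant self-assembly:
    an edge is created only when similarity exceeds the threshold."""
    edges = []
    placed = [0]  # the first item acts as the support
    for i in range(1, len(data)):
        best = max(placed, key=lambda j: similarity(data[i], data[j]))
        if similarity(data[i], data[best]) >= threshold:
            edges.append((best, i))
        placed.append(i)
    return edges

sim = lambda a, b: 1.0 / (1.0 + abs(a - b))
print(build_graph([1.0, 1.2, 5.0, 5.1], sim))
```

On this toy data the graph splits into two connected components, one per cluster, which is what makes such a graph usable for clustering and visual exploration.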
Langlois, Vincent. "Couple de friction métallique de nouvelle génération en arthroplastie totale primaire de hanche : historique, données actuelles et résultats préliminaires d'une série de 54 cas". Bordeaux 2, 2001. http://www.theses.fr/2001BOR23022.
Gusarov, Nikita. "Performances des modèles économétriques et de Machine Learning pour l’étude économique des choix discrets de consommation". Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALE001.
This thesis is a cross-disciplinary study of discrete choice modeling, addressing both econometric and machine learning (ML) techniques applied to individual choice modeling. The problem arises from insufficient points of contact between users (economists and engineers) and data scientists, who pursue different objectives while using similar techniques. To bridge this interdisciplinary gap, the PhD work proposes a unified framework for model performance analysis. It facilitates the comparison of data analysis techniques under varying assumptions and transformations. The designed framework is suitable for a variety of econometric and ML models. It addresses the performance comparison task from the research procedure perspective, incorporating all the steps that potentially affect performance perceptions. To demonstrate the framework's capabilities, we propose a series of three applied studies. In these studies, model performance is explored in the face of (1) changes in sample size and balance resulting from data collection; (2) changes in the preference structure within the population, reflecting incorrect behavioral assumptions; and (3) model selection, directly intertwined with the perception of performance.
Giraudel, Jean-Luc. "Exploration des données et prédiction en écologie par des méthodes d'intelligence artificielle". Toulouse 3, 2001. http://www.theses.fr/2001TOU30138.
Pichlova, Markéta. "Méthodes d'intelligence artificielle pour l'analyse des données en provenance de pistons instrumentés". Paris 6, 2005. http://www.theses.fr/2005PA066104.
Turmeaux, Teddy. "Contraintes et fouille de données". Orléans, 2004. http://www.theses.fr/2004ORLE2048.
Ugon, Adrien. "Fusion symbolique et données polysomnographiques". Paris 6, 2013. http://www.theses.fr/2013PA066187.
In recent decades, the medical examinations required to diagnose and guide treatment have become more and more complex. It is now common practice to use several examinations in different medical specialties to study a disease through multiple approaches, so as to describe it more deeply. Interpretation is difficult because the data are both heterogeneous and very specific, with skilled domain knowledge required to analyse them. In this context, symbolic fusion appears to be a possible solution. Indeed, it has proved very effective in treating problems with low or high levels of abstraction of information to develop high-level knowledge. This thesis demonstrates the effectiveness of symbolic fusion applied to the treatment of polysomnographic data for the development of an assisted-diagnosis tool for Sleep Apnea Syndrome. Proper diagnosis of this sleep disorder requires a polysomnography. This medical examination consists of simultaneously recording various physiological parameters during a night. Visual interpretation is tedious and time-consuming, and there is commonly some disagreement between scorers. The use of a reliable support-to-diagnosis tool increases consensus. This thesis presents the stages of the development of such a tool.
Gross-Amblard, David. "Approximation dans les bases de données contraintes". Paris 11, 2000. http://www.theses.fr/2000PA112304.
El, Alam Iyad. "Management des compétences : nouvelles technologies et intelligence artificielle". Aix-Marseille 3, 2007. http://www.theses.fr/2007AIX32077.
The study focuses on the systemic and mathematical modelling of skills-adjustment problems, both in the classical approaches of operational research models and in new viewpoints stemming from neural networks and artificial intelligence. This modelling is conducted within the framework of a decision support system for evaluation and skills matching, through the contribution of the modelling of micro- and macro-competencies. The artificial intelligence context is that of multilayer neural networks and of the so-called Fuzzy ART, with the aim of proposing a system which we have called CRMM (Competencies Research Matching Model). The system is to be implemented in large organizations needing large numbers of personnel subject to frequent post changes. The assumption on which this study is based tends to demonstrate the possibility of improving the decision-making of human resources or operational managers through better exploitation of competency-related data. This factor should obviously be placed at the core of new strategic problems linked to competencies and training.
Dupont, Xavier. "Programmation par contraintes sur les flux de données". Caen, 2014. http://www.theses.fr/2014CAEN2016.
In this thesis, we investigate the generalisation of constraint programming on finite variables to stream variables. First, the concepts of streams, infinite sequences and infinite words have been extensively studied in the literature, and we propose a state of the art that covers language theory, classical and temporal logics, as well as the numerous formalisms strongly related to them. The comparison with temporal logics is a first step towards the unification of formalisms over streams, and because temporal logics are themselves numerous, their classification allows the extrapolation of our contributions to other contexts. The second goal involves identifying the features of existing formalisms that lend themselves to the techniques of constraint programming over finite variables. Compared to the expressivity of temporal logics, that of our formalism is more limited. This stems from the fact that constraint programming allows only the conjunction of constraints, and requires encapsulating disjunction inside constraint propagators. Nevertheless, our formalism allows a gain in concision and the reuse of the concept of propagator in a temporal setting. The question of generalising these results to more expressive logics is left open.
Zarri, Gian Piero. "Utilisation de techniques relevant de l'intelligence artificielle pour le traitement de données biographiques complexes". Paris 11, 1985. http://www.theses.fr/1985PA112342.
The aim of this thesis is to provide a general description of RESEDA, an "intelligent" information retrieval system dealing with biographical data and using techniques borrowed from knowledge engineering and artificial intelligence (AI). All the system's "knowledge" is represented in purely declarative form. This is the case both for the "fact database" and the "rule base"; the fact database contains the data, in the usual sense of the word, that the system has to retrieve. Together, the fact and rule bases make up RESEDA's "knowledge base". Information in the knowledge base is depicted using a single knowledge representation language ("metalanguage"), which makes use of quantified variables when describing data in the rule base; the metalanguage is a particularly powerful realization of an AI-type "case grammar". For reasons of computational efficiency, low-level ("level zero") inferencing (retrieving) is carried out in RESEDA using only the resources of the system's match machine. This machine owes a large part of its power to the judicious use of temporal data in efficiently indexing the fact database. Only high-level inferences require the creation of real "inference engines". RESEDA's inference engine has the general characteristics of a) being "event driven" in its initialization; b) solving problems by constructing a "choice tree". Traversal of the choice tree is performed depth-first with systematic backtracking. The high-level inference operations implemented in the system, relying on information in the rule base and making use of the inference engine, are known as "transformations" and "hypotheses". The "hypotheses" enable new causal relationships to be established between events in the fact database that are a priori totally disjoint; the system is thus equipped with an, albeit elementary, learning capability.
Mathieu, Olivier. "Application des méthodes de l'intelligence artificielle à l'analyse de données en physique des particules". Aix-Marseille 2, 1990. http://www.theses.fr/1990AIX22058.
Chu, Chengbin. "Nouvelles approches analytiques et concept de mémoire artificielle pour divers problèmes d'ordonnancement". Metz, 1990. http://docnum.univ-lorraine.fr/public/UPV-M/Theses/1990/Chu.Chengbin.SMZ9021.pdf.
Dubois, Gilles. "Apport de l'intelligence artificielle à la coopération de systèmes d'information automatisée". Lyon 3, 1997. http://www.theses.fr/1997LYO33004.
Recent advances in distributed systems, computer networks and database technology have changed the information processing needs of organizations. Current information systems should integrate various heterogeneous sources of data and knowledge according to distributed logical and physical requirements. An automated information system is perceived as a set of autonomous components that work in a synergistic manner by exchanging information and expertise and coordinating their activities. For this exchange to be judicious, the individual systems must agree on the meaning of the exchanged information in order to solve conflicts due to heterogeneity. We have chosen an object-oriented model as the canonical model. The object model overcomes component heterogeneity and respects the autonomy of local systems in a distributed context. The cooperation structure uses artificial intelligence techniques to solve both structural and semantic conflicts. A dynamic description of information sources deals with local evolution and is involved in the treatment of global queries. An extension of the proposal exploits agent interactions to bring cognitive capabilities to the cooperation structure. The contribution of multi-agent systems to information system cooperation is argued. Technical choices made to implement a prototype in an object-oriented environment are described.
Boudellal, Toufik. "Extraction de l'information à partir des flux de données". Saint-Etienne, 2006. http://www.theses.fr/2006STET4014.
The aim of this work is to resolve a specific data-stream mining problem: the adaptive analysis of data streams. The web generation poses new challenges due to the complexity of data structures; examples include data issued from virtual galleries, credit card transactions, etc. Generally, such data are continuous in time and their sizes are dynamic. We propose a new algorithm based on measures applied to adaptive data streams. The interpretation of results is made possible by these measures. We compare our algorithm experimentally to other adapted approaches that are considered fundamental in the field. A modified algorithm that is more useful in applications is also discussed. The thesis finishes with a set of suggestions for future work on noisy data streams, and another set of suggestions about further necessary work.
Saleh, Imad. "Rapport entre les bases de données relationnelles et l'intelligence artificielle : étude et conception du modèle H-Relation". Paris, EHESS, 1990. http://www.theses.fr/1990EHES0057.
Saïs, Fatiha. "Intégration sémantique de données guidée par une ontologie". Paris 11, 2007. http://www.theses.fr/2007PA112300.
This thesis deals with semantic data integration guided by an ontology. Data integration aims at combining autonomous and heterogeneous data sources. To this end, all the data should be represented according to the same schema and with a unified semantics. This thesis is divided into two parts. In the first one, we present an automatic and flexible method for reconciling data with an ontology. We consider the case where data are represented in tables. The reconciliation result is represented in the SML format, which we have defined. Its originality stems from the fact that it allows representing all the established mappings but also information that is imperfectly identified. In the second part, we present two methods of reference reconciliation. This problem consists in deciding whether different data descriptions refer to the same real-world entity. We have considered this problem when data are described according to the same schema. The first method, called L2R, is logical: it translates the schema and the data semantics into a set of logical rules which allow inferring correct decisions of both reconciliation and non-reconciliation. The second method, called N2R, is numerical. It translates the schema semantics into an informed similarity measure used in a numerical computation of the similarity of reference pairs. This computation is expressed as a non-linear equation system solved with an iterative method. Our experiments on real datasets demonstrated the robustness and feasibility of our approaches. The solutions that we bring to the two reconciliation problems are completely automatic and guided only by an ontology.
Salehi, Mehrdad. "Developing a Model and a Language to Identify and Specify the Integrity Constraints in Spatial Datacubes". Doctoral thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26325/26325.pdf.
Text in English with abstracts in English and French. Bibliography: f. 185-197. Also published electronically in the Collection Mémoires et thèses électroniques.
Collard, Martine. "Fouille de données, Contributions Méthodologiques et Applicatives". Habilitation à diriger des recherches, Université Nice Sophia Antipolis, 2003. http://tel.archives-ouvertes.fr/tel-01059407.
Mballo, Cherif. "Ordre, codage et extension du critère de Kolmogorov-Smirnov pour la segmentation de données symboliques". Paris 9, 2005. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2005PA090034.
We adapt the Kolmogorov-Smirnov binary splitting criterion to interval, diagram and taxonomical data for decision tree induction. This criterion requires an order on the values of the objects and is based on the cumulative distribution function. We order these data in different ways. The approximation of the theoretical distribution function by the empirical distribution function makes it possible to adapt the criterion to these data. In segmentation, the variable to explain is usually qualitative; in our case, it can be a symbolic variable of interval, diagram or taxonomical type. Different coding criteria for these types of variables are proposed. The criterion is compared with two others (entropy and Gini). Two assignment methods are examined: the first assigns an object entirely to one node, and the second assigns it to both child nodes generated by a split. This last method takes into account the position of the data to be classified with regard to the data selected for the cut. We present an algorithm to explain the correlations inside the classes of a partition obtained on a classical variable, and a practical application to Luxembourg border-zone workers.
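For ordinary numerical values and binary classes, the Kolmogorov-Smirnov splitting criterion amounts to choosing the cut point that maximizes the gap between the two class-conditional empirical distribution functions. The sketch below shows that baseline case only, not the symbolic-data extension developed in the thesis, and the toy data are invented:

```python
def ks_best_split(values, labels):
    """Pick the cut point maximizing the Kolmogorov-Smirnov
    distance between the empirical CDFs of two classes (labels 0/1)."""
    pairs = sorted(zip(values, labels))
    n0 = labels.count(0)
    n1 = labels.count(1)
    c0 = c1 = 0
    best_gap, best_cut = 0.0, None
    for v, y in pairs[:-1]:  # a cut after the last value splits nothing
        if y == 0:
            c0 += 1
        else:
            c1 += 1
        gap = abs(c0 / n0 - c1 / n1)  # |F0(v) - F1(v)|
        if gap > best_gap:
            best_gap, best_cut = gap, v
    return best_cut, best_gap

print(ks_best_split([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1]))
```

A gap of 1.0 means the cut separates the two classes perfectly, which is why the criterion is a natural score for choosing decision-tree splits.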
Boudane, Abdelhamid. "Fouille de données par contraintes". Thesis, Artois, 2018. http://www.theses.fr/2018ARTO0403/document.
In this thesis, we address the well-known clustering and association rule mining problems. Our first contribution introduces a new clustering framework in which complex objects are described by propositional formulas. First, we extend the two well-known k-means and hierarchical agglomerative clustering techniques to deal with these complex objects. Second, we introduce a new divisive algorithm for clustering objects represented explicitly by sets of models. Finally, we propose a propositional-satisfiability-based encoding of the problem of clustering propositional formulas without the need for an explicit representation of their models. In a second contribution, we propose a new propositional-satisfiability-based approach to mine association rules in a single step. The task is modeled as a propositional formula whose models correspond to the rules to be mined. To highlight the flexibility of our framework, we also address other variants, namely the closed, minimal non-redundant, most general and indirect association rule mining tasks. Experiments on many datasets show that, for the majority of the association rule mining tasks considered, our declarative approach achieves better performance than state-of-the-art specialized techniques.
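For reference, the underlying mining task can be stated as naive enumeration over support and confidence; the thesis instead encodes the task as a propositional formula whose models are the rules. The thresholds and toy transactions below are illustrative assumptions:

```python
from itertools import combinations

def mine_rules(transactions, min_sup=0.5, min_conf=0.8):
    """Enumerate single-antecedent, single-consequent association
    rules x -> y that meet minimum support and confidence."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def sup(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    rules = []
    for a, b in combinations(items, 2):
        for x, y in ((a, b), (b, a)):
            s = sup({x, y})
            if s >= min_sup and s / sup({x}) >= min_conf:
                rules.append((x, y, s))
    return rules

txns = [{"bread", "milk"}, {"bread", "milk", "eggs"}, {"milk"}, {"bread", "milk"}]
print(mine_rules(txns))
```

This enumeration is exponential in the number of items; the appeal of the SAT-based formulation is that the solver's pruning replaces such brute force, and variants (closed, minimal non-redundant, indirect rules) become extra clauses rather than new algorithms.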
Attik, Mohammed. "Traitement intelligent de données par réseaux de neurones artificiels : application à la valorisation des systèmes d'information géographiques". Nancy 1, 2006. http://docnum.univ-lorraine.fr/public/SCD_T_2006_0211_ATTIK.pdf.
The purpose of this thesis is: (i) to establish predictive maps of ore deposits, (ii) to select a subset of descriptive features that effectively contribute to building these predictive maps, (iii) to identify and interpret dependencies between the selected features, and (iv) to place the features into a hierarchy that indicates their importance. Real-life Geographical Information System data provided by the French geological survey (BRGM) were used in the experiments. To establish predictive maps, we used neural network ensembles, a very successful technique in which the outputs of a set of separately trained neural networks are combined into one unified prediction. This technique generates several predictive maps depending on the aggregation function used. In addition, to understand the domain data, we focused on selecting a subset of relevant features. We proposed improvements of existing feature selection techniques based on the principles of Optimal Brain Damage (OBD), Optimal Brain Surgeon (OBS) and Mutual Information (MI). We also proposed novel solutions for understanding data that combine an ensemble feature selection approach with either concept lattices or statistical techniques. These solutions help discover all relevant features and organize them into a hierarchy according to their co-occurrences in the selected feature subsets. Moreover, we addressed the problem of clustering-based analysis of data provided with multiple labels. The proposed approach uses new measures that extend the scope of the recall and precision measures of information retrieval (IR) to the processing of multi-label data. Experiments carried out on data from a geographical information system and a documentary system have highlighted the accuracy of our approach for knowledge extraction.
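The Mutual Information criterion mentioned above scores how much a feature tells about the target. An empirical sketch on discrete toy data (the variables are invented for illustration; MI is computed in nats):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information between two discrete variables,
    the kind of score used to rank candidate features."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

relevant = [0, 0, 1, 1]
target   = [0, 0, 1, 1]
noise    = [0, 1, 0, 1]
print(mutual_information(relevant, target) > mutual_information(noise, target))
```

A perfectly informative feature attains MI equal to the target's entropy (here log 2), while an independent feature scores zero, which is exactly what makes MI usable for ranking features into a relevance hierarchy.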
Nair, Benrekia Noureddine Yassine. "Classification interactive multi-label pour l’aide à l’organisation personnalisée des données". Nantes, 2015. https://archive.bu.univ-nantes.fr/pollux/show/show?id=bb2e3d25-7f53-4b66-af04-a9fb5e80ea28.
The growing importance given today to personalized content has led to the development of several interactive classification systems for various novel applications. Nevertheless, all these systems use single-label item classification, which greatly constrains the user's expressiveness. The major problem common to all developers of an interactive multi-label system is: which multi-label classifier should we choose? Experimental evaluations of recent interactive learning systems are mainly subjective, so the importance of their conclusions is limited. To draw more general conclusions for guiding the selection of a suitable learning algorithm during the development of such a system, we extensively study the impact of the major interactivity constraints (learning from few examples in a limited time) on the classifier's predictive and computation-time performance. The experiments demonstrate the potential of an ensemble learning approach, Random Forest of Predictive Clustering Trees (RF-PCT). However, the strong constraint imposed by interactivity on computation time led us to propose a new hybrid learning approach, FMDI-RF+, which associates RF-PCT with an efficient matrix factorization approach for dimensionality reduction. The experimental results indicate that FMDI-RF+ is as accurate as RF-PCT in its predictions, with a significant advantage to FMDI-RF+ in speed of computation.
Poussevin, Mickael. "Apprentissage de représentation pour des données générées par des utilisateurs". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066040/document.
In this thesis, we study how representation learning methods can be applied to user-generated data. Our contributions cover three different applications but share a common denominator: the extraction of relevant user representations. Our first application is the item recommendation task, where recommender systems build user and item profiles out of past ratings reflecting user preferences and item characteristics. Nowadays, textual information is often available together with ratings, and we propose to use it to enrich the profiles extracted from the ratings, in the hope of extracting shared opinions and preferences from the textual content. The models we propose provide another opportunity: predicting the text a user would write about an item. Our second application is sentiment analysis and, in particular, polarity classification. Our idea is that recommender systems can be used for such a task. Recommender systems and traditional polarity classifiers operate on different time scales. We propose two hybridizations of these models: the former has better classification performance, while the latter highlights a vocabulary of surprise in the texts of the reviews. The third and final application we consider is urban mobility. It takes place beyond the frontiers of the Internet, in the physical world. Using authentication logs of subway users, recording the time and station at which users take the subway, we show that it is possible to extract robust temporal profiles.
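The kind of profile extraction from past ratings that the abstract describes can be illustrated with a minimal matrix factorization trained by stochastic gradient descent; the ratings, dimensions and hyperparameters below are toy assumptions, not the thesis's models:

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.05, epochs=500):
    """Learn latent user and item profiles from (user, item, rating)
    triples by SGD on the squared prediction error."""
    random.seed(0)
    U = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(U[u][f] * V[i][f] for f in range(k))
            for f in range(k):  # simultaneous update of both profiles
                U[u][f], V[i][f] = (U[u][f] + lr * err * V[i][f],
                                    V[i][f] + lr * err * U[u][f])
    return U, V

ratings = [(0, 0, 5), (0, 1, 1), (1, 0, 4), (1, 1, 1), (2, 1, 5)]
U, V = factorize(ratings, n_users=3, n_items=2)
print(round(sum(U[0][f] * V[0][f] for f in range(2)), 1))
```

The rows of U are exactly the "user representations" the thesis is concerned with; its contribution is to enrich them with the text users write alongside their ratings.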
Obenson, Philip. "Contribution à l'étude de l'impact de la logique des prédicats du premier ordre et de l'intelligence artificielle sur les bases de données relationnelles : Application aux bases de données bibliographiques". Université de Franche-Comté. UFR des sciences et techniques, 1987. http://www.theses.fr/1987BESA2014.
Jullien, Christian. "Le-cool : un langage orienté objet à héritage multiple permettant la manipulation des concepts et des données en intelligence artificielle". Paris 6, 1986. http://www.theses.fr/1986PA066264.
Bousquet, Cédric. "Raisonnement terminologique et fouille de données en pharmacovigilance : de nouvelles approches basées sur la terminologie MedDRA". Paris 6, 2004. http://www.theses.fr/2004PA066024.
Chelghoum, Kamel. "Un modèle de données sémantique pour la C. A. O". Lyon 1, 1989. http://www.theses.fr/1989LYO10173.
Bouabdallaoui, Yassine. "Introduction de l'intelligence artificielle dans le secteur de la construction : études de cas du Facility Management". Electronic Thesis or Diss., Centrale Lille Institut, 2021. http://www.theses.fr/2021CLIL0022.
The Facility Management (FM) industry has seen rapid advancement over the last decades, leading to a large expansion of FM activities. FM organisations have evolved from the traditional role of providing maintenance services to include complex and interconnected activities involving people, processes and technologies. As a consequence of this exponential growth, facility managers are dealing with growing and varied challenges, ranging from energy efficiency and environmental challenges to service customisation and customer satisfaction. The development of Artificial Intelligence (AI) is offering academics and practitioners a new set of tools to address these challenges. AI enables multiple solutions such as automation, improved predictability and forecasting, and service customisation. The Facility Management industry can benefit from these new techniques to better manage its assets and improve its processes. However, the integration of AI into the FM ecosystem is a challenging task that needs to overcome the gap between business drivers and AI. To unlock the full potential of data analytics and AI in the FM industry, significant work is needed to overcome the issues of data quality and data management in the FM sector. The overall aim of this thesis is to conceptualise the theoretical and practical understanding and implementation of artificial intelligence and data-driven technologies in Facility Management activities, in order to leverage data and optimise facility usage. The promises of AI implementations are presented along with the challenges and barriers limiting the development of AI in the FM sector. To resolve these issues, a framework is proposed to improve data management and leverage AI in FM. Multiple case studies were selected to address this framework, covering predictive maintenance, virtual assistants and natural language processing applications.
The results of this work demonstrate the potential of AI to address FM challenges such as those in maintenance management and waste management. However, multiple barriers limiting the development of AI in the FM sector were identified, including data availability issues.
Berrabah, Djamel. "Etude de la cohérence globale des contraintes dans les bases de données". Paris 5, 2006. http://www.theses.fr/2006PA05S003.
The task of data modelling is always a delicate activity and requires considerable experience from designers. The aim of this task is the creation of a conceptual schema. A conceptual schema can result from a database schema integration process, from database reverse engineering, or simply from the design of the reality of interest. The conceptual schema is a set of data structures and constraints intended to represent the real world as well as possible. Current means of defining constraints can neither represent a large number of constraints nor ensure their overall coherence, so the validity of the data is not checked. In addition, a study of the overall coherence of these constraints (the detection of possible conflicts and their localization) is necessary. We propose in this thesis an approach to study this coherence. To this end, we formalize the constraints defined in the conceptual schema as mathematical inequalities combined with expressions in first-order predicate logic. The result of this formalization is a logic program. To do this, we proposed a meta-schema to store the conceptual schema in its entirety. We then apply reasoning on the logic program in order to detect and localize possible conflicts. If conflicts exist, the conceptual schema is considered invalid and must be corrected. If the conceptual schema is valid, it is translated into a target language according to the selected environment. Our translation is complete, since it takes into account all of the defined constraints.
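The conflict-detection idea (translating constraints into inequalities and checking their joint satisfiability) can be illustrated in miniature: several minimum/maximum bounds on the same quantity are coherent exactly when the intervals intersect. This toy check is an analogue chosen for illustration, not the thesis's logic-programming approach:

```python
def check_coherence(constraints):
    """Check a set of interval bounds (lo, hi) on the same attribute
    for global coherence: the conjunction lo_i <= x <= hi_i is
    satisfiable iff max(lo_i) <= min(hi_i)."""
    lo = max(l for l, _ in constraints)
    hi = min(u for _, u in constraints)
    return (lo <= hi, (lo, hi))

print(check_coherence([(0, 10), (5, 20)]))  # intervals overlap: coherent
print(check_coherence([(0, 4), (5, 20)]))   # disjoint intervals: conflict
```

Returning the tightened bounds alongside the verdict also localizes the conflict: the pair (lo, hi) with lo > hi names the constraints responsible.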
Deba, El Abbassia. "Transformation de modèles de données : intégration de la métamodélisation et de l'approche grammaticale". Toulouse 3, 2007. http://thesesups.ups-tlse.fr/220/.
Following recent discoveries about the several roles of non-coding RNAs (ncRNAs), there is now great interest in identifying these molecules. Numerous techniques have been developed to localize these RNAs in genomic sequences. We use here an approach which supposes knowledge of a set of structural elements, called a signature, that discriminates an ncRNA family. In this work, we combine several pattern-matching techniques with the weighted constraint satisfaction problem framework. Together, they make it possible to model our biological problem, to describe the signatures accurately, and to give the solutions a cost. We conceived filtering techniques as well as novel pattern-matching algorithms. Furthermore, we designed a software tool called DARN! that implements our approach, and another tool that automatically creates signatures. These tools make it possible to localize new ncRNAs efficiently.
Lakhal, Lotfi. "Contribution à l'étude des interfaces pour non-informaticien dans la manipulation de bases de données relationnelles". Nice, 1986. http://www.theses.fr/1986NICE4067.
Weber, Christophe. "Développement de méthodes d'analyse avancées de données expérimentales sur les phénomènes d'encrassement d'échangeurs thermiques en conditions réelles de fonctionnement". Thesis, Nantes, 2018. http://www.theses.fr/2018NANT4045.
The lack of dedicated tools enabling industrial operators to act effectively on fouling phenomena in heat exchangers is at the origin of this study. The purpose of the thesis is to develop a methodology to assess, in situ, the characteristic parameters of fouling effects, then to implement and validate data analysis methods in order to extract a fouling-prediction tool from a limited number of operating data. This approach is carried out on different identified and instrumented thermohydraulic systems that favour the fouling of heat exchangers operating on charged water. We focus on the fouling of heat exchangers in a real environment, with emphasis on the development of methodologies to identify the fouling kinetics and on their validation in practical, concrete cases. Finally, a maintenance programme based on different cleaning strategies limiting the degradation of the efficiency of the studied facilities is applied, in order to identify the most appropriate strategies. The aim of this initiative, intended to run for any installation, is to develop an expert tool from a reduced amount of information. This tool makes it possible to evaluate the fouling kinetics of thermal equipment over a future period and to develop a maintenance practice with a view to reducing energy and intervention costs.
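The identification of fouling kinetics from a few operating measurements can be sketched with the classical asymptotic fouling model R(t) = R_inf · (1 − e^(−t/τ)). The thesis's actual identification methods are more elaborate; here the model is standard textbook material, the data is synthetic, and τ is recovered by a naive grid search.

```python
import math

# Hedged sketch, not the thesis's method: fit the asymptotic fouling model
# R(t) = r_inf * (1 - exp(-t / tau)) to measured fouling resistances by a
# brute-force search on tau. Data below is synthetic (generated with tau = 50).

def model(t, r_inf, tau):
    """Asymptotic fouling resistance at time t (hours)."""
    return r_inf * (1.0 - math.exp(-t / tau))

def fit_tau(times, values, r_inf, taus):
    """Pick the tau candidate minimizing squared error vs. measurements."""
    sse = lambda tau: sum((model(t, r_inf, tau) - v) ** 2
                          for t, v in zip(times, values))
    return min(taus, key=sse)

times = [10, 25, 50, 100, 200]
values = [model(t, 1.0, 50.0) for t in times]       # synthetic measurements
tau_hat = fit_tau(times, values, 1.0, list(range(10, 101, 5)))
```

Once τ is identified, evaluating the model at a future date gives the kind of fouling prediction a maintenance schedule could be based on.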
Renaux, Pierre. "Extraction d'informations à partir de documents juridiques : application à la contrefaçon de marques". Caen, 2006. http://www.theses.fr/2006CAEN2019.
Our research framework focuses on the extraction and analysis of knowledge induced from legal corpus databases describing nominative trade-mark infringement. This discipline deals with all the constraints arising from the different domains of knowledge discovery from documents: the electronic document, databases, statistics, artificial intelligence and human-computer interaction. Meanwhile, the accuracy of these methods is closely linked to the quality of the data used. In our framework, each decision is supervised by an author (the magistrate) and relies on a contextual writing environment, thus limiting the information extraction process. Here we are interested in the decisions which direct the document learning process. We observe their surroundings, determine their strategic capacity and offer adapted solutions in order to determine a better document representation. We suggest an explorative and supervised approach for assessing data quality by finding properties which corrupt the quality of the knowledge. We have developed an interactive and collaborative platform for modelling all the processes leading to knowledge extraction, in order to efficiently integrate the expert's know-how and practices.
Allesiardo, Robin. "Bandits Manchots sur Flux de Données Non Stationnaires". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS334/document.
The multi-armed bandit is a framework allowing the study of the trade-off between exploration and exploitation under partial feedback. At each turn t ∈ [1,T] of the game, a player has to choose an arm k_t among a set of K arms and receives a reward y_{k_t} drawn from a reward distribution D(µ_{k_t}) of mean µ_{k_t} and support [0,1]. This is a challenging problem, as the player only knows the reward associated with the played arm and does not know what the reward would have been had she played another arm. Before each play, she is confronted with the dilemma between exploration and exploitation: exploring increases the confidence of the reward estimators, while exploiting increases the cumulative reward by playing the empirically best arm (under the assumption that the empirically best arm is indeed the actual best arm). In the first part of the thesis, we tackle the multi-armed bandit problem when reward distributions are non-stationary. Firstly, we study the case where, even if reward distributions change during the game, the best arm stays the same. Secondly, we study the case where the best arm changes during the game. The second part of the thesis tackles the contextual bandit problem, where the means of the reward distributions now depend on the environment's current state. We study the use of neural networks and random forests in the case of contextual bandits. We then propose a meta-bandit-based approach for selecting online the best-performing expert during its learning.
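The exploration/exploitation dilemma described above can be made concrete with UCB1, a standard policy for the stationary stochastic bandit (the baseline setting, not the thesis's non-stationary variants). The implementation below is a minimal sketch with illustrative names.

```python
import math
import random

def ucb1(pull, n_arms, horizon, seed=0):
    """Minimal UCB1 sketch: play each arm once, then always pick the arm
    maximizing empirical mean + exploration bonus sqrt(2 ln t / n_i)."""
    random.seed(seed)
    counts = [0] * n_arms          # times each arm was played
    sums = [0.0] * n_arms          # accumulated rewards per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            k = t - 1              # initialisation: each arm once
        else:
            k = max(range(n_arms),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        r = pull(k)                # observed reward in [0, 1]
        counts[k] += 1
        sums[k] += r
    return counts

# Two Bernoulli arms with means 0.2 and 0.8: over the horizon, the bonus
# shrinks for well-sampled arms and the better arm ends up played far more.
means = [0.2, 0.8]
counts = ucb1(lambda k: 1.0 if random.random() < means[k] else 0.0,
              n_arms=2, horizon=2000)
```

The exploration bonus is what distinguishes this from greedy play: an arm with few pulls keeps a large bonus, so it is revisited until its estimate is trustworthy.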
Khiari, Mehdi. "Découverte de motifs n-aires utilisant la programmation par contraintes". Caen, 2012. http://www.theses.fr/2012CAEN2015.
Until recently, data mining and constraint programming were developed separately from each other. This thesis is one of the first to address the relationship between these two areas of computer science, in particular the use of constraint programming techniques for constraint-based mining. The data mining community has proposed generic approaches to discover local patterns under constraints, and this issue is rather well mastered. However, these approaches do not take into consideration that the interest of a pattern often depends on other patterns. Such a pattern is called an n-ary pattern, or pattern set. Few works on mining n-ary patterns have been conducted, and the proposed approaches are ad hoc. This thesis proposes a unified framework for modelling and solving n-ary constraints in data mining. First, the n-ary pattern extraction problem is modelled as a Constraint Satisfaction Problem (CSP). Then, a high-level declarative language for mining n-ary patterns is proposed. This language makes it possible to express a wide range of n-ary constraints. Several solving methods are developed and compared. The main advantages of this framework are its declarative and generic nature. To the best of our knowledge, it is the first generic and flexible framework for modelling and mining n-ary patterns.
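An example of a constraint linking two patterns (and hence inexpressible over a single local pattern) is the "exception" pair: a frequent itemset X whose extension Y = X ∪ {y} is rare. The sketch below enumerates such pairs by brute force on toy data; it illustrates the n-ary constraint itself, not the CSP solving techniques of the thesis, and all names are illustrative.

```python
from itertools import combinations

# Illustrative sketch of an n-ary (here binary) constraint over pairs of
# patterns: find (X, Y) with X subset of Y, X frequent and Y rare.
# Naive enumeration on toy data, not the thesis's CSP-based solving.

def freq(pattern, transactions):
    """Number of transactions containing the pattern."""
    return sum(1 for t in transactions if pattern <= t)

def exception_pairs(transactions, items, min_freq, max_freq):
    pairs = []
    for r in range(1, len(items)):
        for x in combinations(items, r):
            X = frozenset(x)
            if freq(X, transactions) < min_freq:
                continue                     # X must be frequent
            for extra in items - X:
                Y = X | {extra}
                if freq(Y, transactions) <= max_freq:
                    pairs.append((X, Y))     # Y must be rare
    return pairs

transactions = [frozenset(t) for t in
                [{"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"a"}]]
pairs = exception_pairs(transactions, {"a", "b", "c"},
                        min_freq=3, max_freq=1)
```

In a CSP formulation, X and Y would be variables over the pattern space and the frequency bounds plus the inclusion X ⊂ Y would be posted as constraints, leaving the search to a generic solver instead of explicit loops.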
Szathmary, Laszlo. "Méthodes symboliques de fouille de données avec la plate-forme Coron". Phd thesis, Université Henri Poincaré - Nancy I, 2006. http://tel.archives-ouvertes.fr/tel-00336374.
The main contributions of this thesis are: (1) we developed and adapted algorithms for finding minimal non-redundant association rules; (2) we defined a new basis for association rules called "closed rules"; (3) we studied an important but relatively little-explored field of KDD, namely the extraction of rare itemsets and rare association rules; (4) we gathered our algorithms, together with a collection of other algorithms and other auxiliary KDD operations, in a software toolbox called Coron.
Nguyen, Gia Toan Delobel Claude. "Quelques fonctionnalités de bases de données avancées". S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00321615.
Chevalier, Jules. "Raisonnement incrémental sur des flux de données". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES008/document.
In this thesis, we propose an architecture for incremental reasoning on triple streams. To ensure scalability, it is composed of independent modules, thus allowing parallel reasoning: several instances of the same rule can be executed simultaneously to enhance performance. We also focused our efforts on limiting the spread of duplicates in the system, a recurrent issue in reasoning. To achieve this, we designed a shared triple store which allows each module to filter duplicates as soon as possible. The flow of triples through the different independent modules of the architecture allows the reasoner to receive triple streams as input. Finally, our architecture is agnostic with respect to the fragment used for inference. We also present three inference modes for our architecture: the first infers all the implicit knowledge as fast as possible; the second should be used when priority has to be given to the inference of a specific type of knowledge; the third aims to maximize the number of triples inferred per second. We implemented this architecture in Slider, an incremental reasoner natively supporting the fragments ρdf and RDFS; it can easily be extended to more complex fragments. Our experiments show a 65% improvement over the reasoner OWLIM-SE. The recently published reasoner RDFox exhibits better performance, although it does not provide prioritized inference. We also conducted experiments showing that incremental reasoning systematically offers better performance than batch-based reasoning for all the ontologies and fragments used.
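The core loop of incremental reasoning with duplicate filtering can be sketched as follows: each incoming triple is matched against two RDFS rules, derived triples are re-injected into the stream, and a shared store discards anything already known. This is an illustrative toy, not Slider's actual code, and only covers rdfs:subClassOf transitivity and type inheritance.

```python
# Minimal sketch of incremental forward chaining over a triple stream with
# duplicate filtering, in the spirit of the architecture described above.
# Names are illustrative; only two RDFS rules are implemented.

RDF_TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

def infer(store, triple):
    """Apply two RDFS rules to one incoming triple against the shared store;
    return only triples not already known (duplicate filtering)."""
    s, p, o = triple
    new = []
    if p == SUBCLASS:
        # transitivity: (s sub o) & (o sub x) => (s sub x)
        new += [(s, SUBCLASS, x) for (a, b, x) in store
                if a == o and b == SUBCLASS]
        # inheritance for already-typed resources: (i type s) => (i type o)
        new += [(i, RDF_TYPE, o) for (i, b, c) in store
                if b == RDF_TYPE and c == s]
    elif p == RDF_TYPE:
        # inheritance: (s type o) & (o sub x) => (s type x)
        new += [(s, RDF_TYPE, x) for (a, b, x) in store
                if a == o and b == SUBCLASS]
    return [t for t in new if t not in store]

store = set()
stream = [("Cat", SUBCLASS, "Animal"),
          ("felix", RDF_TYPE, "Cat")]
pending = list(stream)
while pending:                     # derived triples re-enter the stream
    t = pending.pop(0)
    if t not in store:
        store.add(t)
        pending += infer(store, t)
```

After the loop drains, the store holds the input plus the derived triple ("felix", rdf:type, "Animal"); in the parallel architecture, each rule would run in its own module against the same shared store.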
Malek, Chakib. "Diagnostic du paysage à partir de données satellitaires : application au Sahel-Oudalan (Burkina Faso)". Paris 7, 1989. http://www.theses.fr/1989PA070022.
The extraction of "objective" landscape units relies on a multi-temporal approach, based on the combination of intermediate results from different dates, justified in relation to the purposes of the study, within an annual cycle. An "optimal" unsupervised classification of the regional area is first performed for each date, in which the number of clusters is objectively determined by a stability criterion. Then, in a dynamic and temporal context, the "real" landscape units are obtained by the image-conjunction of the units resulting from the different dates, justified by the physical nature of remote sensing data and by the clustering criteria. The detailed study (classification, interpretation, analysis) of each unit so generated leads to its dynamic diagnosis, which allows the elaboration of a "checkup" of these units, making it possible to locate the areas where intervention is most urgently needed and to direct the types of action that development actors should undertake.
Gal, Jocelyn. "Application d’algorithmes de machine learning pour l’exploitation de données omiques en oncologie". Electronic Thesis or Diss., Université Côte d'Azur (ComUE), 2019. http://theses.univ-cotedazur.fr/2019AZUR6026.
The development of computer science in medicine and biology has generated a large volume of data. The complexity and the amount of information to be integrated for optimal decision-making in medicine have largely exceeded human capacities. These data include demographic, clinical and radiological variables, but also biological variables and particularly omics data (genomics, proteomics, transcriptomics and metabolomics), characterized by a large number of measured variables relative to a generally small number of patients. Their analysis represents a real challenge, as they are frequently "noisy" and associated with situations of multi-collinearity. Nowadays, computational power makes it possible to identify clinically relevant models within these data sets by using machine learning algorithms. Through this thesis, our goal is to apply supervised and unsupervised learning methods to large biological data sets, in order to contribute to optimizing the classification and therapeutic management of patients with various types of cancer. In the first part of this work, a supervised learning method is applied to germline immunogenetic data to predict the efficacy and toxicity of immune checkpoint inhibitor therapy. In the second part, different unsupervised learning methods are compared to evaluate the contribution of metabolomics to the diagnosis and management of breast cancer. Finally, the third part of this work aims to show the contribution that simulated therapeutic trials can make to biomedical research. The application of machine learning methods in oncology offers new perspectives to clinicians, allowing them to make diagnoses faster and more accurately and to optimize therapeutic management in terms of efficacy and toxicity.
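A typical supervised pipeline on few-samples/many-variables data pairs a simple classifier with leave-one-out cross-validation, which wastes none of the scarce patients. The sketch below uses a k-nearest-neighbours classifier on toy "expression profiles"; it is a generic illustration under assumed names, not the specific models of the thesis.

```python
# Hedged sketch: k-nearest-neighbours classification evaluated by
# leave-one-out cross-validation, a minimal stand-in for supervised
# learning on omics data. Toy, well-separated data; illustrative only.

def knn_predict(train, query, k=3):
    """Majority vote among the k training samples closest to the query
    (squared Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    labels = [lbl for _, lbl in nearest]
    return max(set(labels), key=labels.count)

def loo_accuracy(samples, k=3):
    """Leave-one-out: predict each sample from all the others."""
    hits = sum(knn_predict(samples[:i] + samples[i + 1:], x, k) == y
               for i, (x, y) in enumerate(samples))
    return hits / len(samples)

# Two well-separated "expression profiles" per class (responder or not).
samples = [((0.1, 0.2), "responder"), ((0.2, 0.1), "responder"),
           ((0.0, 0.3), "responder"), ((0.9, 1.0), "non-responder"),
           ((1.0, 0.8), "non-responder"), ((0.8, 0.9), "non-responder")]
acc = loo_accuracy(samples, k=3)
```

With real omics data, the same cross-validation loop must also enclose any feature selection step, otherwise the accuracy estimate is optimistically biased.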
Pillet, Constance-Aurore. "Transformation progressive du texte en données à l'aide de méthodes linguistiques et évaluation de cet apport de la linguistique sur l'efficacité du Text Mining". Paris 9, 2003. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2003PA090007.
Cembrzynski, Thierry. "Conception et réalisation d'outils statistiques et d'intelligence artificielle pour l'aide à la planification du réseau de transport d'Électricité de France /". [Le Chesnay] : Institut national de recherche en informatique et en automatique, 1988. http://catalogue.bnf.fr/ark:/12148/cb34940505d.
Sarkis, Georges. "Communication entre les systèmes de CAO et les systèmes experts à base de connaissance en bâtiment dans un environnement d'intelligence artificielle". Marne-la-vallée, ENPC, 1992. http://www.theses.fr/1992ENPC9202.
Reyssier-Danzart, Annie. "Serebral, un système expert pour l'aide à l'interrogation d'une base de données". Paris 11, 1985. http://www.theses.fr/1985PA112284.
Flexible querying of databases has attracted the interest of researchers in artificial intelligence. We have developed a tool called SEREBRAL that makes it possible to query a database intelligently. This database contains descriptions of jobs offered by computer science firms. SEREBRAL is made up of three modules: BASEXP, an expert system that allows a user to carry out a search for a job in computer science; an interface with the database; and a relational database, ORACLE (distributed by Oracle Corporation), in which descriptions of job vacancies are stored. The main concern of the research was the development of the expert system BASEXP. This system of knowledge representation and exploitation uses schemes and rules in association with credibility factors. It can be used in various fields beyond the one examined in this study. In order to query the database, we developed an interface that generates SQL queries. These queries adapt themselves to the constraints generated by BASEXP in the field of computing jobs. The interface then transmits these queries to the DBMS and, finally, communicates the responses of the base to the user. SEREBRAL is now operational and can dialogue with a job seeker, for whom a job profile is drawn up so that he can receive the job offers stored in the base that correspond to his profile.
Lebastard, Franck. "Driver : une couche objet virtuelle persistante pour le raisonnement sur les bases de données relationnelles". Lyon, INSA, 1993. http://www.theses.fr/1993ISAL0030.
This thesis presents DRIVER, a persistent virtual object layer that makes it possible to use, in a single chosen object formalism, both the information contained in relational databases and the knowledge of a higher-level system, such as our expert system shell SMECI. A user-defined mapping assigns an object representation to the data of the connected bases; it permits handling and using them exactly like other objects in the expert system environment, for example during reasoning. DRIVER can also provide some environment objects with persistence, according to the user's wishes.
Ravi, Mondi. "Confiance et incertitude dans les environnements distribués : application à la gestion des donnéeset de la qualité des sources de données dans les systèmes M2M (Machine to Machine)". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM090/document.
Trust and uncertainty are two important aspects of many distributed systems. For example, multiple sources may be available for the same type of information. This poses the problems of selecting the best source, the one that can produce the most certain information, and of resolving incoherence among the available pieces of information. Managing trust and uncertainty together is a complex problem, and through this thesis we develop a solution to it. Trust and uncertainty have an intrinsic relationship: trust is primarily related to the sources of information, while uncertainty is a characteristic of the information itself. In the absence of trust and uncertainty measures, a system generally suffers from problems such as incoherence and uncertainty. To improve on this, we hypothesize that sources with higher trust levels produce more certain information than those with lower trust values. We then use the trust measures of the information sources to quantify the uncertainty in the information and thereby infer high-level conclusions with greater certainty. A general trend in modern distributed systems is to embed reasoning capabilities in the end devices to make them smart and autonomous. We model these end devices as agents of a multi-agent system. The major sources of beliefs for such agents are external information sources that can have varying trust levels. Moreover, the incoming information and beliefs are associated with a degree of uncertainty. Hence, the agents face the twofold problem of managing trust in sources and the presence of uncertainty in the information. We illustrate this with three application domains: (i) the intelligent community, (ii) smart city garbage collection, and (iii) FIWARE, a European project about the Future Internet that motivated the research on this topic.
Our solution to the problem involves modelling the devices (or entities) of these domains as intelligent agents comprising a trust management module, an inference engine and a belief revision system. We show that this set of components can help agents manage trust in the other sources, quantify the uncertainty in the information, and then use this to infer more certain high-level conclusions. We finally assess our approach using simulated and real data pertaining to the different application domains.
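The hypothesis that more-trusted sources yield more certain information can be sketched as a simple fusion rule: conflicting reports about one fact are weighted by source trust, and the certainty of the winning value is the fraction of trust mass supporting it. This is an illustrative toy under assumed names, not the thesis's formal trust/uncertainty model.

```python
# Illustrative sketch (not the thesis's model): fuse conflicting reports
# about one fact by accumulating each source's trust behind the value it
# reports, so highly trusted sources dominate the resulting certainty.

def fuse(reports):
    """reports: list of (trust in [0, 1], value) pairs about the same fact.
    Returns the value with the highest accumulated trust mass, and a
    normalized certainty for that value."""
    mass = {}
    for trust, value in reports:
        mass[value] = mass.get(value, 0.0) + trust
    best = max(mass, key=mass.get)
    certainty = mass[best] / sum(mass.values())
    return best, certainty

# One highly trusted sensor outweighs two weakly trusted dissenters:
# the agent concludes "occupied" with certainty 0.9 / 1.2 = 0.75.
value, certainty = fuse([(0.9, "occupied"), (0.2, "empty"), (0.1, "empty")])
```

In the agent architecture above, such a fused conclusion would then feed the inference engine, and belief revision would fire if a later, better-trusted report flips the winning value.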
Pujo, Pascal. "Développement d'une interface conviviale pour l'interrogation en langage naturel d'une base de données avec utilisation des concepts et des moyens de l'intelligence artificielle". Paris 11, 1989. http://www.theses.fr/1989PA112255.
Randriamanantena, Herimino Paoly. "Utilisation de données satellitaires dans les modèles météorologiques". Toulouse, INPT, 1992. http://www.theses.fr/1992INPT030H.