Dissertations on the topic "Exploitation des données"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for your research on the topic "Exploitation des données".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and your bibliographic reference to the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication in PDF format and read its online abstract whenever the relevant parameters are available in the work's metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Ba, Mouhamadou Lamine. „Exploitation de la structure des données incertaines“. Electronic Thesis or Diss., Paris, ENST, 2015. http://www.theses.fr/2015ENST0013.
This thesis addresses some fundamental problems inherent to the need for uncertainty handling in multi-source Web applications with structured information, namely uncertain version control in Web-scale collaborative editing platforms, integration of uncertain Web sources under constraints, and truth finding over structured Web sources. Its major contributions are: uncertainty management in version control of tree-structured data using a probabilistic XML model; initial steps towards a probabilistic XML data integration system for uncertain and dependent Web sources; precision measures for location data; and exploration algorithms for an optimal partitioning of the input attribute set during a truth-finding process over conflicting Web sources.
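The truth-finding setting mentioned above starts from conflicting claims made by several sources about the same data item. As a hedged illustration (this is the classical majority-vote baseline, not Ba's actual algorithm, which additionally partitions the attribute set and models source dependence), one can estimate the true value as the one asserted by the most sources:

```python
from collections import Counter

def majority_vote(claims):
    """claims: dict mapping source -> value claimed for one data item.
    Returns the value asserted by the largest number of sources,
    a baseline truth-finding estimator."""
    counts = Counter(claims.values())
    return counts.most_common(1)[0][0]
```

Richer truth-finding models replace the uniform vote with source-trust weights learned jointly with the estimated truths.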
Arnaud, Bérenger. „Exploitation et partage de données hétérogènes et dynamiques“. Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20025/document.
In the context of numeric data, software development entails a number of cost factors. Adapting generic tools has its own costs, requiring integration work from the developer and adaptation effort from the final user. The aim of our approach is to consider the different points of interaction with the data in order to improve their exploitation, whether the data are provided or generated through collaboration. The definitions of and problems related to data depend on the domain the data come from and on the treatments that have been applied to them. In this work we have opted for a holistic approach in which we consider a range of angles; the result is a summary of the emergent concepts and cross-domain equivalences. The first contribution consists of improving collaborative document mark-up. Two improvements are provided by our tool, Coviz. 1) Resource tagging that is unique to each user, who organises their own labels according to a personal poly-hierarchy; each user may take other users' approaches into consideration through tag sharing, and the system supplies additional context by harvesting documents from open archives. 2) The tool applies the concept of facets to the interface and combines them to provide search by keyword or by characteristic selection; this point is shared by all users, and the actions of an individual user impact the whole group. The major contribution, which is confidential, is a framework christened DIP, for Data Interaction and Presentation. Its goal is to increase the user's freedom of expression in interacting with and accessing data. It reduces hardware and software constraints by adding a new access point between the user and the raw data, as well as generic pivots.
From the end user's point of view, the gains are in filtering expressiveness, in sharing, in persistence of the navigator's state, in automation of day-to-day tasks, etc. DIP has been stress-tested under real-life conditions of users and limited resources with the software KeePlace. Acknowledgement is given to KeePlace, which initiated this thesis.
Khelil, Amar. „Elaboration d'un système de stockage et exploitation de données pluviométriques“. Lyon, INSA, 1985. http://www.theses.fr/1985ISAL0034.
The Lyon District Urban Area (CO.UR.LY.) can be described, from a hydrological point of view, as a 600 km² area equipped with a sewerage system comprising an estimated 2,000 km of pipes. Given the complexity of the area's sewerage network, it must be controlled by an accurate and reliable calculation system to avoid any negative consequences of its operation. The present computing system, SERAIL, allows an overall simulation of the functioning of the drainage and sewerage system. This model requires accurate rainfall-rate information that was not previously available; therefore a network of 30 rain gauges (with in-situ cassette recording) was set up within the Urban District Area in 1983. The research comprised three steps: 1) installing the network; 2) building a data checking and storage system; 3) analysing the data. The distinctive part of this work is the data analysis system, which makes it easy to extract and analyse any rainfall event of interest to the hydrologist. Two aims were defined: 1) to get a better understanding of the phenomena (point representations); 2) to build models. Achieving the second aim required reflection on the fitting of the proposed models and on their limits, which led to the development of several other programmes for checking and comparison. As an example, a complete analysis of a rainfall event is given with comments and conclusions.
Ponchateau, Cyrille. „Conception et exploitation d'une base de modèles : application aux data sciences“. Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2018. http://www.theses.fr/2018ESMA0005/document.
It is common practice in experimental science to use time series to represent experimental results, which usually come as lists of values in chronological order (indexed by time), generally obtained via sensors connected to the studied physical system. These series are analysed to obtain a mathematical model that describes the data and thus helps to understand and explain the behavior of the studied system. Nowadays, storage and analysis technologies for time series are numerous and mature, but storage and management technologies for mathematical models, and for linking them to experimental numerical data, are both scarce and recent. Yet mathematical models have an essential role to play in the interpretation and validation of experimental results, and an adapted storage system would ease their management and re-use. This work aims at developing a models database to manage mathematical models and to provide a "query by data" facility that helps retrieve or identify a model from an experimental time series. I describe the design of the models database (from the modeling of the system to its software architecture) and its extensions that allow "query by data". I then describe the prototype of the models database that I implemented and the results obtained from tests performed on it.
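"Query by data" as described above amounts to retrieving, among the stored models, the one that best explains a given experimental series. A minimal sketch under simplifying assumptions (an in-memory dictionary of candidate model functions with hypothetical names, rather than the thesis's full database architecture) can rank candidates by mean squared error:

```python
def best_model(series, models):
    """series: list of (t, y) samples; models: dict name -> f(t).
    Return the name of the stored model whose predictions have the
    smallest mean squared error on the series (a toy 'query by data')."""
    def mse(f):
        return sum((y - f(t)) ** 2 for t, y in series) / len(series)
    return min(models, key=lambda name: mse(models[name]))
```

A real system would additionally index the model base so that the search does not scan every stored model.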
Letessier, Pierre. „Découverte et exploitation d'objets visuels fréquents dans des collections multimédia“. Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0014.
The main goal of this thesis is to discover frequent visual objects in large multimedia collections. As in many areas (finance, genetics, etc.), it consists in extracting knowledge, using the occurrence frequency of an object in a collection as a relevance criterion. A first contribution is to provide a formalism for the problems of mining and discovery of frequent visual objects. The second contribution is a generic method for solving these two problems, based on an iterative sampling process and on efficient, scalable rigid-object matching. The third contribution focuses on building a likelihood function close to the perfect distribution. Experiments show that, contrary to state-of-the-art methods, our approach discovers very small objects efficiently in several million images. Finally, several applications are presented, including trademark logo discovery, transmedia event detection, and visual-based query suggestion.
De, Vettor Pierre. „A Resource-Oriented Architecture for Integration and Exploitation of Linked Data“. Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1176/document.
In this thesis, we focus on the integration of raw data coming from heterogeneous, multi-origin data sources on the Web. The global objective is to provide a generic, adaptive architecture able to analyze and combine this heterogeneous, informal, and sometimes meaningless data into a coherent smart data set. We define smart data as significant, semantically explicit data, ready to be used to fulfill the stakeholders' objectives. This work is motivated by a live scenario from the French company Audience Labs. We propose new models and techniques to adapt the combination and integration process to the diversity of data sources, focusing on transparency and dynamicity in data source management, scalability and responsiveness with respect to the number of data sources, adaptability to data source characteristics, and consistency of the produced data (coherent data, without errors or duplicates). To address these challenges, we first propose a meta-model to represent the variety of data source characteristics related to access (URI, authentication), extraction (request format), or physical properties (volume, latency). Relying on this coherent formalization of data sources, we define different data access strategies in order to adapt access and processing to data source capabilities. With help from these models and strategies, we propose a distributed, resource-oriented software architecture in which each component is freely accessible through REST via its URI. The orchestration of the different tasks of the integration process can then be optimized with regard to data source and data characteristics: an adapted workflow is generated in which tasks are prioritized in order to speed up the process and limit the quantity of data transferred.
To improve data quality, we then focus on the uncertainty that can appear in a Web context and propose a model to represent it. We introduce a concept of Web resource based on a probabilistic model in which each resource can have several possible representations, each with a probability. This approach is the basis of a further architecture optimization that takes uncertainty into account during the combination process.
Fournier, Jonathan. „Exploitation de données tridimensionnelles pour la cartographie et l'exploration autonome d'environnements urbains“. Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/24421/24421.pdf.
Lê, Laetitia Minh Mai. „Exploitation des données spectrales dans la sécurisation du circuit des médicaments anticancéreux“. Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112148/document.
Most anticancer drugs have a narrow therapeutic margin, so medication errors can have major consequences for patients. It is therefore necessary to guarantee the right drug at the right dose through quality control of the preparation before administration. Moreover, these potentially carcinogenic, mutagenic, or teratogenic drugs present a risk for exposed people, especially healthcare workers. The aim of this study was to develop tools to optimize the safety of the cytotoxic medication circuit in hospitals, for the patient as much as for healthcare workers. To address these problems, analytical tools were combined with different chemometric data interpretation and risk management methods. To improve healthcare workers' safety, environmental monitoring for traces of platinum-based cytotoxic drugs was performed to identify the most contaminated areas. Based on these contaminations and on working conditions, a multi-criteria risk analysis methodology was developed to quantify healthcare workers' risk of exposure. In response to this risk, various corrective measures were considered, and studies of the detergent efficiency of the decontamination protocols used to clean workplace surfaces and cytotoxic vials were conducted. In parallel, assays were performed on two anticancer molecules, 5-fluorouracil and gemcitabine, to secure cytotoxic preparations before administration. Because of their non-destructive, non-invasive character, and therefore safer handling, Raman and near-infrared spectroscopy were explored. Spectral data (spectral zones and pretreatments) were optimized by ComDim multivariate analysis to develop PLS regression models predicting the concentration of the active ingredient in solution.
Results showed the feasibility and complementarity of these two spectroscopies for the quantitative determination of cytotoxic drugs. This work contributes to the continuous quality assurance approach implemented in numerous health institutions; we hope it will help to durably decrease the risks associated with cytotoxic drugs for both patients and healthcare workers.
Correa, Beltran William. „Découverte et exploitation de proportions analogiques dans les bases de données relationnelles“. Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S110/document.
In this thesis, we are interested in the notion of analogical proportions in a relational database context. An analogical proportion is a statement of the form "A is to B as C is to D", expressing that the relation between A and B is the same as the relation between C and D. For instance, one may say that "Paris is to France as Rome is to Italy". We first studied the problem of imputing missing values in a relational database by means of analogical proportions: a classification algorithm based on analogical proportions was modified to impute missing values. We then studied how analogical classifiers work, to see whether their processing could be simplified. We showed that some types of analogical proportions are more useful than others when performing classification, and proposed an algorithm exploiting this information, which allowed us to considerably reduce the size of the training set used by the analogical classification algorithm, and hence its execution time. In the second part of this thesis, we paid particular attention to mining combinations of four tuples bound by an analogical relationship. To do so, we used several clustering algorithms and proposed modifications to them, so that each obtained cluster represents a set of analogical proportions. Using the results of the clustering algorithms, we studied how to efficiently retrieve analogical proportions in a database by means of queries, and proposed an extension of the SQL query language to retrieve from a database the quadruples of tuples satisfying an analogical proportion. We proposed several query evaluation strategies and experimentally compared their performance.
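For numerical attributes, the classical arithmetic reading of "A is to B as C is to D" is A − B = C − D, and analogy-based imputation solves the proportion for the unknown fourth element. A minimal sketch under this arithmetic assumption (the thesis works with richer relational and Boolean definitions):

```python
def is_arithmetic_proportion(a, b, c, d, tol=1e-9):
    """Check the arithmetic analogical proportion a - b = c - d."""
    return abs((a - b) - (c - d)) <= tol

def solve_fourth(a, b, c):
    """Solve 'a is to b as c is to x' for x, e.g. to impute a missing value."""
    return c + (b - a)
```

An analogical classifier generalizes this idea: it looks for triples in the training set that form a proportion with the new item, and transfers the label that solves the proportion.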
Procopio, Pietro. „Foreground implications in the scientific exploitation of CMB data“. Paris 7, 2009. http://www.theses.fr/2009PA077252.
The first part of my thesis work focuses on the CMB photon distribution function. I show the implementation and updating phases of a numerical integration code (KYPRIX) for the solution of the Kompaneets equation in a cosmological context. Physical extensions were also made: the introduction of the cosmological constant; the possibility of choosing the primordial chemical abundances of H and He; the introduction of the ionization fractions of the species involved; and an optional interface linking KYPRIX with codes such as RECFAST in order to calculate a recombination history of the ionization fraction of H and He. All of these physical extensions contributed to more realistic simulations of the spectral distortions of the CMB. During my second stay at APC I performed several tests on the Planck Sky Model (PSM). The tests involved the two latest releases of the Galactic emission model, the Galactic foreground template derived from WMAP data, and a clean CMB anisotropy map. The latest release of the PSM total-intensity prediction of the Galactic processes showed results consistent with the previous ones for almost all the frequencies tested, while it still needs some tuning at 23 GHz, where synchrotron and free-free emission are more prominent. I started using SMICA (a component separation technique) during my first stay at APC, in 2007. I used SMICA, and another filter (an FFT filter) I developed, for a reprocessing of the IRIS map set. The FFT filter was developed for this purpose and applied only to localized regions, not to the full-sky maps. The dramatic improvements obtained on the IRIS maps are clearly visible by eye.
Chauris, Hervé. „Exploitation de la cohérence locale des données sismiques pour l'imagerie du sous-sol“. Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00535531.
El, Sarraj Lama. „Exploitation d'un entrepôt de données guidée par des ontologies : application au management hospitalier“. Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4331.
This research falls within the domain of Data Warehouse (DW) personalization and concerns DW assistance. Specifically, we are interested in assisting a user during online analysis processes in making use of existing operational resources. The application concerns hospital management, for hospital governance, and is limited to the scope of the Program of Medicalization of Information Systems (PMSI). This research was supported by the Public Hospitals of Marseille (APHM). Our proposal is a semantic approach based on ontologies. The support system implementing this approach, called Ontology-based Personalization System (OPS), relies on a knowledge base operated by a personalization engine. The knowledge base is composed of three ontologies: a domain ontology, an ontology of the DW structure, and an ontology of resources. The personalization engine allows, firstly, a personalized search of DW resources based on the user's profile and, secondly, for a particular resource, an expansion of the search by recommending new resources based on the context of that resource. For recommending new resources, we proposed three possible strategies. To validate our proposal, a prototype of the OPS system was developed, with a personalization engine implemented in Java. This engine exploits a knowledge base composed of three interconnected OWL ontologies. We illustrate three experimental scenarios related to the PMSI and defined with APHM domain experts.
Fraisse, Bernard. „Automatisation, traitement du signal et recueil de données en diffraction x et analyse thermique : Exploitation, analyse et représentation des données“. Montpellier 2, 1995. http://www.theses.fr/1995MON20152.
Tchienehom, Pascaline Laure. „Modélisation et exploitation de profils : accès sémantique à des ressources“. Toulouse 1, 2006. http://www.theses.fr/2006TOU10026.
Resource access is a broader view of information access in which resources can be any kind of person, thing, or action. The heterogeneity of resources has led to the development of several access methods. These methods rely on descriptions of resources, which we call profiles, and on the definition of rules for using those profiles to achieve a specific task (retrieval, filtering, etc.). Profiles and their usage rules differ from one application to another. For applications to cooperate, there is a real need for a flexible and homogeneous framework for the modelling and use of profiles. Our research work aims at providing solutions for these two aspects, through a generic profile model and methods for the semantic analysis and matching of instances of this model. To validate our proposals, an assistant tool for profile construction, visualization, and semantic analysis has been implemented, and an evaluation of the methods for profile semantic analysis and matching has been carried out.
Mardoc, Emile. „Exploitation des outils statistiques pour l'intégration des données omiques en biologie végétale et animale“. Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2023. http://www.theses.fr/2023UCFA0118.
Recent advances in biological data production technologies have led to the proliferation of omics data, such as genomic (DNA), transcriptomic (mRNA), proteomic (protein), and metabolomic (metabolite) data. These data, in theory, make it possible to describe the most complex biological processes implemented by biological systems interacting with their environment. The methodological challenge is thus to integrate, i.e. simultaneously analyze, data of diverse nature and origin to address various scientific questions. In this context, the objective of this thesis is to propose a methodological approach for integrating omics data produced in different contexts and to apply it to various concrete biological questions in plants and animals. A six-step workflow has been developed to prepare and conduct the integration of omics data, dedicated to biologists who are not experts in multi-omics integration. The workflow outlines the steps to be followed before integrating omics data: 1) data acquisition and structuring in matrix form; 2) definition of the biological question and the associated integrative strategy; 3) choice of the integrative tool suited to the selected question and the data; 4) data pre-processing; 5) preliminary data analysis; 6) multi-omics integration. For the integration step (6), among the 13 selected tools presented in the manuscript, we made use of the mixOmics tool and developed the cimDiablo_v2 function to integrate data through dimension reduction. These methodological developments were designed to be adaptable to various biological contexts, i.e. to address different biological questions classified into three integrative strategies (description, selection, prediction), by integrating different types of omics data (genomic, transcriptomic, proteomic, etc.) at various levels (species, individuals, tissues, genes, experimental conditions, etc.).
These developments were then tested on several biological datasets as a proof of concept: first on plant data (poplar and cereals), to identify interaction profiles between DNA methylation and gene expression for different geographical populations of individuals (poplar) and developmental stages of the grain (cereals); and second on animal data (bovine), to identify molecular signatures of the tissue or chemical composition of bovine carcasses by selecting proteins strongly associated with body composition phenotypes. In plants, we 1) ranked the major factors of omics data variability, 2) grouped genes based on their methylation and expression profiles, and 3) identified strongly expressed or methylated master regulator genes for different populations (in poplar) or stages of grain development (in cereals), and studied their biological functions. In animals (bovine), we proposed a list of candidate proteins for 7 phenotypes related to body composition, and thus to feed conversion efficiency, which can be used in future predictive studies.
Shahzad, Muhammad Kashif. „Exploitation dynamique des données de production pour améliorer les méthodes DFM dans l'industrie Microélectronique“. Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00771672.
Grenet, Pierre. „Système de reconstruction cinématique corps entier : fusion et exploitation de données issues de MEMS“. Poitiers, 2011. http://theses.univ-poitiers.fr/26762/2011-Grenet-Pierre-These.pdf.
The democratization of MEMS has enabled the development of attitude sensing units: groups of sensors enabling the measurement of one's orientation relative to the Earth's frame. Several types of sensors are used for attitude estimation: accelerometers, magnetometers, and gyroscopes. Only the combination of accelerometers, magnetometers, and gyroscopes allows a satisfactory estimation of the orientation kinematics without any a priori knowledge of the movement, that is to say, estimation of the orientation in the presence of an unknown acceleration. The aim of the thesis is, within the context of whole-body motion capture, to add a priori knowledge in order to reduce the number of gyroscopes used, or even to eliminate them completely. This a priori knowledge is the fact that the sensors are attached to a skeleton, so that their relative motions can only be combinations of rotations. To test the efficiency of this method, we first apply it to a simple pendulum with one degree of freedom, then to a pendulum with three degrees of freedom, then to groups of segments (shoulder, arm, forearm, for example), and finally to a whole-body system. The thesis develops the theory and the results obtained for the different resolution methodologies.
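In the gyroscope-free case the abstract alludes to, orientation can only be recovered under assumptions about the motion. A common special case, sketched here under the assumption that the accelerometer senses gravity alone (quasi-static sensor) and with one sign convention among several in use, derives roll and pitch from the accelerometer and a tilt-compensated heading from the magnetometer:

```python
import math

def attitude_from_accel_mag(ax, ay, az, mx, my, mz):
    """Roll/pitch from the gravity direction, heading from the horizontal
    magnetic-field components. Quasi-static assumption; angles in radians."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetic vector back into the horizontal plane.
    xh = (mx * math.cos(pitch)
          + my * math.sin(roll) * math.sin(pitch)
          + mz * math.cos(roll) * math.sin(pitch))
    yh = my * math.cos(roll) - mz * math.sin(roll)
    heading = math.atan2(-yh, xh)
    return roll, pitch, heading
```

Once the segment accelerates, this estimate degrades, which is precisely why the thesis injects skeleton constraints in place of gyroscopes.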
Lepage, Arnaud. „Exploitation de données spatiales mesurées par interférométrie laser pour l'analyse modale expérimentale des structures“. Besançon, 2002. http://www.theses.fr/2002BESA2071.
Different experimental modal analysis methods based on laser interferometry measurements (ESPI) are presented. The main advantage of these techniques is that they provide high-spatial-resolution vibration information, recorded simultaneously and contact-free. A first, hybrid approach complements a classical sensor-based analysis with ESPI records: under specific assumptions, mode shapes are estimated directly from the operational responses measured by the optical system. A second method combines modal appropriation techniques with ESPI measurements; it allows the normal modes of the structure's associated conservative system to be obtained, even in the case of modal coupling. Finally, a method based on the optical measurement of transfer functions is presented: modal identification is performed using a spatial-domain data condensation technique and yields the complex poles of the dissipative system concerned. These approaches have been validated on simulated and experimental test cases and applied to automotive body panels. The use of these methods is especially well suited to the validation and updating of structural finite element models.
Ferreira, Franck. „Exploitation des données du radar de TRMM pour l'estimation de la pluie depuis l'espace“. Paris 6, 2001. http://www.theses.fr/2001PA066089.
Raynaud, Jean-Louis. „Exploitation simultanée des données spatiales et fréquentielles dans l'identification modale linéaire et non-linéaire“. Besançon, 1986. http://www.theses.fr/1987BESA2013.
Der volle Inhalt der QuelleFierro, Gutierrez Ana Carolina Elisa. „Exploitation de données de séquences et de puces à ADN pour l’étude du transcriptome“. Evry-Val d'Essonne, 2007. http://www.theses.fr/2007EVRY0036.
The expression of a genome can be observed at the RNA level: the transcriptome. Two large-scale approaches were chosen to characterize the transcriptome of Xenopus tropicalis during metamorphosis: cDNA sequencing (ESTs) and DNA microarrays. This thesis describes the analysis of the ESTs and the creation of a gene index, accessible through a specially designed web application. The microarray analysis centres on data pre-processing and on the acquisition of expression profiles from a complex experimental design. Our pluridisciplinary approach satisfied the needs of biologists and enabled the selection of appropriate tools for each step of the data analysis. From a methodological point of view, this thesis is representative of a general workflow for transcriptome studies on a complex eukaryotic genome. In addition, information that extends the knowledge of X. tropicalis biology was obtained.
Elmi, Saïda. „An Advanced Skyline Approach for Imperfect Data Exploitation and Analysis“. Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0011/document.
The main purpose of this thesis is to study an advanced database tool, the skyline operator, in the context of imperfect data modeled by evidence theory. We first address, on the one hand, the fundamental question of how to extend the dominance relationship to evidential data and, on the other hand, provide optimization techniques for improving the efficiency of evidential skyline computation. We then introduce an efficient approach for querying and processing the evidential skyline over multiple, distributed servers. In addition, we propose efficient methods to maintain the skyline results in the evidential database context when a set of objects is inserted or deleted; the idea is to compute the new skyline incrementally, without redoing the initial operation from scratch. In a second step, we introduce the top-k skyline query over imperfect data and develop efficient algorithms for its computation. Furthermore, since the evidential skyline is often too large to be analyzed, we define the set SKY² to refine the evidential skyline and retrieve the best evidential skyline objects (the "stars"), and we develop suitable algorithms based on scalable techniques to compute the evidential SKY² efficiently. Extensive experiments were conducted to show the efficiency and effectiveness of our approaches.
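On certain (non-evidential) data, the skyline operator keeps exactly those points that no other point dominates, where dominating means being at least as good on every dimension and strictly better on at least one. A minimal sketch of this crisp baseline, assuming all dimensions are to be minimized (the thesis's contribution is to generalize the dominance relation itself to evidential data):

```python
def dominates(p, q):
    """p dominates q: no worse on any dimension, strictly better on one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    """Naive O(n^2) skyline of a list of tuples under minimization."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Practical algorithms avoid the quadratic scan with sorting or partitioning, which is also where the thesis's scalability techniques come in.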
Piras, Patrick. „Développement et exploitation d'une base de données moléculaire pour la séparation d'énantiomères par chromatographie liquide“. Aix-Marseille 3, 1994. http://www.theses.fr/1994AIX30097.
Der volle Inhalt der QuelleDellal, Ibrahim. „Gestion et exploitation de larges bases de connaissances en présence de données incomplètes et incertaines“. Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2019. http://www.theses.fr/2019ESMA0016/document.
Der volle Inhalt der QuelleIn the era of digitalization, and with the emergence of several semantic Web applications, many new knowledge bases (KBs) are available on the Web. These KBs contain (named) entities and facts about these entities. They also contain the semantic classes of these entities and their mutual links. In addition, multiple KBs can be interconnected by their entities, forming the core of the linked data Web. A distinctive feature of these KBs is that they contain millions to trillions of unreliable RDF triples. This uncertainty has multiple causes. It can result from the integration of data sources with various levels of intrinsic reliability, or it can be caused by considerations of confidentiality preservation. Furthermore, it may be due to factors related to the lack of information, the limits of measuring equipment or the evolution of information. The goal of this thesis is to improve the usability of modern systems aiming at exploiting uncertain KBs. In particular, this work proposes cooperative and intelligent techniques that help the user in his decision-making when his query returns unsatisfactory results in terms of quantity or reliability. First, we address the problem of failing RDF queries (i.e., queries that result in an empty set of responses). This type of response is frustrating and does not meet the user's expectations. The approach proposed to handle this problem is query-driven and offers a twofold advantage: (i) it provides the user with a rich explanation of the failure of his query by identifying the MFS (Minimal Failing Sub-queries) and (ii) it allows the computation of alternative queries called XSS (maXimal Succeeding Sub-queries), semantically close to the initial query, with non-empty answers.
Moreover, from a user's point of view, this solution offers a high level of flexibility, given that several degrees of uncertainty can be considered simultaneously. In the second contribution, we study the dual problem (i.e., queries whose execution results in a very large set of responses). Our solution aims at reducing this set of responses to enable their analysis by the user. Counterparts of the MFS and XSS have been defined. They allow the identification, on the one hand, of the causes of the problem and, on the other hand, of alternative queries whose results are of reasonable size and can therefore be directly and easily used in the decision-making process. All our propositions have been validated with a set of experiments on different uncertain and large-scale knowledge bases (WatDiv and LUBM). We have also used several triplestores to conduct our tests.
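The MFS/XSS notions above can be illustrated on a toy conjunctive query. The sketch below assumes only a query given as a set of triple-pattern identifiers and a black-box `fails` oracle; it uses exhaustive lattice exploration for clarity, not the thesis' optimized algorithms, and all names are hypothetical:

```python
from itertools import combinations

def mfs_xss(patterns, fails):
    """Enumerate Minimal Failing Sub-queries (MFS) and maXimal
    Succeeding Sub-queries (XSS) of a conjunctive query.
    Exponential in |patterns|; real systems prune the lattice."""
    pats = list(patterns)
    subsets = [frozenset(c) for r in range(1, len(pats) + 1)
               for c in combinations(pats, r)]
    failing = {s for s in subsets if fails(s)}
    succeeding = set(subsets) - failing
    # minimal failing: no proper failing subset
    mfs = [s for s in failing if not any(t < s for t in failing)]
    # maximal succeeding: no proper succeeding superset
    xss = [s for s in succeeding if not any(s < t for t in succeeding)]
    return mfs, xss

# toy oracle: a sub-query fails iff it contains pattern "t3"
mfs, xss = mfs_xss({"t1", "t2", "t3"}, lambda s: "t3" in s)
# MFS = [{"t3"}]  (the cause of failure)
# XSS = [{"t1", "t2"}]  (the largest relaxed query with answers)
```

The MFS pinpoint why the full query fails, while the XSS are the closest relaxations the user can run instead, matching the twofold advantage described above.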
Naboulsi, Diala. „Analysis and exploitation of mobile traffic datasets“. Thesis, Lyon, INSA, 2015. http://www.theses.fr/2015ISAL0084/document.
Der volle Inhalt der QuelleMobile devices are becoming an integral part of our everyday digitalized life. In 2014, the number of mobile devices connected to the internet and consuming traffic even exceeded the number of human beings on earth. These devices constantly interact with the network infrastructure, and their activity is recorded by network operators for monitoring and billing purposes. The resulting logs, collected as mobile traffic datasets, convey important information concerning spatio-temporal traffic dynamics, relating to large populations of millions of individuals. The thesis sheds light on the potential carried by mobile traffic datasets for future cellular networks. On the one hand, we target the analysis of these datasets. We propose a usage pattern characterization framework, capable of defining meaningful categories of mobile traffic profiles and classifying network usages accordingly. On the other hand, we exploit mobile traffic datasets to evaluate two dynamic networking solutions. First, we focus on the reduction of energy consumption over typical Radio Access Networks (RAN). We introduce a power control mechanism that adapts the RAN's power configuration to users' demands, while maintaining geographical coverage. We show that our scheme allows power consumption over the network infrastructure to be significantly reduced. Second, we study the problem of topology management of future Cloud-RAN (C-RAN). We propose a mobility-driven dynamic association scheme for the C-RAN components, which takes into account users' traffic demand. The introduced strategy is observed to lead to important savings in the network in terms of handovers.
Mokhtari, Noureddine. „Extraction et exploitation d'annotations sémantiques contextuelles à partir de texte“. Nice, 2010. http://www.theses.fr/2010NICE4045.
Der volle Inhalt der QuelleThis thesis falls within the framework of the European project SevenPro (Semantic Virtual Engineering Environment for Product Design), whose aim is to improve the engineering process of production in manufacturing companies through the acquisition, formalization and exploitation of knowledge. We propose a methodological approach and software for generating contextual semantic annotations from text. Our approach is based on ontologies and Semantic Web technologies. In the first part, we propose a model of the concept of "context" for text. This modeling can be seen as a projection of the various aspects of "context" covered by the definitions in the literature. We also propose a model of contextual semantic annotations, with the definition of the different types of contextual relationships that may exist in text. Then, we propose a generic methodology for the generation of contextual semantic annotations, based on a domain ontology, that makes the best use of the knowledge contained in texts. The novelty of the methodology is that it uses natural language processing techniques and automatically generated extraction grammars for domain relations, concepts and property values in order to produce semantic annotations associated with contextual relations. In addition, we take into account the context of occurrence of semantic annotations during their generation. A system supporting this methodology has been implemented and evaluated.
Debarbieux, Denis. „Modélisation et requêtes des documents semi-structurés : exploitation de la structure de graphe“. Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2005. http://tel.archives-ouvertes.fr/tel-00619303.
Der volle Inhalt der QuelleCyr, Isabel. „Exploitation des données RSO de RADARSAT pour la cartographie de la vulnérabilité de la nappe souterraine“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ38058.pdf.
Der volle Inhalt der QuelleMartin, Florent. „Pronostic de défaillances de pompes à vide - Exploitation automatique de règles extraites par fouille de données“. Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENA011.
Der volle Inhalt der QuelleThis thesis presents a symbolic rule-based method that addresses system prognosis, along with a successful application to complex vacuum pumping systems. More precisely, using historical vibratory data, we first model the behavior of the pumps by extracting a given type of episode rules, namely First Local Maximum episode rules (FLM-rules). The algorithm that extracts FLM-rules also automatically determines their respective optimal temporal window, i.e. the temporal window in which the probability of observing both the premise and the conclusion of a rule is maximal. A subset of the extracted FLM-rules is then selected in order to predict pumping system failures in a vibratory data stream context. Our contribution consists in selecting the most reliable FLM-rules, continuously matching them in a stream of vibratory data and building a forecast time interval using the optimal temporal windows of the FLM-rules that have been matched.
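The matching step described above can be sketched generically: given rules of the form "premise ⇒ conclusion within an optimal window", each premise observed in the stream yields a forecast interval. This is a heavily simplified illustration under assumed names; FLM-rule extraction itself and the thesis' actual matching logic are not shown:

```python
def predict_failures(stream, rules):
    """stream: iterable of (timestamp, event) pairs;
    rules: list of (premise_event, conclusion_event, window_seconds).
    Returns forecast intervals (conclusion, t, t + window) for each
    premise matched in the stream."""
    forecasts = []
    for t, event in stream:
        for premise, conclusion, window in rules:
            if event == premise:
                # the rule's optimal temporal window bounds the forecast
                forecasts.append((conclusion, t, t + window))
    return forecasts

# hypothetical rule: a 32 Hz vibration peak precedes failure within 1 hour
rules = [("vib_peak_32Hz", "pump_failure", 3600)]
stream = [(100, "vib_ok"), (250, "vib_peak_32Hz")]
print(predict_failures(stream, rules))
# → [('pump_failure', 250, 3850)]
```

In the thesis, the window attached to each rule is the one maximizing the probability of observing premise and conclusion together, which is what makes the forecast interval informative.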
Baudet, Aurélie. „Développement de la thérapie par différenciation des leucémies aigues myéloïdes par exploitation des données des transcriptomes“. Montpellier 2, 2006. http://www.theses.fr/2006MON20101.
Der volle Inhalt der QuelleVitamin D3 (VD) regulates myeloid differentiation. Combining non-calcemic agonists of the Vitamin D Receptor (VDR) allows efficient differentiation of Acute Myeloid Leukemia (AML) cells from cell lines or patients' samples. Establishing expression profiles for 96 genes allowed the definition of a cluster of markers predictive of AML response to differentiation agents. The involvement of the nuclear receptor coregulator NCOA4 in the transcriptional control of AML differentiation was characterized. As the only coregulator modulated during retinoid- and VD-induced differentiation of an AML4 model, its expression specifically responds to VD-induced monocytic differentiation. In addition, I demonstrated that NCOA4 is involved in the control of VDR activity.
Mazurie, Aurélien. „Des gènes aux réseaux génétiques : exploitation des données transcriptomiques, inférence et caractérisation de structures de régulation“. Paris 6, 2005. http://www.theses.fr/2005PA066030.
Der volle Inhalt der QuelleMuñoz-Baca, Guadalupe. „Stockage et exploitation de dossiers médicaux multimédia au moyen d'une base de données généralisée : projet TIGRE“. Université Joseph Fourier (Grenoble), 1987. http://tel.archives-ouvertes.fr/tel-00324082.
Der volle Inhalt der QuelleGrassot, Lény. „Mobilités évènementielles et espace urbain : Exploitation des donnés de téléphonie mobile pour la modélisation des grands évènements urbains“. Rouen, 2016. http://www.theses.fr/2016ROUEL015.
Der volle Inhalt der QuelleThis research is devoted to the apprehension, detection, understanding and analysis of large planned urban events through mobile phone data provided by the French telecom operator Orange. The three cases studied are the Armada de Rouen 2008, the Braderie de Lille 2011 and the Armada de Rouen 2013. The aim of this thesis is to study and evaluate the impacts on urban spatial patterns using modelling and simulation methodologies. To tackle the huge amount of data, statistical methods, spatial analysis and a new agent-based model (GAMA) have been used. This research led us to highlight the role of spatial patterns (attractiveness, concentration, etc.) and temporal patterns (rhythms, urban pulses, etc.) of urban spaces during the ongoing agenda of a popular large planned event. The outcomes underline the relevance of mobile phone data for understanding the short-lived functioning as well as the routine of the city during major events. Moreover, impacts in terms of mobility and social behavior must be taken into account.
Mulligan, Kyle. „Stratégies robustes pour le suivi et la prédiction de l'endommagement de structures composites à l'aide de piézocéramiques embarquées“. Thèse, Université de Sherbrooke, 2013. http://savoirs.usherbrooke.ca/handle/11143/156.
Der volle Inhalt der QuelleBouali, Tarek. „Platform for efficient and secure data collection and exploitation in intelligent vehicular networks“. Thesis, Dijon, 2016. http://www.theses.fr/2016DIJOS003/document.
Der volle Inhalt der QuelleNowadays, the automotive field is witnessing a tremendous evolution due to the increasing growth in communication technologies, environmental sensing and perception aptitudes, and the storage and processing capacities found in recent vehicles. Indeed, a car is becoming a kind of intelligent mobile agent able to perceive its environment, sense and process data using on-board systems, and interact with other vehicles or the existing infrastructure. These advancements stimulate the development of several kinds of applications to enhance driving safety and efficiency and make traveling more comfortable. However, developing such advanced applications relies heavily on the quality of the data and can therefore be realized only with the help of secure data collection and efficient data treatment and analysis. Data collection in a vehicular network has always been a real challenge due to the specific characteristics of these highly dynamic networks (frequently changing topology, vehicle speed and frequent fragmentation), which lead to opportunistic and short-lived communications. Security remains another weak aspect of these wireless networks, since they are by nature vulnerable to various kinds of attacks aiming to falsify collected data and affect their integrity. Furthermore, collected data are not understandable by themselves and cannot be interpreted and understood if directly shown to a driver or sent to other nodes in the network. They should be treated and analyzed to extract meaningful features and information in order to develop reliable applications. In addition, developed applications always have different requirements regarding quality of service (QoS). Several research investigations and projects have been conducted to overcome the aforementioned challenges. However, they still fall short of perfection and suffer from some weaknesses.
For this reason, we focused our efforts during this thesis on developing a platform for secure and efficient data collection and exploitation, to provide vehicular network users with efficient applications that ease their travel with protected and available connectivity. We first propose a solution to deploy an optimized number of data harvesters to collect data from an urban area. Then, we propose a new secure intersection-based routing protocol to relay data to a destination in a secure manner, based on a monitoring architecture able to detect and evict malicious vehicles. This protocol is then enhanced with a new intrusion detection and prevention mechanism, based on a Kalman filter, to decrease the vulnerability window and detect attackers before they carry out their attacks. In the second part of this thesis, we concentrate on the exploitation of collected data by developing an application able to calculate, in a refined manner, the most economical itinerary for drivers and fleet management companies. This solution is based on several kinds of information that may affect fuel consumption, provided by vehicles and by other sources on the Internet accessible via specific APIs, and aims to save money and time. Finally, a spatio-temporal mechanism allowing the best available communication medium to be chosen is developed. The latter is based on fuzzy logic to ensure a smooth and seamless handover, and considers information collected from the network, users and applications to preserve a high quality of service.
Lienhart, Yann. „Analyse et exploitation de données de criblages de réactions chimiques pour la recherche de voies de synthèse“. Strasbourg, 2011. http://www.theses.fr/2011STRA6223.
Der volle Inhalt der QuelleChemistry databases are centered on chemical compounds, and their reaction data are extracted from the scientific literature. Reactions are usually reported in non-standardized conditions, and in general nothing is available about reactions that did not occur or exhibited a low yield. Thus, it can be difficult to compare reactions and chemical transformations because of the specific experimental conditions and the missing data. This thesis was funded by NovAlix (CIFRE), a company specialized in organic chemistry synthesis and structural biology. In order to explore the chemical space and improve the chemical synthesis process, two high-throughput reaction screening methods using gas chromatography and mass spectrometry have been set up at NovAlix. The Magellan information system developed during this thesis uses a chemical reaction-centric database and is built on open-source Java Enterprise Edition technologies. It collects high-throughput data from mass spectrometry and gas chromatography experiments performed under standardized conditions. The Magellan application then enables the chemist to read and store experimental results, provides tools to help data analysis, and allows the collected data to be queried via a rich-client user interface.
Vallaeys, Karen. „Exploitation des données endodontiques en tomographie volumique : de la microtomographie in vitro à la scanographie in vivo“. Thesis, Brest, 2017. http://www.theses.fr/2017BRES0144/document.
Der volle Inhalt der QuelleCone Beam Computerized Tomography (CBCT) is a highly relevant three-dimensional imaging technology for use in dentistry. Our work aims to show its benefits and specific applications in endodontics. After redefining the possible deleterious per- and post-operative consequences of endodontic treatments and explaining the principles of CBCT, we first explore the effects of in vitro canal preparation using high-resolution microtomography, and then, in a second step, the challenges and benefits of creating reliable and precise three-dimensional reconstructions. This last part deals with the notions of CBCT image processing before explaining the approach adopted to develop a three-dimensional classification of endodontic periapical lesions in digital and physical form.
Aribia, Karima. „Gestion et exploitation d'une base de données expérimentales pour le renforcement en cisaillement à l'aide de MCA“. Mémoire, École de technologie supérieure, 2007. http://espace.etsmtl.ca/547/1/ARIBIA_Karima.pdf.
Der volle Inhalt der QuelleGuenneau, Flavien. „Relaxation magnétique nucléaire de systèmes couplés et exploitation des données unidimensionnelles au moyen d'un logiciel convivial (RMNYW)“. Nancy 1, 1999. http://docnum.univ-lorraine.fr/public/SCD_T_1999_0093_GUENNEAU.pdf.
Der volle Inhalt der QuelleEtchebès, Pascale. „Exploitation automatique d'une base de données d'images à partir des informations textuelles jointes sur des bases cognitives“. Besançon, 2003. http://www.theses.fr/2003BESA1019.
Der volle Inhalt der QuelleOur project is inspired by our professional experience in setting up industrial image databases. We carried out the computerization of part of the photographic collection of the Chantiers de l'Atlantique in Saint-Nazaire (France), and from this professional experience we drew up our project. Industrial photography shows the limits of the usual descriptive approach, which consists in listing words as if a word and its written mark were linked to a stable, constituted referent. Our conception of NLP technologies leads us beyond words, terms and language. Our approach is essentially conceptual. The concept refers to the construction of the reference that goes with the word: the object, the action, the technology, the feeling, and this at a given period (the universe of shipbuilding may have evolved considerably, as well as the sense of the words). Our thesis consists in proposing the principles for setting up an industrial ontology, with its areas, limits, activities, agents and products, taking into account the fact that we are working on a medium which is the image, and which justifies a break with the software and information solutions proposed up to now, which only took the text into account. The work is only faintly lexicological or terminological. This is not to say that linguistic data processing is dismissed from our approach; the problem will be examined in depth when we turn to natural language interfaces. Our thesis is composed of five parts. It is illustrated by photographs from the Chantiers de l'Atlantique, chosen for their explanatory power.
Gimenez, Rollin. „Exploitation de données optiques multimodales pour la cartographie des espèces végétales suivant leur sensibilité aux impacts anthropiques“. Electronic Thesis or Diss., Toulouse, ISAE, 2023. http://www.theses.fr/2023ESAE0030.
Der volle Inhalt der QuelleAnthropogenic impacts on vegetated soils are difficult to characterize using optical remote sensing devices. However, these impacts can lead to serious environmental consequences. Their indirect detection is made possible by the induced alterations to the biocenosis and plant physiology, which result in optical property changes at plant and canopy levels. The objective of this thesis is to map plant species based on their sensitivity to anthropogenic impacts using multimodal optical remote sensing data. Various anthropogenic impacts associated with past industrial activities are considered (presence of hydrocarbons in the soil, polymetallic chemical contamination, soil reworking and compaction, etc.) in a complex plant context (heterogeneous distribution of multiple species from different strata). Spectral, temporal and/or morphological information is used to identify genera and species and characterise their health status, in order to define and map their sensitivity to the various anthropogenic impacts. Hyperspectral airborne images, Sentinel-2 time series and digital elevation models are then used independently or in combination. The proposed scientific approach consists of three stages. The first involves mapping anthropogenic impacts at site level by combining optical remote sensing data with data supplied by the site operator (soil analyses, activity maps, etc.). The second seeks to develop a vegetation mapping method, using optical remote sensing data, suited to complex contexts such as industrial sites. Finally, in the third stage, the variations in biodiversity and functional response traits derived from airborne hyperspectral images and digital elevation models are analysed in relation to the impact map. The species identified as invasive, those related to agricultural and forestry practices, and the biodiversity measures provide information about biological impacts.
Vegetation strata mapping and the characterisation of tree height, linked to secondary succession, are used to detect physical impacts (soil reworking, excavations). Finally, the consequences of induced stress on the spectral signatures of susceptible species allow the identification of chemical impacts. Specifically, in the study context, the spectral signatures of Quercus spp., Alnus glutinosa and grass mixtures vary with soil acidity, while those of Platanus x hispanica and shrub mixtures exhibit differences due to other chemical impacts.
Ouaknine, Arthur. „Deep learning for radar data exploitation of autonomous vehicle“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT007.
Der volle Inhalt der QuelleAutonomous driving requires a detailed understanding of complex driving scenes. The redundancy and complementarity of the vehicle's sensors provide an accurate and robust comprehension of the environment, thereby increasing the level of performance and safety. This thesis focuses on the automotive RADAR, which is a low-cost active sensor measuring properties of surrounding objects, including their relative speed, and has the key advantage of not being impacted by adverse weather conditions. With the rapid progress of deep learning and the availability of public driving datasets, the perception ability of vision-based driving systems (e.g., detection of objects or trajectory prediction) has considerably improved. The RADAR sensor is seldom used for scene understanding due to its poor angular resolution, the size, noise and complexity of RADAR raw data, as well as the lack of available datasets. This thesis proposes an extensive study of RADAR scene understanding, from the construction of an annotated dataset to the conception of adapted deep learning architectures. First, this thesis details approaches to tackle the current lack of data. A simple simulation as well as generative methods for creating annotated data will be presented. It will also describe the CARRADA dataset, composed of synchronised camera and RADAR data with a semi-automatic method generating annotations on the RADAR representations. This thesis will then present a proposed set of deep learning architectures with their associated loss functions for RADAR semantic segmentation. The proposed architecture with the best results outperforms alternative models, derived either from the semantic segmentation of natural images or from RADAR scene understanding, while requiring significantly fewer parameters.
It will also introduce a method to open up research into the fusion of LiDAR and RADAR sensors for scene understanding. Finally, this thesis presents a collaborative contribution, the RADIal dataset, with synchronised High-Definition (HD) RADAR, LiDAR and camera. A deep learning architecture is also proposed to estimate the RADAR signal processing pipeline while simultaneously performing multi-task learning for object detection and free driving space segmentation.
El, Haddadi Anass. „Fouille multidimensionnelle sur les données textuelles visant à extraire les réseaux sociaux et sémantiques pour leur exploitation via la téléphonie mobile“. Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1378/.
Der volle Inhalt der QuelleCompetition is a fundamental concept of the liberal economy tradition that requires companies to resort to Competitive Intelligence (CI) in order to be advantageously positioned on the market, or simply to survive. Nevertheless, it is well known that it is not the strongest of organizations that survives, nor the most intelligent, but rather the one most adaptable to change, the dominant factor in society today. Therefore, companies are required to remain constantly on the alert in order to detect any change and devise appropriate responses in real time. However, for a successful watch, we should not merely monitor opportunities but, above all, anticipate risks. External risk factors have never been so numerous: extremely dynamic and unpredictable markets, new entrants, mergers and acquisitions, sharp price reductions, rapid changes in consumption patterns and values, fragility of brands and their reputation. To face all these challenges, our research consists in proposing a Competitive Intelligence System (CIS) designed to provide online services. Through descriptive and exploratory statistical methods, Xplor EveryWhere displays, in a very short time, new strategic knowledge such as: the profile of the actors, their reputation, their relationships, their sites of action, their mobility, emerging issues and concepts, terminology, promising fields, etc. The need for security in Xplor EveryWhere arises from the strategic nature of the information conveyed, which has quite substantial value. Such security should not be considered as an additional option that a CIS can provide merely in order to be distinguished from others, especially as the leak of this information is not the result of inherent weaknesses in corporate computer systems, but above all an organizational issue. With Xplor EveryWhere we completed the reporting service, especially the mobility aspect.
Lastly, with this system it is possible to view updated information in real time, since we have access to our strategic database server, itself fed daily by watchmen, who can enter information at trade shows, during customer visits or after meetings.
Carminati, Federico. „Conception, réalisation et exploitation du traitement de données de l’expérience ALICE pour la simulation, la reconstruction et l’analyse“. Nantes, 2013. http://archive.bu.univ-nantes.fr/pollux/show.action?id=0ed58585-b62e-40b5-8849-710d1e15c6c2.
Der volle Inhalt der QuelleThe ALICE (A Large Ion Collider Experiment) collaboration at the CERN (Conseil Européen pour la Recherche Nucléaire) LHC (Large Hadron Collider) facility uses an integrated software framework for the design of the experimental apparatus, the evaluation of its performance and the processing of the experimental data. Federico Carminati designed this framework. It includes the event generators and the algorithms for particle transport describing the details of particle-matter interactions (designed and implemented by Federico Carminati), the reconstruction of particle trajectories and the final physics analysis.
Peloton, Julien. „Data analysis and scientific exploitation of the CMB B-modes experiment, POLARBEAR“. Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCC154.
Der volle Inhalt der QuelleOver the last two decades, cosmology has been transformed from a data-starved to a data-driven, high-precision science. This transformation happened thanks to improved observational techniques, allowing progressively bigger and more powerful data sets to be collected. Studies of the Cosmic Microwave Background (CMB) anisotropies have played, and continue to play, a particularly important and impactful role in this process. The huge data sets produced by recent CMB experiments pose new challenges for the field due to their volume and complexity. Their successful resolution requires combining mathematical, statistical and computational methods, all of which form a keystone of modern CMB data analysis. In this thesis, I describe the data analysis of the first data set produced by one of the most advanced current CMB experiments, POLARBEAR, and the major results it produced. The POLARBEAR experiment is a leading CMB polarization experiment aiming at the detection and characterization of the so-called B-mode signature of CMB polarization. This is one of the most exciting topics in current CMB research, which has only just started yielding new insights into cosmology, in part thanks to the results discussed hereafter. In this thesis I first describe the modern cosmological model, focusing on the physics of the CMB, in particular its polarization properties, and providing an overview of past experiments and results. Subsequently, I present the POLARBEAR instrument, the data analysis of its first-year data set and the scientific results drawn from it, emphasizing my major contributions to the overall effort.
In the last chapter, in the context of next-generation CMB B-mode experiments, I present a more systematic study of the impact of the so-called E-to-B leakage on the performance forecasts of CMB B-mode experiments, comparing several methods including the pure pseudospectrum method and the minimum-variance quadratic estimator. In particular, I detail how the minimum-variance quadratic estimator can be used, in the case of azimuthally symmetric patches, to estimate parameters efficiently, and I present an efficient implementation based on existing parallel algorithms for computing spherical harmonic transforms.
Nevers, Yannis Alain. „Exploitation de marqueurs évolutifs pour l'étude des relations génotype-phénotype : application aux ciliopathies“. Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAJ090/document.
Der volle Inhalt der QuelleIn the omics era, the study of genotype-phenotype relations requires the integration of a wide variety of data to describe diverse aspects of biological systems. Comparative genomics provides an original perspective, that of evolution, allowing the exploitation of the wide phenotypic diversity of living species. My thesis focused on the design of evolutionary markers that describe genes according to their evolutionary history. First, I built an exhaustive orthology resource, called OrthoInspector 3.0, to extract synthetic evolutionary information from genomic data. I then developed methods to explore these markers in relation to functional or phenotypic data. These methods have been incorporated into the OrthoInspector resource, as well as into the MyGeneFriends social network, and applied to the study of ciliopathies, leading to the identification of 87 new ciliary genes.
Doucet, Antoine. „Extraction, Exploitation and Evaluation of Document-based Knowledge“. Habilitation à diriger des recherches, Université de Caen, 2012. http://tel.archives-ouvertes.fr/tel-01070505.
Der volle Inhalt der QuelleZeyen, Patrick. „La base de données du Deep Sea Drilling Project : exploitation relationnelle et application à l'étude de la sédimentation néogène“. Lyon 1, 1991. https://tel.archives-ouvertes.fr/tel-02019925/document.
Der volle Inhalt der QuelleCordier, Mathilde. „Le recours aux soins dans la démence : la surmédicalisation en question. Exploitation des données de l’échantillon généraliste des bénéficiaires“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS201/document.
Der volle Inhalt der QuellePatients with dementia raise therapeutic challenges, as they constitute a heterogeneous population. As part of this management, the interest of antidementia drugs (cholinesterase inhibitors and memantine) is debated: their clinical efficacy seems questionable and their adverse effects appear to be significant. The 2010 recommendations gave clinicians the choice of whether or not to prescribe these drugs. Since then, questions remain unanswered: 1/ what is the evolution of the prescription rates of these drugs since these recommendations, in other words, how is the clinical expertise of clinicians, one of the pillars of evidence-based medicine, expressed? 2/ what are the factors that remain associated today with prescribing these drugs or not? and 3/ is there over-hospitalization related to their side effects? The question of medical overuse is a central point of our thesis. In this work, we answered these three questions, which constituted our three objectives. We were able to show that clinicians seemed less and less confident about antidementia drugs, with a decrease in their prescription since 2010 and significant consequences in terms of avoided costs. When they continued to be prescribed, these treatments were mainly used in the youngest or healthiest patients. Finally, cholinesterase inhibitors, mainly rivastigmine, increased the risk of hospitalization via cardiac and digestive side effects. Our results argue against the prescription of antidementia drugs, both from the point of view of morbidity and of health expenditure. The question from the patient's point of view remains open.