Doctoral dissertations on the topic "Interrogation de Données de Processus"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 doctoral dissertations for your research on the topic "Interrogation de Données de Processus".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever these details are available in the work's metadata.
Browse doctoral dissertations from many different fields and compile a suitable bibliography.
Kolmayer, Elisabeth. "Contribution à l'analyse des processus cognitifs mis en jeu dans l'interrogation d'une base de données documentaires". Paris 5, 1997. http://www.theses.fr/1997PA05H051.
During the information retrieval process, two topic representations have to interact: one comes from the end-user, the other from the information system. We focus on this interaction. Cognitive psychology provides tools to analyse the user's domain representation, as research on categorization and on expert and novice knowledge has shown. An experimental approach with nurses and nursing auxiliaries analyses the effects of two expertise factors: experience and training level. We examine the domain representation in the information system through indexing (indexing with descriptors and subject headings). We point out agreements and differences between knowledge organization in thesauri, in subject heading indexes and in the user's mind. When interacting with an actual retrieval device, however, topic expertise plays a very small role; the important factor is expertise with the device itself. We then focus on the modelling of the information retrieval situation: the problem the end-user faces is not to find the "best match" between query and index terms, but to develop a representation of his information need compatible with the constraints of the information system. An information retrieval task is then conceived as a design problem to which the concepts of plans (declarative and procedural) can be applied. An experiment tests the efficiency of such a modelling, analysing the seeking process of end-users and librarians interacting with an online catalog. Some ergonomic consequences are considered.
Wable, Thierry. "Processus interactifs dans le dialogue Homme/Machine analyse des images identitaires, de la tâche et des dysfonctionnements lors d'une interrogation de base de données bibliographiques". Rouen, 1998. http://www.theses.fr/1998ROUEL288.
This linguistic study is a contribution to research on man/machine interaction in the language field. Our work relies on the analysis of experimental simulated dialogues involving a user and a machine during a bibliographic database inquiry. The human asks an interface designed to help him get the desired information; that is the general task of the machine. This task is in practice a set of subtasks contributing to the main task and acting as a driving force. But the speaker occasionally escapes from this initial communication target and moves to a subdialogue which may generate dysfunctions. We have produced a survey and a catalogue of these dysfunctions together with processes for correction and avoidance. In this way we demonstrate that the dysfunctions can contribute to the main task of the system, and hence can make the interaction successful. We also define the specific characteristics of this dialogue and explain how sense, identifying images (especially the construction and the representation of the interlocutor) and the management of discursive forms all contribute to the main objective. The results of this contribution should allow a better understanding of the interactive processes of man/machine dialogue in order to improve the interface and optimize its tasks. This improvement requires a more efficient way to take into account current problems in all communication processes.
Kobeissi, Meriana. "A conversational AI Framework for Cognitive Process Analysis". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS025.
Business processes (BP) are the foundational pillars of organizations, encapsulating a range of structured activities aimed at fulfilling distinct organizational objectives. These processes, characterized by a plethora of tasks, interactions, and workflows, offer a structured methodology for overseeing crucial operations across diverse sectors. A pivotal insight for organizations has been the discernment of the profound value inherent in the data produced during these processes. Process analysis, a specialized discipline, ventures into these data logs, facilitating a deeper comprehension and enhancement of BPs. This analysis can be categorized into two perspectives: instance-level, which focuses on individual process executions, and process-level, which examines the overarching process. However, applying process analysis in practice poses challenges for users, involving the need to access data, navigate low-level APIs, and employ tool-dependent methods. Real-world application often encounters complexities and user-centric obstacles. Specifically, instance-level analysis demands that users access stored process execution data, a task that can be intricate for business professionals because it requires mastering complex query languages like SQL and Cypher. Conversely, process-level analysis of process data involves the utilization of methods and algorithms that harness process execution data extracted from information systems. These methodologies collectively fall under the umbrella of process mining techniques. The application of process mining confronts analysts with the intricate task of method selection, which involves sifting through unstructured method descriptions.
Additionally, the application of process mining methods depends on specific tools and necessitates a certain level of technical expertise. To address these challenges, this thesis introduces AI-driven solutions, with a focus on integrating cognitive capabilities into process analysis to facilitate analysis tasks at both the instance level and the process level for all users. The primary objectives are twofold. Firstly, to enhance the accessibility of process execution data by creating an interface capable of automatically constructing the corresponding database query from natural language. This is complemented by proposing a suitable storage technique and query language that the interface should be designed around; in this regard, we introduce a graph metamodel based on the Labeled Property Graph (LPG) for efficient data storage. Secondly, to streamline the discovery and accessibility of process mining techniques, we present a service-oriented architecture. This architecture comprises three core components: an LPG metamodel detailing process mining methods, a service-oriented REST API design tailored for these methods, and a component adept at matching user requirements expressed in natural language with appropriate services. For the validation of our graph metamodel, we utilized two publicly accessible process datasets available in both CSV and OCEL formats. These datasets were instrumental in evaluating the performance of our NL querying pipeline. We gathered NL queries from external users and produced additional ones through paraphrasing tools. Our service-oriented framework was assessed using NL queries specifically designed for process mining service descriptions. Additionally, we carried out a use-case study with external participants to evaluate the user experience and gather feedback. We publicly provide the evaluation results to ensure reproducibility in the studied area.
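The first objective above, turning a natural-language question into a database query over process data stored as a labeled property graph, can be suggested with a deliberately naive template matcher. The graph schema, the patterns and the generated Cypher-like strings below are invented for illustration; they are not the thesis's actual pipeline:

```python
import re

# Toy NL-to-query templates over an event log stored as a labeled property
# graph (assumed schema: (:Case)-[:HAS_EVENT]->(:Event)); "$1" marks where the
# first captured group is spliced into the generated query.
TEMPLATES = [
    (re.compile(r"how many cases", re.I),
     "MATCH (c:Case) RETURN count(c)"),
    (re.compile(r"events (?:of|for) case (\w+)", re.I),
     "MATCH (c:Case {id: '$1'})-[:HAS_EVENT]->(e:Event) RETURN e"),
]

def nl_to_query(question):
    """Return the query of the first matching template, with groups substituted."""
    for pattern, query in TEMPLATES:
        m = pattern.search(question)
        if m:
            for i, g in enumerate(m.groups(), start=1):
                query = query.replace(f"${i}", g)
            return query
    return None  # a real system would fall back to a learned translation model
```

A real interface replaces the hand-written patterns with a trained semantic parser; the sketch only shows the shape of the NL-to-query mapping.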
Peng, Botao. "Parallel data series indexing and similarity search on modern hardware". Electronic Thesis or Diss., Université Paris Cité, 2020. http://www.theses.fr/2020UNIP5193.
Data series similarity search is a core operation for several data series analysis applications across many different domains. However, state-of-the-art techniques fail to deliver the time performance required for interactive exploration or analysis of large data series collections. In this Ph.D. work, we present the first data series indexing solutions that are designed to inherently take advantage of modern hardware, in order to accelerate similarity search processing times for both on-disk and in-memory data. In particular, we develop novel algorithms for multi-core, multi-socket, and Single Instruction Multiple Data (SIMD) architectures, as well as algorithms for Graphics Processing Units (GPUs). Our experiments on a variety of synthetic and real data demonstrate that our approaches are up to orders of magnitude faster than the state-of-the-art solutions for both disk-resident and in-memory data. More specifically, our on-disk solution can answer exact similarity search queries on 100GB datasets in ~15 seconds, and our in-memory solution in as little as 36 milliseconds, which enables for the first time real-time, interactive data exploration on very large data series collections.
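As a point of reference for what "exact similarity search" means here, the following is a minimal sketch of exact 1-nearest-neighbour search over data series with early abandoning, a standard sequential baseline; the tree indexes and SIMD/multi-core/GPU parallelism contributed by the thesis go far beyond this:

```python
# Exact 1-NN search over data series using squared Euclidean distance with
# early abandoning: stop accumulating a distance as soon as it exceeds the
# best distance found so far.
def squared_dist_early_abandon(a, b, best_so_far):
    total = 0.0
    for x, y in zip(a, b):
        total += (x - y) ** 2
        if total >= best_so_far:      # cannot beat the current best: abandon
            return float("inf")
    return total

def exact_1nn(query, collection):
    """Return (index, squared distance) of the series closest to the query."""
    best, best_d = None, float("inf")
    for idx, series in enumerate(collection):
        d = squared_dist_early_abandon(series, query, best_d)
        if d < best_d:
            best, best_d = idx, d
    return best, best_d
```

Even this simple pruning avoids most of the arithmetic on far-away series; index structures avoid reading them at all.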
Ykhlef, Mourad. "Interrogation des données semistructurées". Bordeaux 1, 1999. http://www.theses.fr/1999BOR1A640.
Ykhlef, Mourad. "Interrogation des données semistructurées". Bordeaux 1, 1999. http://www.theses.fr/1999BOR10670.
Amann, Bernd. "Interrogation d'hypertextes". Paris, CNAM, 1994. http://www.theses.fr/1994CNAM0188.
Pełny tekst źródłaSouihli, Asma. "Interrogation des bases de données XML probabilistes". Thesis, Paris, ENST, 2012. http://www.theses.fr/2012ENST0046/document.
Probabilistic XML is a probabilistic model for uncertain tree-structured data, with applications to data integration, information extraction, and uncertain version control. We explore in this dissertation efficient algorithms for evaluating tree-pattern queries with joins over probabilistic XML or, more specifically, for approximating the probability of each item of a query result. The approach relies on, first, extracting the query lineage over the probabilistic XML document and, second, looking for an optimal strategy to approximate the probability of the propositional lineage formula. ProApproX is the probabilistic query manager for probabilistic XML presented in this thesis. The system allows users to query uncertain tree-structured data in the form of probabilistic XML documents. It integrates a query engine that searches for an optimal strategy to evaluate the probability of the query lineage. ProApproX relies on a query-optimizer-like approach: exploring different evaluation plans for different parts of the formula and predicting the cost of each plan, using a cost model for the various evaluation algorithms. We demonstrate the efficiency of this approach on datasets used in a number of the most popular previous probabilistic XML querying works, as well as on synthetic data. An early version of the system was demonstrated at the ACM SIGMOD 2011 conference. First steps towards the new query solution were discussed in an EDBT/ICDT PhD Workshop paper (2011). A fully redesigned version that implements the techniques and studies shared in the present thesis is published as a demonstration at CIKM 2012. Our contributions are also part of an IEEE ICDE
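The core computation described above, evaluating the probability of a propositional lineage formula over independent Boolean events, can be illustrated with two textbook strategies: exact world enumeration (exponential, hence only viable for small lineages) and Monte Carlo sampling. The formula and probabilities are toy assumptions; ProApproX's cost-based choice among evaluation algorithms is not reproduced here:

```python
import itertools
import random

def exact_probability(events, probs, formula):
    """Enumerate all 2^n worlds and sum the probability of those satisfying the formula."""
    total = 0.0
    for values in itertools.product([False, True], repeat=len(events)):
        world = dict(zip(events, values))
        if formula(world):
            p = 1.0
            for e in events:
                p *= probs[e] if world[e] else 1.0 - probs[e]
            total += p
    return total

def monte_carlo_probability(events, probs, formula, n=20000, seed=0):
    """Estimate the same probability by sampling n random worlds."""
    rng = random.Random(seed)
    hits = sum(
        formula({e: rng.random() < probs[e] for e in events}) for _ in range(n)
    )
    return hits / n
```

The trade-off the thesis optimizes is visible even here: exact evaluation is precise but exponential in the number of events, while sampling scales but only approximates.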
Gabsi, Nesrine. "Extension et interrogation de résumés de flux de données". Phd thesis, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00613122.
In the last few years, a new environment has emerged in which data have to be collected and processed instantly upon arrival. To handle the large volume of data associated with this environment, new data processing models and techniques have to be set up; they are referred to as data stream management. Data streams are usually continuous and voluminous, and cannot be registered integrally as persistent data. Many research works have handled this issue, and new systems called DSMSs (Data Stream Management Systems) have appeared. A DSMS evaluates continuous queries on a stream or a window (a finite subset of a stream). These queries have to be specified before the stream's arrival. Nevertheless, for some applications, some data could be required after their expiration from the DSMS's memory. In this case, the system cannot treat the queries, as such data are definitely lost. To handle this issue, it is essential to keep a summary of the data stream. Many summary algorithms have been developed. The selection of a summarizing method depends on the kind of data and the associated issue. In this thesis, we are first interested in the elaboration of a generic summary structure, while striking a compromise between summary construction time and summary quality. We introduce a new summary approach which is more efficient for querying very old data. Then, we focus on the querying methods for these summaries. Our objective is to integrate the structure of generic summaries into the architecture of existing DSMSs, thereby extending the range of possible queries: processing queries on old stream data (expired data) becomes possible, as well as queries on new stream data. To this end, we introduce two approaches, which differ in the role played by the summary module when the query is evaluated.
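To give a flavour of what a stream summary can be, reservoir sampling is a classical building block: it maintains a uniform fixed-size sample of an unbounded stream, so approximate answers about expired data remain possible. This is a textbook technique, not the specific generic summary structure proposed in the thesis:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)       # fill the reservoir with the first k items
        else:
            j = rng.randint(0, i)        # item i survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Each arriving element is processed in O(1) with O(k) memory, which is the kind of per-item cost a DSMS summary module must respect.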
Ould Yahia, Sabiha. "Interrogation multi-critères d'une base de données spatio-temporelles". Troyes, 2005. http://www.theses.fr/2005TROY0006.
The study of human behavior in driving situations is of primary importance for the improvement of driver safety. This study is complex because of the numerous situations in which the driver may be involved. The objective of the CASSICE project (Symbolic Characterization of Driving Situations) is to elaborate a tool to simplify the analysis of the driver's behavior. We mainly take an interest here in the indexing and querying of a multimedia database including the numerical data and the video sequences relating to a given type of driving situation. We put the emphasis on the queries to this database, which are often complex because they are formulated according to criteria depending on time and space, and because they use terms of natural language.
Lemoine, Frédéric. "Intégration, interrogation et analyse de données de génomique comparative". Paris 11, 2008. http://www.theses.fr/2008PA112180.
Our work takes place within the Microbiogenomics project, which aims at building a prokaryotic genomic data warehouse. This data warehouse gathers numerous, currently dispersed data in order to improve the functional annotation of bacterial genomes. Within this project, our work has several facets. The first one focuses mainly on the analysis of biological data. We are particularly interested in the conservation of gene order during the evolution of prokaryotic genomes. To do so, we designed a computational pipeline aiming at detecting the areas whose gene order is conserved. We then studied the relative evolution of the proteins coded by genes located in conserved areas, in comparison with the other proteins. These data were made available through the SynteView synteny visualization tool (http://www.synteview.u-psud.fr). Moreover, to broaden the analysis of these data, we need to cross them with other kinds of data, such as pathway data. These data, often dispersed and heterogeneous, are difficult to query. That is why, in a second step, we were interested in querying the Microbiogenomics data warehouse. We designed an architecture and algorithms to query the data warehouse while keeping the different points of view given by the sources. These algorithms were implemented in GenoQuery (http://www.lri.fr/~lemoine/GenoQuery), a prototype querying module adapted to a genomic data warehouse.
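The notion of conserved gene order (synteny) behind the pipeline can be illustrated with a toy detector of maximal runs of genes that appear contiguously and in the same order in two genomes, each given as a list of gene names. Real synteny detection must additionally handle orthology mapping, strands and gaps, which this sketch ignores:

```python
def conserved_segments(genome_a, genome_b, min_len=2):
    """Return maximal runs of genes of genome_a that are consecutive in genome_b."""
    pos_b = {g: i for i, g in enumerate(genome_b)}
    segments, run = [], []
    for g in genome_a:
        if run and g in pos_b and pos_b[g] == pos_b[run[-1]] + 1:
            run.append(g)                     # extends the current conserved run
        else:
            if len(run) >= min_len:
                segments.append(run)          # flush a run that is long enough
            run = [g] if g in pos_b else []   # restart (only genes shared by both genomes)
    if len(run) >= min_len:
        segments.append(run)
    return segments
```

An insertion in one genome (gene "x" below) splits one conserved area into two, which is exactly the kind of rearrangement signal such pipelines quantify.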
Thomopoulos, Rallou. "Représentation et interrogation élargie de données imprécises et faiblement structurées". Paris, Institut national d'agronomie de Paris Grignon, 2003. http://www.theses.fr/2003INAP0018.
This work is part of a project applied to predictive microbiology, built on a database and its querying system. The data used in the project are weakly structured and may be imprecise, and exact answers cannot be provided to every query, so a flexible querying system is necessary. We use the conceptual graph model in order to take weakly structured data into account, and fuzzy set theory in order to represent imprecise data and fuzzy queries. The purpose of this work is to provide a combination of these two formalisms.
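The fuzzy-set side of such flexible querying can be illustrated with trapezoidal membership functions and a conjunctive matching degree: instead of a yes/no answer, each record matches a query to a degree in [0, 1]. The attribute names and parameter values below are invented, and the conceptual-graph side of the thesis is not covered:

```python
def trapezoid(a, b, c, d):
    """Build a trapezoidal membership function: 0 outside (a, d), 1 on [b, c]."""
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)   # rising edge
        return (d - x) / (d - c)       # falling edge
    return mu

def match_degree(record, criteria):
    """Conjunctive fuzzy query: overall degree = min over criteria (Zadeh's AND)."""
    return min(mu(record[field]) for field, mu in criteria.items())
```

Records can then be ranked by decreasing degree, so near-misses are returned instead of being silently discarded.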
Decleir, Cyril. "Indexation et interrogation de séquences audiovisuelles". Lyon, INSA, 1999. http://www.theses.fr/1999ISAL0109.
A large amount of information is conveyed by video data. There exists nowadays a huge quantity of video information, and the problem of retrieving a specific video item from this set is an important one. This work is devoted to indexing and querying video data using a database approach. We define a flexible object-oriented data model, which allows building video descriptions according to the user's needs. Querying this model is supported by a rule-based constraint query language. The constraint aspect of this language makes it easy to manage the temporal aspects of video data. This work is done in the framework of the Sésame project, whose goal is to propose a global solution (hardware, software and theoretical aspects) to the video indexing problem.
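The temporal side of such a constraint query language can be suggested with a toy interval query over annotated video segments; the annotation format (start frame, end frame, description) is an assumption made for illustration, not the thesis's data model:

```python
def overlaps(seg, lo, hi):
    """True if the segment's [start, end] interval intersects the window [lo, hi]."""
    start, end, _ = seg
    return start <= hi and lo <= end

def query(annotations, lo, hi, keyword):
    """Descriptions of segments that overlap the window and mention the keyword."""
    return [
        desc for (start, end, desc) in annotations
        if overlaps((start, end, desc), lo, hi) and keyword in desc
    ]
```

A constraint language generalizes this: interval relations become constraints that the engine solves rather than hand-coded predicates.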
Touzet, David. "Interrogation continue des systèmes d'information de proximité". Rennes 1, 2004. http://www.theses.fr/2004REN10007.
Dubois, Jean-Christophe. "Vers une interrogation en langage naturel d'une base de données image". Nancy 1, 1998. http://www.theses.fr/1998NAN10044.
Ouksili, Hanane. "Exploration et interrogation de données RDF intégrant de la connaissance métier". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLV069.
An increasing number of datasets is published on the Web, expressed in languages proposed by the W3C to describe Web data, such as RDF, RDF(S) and OWL. The Web has become an unprecedented source of information available to users and applications, but the meaningful usage of this information source is still a challenge. Querying these data sources requires the knowledge of a formal query language such as SPARQL, but it mainly suffers from the lack of knowledge about the source itself, which is required in order to target the resources and properties relevant for the specific needs of the application. The work described in this thesis addresses the exploration of RDF data sources. This exploration is done in two complementary ways: discovering the themes or topics representing the content of the data source, and providing support for an alternative way of querying the data sources by using keywords instead of a query formulated in SPARQL. The proposed exploration approach combines two complementary strategies: thematic-based exploration and keyword search. Theme discovery from an RDF dataset consists in identifying a set of sub-graphs, not necessarily disjoint, such that each one represents a set of semantically related resources forming a theme from the user's point of view. These themes can be used to enable a thematic exploration of the data source, where users can target the relevant theme and limit their exploration to the resources composing it. Keyword search is a simple and intuitive way of querying data sources. In the case of RDF datasets, this search raises several problems, such as indexing graph elements, identifying the relevant graph fragments for a specific query, aggregating these relevant fragments to build the query results, and ranking these results.
In our work, we address these different problems and propose an approach which takes a keyword query as input and provides a list of sub-graphs, each one representing a candidate result for the query. These sub-graphs are ordered according to their relevance to the query. For both keyword search and theme identification in RDF data sources, we have taken into account some external knowledge in order to capture the users' needs, or to bridge the gap between the concepts invoked in a query and those of the data source. This external knowledge could be domain knowledge allowing the user's need expressed by a query, or the definition of themes, to be refined. In our work, we have proposed a formalization of this external knowledge, introducing the notion of pattern to this end. These patterns represent equivalences between properties and paths in the dataset. They are evaluated and integrated into the exploration process to improve the quality of the result.
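A bare-bones version of the fragment-identification step of keyword search over RDF can be sketched by scoring whole triples by keyword hits. The actual approach aggregates matching fragments into connected sub-graphs and integrates pattern knowledge, none of which this sketch attempts:

```python
def keyword_search(triples, keywords, top=3):
    """Score (s, p, o) string triples by the number of query keywords they mention."""
    kws = [k.lower() for k in keywords]
    scored = []
    for t in triples:
        text = " ".join(t).lower()
        score = sum(1 for k in kws if k in text)
        if score:
            scored.append((score, t))
    # highest score first; ties broken deterministically by triple content
    scored.sort(key=lambda p: (-p[0], p[1]))
    return [t for _, t in scored[:top]]
```

Even this crude scorer shows why ranking matters: several fragments partially match, and only aggregation and ordering turn them into usable answers.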
Bélières, Bruno. "Vista : un langage métaphorique et visuel pour l'interrogation de bases de données". Tours, 1997. http://www.theses.fr/1997TOUR4019.
Ghazal, Moultazem. "Contribution à la gestion des données géographiques : Modélisation et interrogation par croquis". Phd thesis, Université Paul Sabatier - Toulouse III, 2010. http://tel.archives-ouvertes.fr/tel-00504944.
Abdessalem, Talel. "Approche des versions de base de données : représentation et interrogation des versions". Paris 9, 1997. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1997PA090024.
Zneika, Mussab. "Interrogation du web sémantique à l'aide de résumés de graphes de données". Thesis, Cergy-Pontoise, 2019. http://www.theses.fr/2019CERG1010.
The amount of available RDF data increases fast both in size and complexity, making RDF Knowledge Bases (KBs) with millions or even billions of triples commonplace: more than 1000 datasets are now published as part of the Linked Open Data (LOD) cloud, which contains more than 62 billion RDF triples, forming big and complex RDF data graphs. This explosion of the size, complexity and number of available RDF KBs and the emergence of Linked Datasets have made querying, exploring, visualizing, and understanding the data in these KBs difficult both from a human (when trying to visualize) and a machine (when trying to query or compute) perspective. To tackle this problem, we propose a method for summarizing large RDF KBs based on representing the RDF graph using the (best) top-k approximate RDF graph patterns. The method, named SemSum+, extracts the meaningful/descriptive information from RDF KBs and produces a succinct overview of them. It extracts from the RDF graph an RDF schema that describes the actual contents of the KB, something that has various advantages even compared to an existing schema, which might be only partially used by the data in the KB. While computing the approximate RDF graph patterns, we also add information on the number of instances each pattern represents. So, when we query the RDF summary graph, we can easily identify whether the necessary information is present and, if it is present in significant numbers, whether it should be included in a federated query result. The method we propose does not require the presence of the initial schema of the KB and works equally well when there is no schema information at all (something realistic with modern KBs that are constructed either ad hoc or by merging fragments of other existing KBs).
Additionally, the proposed method works equally well with homogeneous (having the same structure) and heterogeneous (having different structures, possibly the result of data described under different schemas/ontologies) RDF graphs. Given that RDF graphs can be large and complex, methods that need to compute the summary by fitting the whole graph in the memory of a (however large) machine will not scale. In order to overcome this problem, we propose, as part of this thesis, a parallel framework that provides a scalable parallel version of our method, allowing us to compute the summaries of any RDF graph regardless of size. We generalized this framework so that it can be used by any approximate pattern mining algorithm that needs parallelization. Working on this problem also introduced us to the issue of measuring the quality of the produced summaries. Given that various algorithms exist in the literature for summarizing RDF graphs, we need to understand which one is better suited for a specific task or a specific RDF KB. The literature lacks widely accepted evaluation criteria and extensive empirical evaluations, hence the necessity of a method to compare and evaluate the quality of the produced summaries. So, in this thesis, we provide a comprehensive Quality Framework for RDF Graph Summarization to cover this gap. The framework allows a better, deeper and more complete understanding of the quality of the different summaries and facilitates their comparison. It is independent of the way RDF summarization algorithms work and makes no assumptions on the type or structure of either the input or the final results. We provide a set of metrics that help us understand not only whether a summary is valid, but also how it compares to another in terms of the specified quality characteristic(s).
The framework has the ability, which was experimentally validated, to capture subtle differences among summaries and to produce metrics that reflect them; it was used to provide an extensive experimental evaluation and comparison of our method.
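In the spirit of such a quality framework, one elementary metric can be sketched: comparing the set of class/property patterns reported by a summary against a reference description of the KB via precision, recall and F1. The real framework covers much richer characteristics (instance coverage, connectivity, and so on); this only conveys the flavour:

```python
def summary_quality(summary_patterns, reference_patterns):
    """Precision, recall and F1 of a summary's pattern set against a reference set."""
    s, r = set(summary_patterns), set(reference_patterns)
    tp = len(s & r)                                  # patterns the summary got right
    precision = tp / len(s) if s else 0.0
    recall = tp / len(r) if r else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1
```

Two summaries of the same KB can then be compared on equal footing, independently of how each summarization algorithm works internally.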
Soumana, Ibrahim. "Interrogation des sources de données hétérogènes : une approche pour l'analyse des requêtes". Thesis, Besançon, 2014. http://www.theses.fr/2014BESA1015/document.
No English summary available.
Delot, Thierry. "Interrogation d'annuaires étendus : modèles, langage et optimisation". Versailles-St Quentin en Yvelines, 2001. http://www.theses.fr/2001VERS0028.
Chaintreau, Augustin. "Processus d'interaction dans les réseaux de données". Paris 6, 2006. http://www.theses.fr/2006PA066601.
Berasaluce, Sandra. "Fouille de données et acquisition de connaissances à partir de bases de données de réactions chimiques". Nancy 1, 2002. http://docnum.univ-lorraine.fr/public/SCD_T_2002_0266_BERASALUCE.pdf.
Chemical reaction databases, indispensable tools for synthetic chemists, are not free from flaws. In this thesis, we have tried to overcome their limits by adding knowledge which structures the data. This allows us to consider new, efficient ways of querying these databases. The ultimate goal is to design systems offering the functionalities of both databases and knowledge-based systems. In the knowledge acquisition process, we put the emphasis on the modelling of chemical objects. Thus, we were interested in synthetic methods, which we have described in terms of synthetic objectives. Afterwards, we relied on the elaborated model to apply data mining techniques and extract knowledge from chemical reaction databases. The experiments we performed with Resyn Assistant concerned the synthetic methods that construct monocycles and the functional interchanges, and gave trends in good agreement with domain knowledge.
Akbarinia, Reza. "Techniques d'accès aux données dans des systèmes pair-à-pair". Nantes, 2007. http://www.theses.fr/2007NANT2060.
The goal of this thesis is to contribute to the development of new data access techniques for query processing services in P2P environments. We focus on novel techniques for two important kinds of queries: queries with currency guarantees and top-k queries. To improve data availability, most P2P systems rely on data replication, but without currency guarantees. However, for many applications which could take advantage of a P2P system (e.g., agenda management), the ability to get current data is very important. To support these applications, the query processing service must be able to efficiently detect and retrieve a current, i.e., up-to-date, replica in response to a user requesting a data item. The second problem we address is supporting top-k queries, which are very useful in large-scale P2P systems, e.g., they can reduce network traffic significantly. However, efficient execution of these queries is very difficult in P2P systems because of their special characteristics, in particular in DHTs. In this thesis, we first survey the techniques that have been proposed for query processing in P2P systems. We give an overview of the existing P2P networks and compare their properties from the perspective of query processing. Second, we propose a complete solution to the problem of current data retrieval in DHTs. We propose a service called Update Management Service (UMS) which deals with updating replicated data and the efficient retrieval of current replicas based on timestamping. Third, we propose novel solutions for top-k query processing in structured (i.e., DHTs) and unstructured P2P systems. We also propose new algorithms for top-k query processing over sorted lists, which is a general model for top-k queries in many centralized, distributed and P2P systems, especially in super-peer networks. We validated our solutions through a combination of implementation and simulation, and the results show very good performance in terms of communication and response time.
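The "top-k over sorted lists" model mentioned above is the setting of Fagin-style algorithms. A compact sketch of the classical centralized Threshold Algorithm with sum aggregation follows; the thesis adapts this family of algorithms to P2P and super-peer settings, which this sketch does not attempt:

```python
import heapq

def threshold_topk(lists, k):
    """Threshold Algorithm: each list holds (object, score) pairs sorted by
    descending score. Scan lists round-robin by depth, complete each newly
    seen object's aggregate via random access, and stop when the k-th best
    aggregate reaches the threshold (sum of scores at the current depth)."""
    index = [dict(l) for l in lists]                 # random access by object id
    best = {}                                        # object -> aggregate score
    for depth in range(max(len(l) for l in lists)):
        threshold = 0.0
        for l, idx in zip(lists, index):
            if depth < len(l):
                obj, score = l[depth]
                threshold += score
                if obj not in best:
                    best[obj] = sum(d.get(obj, 0.0) for d in index)
        topk = heapq.nlargest(k, best.items(), key=lambda p: p[1])
        if len(topk) == k and topk[-1][1] >= threshold:
            return topk                              # no unseen object can do better
        # otherwise keep scanning deeper into the lists
    return heapq.nlargest(k, best.items(), key=lambda p: p[1])
```

The early-stop condition is what makes the algorithm attractive for distributed settings: most of the lists never need to be read, which translates into saved network traffic.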
Valceschini-Deza, Nathalie. "Accès sémantique aux bases de données textuelles". Nancy 2, 1999. http://www.theses.fr/1999NAN21021.
Fallouh, Fouad. "Données complexes et relation universelle avec inclusions : une aide à la conception et à l'interrogation des bases de données". Lyon 1, 1994. http://www.theses.fr/1994LYO10217.
Andreewsky, Marina. "Construction automatique d'un système de type expert pour l'interrogation de bases de données textuelles". Paris 11, 1989. http://www.theses.fr/1989PA112310.
Diop, Cheikh Talibouya. "Etude et mise en oeuvre des aspects itératifs de l'extraction de règles d'association dans une base de données". Tours, 2003. http://www.theses.fr/2003TOUR4027.
Viallon, Vivian. "Processus empiriques, estimation non paramétrique et données censurées". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2006. http://tel.archives-ouvertes.fr/tel-00119260.
Fankam, Nguemkam Chimène. "OntoDB2 : un système flexible et efficient de base de données à base ontologique pour le web sémantique et les données techniques". Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aéronautique, 2009. https://tel.archives-ouvertes.fr/tel-00452533.
Pełny tekst źródłaThe need to represent the semantics of data in various scientific fields (medicine, geography, engineering, etc…) has resulted in the definition of data referring to ontologies, also called ontology-based data. With the proliferation of domain ontologies, and the increasing volume of data to handle, has emerge the need to define systems capable of managing large size of ontology-based data. Such systems are called Ontology Based DataBase (OBDB) Management Systems. The main limitations of existing OBDB systems are (1) their rigidity, (2) lack of support for non standard data (spatial, temporal, etc…) and (3) their lack of effectiveness to manage large size data. In this thesis, we propose a new OBDB called OntoDB2, allowing (1) the support of ontologies based on different ontology models, (2) the extension of its model to meet specific applications requirements, and (3) an original management of ontology-based data facilitating scalability. Onto DB2 is based on the existence of a kernel ontology, and model-based techniques to enable a flexible extension of this kernel. We propose to represent only canonical data by transforming, under certain conditions, any given non-canonical data to its canonical representation. We propose to use the ontology query language to (1) to access non-canonical data thereby transform and, (2) index and pre-calculate the reasoning operations by using the mechanisms of the underlying DBMS
Jedidi, Anis. "MODÉLISATION GÉNÉRIQUE DE DOCUMENTS MULTIMÉDIA PAR DES MÉTADONNÉES : MÉCANISMES D'ANNOTATION ET D'INTERROGATION". Phd thesis, Université Paul Sabatier - Toulouse III, 2005. http://tel.archives-ouvertes.fr/tel-00424059.
Sandu Popa, Iulian. "Modélisation, interrogation et indexation de données de capteurs à localisation mobile dans un réseau routier". Versailles-St Quentin en Yvelines, 2009. http://www.theses.fr/2009VERS0015.
New technologies such as GPS, sensors and ubiquitous computing are pervading our society. The movement of people and vehicles may be sensed and recorded, thus producing large volumes of mobility data. State-of-the-art database management systems fail to handle such complex data and their processing. This thesis addresses the problem of managing mobile location sensor data. We analyze the limitations of existing work in modeling, querying and indexing moving objects with sensors on road networks. Then, we propose new solutions to deal with these limitations. The main contributions of the thesis are a data model and a query language for moving sensor data, and an access method for in-network trajectories of moving objects. We have implemented these proposals as an extension of a spatio-temporal database management system and evaluated them.
Kouomou, Choupo Anicet. "Améliorer la recherche par similarité dans une grande base d'images fixes par des techniques de fouille de données". Rennes 1, 2006. https://tel.archives-ouvertes.fr/tel-00524418.
Chbeir, Richard. "Modélisation de la description d'images : application au domaine médical". Lyon, INSA, 2001. http://theses.insa-lyon.fr/publication/2001ISAL0065/these.pdf.
The management of images remains a complex task that currently motivates a number of research works. In this context, we address the problem of image retrieval in medical databases, a problem mainly related to the complexity of image description and representation. Three paradigms are proposed in the literature: (1) the context-oriented paradigm, which describes the context of the image without considering its content; (2) the content-oriented paradigm, which considers the physical characteristics of the image such as colors, textures, shapes, etc.; (3) the semantic-oriented paradigm, which tries to provide an interpretation of the image using keywords, legends, etc. In this thesis, we propose an original model able to describe all image characteristics. This model is structured according to two spaces: (1) the external space, containing factual information associated with the image such as the patient name, the acquisition date, the image type, etc.; (2) the internal space, considering the physical characteristics (color, texture, etc.), the spatial characteristics (shape, position), and the semantics (scene, interpretation, etc.) of the image content. The model is elaborated with several levels of granularity that consider characteristics of the whole image and/or of its salient objects. We also provide a referential module and a rules module that maintain coherence between description spaces, as well as a meta-model of relations. The purpose of this meta-model is to provide, in a precise way, the several types of relations between two objects as a function of their common characteristics (shape, color, position, etc.); it contributes to defining a powerful indexing mechanism. In order to validate our approach, we developed a prototype named MIMS (Medical Image Management System) with a user-friendly interface for the storage and retrieval of images based on icons and hypermedia. MIMS is web-accessible at http://mims.myip.org
Voglozin, W. Amenel. "Le résumé linguistique de données structurées comme support pour l'interrogation". Phd thesis, Université de Nantes, 2007. http://tel.archives-ouvertes.fr/tel-00481049.
Cuppens, Frédéric. "Comment fournir des réponses coopératives aux requêtes à une base de données". Toulouse, ENSAE, 1988. http://www.theses.fr/1988ESAE0014.
Bonhomme, Christine. "Un langage visuel dédié à l'interrogation et la manipulation de bases de données spatio-temporelles". Lyon, INSA, 2000. http://www.theses.fr/2000ISAL0049.
This thesis deals with LVIS, a visual query language for spatiotemporal databases and more specifically for Geographical Information Systems (GIS). The language follows a query-by-example philosophy. Visual representations of queries - or visual queries - are incrementally specified by means of two sets of icons: the first one contains the icons that represent the object types of the database to be queried; the second one contains the icons of a minimal set of operators used to express criteria. Visual queries are then translated into an intermediate textual language - named the pivot language - which is independent of the GIS that will finally execute the queries. The language is defined by three independent grammars. A first grammar defines the semantics of the language. The second grammar - or visual grammar - defines its visual semantics. The last grammar defines the keywords of the pivot language and allows queries to be translated into the query language of a GIS chosen by the end user. A prototype has been developed with the aim of testing the interactions of the language with the MapInfo GIS. The two main contributions in the field of visual querying are: (1) the formulation of spatiotemporal queries, handled both by the integration of temporal (Allen relationships) and spatiotemporal (life cycle of objects) operators and by the definition of new visual metaphors to represent such queries; (2) the validation of the icons of the language through psycho-cognitive tests administered to potential users. These tests also aim at evaluating the user-friendliness of the language.
Kezouit, Omar Abdelaziz. "Bases de données relationnelles et analyse de données : conception et réalisation d'un système intégré". Paris 11, 1987. http://www.theses.fr/1987PA112130.
Dib, Saker. "L'interrogation des bases de données relationnelles assistée par le graphe sémantique normalisé". Lyon 1, 1993. http://www.theses.fr/1993LYO10122.
Ben Dhia, Imen. "Gestion des grandes masses de données dans les graphes réels". Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0087.
In the last few years, we have been witnessing a rapid growth of networks in a wide range of applications such as social networking, bio-informatics, the semantic web, and road maps. Most of these networks can be naturally modeled as large graphs. Managing, analyzing, and querying such data has become a very important issue and has inspired extensive interest within the database community. In this thesis, we address the problem of efficiently answering distance queries in very large graphs. We propose EUQLID, an efficient algorithm to answer distance queries on very large directed graphs. This algorithm exploits some interesting properties that real-world graphs exhibit. It is based on an efficient variant of the seminal 2-hop algorithm. We conducted an extensive set of experiments against state-of-the-art algorithms, which show that our approach outperforms existing approaches and that distance queries can be processed within hundreds of milliseconds on very large real-world directed graphs. We also propose an access control model for social networks which can make use of EUQLID to scale on very large graphs. This model allows users to specify fine-grained privacy policies based on their relations with other users in the network. We describe and demonstrate Primates, a prototype which enforces the proposed access control model and allows users to specify their privacy preferences via a user-friendly graphical interface.
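The 2-hop principle behind such distance indexes can be shown with a deliberately naive sketch: every node serves as a hub and gets exact distance labels via BFS, so a distance query reduces to scanning the two labels. (EUQLID's pruned, scalable variant is not reproduced here; this only illustrates the query side.)

```python
from collections import deque

def bfs_dists(adj, src):
    """Unweighted shortest-path distances from src in a directed graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_2hop_labels(adj):
    """Naive 2-hop cover: every node is a hub, so labels are exact.
    l_out[u][h] = dist(u -> h); l_in[v][h] = dist(h -> v)."""
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    radj = {}                       # reversed graph, for incoming distances
    for u, vs in adj.items():
        for v in vs:
            radj.setdefault(v, []).append(u)
    l_out = {u: bfs_dists(adj, u) for u in nodes}
    l_in = {u: bfs_dists(radj, u) for u in nodes}
    return l_out, l_in

def dist_query(l_out, l_in, u, v):
    """dist(u, v) = min over common hubs h of dist(u, h) + dist(h, v)."""
    common = set(l_out[u]) & set(l_in[v])
    return min((l_out[u][h] + l_in[v][h] for h in common), default=None)
```

The whole point of pruned 2-hop schemes is to keep `l_out`/`l_in` small (far below one entry per node) while preserving the correctness of exactly this min-over-common-hubs query.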
Jemaa, Adel. "Processus d’absorption, Innovation & Productivité : Analyse empirique sur données d’entreprises". Caen, 2014. http://www.theses.fr/2014CAEN0504.
The thesis deals with the conceptualization and assessment of the ability of firms to absorb external knowledge, and with the impact of this capability on innovation and productivity. The first contribution of this thesis consists in modeling absorptive capacity as a process integrated into the firm's innovation processes. This absorption process is defined and modeled in an original way through a network of interactions between different activities or capacities: the internal absorption capacity, the capacity to access external knowledge, and the capacity to cooperate. The second contribution is to treat the issue analytically by integrating absorptive capacity and cognitive distance simultaneously through an innovation function. This model makes it possible to distinguish between a theoretical absorptive capacity and an effective absorptive capacity that takes cognitive distance into account. The third contribution consists, first, in measuring the intensity of these different capabilities and in estimating the causal relationships between them, that is, the extent to which the internal absorption capacity determines the ability to access external knowledge, which in turn determines the ability to cooperate. Second, the thesis focuses on the influence of the intensity of cooperation on business performance (innovation output, labor productivity, TFP). Finally, the thesis discusses the impact of the performance of the company on its internal capacity for absorption.
Do, Van-Cuong. "Analyse statistique de processus stochastiques : application sur des données d’orages". Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS526/document.
The work presented in this PhD dissertation concerns the statistical analysis of some particular cases of the Cox process. In a first part, we study the power-law process (PLP). Since the literature on the PLP is abundant, we provide a state of the art for the process. We consider the classical approach and recall some important properties of the maximum likelihood estimators. Then we investigate a Bayesian approach with noninformative priors and conjugate priors, considering different parametrizations and scenarios of prior guesses. That leads us to define a family of distributions that we name the H-B distribution as the natural conjugate priors for the PLP. Bayesian analyses with the conjugate priors are conducted via a simulation study and an application on real data. In a second part, we study the exponential-law process (ELP). We review the maximum likelihood techniques. For Bayesian analysis of the ELP, we define conjugate priors: the modified-Gumbel distribution and the Gamma-modified-Gumbel distribution. We conduct a simulation study to compare maximum likelihood estimates and Bayesian estimates. In the third part, we investigate self-exciting point processes, integrating a power-law covariate model into the intensity of the process. A maximum likelihood procedure for the model is proposed and a Bayesian approach is suggested. Lastly, we present an application on thunderstorm data collected in two French regions. We consider a strategy to define a thunderstorm as a temporal process associated with the charges in a particular location. Some selected thunderstorms are analyzed. We propose a reduced maximum likelihood procedure to estimate the parameters of the Hawkes process. Then we fit some thunderstorms to the power-law covariate self-exciting point process, taking the associated charges into account. In conclusion, we give some perspectives for further work.
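For the PLP, the classical maximum likelihood estimators recalled in such a state of the art have a well-known closed form. A minimal sketch under the usual time-truncated observation scheme, using the common parametrization lambda(t) = (beta/theta) * (t/theta)**(beta - 1) (the dissertation's own notation may differ):

```python
import math

def plp_mle(times, T):
    """Closed-form MLEs for the power-law (Crow-AMSAA) process observed
    on (0, T], with event times 0 < t_1 <= ... <= t_n <= T.
    Intensity: lambda(t) = (beta/theta) * (t/theta)**(beta - 1).
    beta_hat = n / sum(log(T / t_i)); theta_hat = T / n**(1/beta_hat)."""
    n = len(times)
    beta_hat = n / sum(math.log(T / t) for t in times)
    theta_hat = T / n ** (1.0 / beta_hat)
    return beta_hat, theta_hat
```

A `beta_hat` below 1 indicates a decreasing intensity (reliability growth), above 1 an increasing one; this is the standard diagnostic read off the PLP fit.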
Charles, Christophe. "SearchXQ : une méthode d'aide à la navigation fondée sur Ω-means, algorithme de classification non-supervisée. Application sur un corpus juridique français". Paris, ENMP, 2004. http://www.theses.fr/2004ENMP1281.
Bedel, Olivier. "Geolis : un système d'information logique pour l'organisation et la recherche de données géolocalisées". Rennes 1, 2009. ftp://ftp.irisa.fr/techreports/theses/2009/bedel.pdf.
In this thesis, we propose a new paradigm for geographical data organization and retrieval. Our approach is based on Logical Information Systems (LIS) and their underlying theory, Logical Concept Analysis. First, we present a data model centered on the geographical object that allows geographical objects to be gathered in a flexible way. We define spatial logics that make it possible to describe the geometry of geographical objects and their spatial relations (topology and distance), and to organize and retrieve these objects through logical inference. Then, we detail a data exploration process dynamically combining querying, navigation and visualization. It relies on three complementary views over the explored dataset: the query, the selection and the navigation index. Last, we describe a prototype implementing our proposal and discuss two experiments led on real datasets.
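The query/selection/navigation-index triple can be illustrated with a toy sketch in which object descriptions are sets of atomic properties and set inclusion stands in for logical subsumption (a deliberate simplification of Logical Concept Analysis; the object names and properties below are hypothetical):

```python
def select(objects, query):
    """Selection: the objects whose description entails every query atom
    (set inclusion stands in for logical inference)."""
    return {o for o, d in objects.items() if query <= d}

def navigation_index(objects, query):
    """Navigation index: for each property not already in the query, count
    how many currently selected objects carry it, suggesting refinements."""
    counts = {}
    for o in select(objects, query):
        for p in objects[o] - query:
            counts[p] = counts.get(p, 0) + 1
    return counts

# hypothetical geolocated objects described by atomic properties
objects = {
    "townhall": {"building", "in:Rennes"},
    "cafe":     {"building", "in:Rennes", "commerce"},
    "lake":     {"water", "in:Brittany"},
}
```

Querying `{"building"}` selects the town hall and the cafe, and the index then proposes `in:Rennes` or `commerce` as the next navigation steps, which is the interrogation/navigation loop the abstract describes.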
Naoum, Lamiaa. "Un modèle multidimensionnel pour un processus d'analyse en ligne de résumés flous". Nantes, 2006. http://www.theses.fr/2006NANT2101.
Baujoin, Corinne. "Analyse et optimisation d'un système de gestion de bases de données hiérarchique-relationnel : proposition d'une interface d'interrogation". Compiègne, 1985. http://www.theses.fr/1985COMPI209.
Ripoche, Hugues. "Une construction interactive d'interprétations de données : application aux bases de données de séquences génétiques". Montpellier 2, 1995. http://www.theses.fr/1995MON20248.