Theses / dissertations on the topic "Base de données historiques"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the 50 best theses / dissertations for your research on the topic "Base de données historiques".
Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when it is available in the metadata.
Browse theses / dissertations from a wide range of scholarly disciplines and compile an accurate bibliography.
Dumenieu, Bertrand. "Un système d'information géographique pour le suivi d'objets historiques urbains à travers l'espace et le temps". Paris, EHESS, 2015. http://www.theses.fr/2015EHES0157.
Geographic information systems (GIS) are increasingly used for leading historical studies because of their ability to display, store and share geo-historical data. They provide an opportunity for exploring and analyzing spatialized phenomena and the interactions between such phenomena and spatial dynamics. To achieve this goal, GIS have to manage spatio-temporal data describing the transformations of geographical entities. These data are also highly imperfect, since knowledge about the past is only available through imprecise or uncertain historical sources such as maps. To date, no GIS is able to integrate, manage and analyze such imperfect data. In this thesis, we focus on the integration of spatio-temporal data about urban space extracted from historical topographic maps of the city of Paris. We propose a process for creating spatio-temporal graphs from geohistorical vector data extracted from georeferenced maps of the city. After analyzing the maps and measuring their spatial and temporal imperfections, we propose a spatio-temporal model named the geohistorical graph, and a semi-automatic spatio-temporal data matching process able to build such graphs from vector data extracted from old topographic maps. Our method is tested and validated on the street networks of Paris extracted from maps covering the period from the late XVIIIth century to the late XIXth century.
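Dumenieu's semi-automatic matching step lends itself to a compact illustration. The sketch below pairs street segments from two map editions by geometric proximity and records the pairs as the temporal edges of a geohistorical graph; all names, coordinates and the tolerance value are invented for illustration, and the real process in the thesis is far more elaborate (it handles imprecision and uncertainty explicitly).

```python
# Minimal sketch of spatio-temporal matching between two map editions.
# All identifiers and the distance threshold are illustrative assumptions,
# not the actual data structures used in the thesis.
from math import hypot

def midpoint_distance(seg_a, seg_b):
    """Distance between segment midpoints; a crude stand-in for a
    real geometric similarity measure (e.g. Hausdorff distance)."""
    (ax1, ay1), (ax2, ay2) = seg_a
    (bx1, by1), (bx2, by2) = seg_b
    ma = ((ax1 + ax2) / 2, (ay1 + ay2) / 2)
    mb = ((bx1 + bx2) / 2, (by1 + by2) / 2)
    return hypot(ma[0] - mb[0], ma[1] - mb[1])

def match_editions(streets_old, streets_new, tolerance=15.0):
    """Pair each segment of the older edition with the closest segment
    of the newer one, if it lies within `tolerance` (map units)."""
    temporal_edges = []
    for sid_old, geom_old in streets_old.items():
        best_id, best_geom = min(streets_new.items(),
                                 key=lambda kv: midpoint_distance(geom_old, kv[1]))
        if midpoint_distance(geom_old, best_geom) <= tolerance:
            temporal_edges.append((sid_old, best_id))  # same object, two dates
    return temporal_edges

# Toy data: one street persists, one disappears between editions.
streets_1789 = {"rue_A": ((0, 0), (100, 0)), "rue_B": ((0, 50), (80, 60))}
streets_1888 = {"rue_A'": ((2, 1), (101, 2))}
print(match_editions(streets_1789, streets_1888))  # [('rue_A', "rue_A'")]
```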
Casallas-Gutiérrez, Rubby. "Objets historiques et annotations pour les environnements logiciels". Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00004982.
Bui, Quang Ngoc. "Aspects dynamiques et gestion du temps dans les systèmes de bases de données généralisées". Grenoble INPG, 1986. https://theses.hal.science/tel-00321849.
Janbain, Imad. "Apprentissage Profond dans l'Hydrologie de l'Estuaire de la Seine : Reconstruction des Données Historiques et Prévision Hydraulique". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMR033.
This PhD thesis explores the application of deep learning (DL) algorithms to address hydrological challenges in the Seine River basin, France's second longest river. The Seine's intricate hydraulic regime, shaped by variable rainfall, tributaries, human interventions, and tidal fluctuations, presents an ideal scenario for advanced computational techniques. DL models, particularly recurrent neural networks and attention mechanisms, were chosen for their ability to capture long-term temporal dependencies in time series data, outperforming traditional machine learning (ML) models, and for their reduced need for manual calibration compared to physics-based models. The research focuses on developing custom methodologies to enhance DL efficiency and optimize its application to specific challenges within the Seine River basin. Key challenges include addressing complex interactions within the study area, predicting extreme flood events, managing data limitations, and reconstructing missing historical databases crucial for analyzing water level fluctuations in response to variables such as climatic changes. The objective is to uncover insights, bridge data gaps, and enhance flood prediction accuracy, particularly for extreme events, thereby advancing smarter water management solutions. Detailed across four articles, our contributions showcase the effectiveness of DL in various hydrological challenges and applications: filling missing water level data gaps that may span several months in hourly records, projecting water quality parameters over 15 years into the past, analyzing station interactions, and predicting extreme flood events on both large (up to 7 days ahead in daily data) and small scales (up to 24 hours in hourly data). Proposed techniques such as the Mini-Look-Back decomposition approach, automated historical reconstruction strategies, custom loss functions, and extensive feature engineering highlight the versatility and efficacy of DL models in overcoming data limitations and outperforming traditional methods. The research emphasizes interpretability alongside prediction accuracy, providing insights into the complex dynamics of hydrological systems. These findings underscore the potential of DL and the developed methodologies in hydrological applications, while suggesting broader applicability across various fields dealing with time series data.
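Several of the contributions above rest on a supervised windowing of hourly series. The sketch below shows only that generic framing (it is not the thesis's Mini-Look-Back decomposition, whose details are specific to the work): past values predict the next one, and windows overlapping a gap are excluded from training.

```python
# Generic sliding-window setup for sequence-based gap filling, given as a
# hedged illustration of the supervised framing used for water-level
# reconstruction; the look-back length and data are invented.
import numpy as np

def make_windows(series, look_back=24):
    """Turn an hourly series into (X, y) pairs: the `look_back` previous
    values predict the next one. Windows containing NaNs are dropped, so
    a model trains only on fully observed stretches."""
    X, y = [], []
    for t in range(look_back, len(series)):
        window = series[t - look_back:t]
        target = series[t]
        if not (np.isnan(window).any() or np.isnan(target)):
            X.append(window)
            y.append(target)
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
levels = np.sin(np.arange(500) * 2 * np.pi / 24) + rng.normal(0, 0.05, 500)
levels[100:160] = np.nan          # a 60-hour gap to be reconstructed later
X, y = make_windows(levels)
print(X.shape, y.shape)           # training pairs drawn from observed data
```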
Mechkour, Mourad. "Emir2 : un modèle étendu de présentation et de correspondance d'images pour la recherche d'informations : application à un corpus d'images historiques". Université Joseph Fourier (Grenoble), 1995. http://www.theses.fr/1995GRE10201.
Cauvin-Hardy, Clémence. "Optimisation de la gestion du patrimoine culturel et historique à l’aide des méthodologies avancées d’inspection". Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAC057.
The objective of the thesis is to optimize the management of cultural and historical building heritage using the advanced inspection methodologies of the HeritageCare project. The answer to this problem is detailed in five chapters: (1) a state of the art of preventive management methodologies, the HeritageCare project and the identification of the state of degradation; (2) the implementation of the general preventive management methodology, decomposed into 4 steps (anamnesis, diagnosis, therapy and control); (3) the proposal of aggregation models; (4) the results of applying the preventive management approach; and finally (5) the application of the models. These make it possible to prioritize the buildings on the basis of 37 criteria organized into sub-criteria and indicators, support the decision-making of the owners on the basis of a criticality matrix combining the values of the indicators, determine the useful life of the buildings with the deterioration curves, and propose and prioritize maintenance actions based on a purpose-built database. The methodology is illustrated by its application to fourteen buildings representative of French cultural and historical heritage.
Pellen, Nadine. "Hasard, coïncidence, prédestination… et s’il fallait plutôt regarder du côté de nos aïeux ? : analyse démographique et historique des réseaux généalogiques et des structures familiales des patients atteints de mucoviscidose en Bretagne". Versailles-St Quentin en Yvelines, 2012. http://www.theses.fr/2012VERS004S.
The population at the root of this study is composed of patients clinically diagnosed as suffering from cystic fibrosis and having lived in Brittany some time in the course of the past fifty years. Their ancestry was traced back with the help of genealogy centres and brought together more than 250,000 kinspeople. The resulting database, built up from these patients' genetic and genealogical characteristics, was then used to study how the demographic patterns of the past could explain the frequency and geographical distribution of cystic fibrosis as it appears in today's Brittany. The carriers who share the same CF mutation are kin. The mapping of their common ancestors' living places shows a differential distribution, depending on specific CF mutations. These genetic relatednesses enable us to trace back the route followed by the CF gene. At the ancestors' level, we observed marital unions at an early age, particularly for women, and frequent remarriage, particularly for men. As a consequence, married couples were prolific, thus allowing more genetic transmissions. And the geographical stability that prevailed at the time of the wedding does not seem to produce genetic diversity. Moreover, we reckoned that in terms of life expectancy there might be some selective advantage to being a healthy carrier. Inbreeding - a cause frequently referred to as an explanation for the large number of CF-affected patients in Brittany - was in no way a key factor in this study. Only 0.8% were born from first- or second-cousin unions. At the ancestors' level, we must go back to the 7th generation to see a higher proportion of close kinship. Therefore, more often than consanguinity, endogamy tends to carry on a certain degree of genetic homogeneity. CF frequency of occurrence and its Breton distribution today can be accounted for by the presence of a harmful gene combined with high fertility, a relatively settled population with a limited availability of possible partners, and the selective advantage this harmful gene gave healthy carriers. This study helps to increase historical, geographical and social knowledge of CF throughout successive generations. It enables us to have a collective rather than individual approach to the CF mutation. It also has a prospective effect as a tool for the testing centre and the staff.
Spéry, Laurent. "Historique et mise à jour de données géographiques : application au cadastre français". Avignon, 1999. http://www.theses.fr/1999AVIG1020.
Eckert, Nicolas. "Couplage données historiques - modélisation numérique pour la prédétermination des avalanches : une approche bayésienne". Phd thesis, AgroParisTech, 2007. http://pastel.archives-ouvertes.fr/pastel-00003404.
Frau, Roberto. "Utilisation des données historiques dans l'analyse régionale des aléas maritimes extrêmes : la méthode FAB". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1051/document.
The protection of coastal areas against the risk of flooding is necessary to safeguard all types of waterside structures and, in particular, nuclear power plants. The prevention of flooding is guaranteed by coastal protections, commonly built and verified using the concept of the return level of a particular extreme event. Return levels linked to very high return periods (up to 1000 years) are estimated through statistical methods based on the Extreme Value Theory (EVT). These statistical approaches are applied to time series of an observed extreme variable and enable the computation of its occurrence probability. In the past, return levels of extreme coastal events were frequently estimated by applying statistical methods to time series of local observations. Local series of sea levels are typically observed over too short a period (about 50 years for sea levels) to compute reliable estimations for high return periods. For this reason, several approaches are used to enlarge the size of the extreme data samples and to reduce the uncertainties of their estimations. Currently, one of the most widely used methods in coastal engineering is Regional Analysis. Regional Analysis is presented by Weiss (2014) as a valid means to reduce uncertainties in the estimation of extreme events. The main idea of this method is to take advantage of the wide spatial availability of observed data in different locations in order to form homogeneous regions. This enables the estimation of statistical distributions on enlarged regional data samples, by clustering all extreme events that occurred at one or more sites of the region. Recent investigations have highlighted the importance of using past events when estimating extreme events. When historical data are available, they cannot be neglected if reliable estimations of extreme events are to be computed. Historical data are collected from different sources and are identified as data that do not come from time series: in most cases, no information is available about other extreme events occurring before and after a historical observation. This, and the particular nature of each historical record, do not permit their use in a Regional Analysis. A statistical methodology that enables the use of historical data in a regional context is therefore needed in order to estimate reliable return levels and to reduce their associated uncertainties. In this manuscript, a statistical method called FAB is developed, enabling a Regional Analysis to be performed using historical data. This method is formulated for POT (Peaks Over Threshold) data. It is based on a new definition of the duration of the local and regional observation periods (denominated the credible duration) and it is able to take into account all three typical kinds of historical data (exact point, range and lower-limit value). In addition, an approach to identify an optimal sampling threshold is defined in this study, which yields better estimations by using the optimal extreme data sample in the FAB method. The FAB method is a flexible approach that enables the estimation of return levels both in frequentist and Bayesian contexts. An application of this method is carried out for a database of recorded skew surges (systematic data) and for 14 historical skew surges recovered from different sites located on the French, British, Belgian and Spanish coasts of the Atlantic Ocean, the English Channel and the North Sea.
Frequentist and Bayesian estimations of skew surges are computed for each homogeneous region and for every site. Finally, this manuscript explores the issues surrounding the finding and validation of historical data.
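The return-level concept that the FAB method refines can be illustrated with a standard single-site peaks-over-threshold computation. The sketch below is a hedged, frequentist toy example (synthetic data, an arbitrary threshold choice, and the usual GPD return-level formula), not the FAB method itself.

```python
# Minimal peaks-over-threshold (POT) return-level sketch: fit a Generalized
# Pareto Distribution to threshold excesses and invert it. Data, threshold
# and units are illustrative assumptions.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
skew_surges = rng.gumbel(0.3, 0.15, size=50 * 365)   # ~50 years of daily data

u = np.quantile(skew_surges, 0.99)                   # sampling threshold
excesses = skew_surges[skew_surges > u] - u
lam = len(excesses) / 50.0                           # exceedances per year

xi, _, sigma = genpareto.fit(excesses, floc=0)       # GPD shape and scale

def return_level(T):
    """Level exceeded on average once every T years (xi != 0 case)."""
    return u + sigma / xi * ((lam * T) ** xi - 1)

print(f"100-year level:  {return_level(100):.2f} m")
print(f"1000-year level: {return_level(1000):.2f} m")
```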
Dehainsala, Hondjack. "Explicitation de la sémantique dans les bases de données : Base de données à base ontologique et le modèle OntoDB". Phd thesis, Université de Poitiers, 2007. http://tel.archives-ouvertes.fr/tel-00157595.
Texto completo da fonteen termes de classes et de propriétés, ainsi que des relations qui les lient. Avec le développement de
modèles d'ontologies stables dans différents domaines, OWL dans le domaine duWeb sémantique,
PLIB dans le domaine technique, de plus en plus de données (ou de métadonnées) sont décrites par référence à ces ontologies. La taille croissante de telles données rend nécessaire de les gérer au sein de bases de données originales, que nous appelons bases de données à base ontologique (BDBO), et qui possèdent la particularité de représenter, outre les données, les ontologies qui en définissent le sens. Plusieurs architectures de BDBO ont ainsi été proposées au cours des dernières années. Les chémas qu'elles utilisent pour la représentation des données sont soit constitués d'une unique table de triplets de type (sujet, prédicat, objet), soit éclatés en des tables unaires et binaires respectivement pour chaque classe et pour chaque propriété. Si de telles représentations permettent une grande flexibilité dans la structure des données représentées, elles ne sont ni susceptibles de passer à grande échelle lorsque chaque instance est décrite par un nombre significatif de propriétés, ni adaptée à la structure des bases de données usuelles, fondée sur les relations n-aires. C'est ce double inconvénient que vise à résoudre le modèle OntoDB. En introduisant des hypothèses de typages qui semblent acceptables dans beaucoup de domaine d'application, nous proposons une architecture de BDBO constituée de quatre parties : les deux premières parties correspondent à la structure usuelle des bases de données : données reposant sur un schéma logique de données, et méta-base décrivant l'ensemble de la structure de tables.
Les deux autres parties, originales, représentent respectivement les ontologies, et le méta-modèle
d'ontologie au sein d'un méta-schéma réflexif. Des mécanismes d'abstraction et de nomination permettent respectivement d'associer à chaque donnée le concept ontologique qui en définit le sens, et d'accéder aux données à partir des concepts, sans se préoccuper de la représentation des données. Cette architecture permet à la fois de gérer de façon efficace des données de grande taille définies par référence à des ontologies (données à base ontologique), mais aussi d'indexer des bases de données usuelles au niveau connaissance en leur adjoignant les deux parties : ontologie et méta-schéma. Le modèle d'architecture que nous proposons a été validé par le développement d'un prototype opérationnel implanté sur le système PostgreSQL avec le modèle d'ontologie PLIB. Nous présentons également une évaluation comparative de nos propositions aux modèles présentés antérieurement.
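The storage layouts contrasted in this abstract can be summarized in three DDL fragments. The schema and column names below are illustrative assumptions, not OntoDB's actual tables.

```python
# The three layouts discussed above, sketched as SQL DDL strings
# (table and column names are invented for illustration).

TRIPLE_LAYOUT = """
CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT);
-- one row per (instance, property, value): flexible, but a query over
-- n properties of one instance needs n self-joins
"""

BINARY_LAYOUT = """
CREATE TABLE person (id INTEGER);                    -- one unary table per class
CREATE TABLE person_name  (id INTEGER, value TEXT);  -- one binary table
CREATE TABLE person_birth (id INTEGER, value DATE);  -- per property
"""

HORIZONTAL_LAYOUT = """
-- OntoDB-style: under typing assumptions, instances of a class share the
-- same properties, so a classical n-ary relation can be used instead
CREATE TABLE person (id INTEGER, name TEXT, birth DATE);
"""

for layout in (TRIPLE_LAYOUT, BINARY_LAYOUT, HORIZONTAL_LAYOUT):
    print(layout)
```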
Bounar, Boualem. "Génération automatique de programmes sur une base de données en réseau : couplage PROLOG-Base de données en réseau". Lyon 1, 1986. http://www.theses.fr/1986LYO11703.
El, Khalil Firas. "Sécurité de la base de données cadastrales". Thesis, Polynésie française, 2015. http://www.theses.fr/2015POLF0001/document.
Quantity-Based Aggregation (QBA) control is closely related to inference control in databases and has rarely been addressed by the scientific community. Let us consider a set S of N elements. The aggregation of at most k elements out of N is not considered sensitive, while the aggregation of more than k out of N elements is considered sensitive and should be prevented. The role of QBA control is to make sure the number of disclosed elements of S is less than or equal to k, where k
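The k-out-of-N rule stated above is simple enough to sketch. The guard below releases sensitive elements until the threshold k would be exceeded; the class and method names are invented for illustration, and real QBA control must also handle inference across multiple users and queries.

```python
# Minimal sketch of quantity-based aggregation (QBA) control: disclosing up
# to k elements of a sensitive set S is allowed, the (k+1)-th is denied.

class QBAGuard:
    def __init__(self, sensitive_set, k):
        self.sensitive_set = set(sensitive_set)
        self.k = k
        self.disclosed = set()          # elements already released

    def request(self, element):
        """Release `element` unless doing so would exceed the threshold k."""
        if element not in self.sensitive_set or element in self.disclosed:
            return True                 # non-sensitive or already counted
        if len(self.disclosed) >= self.k:
            return False                # aggregation would become sensitive
        self.disclosed.add(element)
        return True

# Cadastral-flavoured toy data: at most 2 of 5 parcels may be disclosed.
parcels = QBAGuard(sensitive_set={"p1", "p2", "p3", "p4", "p5"}, k=2)
print([parcels.request(p) for p in ("p1", "p2", "p3")])  # [True, True, False]
```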
Dehainsala, Hondjack. "Explicitation de la sémantique dans les bases de données : base de données à base ontologique et le modèle OntoDB". Poitiers, 2007. http://www.theses.fr/2007POIT2270.
An Ontology-Based DataBase (OBDB) is a database which stores both data and the ontologies that define the data's meaning. In this thesis, we propose a new architecture model for OBDBs, called OntoDB. This model has two main original features. First, as in usual databases, each stored entity is associated with a logical schema which defines the structure of all its instances; thus, our approach provides for adding ontologies to existing databases for semantic indexation of their content. Second, the meta-model of the ontology model is also represented in the same database, which makes it possible to support the change and evolution of ontology models. The OntoDB model has been validated by a prototype. A performance evaluation of this prototype has shown that our approach can manage very large amounts of data and supports scalability much better than previously proposed approaches.
Lemaire, Pierre. "Base de données informatique : application aux leucémies aiguës". Paris 5, 1997. http://www.theses.fr/1997PA05P039.
Kindombi, Lola Ndontoni. "Communications interactives avec une machine base de données". Paris 11, 1985. http://www.theses.fr/1985PA112379.
Hsu, Lung-Cheng. "Pbase : une base de données déductive en Prolog". Compiègne, 1988. http://www.theses.fr/1988COMPD126.
This thesis describes a relational database system coupling PROLOG II and VAX RMS (Record Management Services). The SQL-like DDL (Data Definition Language) and DML (Data Manipulation Language) are implemented in PROLOG, and the storage and retrieval of fact records is delegated to RMS. The indexed file organization is adopted to provide a satisfactory response time. An interface written in PASCAL enables the communication between PROLOG and RMS. Once the interface is established, access to the database is transparent; no precompilation is required. PBASE can be used as a general DBMS, or it can cooperate with an expert system (our SQL translation module can be considered as such) to manage the voluminous facts stored in secondary memory. It can also cooperate with VAX RDB (Relational DataBase) to constitute a powerful deductive database. Although PBASE works for normalized relations as well as non-normalized ones, a normalization module is included to avoid the problems caused by data redundancy.
Matonda, Sakala Igor. "Le bassin de l'Inkisi à l'époque du royaume Kongo: confrontation des données historiques, archéologiques et linguistiques". Doctoral thesis, Université Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/250381.
Doctorate in History, Art History and Archaeology
Rajab, Ali. "Le Maroc et l'affaire du Sahara occidental : les données historiques, politiques, économiques et juridiques du problème". Lyon 2, 1989. http://www.theses.fr/1989LYO20013.
Fankam, Nguemkam Chimène. "OntoDB2 : un système flexible et efficient de base de données à base ontologique pour le web sémantique et les données techniques". Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aéronautique, 2009. https://tel.archives-ouvertes.fr/tel-00452533.
The need to represent the semantics of data in various scientific fields (medicine, geography, engineering, etc.) has resulted in the definition of data referring to ontologies, also called ontology-based data. With the proliferation of domain ontologies and the increasing volume of data to handle, the need has emerged for systems capable of managing large amounts of ontology-based data. Such systems are called Ontology-Based DataBase (OBDB) management systems. The main limitations of existing OBDB systems are (1) their rigidity, (2) their lack of support for non-standard data (spatial, temporal, etc.) and (3) their lack of effectiveness in managing large amounts of data. In this thesis, we propose a new OBDB called OntoDB2, allowing (1) the support of ontologies based on different ontology models, (2) the extension of its model to meet specific application requirements, and (3) an original management of ontology-based data facilitating scalability. OntoDB2 is based on the existence of a kernel ontology and on model-based techniques that enable a flexible extension of this kernel. We propose to represent only canonical data, by transforming, under certain conditions, any given non-canonical data into its canonical representation. We propose to use the ontology query language (1) to access the non-canonical data thereby transformed, and (2) to index and pre-compute the reasoning operations by using the mechanisms of the underlying DBMS.
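The canonical/non-canonical distinction can be made concrete with a toy example: only canonical properties are stored, and non-canonical ones are computed on access. The property names and derivation function below are assumptions for illustration, not OntoDB2's actual vocabulary.

```python
# Hedged sketch of the canonical/non-canonical distinction described above:
# only canonical properties are stored; non-canonical ones are defined by
# derivation functions evaluated at query time.

CANONICAL = {"temperature_kelvin": 293.15}          # what gets stored

DERIVATIONS = {                                     # what gets computed
    "temperature_celsius": lambda row: row["temperature_kelvin"] - 273.15,
}

def query(prop, row=CANONICAL):
    """Answer on canonical data directly, and on non-canonical data via its
    registered transformation -- mirroring the idea of rewriting accesses
    to non-canonical properties through the ontology query language."""
    if prop in row:
        return row[prop]
    return DERIVATIONS[prop](row)

print(query("temperature_kelvin"))    # stored canonical value: 293.15
print(query("temperature_celsius"))   # derived, never materialized: 20.0
```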
Jouanne, François. "Mesure de la déformation actuelle des Alpes occidentales et du Jura par comparaison de données géodésiques historiques". Phd thesis, Chambéry, 1994. http://tel.archives-ouvertes.fr/tel-00723714.
Michel, Franck. "Intégrer des sources de données hétérogènes dans le Web de données". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4002/document.
To a great extent, the success of the Web of Data depends on the ability to reach out to legacy data locked in silos inaccessible from the web. In the last 15 years, various works have tackled the problem of exposing structured data in the Resource Description Framework (RDF). Meanwhile, the overwhelming success of NoSQL databases has made the database landscape more diverse than ever. NoSQL databases are strong potential contributors of valuable linked open data. Hence, the object of this thesis is to enable RDF-based data integration over heterogeneous data sources and, in particular, to harness NoSQL databases to populate the Web of Data. We propose a generic mapping language, xR2RML, to describe the mapping of heterogeneous data sources into an arbitrary RDF representation. xR2RML relies on and extends previous works on the translation of RDBs, CSV/TSV and XML into RDF. With such an xR2RML mapping, we propose either to materialize RDF data or to dynamically evaluate SPARQL queries on the native database. In the latter case, we follow a two-step approach: the first step translates a SPARQL query into a pivot abstract query, based on the xR2RML mapping of the target database to RDF; in the second step, the abstract query is translated into a concrete query, taking into account the specificities of the database query language. Great care is taken over the query optimization opportunities, both at the abstract and the concrete levels. To demonstrate the effectiveness of our approach, we have developed a prototype implementation for MongoDB, the popular NoSQL document store. We have validated the method using a real-life use case in Digital Humanities.
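The two-step query translation described above can be miniaturized as follows. The mapping dictionary, collection name and query shapes are illustrative assumptions and do not reproduce xR2RML's actual syntax; the point is only the pipeline: SPARQL-like pattern, then pivot abstract query, then concrete MongoDB filter.

```python
# Toy two-step translation in the spirit of the approach described above.

MAPPING = {  # RDF predicate -> field of a hypothetical "persons" collection
    "foaf:name": "name",
    "foaf:age": "age",
}

def to_abstract(triple):
    """Step 1: bind the predicate to a document field (pivot abstract query)."""
    _, predicate, obj = triple
    return {"field": MAPPING[predicate], "equals": obj}

def to_mongo(abstract):
    """Step 2: emit the concrete query for the target database."""
    return {abstract["field"]: abstract["equals"]}

pattern = ("?x", "foaf:name", "Alice")
abstract = to_abstract(pattern)
print(abstract)             # {'field': 'name', 'equals': 'Alice'}
print(to_mongo(abstract))   # {'name': 'Alice'} -- input to db.persons.find()
```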
Jouzier, Cécile. "Constitution d'une base de données d'histoire médico-pharmaceutique bordelaise". Bordeaux 2, 2000. http://www.theses.fr/2000BOR2P107.
Devulder, Grégory. "Base de données de séquences, phylogénie et identification bactérienne". Lyon 1, 2004. http://www.theses.fr/2004LYO10164.
Ould, Yahia Sabiha. "Interrogation multi-critères d'une base de données spatio-temporelles". Troyes, 2005. http://www.theses.fr/2005TROY0006.
The study of human behavior in driving situations is of primary importance for the improvement of driver security. This study is complex because of the numerous situations in which the driver may be involved. The objective of the CASSICE project (Symbolic Characterization of Driving Situations) is to elaborate a tool to simplify the task of analyzing the driver's behavior. In this work, we mainly take an interest in the indexation and querying of a multimedia database including the numerical data and the video sequences relating to a type of driving situation. We put the emphasis on the queries to this database. They are often complex, because they are formulated according to criteria depending on time and space, and because they use terms of natural language.
Vachey, Françoise. "Les suffixes toponymiques français : atlas et base de données". Nancy 2, 1999. http://www.theses.fr/1999NAN21036.
Ploquin, Catherine. "LAB langage d'analyse associé à une base de données". Bordeaux 1, 1985. http://www.theses.fr/1985BOR10534.
Bec, Xavier. "Une base de données pour les effets spéciaux numériques". Paris 8, 2000. http://www.theses.fr/2000PA081818.
Abdelhédi, Fatma. "Conception assistée d’entrepôts de données et de documents XML pour l’analyse OLAP". Thesis, Toulouse 1, 2014. http://www.theses.fr/2014TOU10005/document.
Today, data warehouses are a major issue for business intelligence applications within companies. The sources of a warehouse, i.e. the origin of the data that feed it, are diverse and heterogeneous: sequential files, spreadsheets, relational databases, Web documents. The complexity is such that the software on the market only partially meets the needs of decision makers when they want to analyze the data. Our work therefore falls within the context of decision support systems that integrate all data types (mainly extracted from relational databases and XML document databases) for decision makers. It aims to provide models, methods and software tools to build and manipulate data warehouses. Our work has specifically focused on two complementary issues: computer-aided data warehouse design, and the OLAP analysis of XML documents.
Bernard, Guillaume. "Détection et suivi d’événements dans des documents historiques". Electronic Thesis or Diss., La Rochelle, 2022. http://www.theses.fr/2022LAROS032.
Current campaigns to digitise historical documents from all over the world are opening up new avenues for historians and social science researchers. The understanding of past events is renewed by the analysis of these large volumes of historical data: unravelling the thread of events and tracing false information are, among other things, possibilities offered by the digital sciences. This thesis focuses on historical press articles and proposes, through two opposing strategies, two analysis processes that address the problem of tracking events in the press. A simple use case is, for instance, a digital humanities researcher or an amateur historian who is interested in an event of the past and seeks to discover all the press documents related to it. Manual analysis of the articles is not feasible in a limited time. By publishing algorithms, datasets and analyses, this thesis is a first step towards the publication of more sophisticated tools allowing any individual to search old press collections for events and, why not, to renew some of our historical knowledge.
Boleda, Mario. "Démographie historique des Andes : évaluation de certaines méthodes d'estimation du régime démographique à l'époque moderne". Lyon 2, 2003. http://theses.univ-lyon2.fr/documents/lyon2/2003/boleda_m.
Demographic dynamics estimations for historical populations are frequently produced by methods designed to be applied when data are lacking or incomplete. This thesis proposes an empirical test of several of these methods: the stable and quasi-stable models elaborated by Coale & Demeny (1966), and the inverse projection designed by R. Lee as included in the POPULATE solution, a software package produced by Robert McCaa and H. Pérez Brignoli. The methods appeared to be seriously biased: differences between direct measures and estimates coming from the tested methods were much larger than expected. Researchers will keep using these techniques in the near future, while waiting for a new and better procedure; they can now apply the correction factors that we obtained from our experimental study based on the Quebec population.
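Inverse projection, in its simplest closed-population form, is essentially demographic bookkeeping run backwards; POPULATE adds model life tables and migration handling on top of it. A deliberately naive sketch, with invented counts:

```python
# Simplified inverse-projection sketch: in a closed population, yearly
# counts of births and deaths let one walk a known terminal population
# count backwards in time. The real POPULATE tool additionally uses
# model life tables and migration assumptions.

def back_project(pop_final, births, deaths):
    """Recover the population series from its last value, reversing
    P[t+1] = P[t] + B[t] - D[t] year by year."""
    pops = [pop_final]
    for b, d in zip(reversed(births), reversed(deaths)):
        pops.append(pops[-1] - b + d)
    return list(reversed(pops))

births = [400, 420, 410, 430]       # illustrative yearly counts
deaths = [300, 310, 320, 290]
print(back_project(10_000, births, deaths))
# [9560, 9660, 9770, 9860, 10000]
```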
Grignard, Arnaud. "Modèles de visualisation à base d'agents". Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066268.
Information visualization is the study of interactive visual representations of abstract data to reinforce human cognition. It is very closely associated with data mining issues, which allow us to explore, understand and analyze phenomena, systems or masses of data whose complexity continues to grow today. However, most existing visualization techniques are not suited to the exploration and understanding of datasets that consist of a large number of individual data from heterogeneous sources and that share many properties with what are commonly called "complex systems". The reason is often the use of monolithic and centralized approaches. This situation is reminiscent of the modeling of complex systems (social sciences, chemistry, ecology, and many other fields) before the progress represented by the generalization of agent-based approaches twenty years ago. In this thesis, I defend the idea that the same approach can be applied with the same success to the field of information visualization. Starting from the now commonly accepted idea that agent-based models offer appropriate representations of the complexity of a real system, I propose an approach based on the definition of agent-based visualization models, to facilitate the visual representation of complex data and to provide innovative support for exploring, programmatically and visually, their underlying dynamics. Just like their software counterparts, agent-based visualization models are composed of autonomous graphical entities that can interact and organize themselves, learn from the data they process and, as a result, adapt their behavior and visual representations. By providing users with the ability to describe visualization tasks in this form, my goal is to allow them to benefit from the flexibility, modularity and adaptability inherent in agent-based approaches. These concepts have been implemented and experimented with on the GAMA modeling and simulation platform, in which I developed a 3D immersive environment offering the user different points of view and ways to interact with agents. Their implementation is validated on models chosen for their properties, which support a linear progression in terms of complexity, allowing us to highlight the concepts of flexibility, modularity and adaptability. Finally, I demonstrate, through the particular case of data visualization, how my approach makes it possible, in real time, to represent, clarify, or even discover their dynamics, and how such progress in terms of visualization can contribute, in turn, to improving the modeling of complex systems.
Pineau, Nicolas. "La performance en analyse sensorielle : une approche base de données". Phd thesis, Université de Bourgogne, 2006. http://tel.archives-ouvertes.fr/tel-00125171.
Gagnon, Bertrand. "Gestion d'information sur les procédés thermiques par base de données". Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65447.
Folio, Patrice. "Etablissement d'une base de données protéomique de Listeria monocytogenes EGDe". Clermont-Ferrand 2, 2003. http://www.theses.fr/2003CLF21478.
Persyn, Emmanuel. "Base de données informatiques sur la première guerre du Golfe". Lille 3, 2003. http://www.theses.fr/2003LIL30018.
Tahir, Hassane. "Aide à la contextualisation de l’administration de base de données". Paris 6, 2013. http://www.theses.fr/2013PA066789.
The complexity of database administration tasks requires the development of tools for supporting database experts. When problems occur, the database administrator (DBA) is frequently the first person blamed. Most DBAs work in a fire-fighting mode and have little opportunity to be proactive: they must be constantly ready to analyze and correct failures based on a large set of procedures. In addition, they are continually readjusting these procedures and developing practices to manage a multitude of specific situations that differ from the generic situation by a few contextual elements. These practices have to deal with these contextual elements in order to solve the problem at hand. This thesis aims to use the Contextual Graphs formalism to improve the existing procedures used in database administration. The thesis also shows the benefits of using Contextual Graphs to capture user practices so that they can be reused in similar working contexts. Up to now, this improvement has been achieved by the DBA through practices that adapt the procedures to the context in which the tasks are performed and the incidents appear. This work will be the basis for designing and implementing a Context-Based Intelligent Assistant System (CBIAS) for supporting DBAs.
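A contextual graph, as used in this thesis, can be minimally modeled as nodes that either branch on a contextual element or contribute an action to the practice. The node contents below (engine names, maintenance actions) are invented for illustration and are not taken from the thesis.

```python
# Minimal sketch of a contextual graph: contextual nodes branch on the
# value of a contextual element, action nodes accumulate the practice.

GRAPH = {
    "start": ("context", "db_engine", {"postgres": "check_vacuum",
                                       "oracle": "check_tablespace"}),
    "check_vacuum": ("action", "run VACUUM ANALYZE", "disk?"),
    "check_tablespace": ("action", "inspect tablespaces", "disk?"),
    "disk?": ("context", "disk_full", {True: "free_space", False: "done"}),
    "free_space": ("action", "purge old archive logs", "done"),
    "done": ("action", "report to DBA", None),
}

def practice(context, node="start"):
    """Walk the graph, selecting branches according to `context`."""
    steps = []
    while node is not None:
        kind, payload, nxt = GRAPH[node]
        if kind == "context":
            node = nxt[context[payload]]      # branch on a contextual element
        else:
            steps.append(payload)
            node = nxt
    return steps

print(practice({"db_engine": "postgres", "disk_full": True}))
# ['run VACUUM ANALYZE', 'purge old archive logs', 'report to DBA']
```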
Treger, Michèle. "Spécification et implantation d'une base de données des contacts intermoléculaires". Université Louis Pasteur (Strasbourg) (1971-2008), 1991. http://www.theses.fr/1991STR13089.
État-Le, Blanc Marie-Sylvie d'. "Une base de données sédimentologiques : structure, mise en place, applications". Bordeaux 1, 1986. http://www.theses.fr/1986BOR10565.
Peerbocus, Mohamed Ally. "Gestion de l'évolution spatiotemporelle dans une base de données géographiques". Paris 9, 2001. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2001PA090055.
Curé, Olivier. "Relations entre bases de données et ontologies dans le cadre du web des données". Habilitation à diriger des recherches, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00843284.
Grignard, Arnaud. "Modèles de visualisation à base d'agents". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066268/document.
Information visualization is the study of interactive visual representations of abstract data to reinforce human cognition. It is very closely associated with data mining issues, which allow us to explore, understand and analyze phenomena, systems or masses of data whose complexity continues to grow today. However, most existing visualization techniques are not suited to the exploration and understanding of datasets that consist of a large number of individual data from heterogeneous sources and that share many properties with what are commonly called "complex systems". The reason is often the use of monolithic and centralized approaches. This situation is reminiscent of the modeling of complex systems (social sciences, chemistry, ecology, and many other fields) before the progress represented by the generalization of agent-based approaches twenty years ago. In this thesis, I defend the idea that the same approach can be applied with the same success to the field of information visualization. Starting from the now commonly accepted idea that agent-based models offer appropriate representations of the complexity of a real system, I propose an approach based on the definition of agent-based visualization models, to facilitate the visual representation of complex data and to provide innovative support for exploring, programmatically and visually, their underlying dynamics. Just like their software counterparts, agent-based visualization models are composed of autonomous graphical entities that can interact and organize themselves, learn from the data they process and, as a result, adapt their behavior and visual representations. By providing users with the ability to describe visualization tasks in this form, my goal is to allow them to benefit from the flexibility, modularity and adaptability inherent in agent-based approaches. These concepts have been implemented and experimented with on the GAMA modeling and simulation platform, in which I developed a 3D immersive environment offering the user different points of view and ways to interact with agents. Their implementation is validated on models chosen for their properties, which support a linear progression in terms of complexity, allowing us to highlight the concepts of flexibility, modularity and adaptability. Finally, I demonstrate, through the particular case of data visualization, how my approach makes it possible, in real time, to represent, clarify, or even discover their dynamics, and how such progress in terms of visualization can contribute, in turn, to improving the modeling of complex systems.
De, Vlieger P. "Création d'un environnement de gestion de base de données " en grille ". Application à l'échange de données médicales". Phd thesis, Université d'Auvergne - Clermont-Ferrand I, 2011. http://tel.archives-ouvertes.fr/tel-00654660.
De, Vlieger Paul. "Création d'un environnement de gestion de base de données "en grille" : application à l'échange de données médicales". Phd thesis, Université d'Auvergne - Clermont-Ferrand I, 2011. http://tel.archives-ouvertes.fr/tel-00719688.
Ponchateau, Cyrille. "Conception et exploitation d'une base de modèles : application aux data sciences". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2018. http://www.theses.fr/2018ESMA0005/document.
It is common practice in experimental science to use time series to represent experimental results, which usually come as lists of values in chronological order (indexed by time), generally obtained via sensors connected to the studied physical system. These series are analyzed to obtain a mathematical model that describes the data, and thus to understand and explain the behavior of the studied system. Nowadays, storage and analysis technologies for time series are numerous and mature, but the technologies for storing and managing mathematical models and linking them to experimental numerical data are both scarce and recent. Still, mathematical models have an essential role to play in the interpretation and validation of experimental results. Consequently, an adapted storage system would ease the management and reusability of mathematical models. This work aims at developing a models database to manage mathematical models and at providing a "query by data" system to help retrieve or identify a model from an experimental time series. In this work, I describe the design (from the modeling of the system to its software architecture) of the models database and its extensions that enable the "query by data". I then describe the models database prototype that I implemented and the results obtained from the tests performed on it.
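The "query by data" idea admits a very small sketch: store candidate models as callables and return the one that minimizes the error against the observed series. The model base and the squared-error metric below are illustrative assumptions, not the thesis's actual system.

```python
# Hedged sketch of "query by data": identify, among stored models, the one
# that best explains an experimental time series.
import numpy as np

MODEL_BASE = {
    "linear": lambda t: 0.5 * t,
    "quadratic": lambda t: 0.05 * t**2,
    "sine": lambda t: np.sin(t / 3.0),
}

def query_by_data(series, t):
    """Return the stored model minimizing mean squared error on `series`."""
    def error(name):
        return float(np.mean((MODEL_BASE[name](t) - series) ** 2))
    return min(MODEL_BASE, key=error)

t = np.arange(0, 30, 0.5)
observed = np.sin(t / 3.0) + np.random.default_rng(2).normal(0, 0.1, t.size)
print(query_by_data(observed, t))   # 'sine'
```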
Devogele, Thomas. "Processus d'intégration et d'appariement de bases de données géographiques : application à une base de données routières multi-échelles". Versailles-St Quentin en Yvelines, 1997. https://tel.archives-ouvertes.fr/tel-00085113.
Jean, Stéphane. "OntoQL, un langage d'exploitation des bases de données à base ontologique". Phd thesis, Université de Poitiers, 2007. http://tel.archives-ouvertes.fr/tel-00201777.
Kratky, Andreas. "Les auras numériques : pour une poétique de la base de données". Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010561/document.
Databases are ubiquitous in our lives and play an important role in many aspects of our daily activities. Conceived as a technical support to facilitate the efficient management of information and as the preferred means of storage, the database has gained a level of importance with aesthetic and political implications that go far beyond purely technical questions. Both theoretical and practical in its approach, our research investigates the database as a means of expressive and poetic creation and reveals its specific character, in particular the discretization of data and the establishment of flexible relationships between them. In order to develop a poetics of the database, we reconsider the term « aura », which was utilized by Walter Benjamin to analyse the transformations of the nature of aesthetic experience brought about by industrial rationalisation and technology at the end of the nineteenth century. The practical part of our research consists of two interactive projects based on the poetic principles elaborated in the context of this dissertation.
Alfonso, Espinosa-Oviedo Javier. "Coordination fiable de services de données à base de politiques active". Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-01011464.
Dubois, Jean-Christophe. "Vers une interrogation en langage naturel d'une base de données image". Nancy 1, 1998. http://www.theses.fr/1998NAN10044.