Theses on the topic "Manipulation des données"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles.
Consult the top 50 theses for your research on the topic "Manipulation des données".
You can also download the full text of each publication in PDF format and read its abstract online, whenever these are available in the metadata.
Explore theses on a wide variety of disciplines and organise your bibliography correctly.
Segoufin, Luc. "Manipulation de données spatiales et topologiques". Paris 11, 1999. http://www.theses.fr/1999PA112033.
Tekli, Gilbert. "Manipulation des données XML par des utilisateurs non-experts". PhD thesis, Université Jean Monnet - Saint-Etienne, 2011. http://tel.archives-ouvertes.fr/tel-00697756.
Teste, Olivier. "Modélisation et manipulation d'entrepôts de données complexes et historisées". PhD thesis, Université Paul Sabatier - Toulouse III, 2000. http://tel.archives-ouvertes.fr/tel-00088986.
At the warehouse level, we define a data model for describing the temporal evolution of complex objects. In our proposal, a warehouse object integrates current, past, and archived states, modelling decision-support data and their evolution. Extending the object concept entails extending the class concept: this extension is composed of (temporal and archive) filters used to build the past and archived states, together with a construction function modelling the extraction process (source origin). We also introduce the concept of environment, which defines coherent temporal parts sized to decision-makers' requirements. Data manipulation is an extension of object algebras taking into account the characteristics of the warehouse representation model; the extension concerns the temporal operators and the operators manipulating sets of states.
At the data-mart level, we define a multidimensional data model representing information as a constellation of facts and of dimensions with multiple hierarchies. Data manipulation relies on an algebra covering the full set of multidimensional operations and offering operations specific to our model. We propose a method for building the data marts from the warehouse.
To validate our proposals, we present GEDOOH (Générateur d'Entrepôts de Données Orientées Objet et Historisées), a tool supporting the design and creation of warehouses in the context of the REANIMATIC medical application.
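The state model described in this abstract can be pictured with a minimal sketch (the class and field names here are hypothetical illustrations, not the thesis's actual GEDOOH model): a warehouse object keeps a current state, timestamped past states, and archived states produced by a temporal filter.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class WarehouseObject:
    """Toy warehouse object: a current state plus timestamped past states."""
    current: dict
    past: list = field(default_factory=list)      # detailed past states
    archived: list = field(default_factory=list)  # states moved out by a filter

    def update(self, new_state: dict, on: date) -> None:
        # The old current state becomes a past state before being replaced.
        self.past.append((on, self.current))
        self.current = new_state

    def archive_before(self, cutoff: date) -> None:
        # A temporal/archive filter: states older than the cutoff are archived.
        self.archived.extend((d, s) for (d, s) in self.past if d < cutoff)
        self.past = [(d, s) for (d, s) in self.past if d >= cutoff]

patient = WarehouseObject(current={"weight": 70})
patient.update({"weight": 72}, on=date(2000, 1, 10))
patient.archive_before(date(2000, 1, 1))
```

Operators over sets of such states (the algebra the abstract mentions) would then range over `current`, `past`, and `archived` uniformly.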
Jedidi, Faïza. "Conception et manipulation de bases de données dimensionnelles à contraintes". Toulouse 3, 2004. http://www.theses.fr/2004TOU30176.
Within the framework of decisional systems, this thesis focuses on the design and querying of multidimensional data. We provide a constraint-based dimensional model organising data in a constellation of facts (subjects of analysis) associated with dimensions (axes of analysis). These dimensions are flexible because each dimension instance can belong to one or several hierarchies (perspectives of analysis). Moreover, we integrate constraints into the multidimensional model, allowing both the validation of multidimensional structures and consistent analyses, by disambiguating the null values that can result from combining unsuitable hierarchies. To integrate these constraints, we define a multidimensional algebra enabling decision-makers to specify the instances to be analysed. Finally, we provide a conceptual design method for multidimensional data that integrates decision-maker needs and data issued from source schemas. To validate our propositions, we provide a CASE tool that helps designers define conceptual multidimensional schemas.
Kabbaj, Younnes el. "MATHUS, un éditeur de relations : contribution à la manipulation des bases de données relationnelles". Paris 11, 1986. http://www.theses.fr/1986PA112279.
MATHUS is a full-screen relational editor, used as a user interface to the PEPIN database management system. It offers a new way to query a database: the user browses through relations and marks the data they are interested in. This thesis presents the functionalities and implementation of MATHUS. PEPIN's third version, developed to make the implementation of MATHUS possible, is also described.
Hubert, Gilles. "Les versions dans les bases de données orientées objet : modélisation et manipulation". PhD thesis, Université Paul Sabatier - Toulouse III, 1997. http://tel.archives-ouvertes.fr/tel-00378240.
Ouellet, Etienne. "Représentation et manipulation de données de simulation dans un environnement virtuel immersif". Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/28502/28502.pdf.
Blouin, Arnaud. "Un modèle pour l'ingénierie des systèmes interactifs dédiés à la manipulation de données". PhD thesis, Université d'Angers, 2009. http://tel.archives-ouvertes.fr/tel-00477735.
Fournié, Laurent Henri. "Stockage et manipulation transactionnels dans une base de données déductives à objets : techniques et performances". Versailles-St Quentin en Yvelines, 1998. http://www.theses.fr/1998VERS0017.
Pietriga, Emmanuel. "Langages et techniques d'interaction pour la visualisation et la manipulation de masses de données". Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00709533.
Khouri, Selma. "Cycle de vie sémantique de conception de systèmes de stockage et manipulation de données". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2013. http://www.theses.fr/2013ESMA0016/document.
Data warehouses (DWs) have become essential components for companies and organizations, and the DW design field has been actively researched in recent years. The main limitation of the proposed approaches is the lack of an overall vision covering the DW design cycle. Our main contribution in this thesis is to propose a method adapted to recent evolutions of the DW design cycle and covering all its phases. These evolutions have given rise to new data storage models and new deployment architectures, which offer different design choices to designers and administrators. The DW literature recognizes the importance of user requirements in the design process, and the importance of accessing and representing data semantics. We propose an ontology-driven design method that valorizes users' requirements by giving them a persistent view in the DW structure. This view allows diverse design tasks to be anticipated and different design choices to be simulated. Our second proposal revisits the design cycle by executing the ETL phase (extraction, transformation, and loading of data) at the conceptual stage. This proposal allows an à la carte deployment of the DW using the different deployment platforms available.
Khouri, Selma. "Cycle de vie sémantique de conception de systèmes de stockage et manipulation de données". PhD thesis, ISAE-ENSMA Ecole Nationale Supérieure de Mécanique et d'Aérotechnique - Poitiers, 2013. http://tel.archives-ouvertes.fr/tel-00926657.
Bonhomme, Christine. "Un langage visuel dédié à l'interrogation et la manipulation de bases de données spatio-temporelles". Lyon, INSA, 2000. http://www.theses.fr/2000ISAL0049.
This thesis deals with LVIS, a visual query language for spatiotemporal databases and, more specifically, for Geographical Information Systems (GIS). The language follows a query-by-example philosophy. Visual representations of queries (visual queries) are incrementally specified by means of two sets of icons: the first contains the icons representing the object types of the database to be queried; the second contains the icons of a minimal set of operators used to express criteria. Visual queries are then translated into an intermediate textual language, named the pivot language, which is independent of the GIS that will finally execute the queries. The language is defined by three independent grammars: the first defines the semantics of the language; the second (the visual grammar) defines its visual semantics; the last defines the keywords of the pivot language and allows queries to be translated into the query language of a GIS chosen by the end user. A prototype has been developed to test the interaction of the language with the MapInfo GIS. The two main contributions to the field of visual querying are: (1) the formulation of spatiotemporal queries, handled both by integrating temporal (Allen relationships) and spatiotemporal (object life-cycle) operators and by defining new visual metaphors to represent such queries visually; and (2) the validation of the language's icons through psycho-cognitive tests administered to potential users, which also aim at evaluating the user-friendliness of the language.
Wabbi, Ahmad. "Architectures parallèles systoliques pour la compression de données et la manipulation des sous-chaînes". Amiens, 1998. http://www.theses.fr/1998AMIE0107.
Texto completoBui, Quang Ngoc. "Aspects dynamiques et gestion du temps dans les systèmes de bases de données généralisées". Grenoble INPG, 1986. https://theses.hal.science/tel-00321849.
Texto completoGordillo, Silvia. "Modélisation et manipulation de phénomènes continus spatio-temporels". Lyon 1, 2001. http://www.theses.fr/2001LYO10118.
Texto completoRomat, Hugo. "From data exploration to presentation : designing new systems and interaction techniques to enhance the sense-making process". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS335/document.
During the last decade, the amount of data has been constantly increasing. These data can come from several sources, such as smartphones, audio recorders, cameras, sensors, and simulations, and can have various structures. While computers can help us process these data, human judgment and domain expertise are what turn the data into actual knowledge. However, making sense of this increasing amount of diverse data requires visualization and interaction techniques. This thesis contributes such techniques to facilitate data exploration and presentation during sense-making activities. In the first part of this thesis, we focus on interactive systems and interaction techniques to support sense-making activities. We investigate how users work with diverse content so that they can externalize thoughts through digital annotations. We present our approach with two systems. The first system, ActiveInk, enables the natural use of the pen for active reading during a data exploration process. Through a qualitative study with eight participants, we contribute observations of active-reading behaviors during data exploration and design principles to support sense-making. The second system, SpaceInk, is a design space of pen-and-touch techniques that make space for in-context annotations during active reading by dynamically reflowing documents. In the second part, we focus on techniques to visually represent insights and answers to questions that arise during sense-making activities. We focus on one of the most elaborate data structures, multivariate networks, which we visualize using node-link diagrams. We investigate how to enable a flexible, iterative design process when authoring node-link diagrams for multivariate networks.
We first present a system, Graphies, that enables the creation of expressive node-link diagram visualizations by providing designers with a flexible workflow that streamlines the creative process and effectively supports quick design iterations. Moving beyond static visual variables in node-link diagrams, we investigated the use of motion to encode data attributes. To conclude, we show in this thesis that the sense-making process can be enhanced in both exploration and presentation, by using ink as a new medium to transition between exploration and externalization, and by following a flexible, iterative process to create expressive data representations. The resulting systems establish a research framework where presentation and exploration are a core part of visual data systems.
Liu, Can. "Embodied Interaction for Data Manipulation Tasks on Wall-sized Displays". Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS207/document.
Large data sets are increasingly used in various professional domains, such as medicine and business. This raises challenges in managing and using them, typically including sense-making, searching, and classifying. It requires not only advanced algorithms to process the data sets automatically, but also users' direct interaction to make initial judgments or to correct mistakes made by the machine. This dissertation explores this problem domain and studies users' direct interaction with scattered large data sets. The human body is made for interacting with the physical world at scales from the very small to the very large: we naturally coordinate ourselves to see, hear, touch, and move in order to interact with the environment at various scales. Beyond the individual, humans collaborate with each other through communication and coordination. Based on Dourish's definition, Embodied Interaction encourages interaction designers to take advantage of users' existing skills in the physical world when designing interaction with digital artefacts. I argue that large interactive spaces enable embodied user interaction with data spread over space, by leveraging users' physical abilities such as walking, approaching, and orienting. Beyond single users, co-located environments provide multiple users with physical awareness and verbal and gestural communication. While single users' physical actions have been turned into various input modalities in existing research, the augmentation of between-user resources has been less explored. In this dissertation, I first present an experiment that formally evaluates the advantage, for single users, of performing a data manipulation task on a wall-sized display compared to a desktop computer. It shows that using users' physical movements to navigate a large data surface outperforms existing digital navigation techniques on a desktop computer, such as Focus+Context.
With the same experimental task, I then study the interaction efficiency of collaborative data manipulation with a wall-sized display, in loosely and closely coupled collaboration styles. The experiment measures the effect of providing a Shared Interaction Technique, in which collaborators each perform part of an action to issue a command. The results confirm its benefits in terms of efficiency and user engagement, as well as physical fatigue. Finally, I explore the concept of augmenting human-to-human interaction with shared interaction techniques, and illustrate a design space of such techniques for supporting collaborative data manipulation. I report the design, implementation, and evaluation of a set of these techniques and discuss future work.
Gutierrez, Alejandro. "Extraction et manipulation d'information structurée sous la forme de graphe à partir de sources de données existantes". Versailles-St Quentin en Yvelines, 1997. http://www.theses.fr/1997VERS0015.
Bruno, Emmanuel. "Documents XML : modélisation, manipulation et contrôle d'accès". Toulon, 2001. http://www.theses.fr/2001TOUL0011.
Tekli, Gilbert. "XML manipulation by non-expert users". Thesis, Saint-Etienne, 2011. http://www.theses.fr/2011STET4013/document.
Computers and the Internet are everywhere nowadays, in every home, domain, and field. Communications between users, applications, and heterogeneous information systems are mainly done via XML structured data. XML, based on simple textual data and not requiring any specific platform or environment, has invaded and now governs communication media. In the 21st century, these communications are inter-domain and have stepped outside the scope of computer science into other areas (medical, commerce, social, etc.). As a consequence, and due to the increasing amount of XML data flowing between non-expert users (programmers, scientists, etc.), whether on instant messaging, social networks, data storage, or elsewhere, it is becoming crucial to allow non-experts to manipulate and control their data (e.g., parents who want to apply parental control over the instant-messaging tools in their house, or a journalist who wants to gather information from different RSS feeds and filter them). The main objective of this work is the study of XML manipulation by non-expert users. Four related categories have been identified in the literature: XML-oriented visual languages, mashups, XML manipulation via security and adaptation techniques, and dataflow visual programming languages. However, none of them provides a full-fledged solution for appropriate XML data manipulation. In our research, we formally defined an XML manipulation framework entitled XA2C (XML Alteration/Adaptation Composition Framework). XA2C represents a visual studio for an XML-oriented DFVPL (Dataflow Visual Programming Language) called XCDL (XML-oriented Composition Definition Language), which constitutes the major contribution of this study. XCDL is based on Colored Petri Nets and allows non-expert users to compose manipulation operations. The XML manipulations range from simple data selection/projection to data modification (insertion, removal, obfuscation, etc.).
The language is oriented toward XML data (XML documents and fragments), providing users with means to compose XML-oriented operations. Complementary to the language's syntax and semantics, XA2C also formally defines the compiler and runtime environment of XCDL. In addition to this theoretical contribution, we developed a prototype, called X-Man, and formally defined an evaluation framework for XML-oriented visual languages and tools, which was used in a set of case studies and experiments to evaluate the quality of use of our language and compare it to existing approaches. The assessments and results obtained were positive and show that our approach outperforms existing ones. Several future tracks are being studied, such as the integration of more complex operations (control operators, loops, etc.), automated compositions, and language derivation to define specific languages oriented toward different XML-based standards (e.g., RSS, RDF, SMIL).
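The kind of operation chain that XCDL lets non-experts compose visually can be sketched in plain Python (this is only an illustration of the selection and obfuscation operations the abstract names, not the XCDL language or its Petri-net semantics; the function names are hypothetical):

```python
import xml.etree.ElementTree as ET

def select(root, tag):
    """Selection: collect the sub-elements with the given tag."""
    return list(root.iter(tag))

def obfuscate(elements, child_tag):
    """Obfuscation: mask the text content of a given child element."""
    for el in elements:
        child = el.find(child_tag)
        if child is not None and child.text:
            child.text = "*" * len(child.text)
    return elements

# A toy composition: select the items of a feed, then hide the authors.
feed = ET.fromstring(
    "<feed><item><title>News</title><author>Alice</author></item></feed>"
)
items = obfuscate(select(feed, "item"), "author")
```

In XCDL such steps would be boxes wired together in a dataflow graph rather than nested function calls.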
Lakhal, Lotfi. "Contribution à l'étude des interfaces pour non-informaticien dans la manipulation de bases de données relationnelles". Nice, 1986. http://www.theses.fr/1986NICE4067.
Alexandre, Thomas. "Manipulation de données multimédia dans la carte à micro-processeur : application à l'identification biométrique et comportement". Lille 1, 1995. http://www.theses.fr/1995LIL10010.
Ravat, Franck. "Modèles et outils pour la conception et la manipulation de systèmes d'aide à la décision". Habilitation à diriger des recherches, Université des Sciences Sociales - Toulouse I, 2007. http://tel.archives-ouvertes.fr/tel-00379779.
For data warehouses, our objective has been to provide solutions for modelling the evolution of decision-support data (an extension of the object model) and for integrating textual data without fixing its schema a priori. For data marts, we proposed a core multidimensional model together with several extensions answering decision-makers' needs. These extensions take into account the management of indicators and of textual data, temporal evolution (versions), the consistency of the data and of its analyses (semantic constraints), the integration and capitalization of decision-makers' expertise (annotations), and the personalization of multidimensional schemas (weights). This work was completed by a design method whose advantage is to take into account both decision-makers' needs and the data sources; it models the static aspect (decision-support data) as well as the dynamic aspect (the warehouse-feeding process) of the decision-support system.
Regarding data manipulation, we proposed an algebra complemented by a decision-maker-oriented graphical language and by a declarative language. Our proposals were validated through participation in various projects, as well as the co-supervision of five doctoral theses and of several research Master's projects.
Faudemay, Pascal. "Un processeur VLSI pour les opérations de bases de données". Paris 6, 1986. http://www.theses.fr/1986PA066468.
Moussa, Ahmad. "Pour une cohérence du résultat d'un opérateur dans un contexte spatial, temporel et alphanumérique". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR31/document.
Geographic information can be perceived according to three dimensions: a spatial dimension (e.g., city), a temporal dimension (e.g., date), and an alphanumeric dimension (e.g., population). Integrating these various types of data is one of the current challenges in modelling geographic representations and processing geographic information. We contribute to this challenge by proposing a method to help manage the coherency of data involving two of the three dimensions. This coherency can be established after applying an operator to the geographic information. Our method is based on semantic links between dimensions, where a semantic link represents the logical interaction between two dimensions. This connection, associated with a set of rules depending on the data manipulation operators, allows us to guarantee the coherency of the data model associated with the result of an operator.
Gérardin, Benoit. "Manipulation et contrôle d'ondes élastiques guidées en milieux complexes". Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC230/document.
Whatever their nature or the propagation medium, controlling the propagation of waves is of fundamental interest for many applications. On the one hand, one can tame wave-fields in order to take advantage of the complexity of the medium. On the other hand, one can force waves along desired paths through a careful design of man-made materials. In this thesis, we study those two aspects on the basis of laser-ultrasonic experiments involving the propagation of Lamb waves in elastic plates. The control of wave propagation through complex systems is first investigated by means of the scattering-matrix approach. In diffusive media, theorists have demonstrated the existence of propagation channels, either closed or open, through which the wave can travel. The first part of this work presents direct experimental evidence of this result, as well as of the ability to fully transmit a wave through a disordered medium. In a second part, the measurement of the time-delay matrix allows the study of such channels in the time domain: they are shown to give rise to particle-like wave packets that remain focused in time and space throughout their trajectory in the medium. The second part of this thesis studies the concepts of negative reflection and refraction for the manipulation of Lamb wave propagation. On the one hand, negative reflection is exploited to perform a passive phase conjugation of Lamb waves. On the other hand, the notion of complementary media is investigated in order to cancel the diffraction of waves and cloak some areas of the plate.
Jullien, Christian. "Le-cool : un langage orienté objet à héritage multiple permettant la manipulation des concepts et des données en intelligence artificielle". Paris 6, 1986. http://www.theses.fr/1986PA066264.
Texto completoToussaint, Marion. "Une contribution à l'industrie 4.0 : un cadre pour sécuriser l'échange de données standardisées". Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0121.
The recent digital transformation of the manufacturing world has resulted in numerous benefits, from higher-quality products to enhanced productivity and shorter time to market. In this digital world, data has become a critical element in many decisions and processes within and across organizations, and data exchange is now key to organizations' communication, collaboration, and efficiency. Industry 4.0's adoption of modern communication technologies has made this data available and shareable at a quicker rate than we can consume or track it. This speed brings significant challenges, such as data interoperability and data traceability, two interdependent challenges that manufacturers face and must understand in order to address them. On one hand, data interoperability challenges delay faster innovation and collaboration. The growing volume of data exchange is associated with an increased number of heterogeneous systems that need to communicate with and understand each other. Information standards are a proven solution, yet their long and complex development process impedes them from keeping up with the fast-paced environment they need to support and provide interoperability for, slowing down their adoption. This thesis proposes a transition from predictive to adaptive project management, using Agile methods to shorten development iterations and increase delivery velocity, thereby increasing standards adoption. While adaptive environments have been shown to be a viable way to align standards with the fast pace of industry innovation, most project requirements management solutions have not evolved to accommodate this change. This thesis therefore also introduces a model to support better requirements elicitation during standards development, with increased traceability and visibility.
On the other hand, data-driven decisions are exposed to the speed at which tampered data can propagate through organizations and corrupt those decisions. With the mean time to identify (MTTI) and mean time to contain (MTTC) such a threat already close to 300 days, the constant growth of data produced and exchanged will only push the MTTI and MTTC upwards. While digital signatures have already proven their use in identifying such corruption, there is still a need for a formal data traceability framework to track data exchange across large and complex networks of organizations, in order to identify and contain the propagation of corrupted data. This thesis analyses existing cybersecurity frameworks and their limitations, and introduces a new standards-based framework, in the form of an extended NIST CSF profile, to prepare against, mitigate, manage, and track data manipulation attacks. The framework is accompanied by implementation guidance to facilitate its adoption by organizations of all sizes.
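The tamper-detection role that digital signatures play in this abstract can be sketched with a stdlib HMAC (a simplified stand-in for real public-key signatures; key distribution and the NIST CSF profile itself are out of scope, and the payload shown is invented for illustration):

```python
import hashlib
import hmac

SECRET = b"shared-key"  # in practice, per-partner keys or asymmetric signatures

def sign(payload: bytes) -> str:
    """Attach a MAC so a receiver can detect tampering in transit."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    """Recompute the MAC and compare in constant time."""
    return hmac.compare_digest(sign(payload), tag)

msg = b'{"part": "A-113", "qty": 40}'
tag = sign(msg)
ok = verify(msg, tag)                               # untampered data passes
tampered = verify(b'{"part": "A-113", "qty": 400}', tag)  # altered quantity fails
```

Logging each (payload, tag, sender) triple at every hop is the kind of record a traceability framework would build on to locate where corruption entered a network.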
Teste, Olivier. "Modélisation et manipulation des systèmes OLAP : de l'intégration des documents à l'usager". Habilitation à diriger des recherches, Université Paul Sabatier - Toulouse III, 2009. http://tel.archives-ouvertes.fr/tel-00479460.
Liu, Jiangping. "Conception, implémentation et évaluation d'opérateurs de manipulation des objets structurés pour le langage fonctionnel de programmation de base de données GRIFFON". Aix-Marseille 3, 1995. http://www.theses.fr/1995AIX30041.
Texto completoZechinelli-Martini, José-Luis. "Construction et manipulation de présentations spatio-temporelles multi-médias à partir de serveurs d'objets répartis : applications aux données sur le Web". Université Joseph Fourier (Grenoble), 2001. http://www.theses.fr/2001GRE10052.
We propose an infrastructure, JAGUAR, for specifying spatio-temporal multimedia presentation managers. These managers are mediators between applications and distributed heterogeneous object sources accessible through the Web (via their URLs). A manager is able to define, query, and build presentations that are stored in a multimedia database system. All objects are described by a schema that associates a default presentation with each of them. We defined a spatio-temporal model to integrate objects and presentations; the model describes object composition through spatio-temporal relations. We also proposed OQLiST, a language that provides spatio-temporal operators. The language and model can be used to specify, query, and represent inter- and intra-media descriptions homogeneously. To validate the JAGUAR infrastructure, a presentation-manager prototype was implemented using the SMIL and Java Media Framework (JMF) platforms. The manager has been used to specify and implement (1) a touristic application and (2) a visualization tool for objects stored in a data warehouse; for the latter, the manager provides a specific schema for the data cube.
Lambert de Cambray, Béatrix. "Etude de la modélisation de la manipulation et de la représentation de l'information spatiale tridimensionnelle dans les bases de données géographiques". Paris 6, 1994. http://www.theses.fr/1994PA066518.
Sallaberry, Christian. "Cholq : une interface de manipulation de base de données orientée objet pour non-spécialistes. Mise en oeuvre dans le cadre d'une application industrielle". Toulouse 3, 1992. http://www.theses.fr/1992TOU30180.
Rhazi, Aicha. "Automatisation de la gestion et de la manipulation des données issues d'un chantier de fouilles archéologiques depuis leur enregistrement jusqu'à leur interrogation". Thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25692/25692.pdf.
An archaeological excavation is a technical process aiming to collect all relevant information related to every entity or fact that appears on an archaeological site. The excavation process can be described according to the following stages: exploring the site in order to locate the uncovered remains; collecting and recording excavated objects; assembling and interpreting these objects in order to reconstruct the site; and analyzing these objects through queries predefined by archaeologists, then disseminating the results. These steps call for a computing system that helps field archaeologists structure their data in a database and explore it with meaningful queries during the analytical stage, the latter being the main stage of the excavation process. Our project develops such a system, named NESTOR. It is a database system that helps archaeologists store all data collected from an excavated site in a user-friendly way. It also allows data, once recorded, to be updated in order to take into account the evolution of related work on site, or to be checked in the course of the excavation to ensure its adequacy with the reality of the excavation site and the completeness of the stored data. NESTOR also makes it possible for archaeologists to collect and incorporate archaeological data into the database; this process is modifiable and reversible. Lastly, archaeologists are able to query the database using the predefined queries offered by the NESTOR system.
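The predefined queries this abstract describes can be pictured with a small relational sketch (the schema and query below are hypothetical illustrations, not NESTOR's actual design):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE finds (id INTEGER, layer TEXT, material TEXT)")
con.executemany(
    "INSERT INTO finds VALUES (?, ?, ?)",
    [(1, "US12", "ceramic"), (2, "US12", "bone"), (3, "US13", "ceramic")],
)

# A predefined query: all finds recorded in a given stratigraphic unit.
rows = con.execute(
    "SELECT id, material FROM finds WHERE layer = ? ORDER BY id", ("US12",)
).fetchall()
```

The point of such canned queries is that the archaeologist supplies only the parameter (here the stratigraphic unit), never the SQL.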
Zanotti, Andrea. "Modélisation de type multi-agents en archéologie : l'expansion des premiers agriculteurs Balkaniques : adaptation du modèle OBRESOC : manipulation et exploration des données simulées". Thesis, Paris, EPHE, 2016. http://www.theses.fr/2016EPHE3057/document.
A topic of great importance in archaeological research over the last decades concerns the expansion of the first farmers from Anatolia through the Balkans. Standard archaeological approaches have clarified the path and timing of this expansion; however, they cannot explain what is unobservable in the archaeological record, in particular the socio-economic structure of a prehistoric farming society. In this thesis, an agent-based model was built in order to explore those elements that are hidden from archaeology. This model, called BEAN (Bridging European and Anatolian Neolithic), is an adaptation of the OBRESOC model (un OBservatoire REtrospectif d'une SOCiété archéologique). OBRESOC was created to simulate the expansion of the LBK farmers in central Europe, and was adapted here to the Balkan archaeological context. The expansion of the first Neolithic farmers in the Balkans was simulated by combining the archaeological record with ethnohistoric and paleodemographic inferences. A realistic environment was modelled in which the areas of optimal farming are determined by meteorology and soil fertility estimates. An agent corresponds to a household; agents interact on this landscape following partial intermediate socio-economic models (for instance: households composed of a nuclear family; an intensive farming system on small plots complemented by hunting and gathering; expansion driven by scalar stress at the hamlet scale; family clan solidarity; shortages and famines caused by meteorological events). The model thus simulates the functioning of the Neolithic farming society and its geographic expansion. Several simulations were executed, testing different combinations of the key parameters identified through a sensitivity analysis.
The goodness of fit of simulated data to the archaeological data is measured mostly on geographic criteria: the best simulation is the one whose expansion pattern best fits the archaeological data. Specific procedures were developed to process the large amount of data produced by the model. Observing these data made it possible to explore aspects that are invisible in the archaeological record: for example, the model helped investigate some archaeological beliefs based on assumptions that could not otherwise be verified. The model also permitted the exploration of other topics, such as the comparison between the pioneer front of colonization and zones of earlier occupation, as well as the effect of meteorology on the expansion of the farming system. The model produced an expansion pattern that corresponds geographically and chronologically to the expansion suggested by the archaeological evidence. The exploration of socio-economic outputs permitted the formulation of new hypotheses that could not be made from the archaeological record alone. Even where there is a large gap between what is found in archaeology and what is produced by the model, this agent-based modelling approach helps raise new questions, adding new ideas and perspectives to the current state of research.
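The scalar-stress expansion mechanism described in this abstract can be sketched as a toy agent-based simulation. This is emphatically not the OBRESOC/BEAN code: the grid size, growth rule, and stress threshold below are illustrative assumptions, chosen only to show how household fission on a landscape produces a spreading settlement pattern.

```python
import random

def simulate(steps, size=10, stress_threshold=3, seed=42):
    """Toy household-fission model: when a cell's population exceeds the
    scalar-stress threshold, one household buds off to a free neighbour."""
    rng = random.Random(seed)
    pop = {(0, 0): 1}                      # founding hamlet
    for _ in range(steps):
        for cell, n in list(pop.items()):  # snapshot: new cells act next step
            pop[cell] = n + 1              # crude demographic growth
            if pop[cell] > stress_threshold:
                x, y = cell
                frees = [(x + dx, y + dy)
                         for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                         if (x + dx, y + dy) not in pop
                         and 0 <= x + dx < size and 0 <= y + dy < size]
                if frees:
                    pop[cell] -= 1
                    pop[rng.choice(frees)] = 1   # pioneer household
    return pop

settled = simulate(steps=6)
print(len(settled), "cells occupied")
```

Fitting such a model to data would then amount to comparing the simulated occupation map against the geographic criteria mentioned above.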
Guillot, Bernard. "Réalisation d'un outil autonome pour l'écriture et l'interrogation de systèmes de gestion de bases de données et de connaissances sur une machine multiprocesseur : évolution du concept de bases de données vers la manipulation d'objets image et graphique". Compiègne, 1986. http://www.theses.fr/1986COMPI219.
Duque, Hector. "Conception et mise en oeuvre d'un environnement logiciel de manipulation et d'accès à des données réparties : application aux grilles d'images médicales : le système DSEM / DM2". Lyon, INSA, 2005. http://theses.insa-lyon.fr/publication/2005ISAL0050/these.pdf.
Our vision, in this thesis, is that of a bio-medical grid as a partner of hospital information systems: a platform for sharing information as well as computing resources. We therefore aim at (i) providing transparent access to huge distributed medical data sets, (ii) querying these data by their content, and (iii) sharing computing resources within the grid. Assuming the existence of a grid infrastructure, we propose a multi-layered architecture (Distributed Systems Engines, DSE). This architecture allows us to design high-performance distributed systems that are highly extensible, scalable and open, and it ensures the connection between the grid, data storage systems, and medical platforms. The conceptual design of the architecture assumes a horizontal definition of each layer and is based on a multi-process structure. This structure enables the exchange of messages between processes using the message-passing paradigm. These processes and messages allow one to define entities of a higher level of semantic significance, which we call drivers and which, instead of single messages, deal with different kinds of transactions: queries, tasks and requests. We thus define different kinds of drivers for each kind of transaction and, at a higher level, define services as aggregations of drivers. This architectural framework of drivers and services eases the design of the components of a distributed system (DS), which we call engines, as well as the extensibility and scalability of the DS.
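The driver/service layering described in this abstract can be sketched in a few lines. The class and field names below are hypothetical (the abstract does not give the actual DSE interfaces); the sketch only shows the stated idea that a driver handles one kind of transaction and a service is an aggregation of drivers routing transactions to them.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    kind: str        # 'query', 'task' or 'request'
    payload: str

class Driver:
    """Handles every transaction of a single kind."""
    def __init__(self, kind):
        self.kind = kind
    def handle(self, txn):
        return f"{self.kind}-driver processed {txn.payload}"

class Service:
    """An aggregation of drivers, dispatching each transaction by kind."""
    def __init__(self, drivers):
        self.drivers = {d.kind: d for d in drivers}
    def dispatch(self, txn):
        return self.drivers[txn.kind].handle(txn)

svc = Service([Driver("query"), Driver("task"), Driver("request")])
print(svc.dispatch(Transaction("query", "find MRI series 42")))
```

In the real architecture these drivers would sit on top of message-passing processes rather than in-process method calls; the dispatch-by-transaction-kind structure is the point being illustrated.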
Carbonneaux, Yves. "Conception et réalisation d'un environnement informatique sur la manipulation directe d'objets mathématiques, l'exemple de Cabri-graphes". Phd thesis, Université Joseph Fourier (Grenoble), 1998. http://tel.archives-ouvertes.fr/tel-00004882.
Robidou, Sébastien. "Représentation de l'imperfection des connaissances dans les bases de situation des systèmes de commandement". Rouen, 1997. http://www.theses.fr/1997ROUES083.
Cruz, Christophe. "Intégration et manipulation de données hétérogènes au travers de scènes 3D dynamiques, évolutives et interactives : application aux IFC pour la gestion collaborative de projets de génie civil". Dijon, 2004. http://www.theses.fr/2004DIJOS035.
He, Wuwei. "Reactive control and sensor fusion for mobile manipulators in human robot interaction". Phd thesis, Université Paul Sabatier - Toulouse III, 2013. http://tel.archives-ouvertes.fr/tel-00979633.
Abdelmoula, Mariem. "Génération automatique de jeux de tests avec analyse symbolique des données pour les systèmes embarqués". Thesis, Nice, 2014. http://www.theses.fr/2014NICE4149/document.
One of the biggest challenges in hardware and software design is to ensure that a system is error-free. Small errors in reactive embedded systems can have disastrous and costly consequences for a project. Preventing such errors by identifying the most probable cases of erratic system behavior is quite challenging: tests in industry are, overall, non-exhaustive, while formal verification in scientific research often suffers from the combinatorial explosion problem. In this context we present a new approach for generating exhaustive test sets that combines the underlying principles of industrial testing techniques and academic formal verification. Our approach builds a generic model of the system under test according to the synchronous approach. The goal is to identify the optimal preconditions for restricting the state space of the model so that test generation can take place on significant subspaces only; all possible test sets are then generated from the extracted subspace preconditions. Our approach exhibits a simpler and more efficient quasi-flattening algorithm than existing techniques, and a useful compiled internal description for checking security properties and reducing the combinatorial explosion of the state space. It also provides a symbolic processing technique for numeric data that yields a more expressive and concrete test of the system. We have implemented our approach in a tool called GAJE. To illustrate this work, the tool was applied to verify security in an industrial contactless smart card project.
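The idea of restricting exhaustive test generation to a precondition-defined subspace can be sketched in miniature. This is a hedged illustration only: GAJE's actual algorithms are symbolic, whereas the sketch below enumerates a small finite input space directly, and the two-input reactive component is a hypothetical example.

```python
from itertools import product

def generate_tests(domains, precondition):
    """Yield every input vector in the cross-product of the domains
    that satisfies the precondition (i.e. lies in the significant subspace)."""
    for vector in product(*domains.values()):
        case = dict(zip(domains.keys(), vector))
        if precondition(case):
            yield case

# Hypothetical component: a counter that must never be enabled while in reset.
domains = {"reset": [0, 1], "enable": [0, 1], "count": range(4)}
tests = list(generate_tests(domains,
                            lambda c: not (c["reset"] and c["enable"])))
print(len(tests))  # 12 of the 16 raw combinations survive the precondition
```

A symbolic treatment, as in the thesis, would represent the precondition as constraints over the numeric data instead of filtering an explicit enumeration, which is what keeps the state-space explosion in check.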
Schmitt, Alan. "Analyses Statiques pour Manipulations de Données Structurées Hiérarchiquement". Habilitation à diriger des recherches, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00637917.
Marcel, Patrick. "Manipulations de données multidimensionnelles et langages de règles". Lyon, INSA, 1998. http://www.theses.fr/1998ISAL0093.
Texto completoThis works is a contribution to the study of the manipulations in data warehouses. In the first part, we present a state of the art about multidimensional data manipulation languages in systems dedicated to On-Line analytical Processing (OLAP systems). We point out interesting combinations that haven't been studied. These conclusions are used in the second part to propose a simple rule-based language allowing specifying typical treatments arising in OLAP systems. In a third part, we illustrate the use of the language to describe OLAP treatments in spreadsheets, and to generate semi automatic spreadsheet programs
Ghozzi, Faiza. "Conception et manipulation de bases de données dimensionnelles à contraintes". Phd thesis, Université Paul Sabatier - Toulouse III, 2004. http://tel.archives-ouvertes.fr/tel-00549421.
Badaoui, Saïd. "Base d'images médicales, le modèle de données, l'interface de manipulation". Paris 11, 1990. http://www.theses.fr/1990PA112220.
Dennebouy, Yves. "Un langage visuel pour la manipulation de données /". Lausanne, 1993. http://library.epfl.ch/theses/?nr=1182.
Texto completoRenault, Thomas. "Three essays on the informational efficiency of financial markets through the use of Big Data Analytics". Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E009/document.
The massive increase in the availability of data generated every day by individuals on the Internet has made it possible to address the predictability of financial markets from a different perspective. Without claiming to offer a definitive answer to a debate that has persisted for forty years between partisans of the efficient market hypothesis and behavioral finance academics, this dissertation aims to improve our understanding of the price formation process in financial markets through the use of Big Data analytics. More precisely, it analyzes: (1) how to measure intraday investor sentiment and determine the relation between investor sentiment and aggregate market returns, (2) how to measure investor attention to news in real time and identify the relation between investor attention and the price dynamics of large-capitalization stocks, and (3) how to detect suspicious behaviors that could undermine the informational role of financial markets, and determine the relation between the level of posting activity on social media and small-capitalization stock returns. The first essay proposes a methodology to construct a novel indicator of investor sentiment by analyzing an extensive dataset of user-generated content published on the social media platform StockTwits. Examining users' self-reported trading characteristics, the essay provides empirical evidence of sentiment-driven noise trading at the intraday level, consistent with behavioral finance theories. The second essay proposes a methodology to measure investor attention to news in real time by combining data from traditional newswires with the content published by experts on the social media platform Twitter. The essay demonstrates that news garnering high attention leads to large and persistent changes in trading activity, volatility, and price jumps. It also demonstrates that the pre-announcement effect is reduced when corrected newswire timestamps are considered.
The third essay provides new insights into the empirical literature on market manipulation of small-capitalization stocks by examining a novel dataset of messages published on the social media platform Twitter. It proposes a novel methodology to identify suspicious behaviors by analyzing interactions between users, and provides empirical evidence of suspicious stock recommendations on social media that could be related to market manipulation. The conclusions of the essay should reinforce regulators' efforts to better control social media and highlight the need for better education of individual investors.
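The core of a message-level sentiment indicator like the one in the first essay can be sketched with a lexicon approach. The thesis builds a far richer classifier from StockTwits data; the word lists, message samples, and scoring formula below are illustrative assumptions only.

```python
import re

# Hypothetical lexicons; the thesis's actual classifier is data-driven.
BULLISH = {"buy", "long", "bullish", "up"}
BEARISH = {"sell", "short", "bearish", "down"}

def sentiment_index(messages):
    """Return (#bullish - #bearish) / #classified, in [-1, 1]."""
    bull = bear = 0
    for msg in messages:
        words = set(re.findall(r"[a-z]+", msg.lower()))
        if words & BULLISH:
            bull += 1
        elif words & BEARISH:
            bear += 1
    classified = bull + bear
    return (bull - bear) / classified if classified else 0.0

msgs = ["$AAPL going up, buy!", "short $TSLA", "bullish on $MSFT", "lunch time"]
print(sentiment_index(msgs))
```

Aggregating such a score over intraday windows would yield the kind of high-frequency sentiment series whose relation to market returns the essay studies.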
Jannin, Pierre. "Modelisation, visualisation et manipulation de donnees volumiques par arbre octal en imagerie medicale". Rennes 1, 1988. http://www.theses.fr/1988REN10055.