Doctoral dissertations on the topic "Approche orientée sur les données"
Create accurate references in APA, MLA, Chicago, Harvard and many other citation styles
Consult the top 50 doctoral dissertations on the topic "Approche orientée sur les données".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, when these details are available in the metadata.
Browse doctoral dissertations from a wide variety of disciplines and compile your bibliography accordingly.
Ait, Lahcen Ayoub. "Développement d'Applications à Base de Composants avec une Approche Centrée sur les Données et dans une Architecture Orientée Service et Pair-à-Pair : Spécification, Analyse et Intergiciel". PhD thesis, Université Nice Sophia Antipolis, 2012. http://tel.archives-ouvertes.fr/tel-00766329.
Lecler, Philippe. "Une approche de la programmation des systèmes distribués fondée sur la fragmentation des données et des calculs et sa mise en oeuvre dans le système GOTHIC". Rennes 1, 1989. http://www.theses.fr/1989REN10103.
Meziane, Madjid. "Développement d'une approche orientée objet actif pour la conception de systèmes d'information". Lyon, INSA, 1998. http://www.theses.fr/1998ISAL0124.
Information systems (IS) present two closely dependent aspects: a structural (static) aspect and a behavioral (dynamic) one. Working on these two aspects separately complicates the analysis, design and evolution of information systems. Even object-oriented design methods, which partially integrate system behavior at the structural level (through methods), cannot fully capture the dynamic dimension of an IS. Management rules (integrity constraints, derivation rules and active rules), which describe the activities of the IS and their execution conditions, are generally scattered across the multiple models of a method. Within the object-oriented approach, this work proposes the use of the active object concept as a modeling entity, because it is an ideal support for describing not only the data and treatment parts of objects but also the set of management rules. The active object concept eases IS design by effectively integrating the Event-Condition-Action mechanism at the heart of active databases. Introducing this concept requires new models to describe and express the passive and active behavior of an IS; for that reason, we propose an extension of state diagrams. Nevertheless, the large number of rules produced at the conceptual level requires partitioning them, which we achieve through rule stratification. Finally, on the tooling side, new functionality had to be added to CASE tools.
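As an illustration of the Event-Condition-Action mechanism this abstract builds on, here is a minimal sketch of an active object firing ECA rules; the rule format and the stock-item example are our own assumptions, not the thesis's design:

```python
# Minimal Event-Condition-Action (ECA) sketch: an active object reacts to
# events by firing every rule whose condition holds on its current state.

class EcaRule:
    def __init__(self, event, condition, action):
        self.event = event          # triggering event name
        self.condition = condition  # predicate over the object state
        self.action = action        # state-changing callable

class ActiveObject:
    def __init__(self, state, rules):
        self.state = state
        self.rules = rules

    def notify(self, event):
        for rule in self.rules:
            if rule.event == event and rule.condition(self.state):
                rule.action(self.state)

# Example: a stock item that reorders itself when the quantity drops too low.
rules = [EcaRule("withdrawal",
                 lambda s: s["qty"] < s["threshold"],
                 lambda s: s.update(qty=s["qty"] + s["reorder"]))]
item = ActiveObject({"qty": 4, "threshold": 5, "reorder": 20}, rules)
item.notify("withdrawal")
print(item.state["qty"])  # 24
```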
Bernardi, Fabrice. "Conception de bibliothèques hiérarchisées de modèles réutilisables selon une approche orientée objet". Corte, 2002. http://www.theses.fr/2002CORT3068.
Hamon, Catherine. "Conception orientée objet d'une base de données éditoriale : implantation sur le SGBDOO 02". Nancy 1, 1992. http://www.theses.fr/1992NAN10319.
Lamontagne, Philippe. "Modélisation spatio-temporelle orientée par patrons avec une approche basée sur individus". Mémoire, École de technologie supérieure, 2009. http://espace.etsmtl.ca/64/1/LAMONTAGNE_Philippe.pdf.
Ghemtio, Wafo Léo Aymar. "Simulation numérique et approche orientée connaissance pour la découverte de nouvelles molécules thérapeutiques". Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10103/document.
Therapeutic innovation has traditionally benefited from the combination of experimental screening and molecular modelling. In practice, however, the latter is often limited by the shortage of structural and biological information. Today the situation has completely changed with the high-throughput sequencing of the human genome and the advances realized in the three-dimensional determination of protein structures. This gives access to an enormous amount of data which can be used to search for new treatments for a large number of diseases. In this respect, computational approaches have been used for high-throughput virtual screening (HTVS) and offer an alternative or a complement to the experimental methods, saving time in the discovery of new treatments. However, most of these approaches suffer from the same limitations. One of them is the cost and computing time required to estimate the binding of all the molecules of a large data bank to a target, which can be considerable in a high-throughput context; the accuracy of the results obtained is another evident challenge in the domain, and the need to manage a large amount of heterogeneous data is particularly crucial. To overcome the current limitations of HTVS and to optimize the first stages of the drug discovery process, I set up an innovative methodology presenting two advantages. First, it makes it possible to manage an important mass of heterogeneous data and to extract knowledge from it. Second, it distributes the necessary calculations on a grid computing platform containing several thousand processors. The whole methodology is integrated into a multiple-step virtual screening funnel. The purpose is to take into account, in the form of constraints, the knowledge available about the problem at hand, in order to optimize the accuracy of the results and the costs, in terms of time and money, at the various stages of high-throughput virtual screening.
Midouni, Sid Ahmed Djallal. "Une approche orientée service pour la recherche sémantique de contenus multimédias". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI056/document.
Multimedia data sources from various fields (medicine, tourism, trade, art and culture, etc.) have become essential on the Web. Accessing multimedia data in distributed systems poses new challenges due to many system parameters: volume, diversity of interfaces, representation formats, location, etc. In addition, the growing need of users and applications to incorporate semantics into information retrieval raises new issues. To take this new complexity into account, our research focuses on data integration solutions based on Web services. In this thesis, we propose a service-oriented approach for the semantic search of multimedia content, called SeSaM (Semantic Search of Multimedia content). SeSaM rests on the definition of a new pattern of services for accessing multimedia content, the MaaS services (Multimedia as a Service), and follows a two-phase process: description and discovery of MaaS services. For the MaaS service description, we have defined the SA4MaaS language (Semantic Annotation for MaaS services), an extension of SAWSDL (a W3C recommendation). The main idea of this language is to integrate, in addition to business-domain semantics, the semantics of multimedia information in the MaaS service description. For MaaS service discovery, we have proposed a new matchmaker, MaaS-MX (MaaS services Matchmaker), adapted to the MaaS service description model. MaaS-MX is composed of two essential steps: domain matching and multimedia matching. Domain matching compares the business-domain description of MaaS services and the query, whereas multimedia matching compares their multimedia descriptions. The approach has been implemented and evaluated in two different domains, medicine and tourism; the results indicate that using both domain and multimedia matching considerably improves the performance of multimedia retrieval systems.
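The two-step matching idea (domain matching, then multimedia matching) can be pictured with a toy scorer; the set-overlap similarity and the weights below are illustrative assumptions, not the actual MaaS-MX algorithm:

```python
# Toy two-step matchmaker: combine a business-domain score with a
# multimedia score, as MaaS-MX does with far richer semantic matching.

def overlap(query_terms, service_terms):
    # crude Jaccard similarity as a stand-in for semantic matching
    return len(query_terms & service_terms) / len(query_terms | service_terms)

def match(query, service, w_domain=0.6, w_media=0.4):
    return (w_domain * overlap(query["domain"], service["domain"])
            + w_media * overlap(query["media"], service["media"]))

service = {"domain": {"radiology", "lung"}, "media": {"image", "dicom"}}
query = {"domain": {"lung", "tumor"}, "media": {"image"}}
print(round(match(query, service), 2))  # 0.4
```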
Tanasescu, Adrian. "Vers un accès sémantique aux données : approche basée sur RDF". Lyon 1, 2007. http://www.theses.fr/2007LYO10069.
The thesis mainly focuses on information retrieval through the querying of RDF documents. We propose an approach able to provide complete and pertinent answers to a user-formulated SPARQL query. The approach mainly consists of (1) determining, through a similarity measure and the associated ontological knowledge, whether two RDF graphs are contradictory, and (2) building pertinent answers by combining statements belonging to non-contradicting RDF graphs that partially answer a given query. We also present an RDF storage and querying platform, named SyRQuS, whose query answering plan is entirely based on the proposed querying approach. SyRQuS is a Web-based platform that mainly provides users with a querying interface where queries can be formulated in SPARQL.
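For readers unfamiliar with SPARQL over RDF, the kind of query such a platform answers looks as follows; this sketch uses the rdflib Python library and toy data, not SyRQuS itself:

```python
from rdflib import Graph  # pip install rdflib

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:doc1 ex:author "Tanasescu" ; ex:topic ex:RDF .
ex:doc2 ex:author "Someone"   ; ex:topic ex:SQL .
""", format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?doc ?author
WHERE { ?doc ex:topic ex:RDF ; ex:author ?author . }
"""
for row in g.query(query):
    print(row.doc, row.author)  # http://example.org/doc1 Tanasescu
```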
Ghemtio, Leo. "Simulation numérique et approche orientée connaissance pour la découverte de nouvelles molécules thérapeutiques". PhD thesis, Université Henri Poincaré - Nancy I, 2010. http://tel.archives-ouvertes.fr/tel-00609018.
Haudot, Luc. "Une approche orientée utilisateur pour la conception de systèmes coopératifs en ordonnancement de production". Toulouse, INSA, 1996. http://www.theses.fr/1996ISAT0015.
Pełny tekst źródłaPoulain, Thibault. "Une approche orientée sémantique pour l'interrogation d'une coopération de systèmes d'information basée sur des ontologies". Dijon, 2009. http://www.theses.fr/2009DIJOS037.
This research task fits in the field of the interoperability of information systems. A cooperation system based on ontologies, called OWSCIS (Ontology and Web Service based Cooperation of Information Sources), is proposed. It allows a group of information systems to cooperate, using ontologies to express the semantics of shared information and to provide transparent access to the data managed by the cooperation. The cooperation architecture relies on a domain ontology and on local ontologies mapped together. A study of existing cooperation systems and of ontology-mapping tools and methods is carried out to identify their limits. The work developed focuses on the three main elements of the cooperation: 1) the definition of the cooperation architecture, especially the description of a knowledge base holding the reference ontology used as a pivot inside the cooperation; 2) the definition of a mapping methodology between the local ontologies and the reference ontology; 3) the methodology for querying the cooperation. The mapping methodology combines complementary comparison methods to discover matches between a local ontology and the reference ontology, relying on structural and individual information from those ontologies. The methodology is implemented and experimental results are described. This thesis provides a solution to the problem of interoperation of information systems based on the exploitation of semantic information described by ontologies.
Amy, Matthieu. "Systèmes résilients pour l'automobile : d'une approche à composants à une approche à objets de la tolérance aux fautes adaptative sur ROS". Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0014.
Just as the mobile phone evolved into the smartphone, cars have gradually turned into smart cars. Advanced Driver Assistance Systems (ADAS), infotainment and personalization of the vehicle are clearly key aspects of attractiveness for customers today. Connected vehicles have led manufacturers to update embedded software remotely, promoting its maintainability and the subsequent addition of features later in the lifetime of a car. In this context, the AUTOSAR consortium, a group of major car manufacturers, has designed a new software platform to facilitate remote updates and online modification of such embedded systems. However, with the increasing complexity of embedded software systems, it becomes mandatory to maintain dependability in operation despite unforeseen changes. The dependability mechanisms must therefore also be adapted and updated to ensure the resilience of the system, namely the persistence of dependability when facing changes. Fault Tolerance Mechanisms (FTMs), the means ensuring a nominal or an acceptable degraded service in the presence of faults, must also adapt to changes in the application's operational context (changes in the fault model, in the characteristics of the application, or in the available resources). This ability to adapt FTMs is called Adaptive Fault Tolerance (AFT). The contributions of this thesis fall within this context of evolution and adaptivity of critical embedded software. We propose approaches for developing safe embedded systems whose FTMs can adapt to the operational context in different ways, through coarse-grain or fine-grain modifications of their implementation at runtime, so as to minimize the impact on the application. A first solution is based on a substitutable-component approach: we break FTMs down according to a Before-Proceed-After design scheme grouping, respectively, the fault tolerance actions performed before a functional action of the application, the interaction with the application itself, and the fault tolerance actions required after the action performed by the application. We implement this approach on ROS (Robot Operating System), a middleware for robotics that lets an application be implemented as a component graph. We then propose a second solution in which we refine the granularity of the FTM components by first categorizing the individual dependability actions they contain. This enables an elementary action to be substituted instead of a component as a whole, and solves a resource problem that appeared in the substitutable-component approach: since a component is mapped to a process, FTMs consume more resources, which are obviously limited in embedded systems. To this end, we design a solution based on an object-based scheduling approach: FTMs are designed as an object graph, and the basic fault tolerance actions are mapped to objects scheduled within the FTM component. This second approach was also implemented on ROS. Finally, we make a comparative analysis of the two software execution platforms of the automotive industry, the AUTOSAR Classic Platform and the AUTOSAR Adaptive Platform, which is still under development today, and examine the compatibility between these two runtime supports and our approaches for designing resilient systems based on adaptive fault tolerance.
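The Before-Proceed-After decomposition described here can be sketched as three substitutable hooks around a functional call; the checkpoint/acknowledge actions are illustrative placeholders, not the thesis's ROS implementation:

```python
# Before-Proceed-After skeleton of a fault tolerance mechanism (FTM):
# adapting the FTM means substituting one of the three parts (or, at finer
# grain, one elementary action inside a part) at runtime.

class FTM:
    def before(self, request):
        print("checkpoint state before", request)  # e.g. save a recovery point

    def proceed(self, app, request):
        return app(request)                        # the functional action itself

    def after(self, result):
        print("acknowledge", result)               # e.g. validate / notify replicas
        return result

    def execute(self, app, request):
        self.before(request)
        return self.after(self.proceed(app, request))

print(FTM().execute(lambda x: x * 2, 21))  # 42
```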
Mihaita, Adriana. "Approche probabiliste pour la commande orientée évènement des systèmes stochastiques à commutation". Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENT036/document.
Hybrid systems are dynamical systems characterized by a dual behaviour, a continuous interaction between a discrete and a continuous functioning part. The centre of our work is a particular class of hybrid systems, more specifically the stochastic switching systems, which we model using continuous-time Markov chains and differential equations. The random behaviour of such systems requires a special command which adapts to the arbitrary events that can completely change the evolution of the system. We chose an event-based control policy which is triggered only when necessary (on an unforeseen event, for example when a threshold is reached) and until certain functioning conditions are met (the system returns to its normal state). Our approach aims to develop a probabilistic model that computes a performance criterion (in this case the energy of the system) for the proposed control policy. We start by proposing a discrete-event simulation of the controlled stochastic switching system, which gives us the opportunity to apply a direct optimisation of the control command. It also allows us to compare the results with the ones obtained from the analytical models we built for the event-based control. An analytical model for computing the energy consumed by the system to apply the control is designed using the exit probabilities of the control region and the sojourn times of the Markov chain before and after reaching the control limits. The last part of this work presents the results obtained when comparing the analytical and the simulation methods.
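The event-based control loop described above can be mimicked with a tiny discrete-event simulation; the two-mode chain, rates, drifts and threshold below are invented for illustration, not the thesis model:

```python
# Two-mode continuous-time Markov chain driving a continuous state x;
# control fires only when |x| crosses a threshold (the "event"), and the
# energy spent by the control is accumulated as a performance criterion.
import random

rates = {0: 0.5, 1: 0.8}     # mode-switching rates of the CTMC
drift = {0: +1.0, 1: -1.5}   # continuous dynamics in each mode
x, mode, t, energy = 0.0, 0, 0.0, 0.0

while t < 50.0:
    dwell = random.expovariate(rates[mode])  # time until the next mode switch
    x += drift[mode] * dwell
    t += dwell
    mode = 1 - mode
    if abs(x) > 5.0:                         # event: threshold crossed
        energy += abs(x) - 5.0               # cost of pushing x back
        x = 5.0 if x > 0 else -5.0           # control returns x to the boundary
print(f"control energy over the run: {energy:.2f}")
```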
Benjelloun, Omar. "Active XML : une approche des services Web centrée sur les données". Paris 11, 2004. http://www.theses.fr/2004PA112087.
This thesis introduces Active XML (AXML, for short), a declarative framework that harnesses Web services for distributed data management, and is put to work in a peer-to-peer architecture. An AXML document is an XML document that may contain embedded calls to Web services, whose invocation enriches the document. An AXML service is a Web service that exchanges AXML documents. An AXML "peer" is a repository of AXML documents. On the one hand, it acts as a client, by invoking the service calls embedded in its documents. On the other hand, a peer acts as a server, by providing AXML services that can be declaratively specified as queries or updates over the AXML documents of its repository. The AXML approach allows for gracefully combining stored information with data defined in an intensional manner (as service calls). The fact that AXML peers can exchange a mix of materialized and intensional data (via AXML documents) leads to a very powerful distributed data management paradigm. The AXML approach leads to a number of important problems that are studied in the thesis. First, we address the issue of controlling the exchange of AXML data. We propose to use declarative schema specifications, and provide algorithms to statically enforce them. Second, we propose techniques for the "lazy evaluation" of queries on AXML documents, which detect which embedded service calls may contribute to query answers. An implementation of AXML peers compliant with W3C standards is also described in the thesis.
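The central idea, XML documents that grow by invoking the service calls they embed, fits in a few lines; the <sc> element name and the local service registry are assumptions of this sketch, not the actual AXML syntax:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<newspaper><title>Le Monde</title>"
    "<sc service='getTemperature' arg='Paris'/></newspaper>")

# Stand-in for remote Web service endpoints.
services = {"getTemperature":
            lambda city: ET.fromstring(f"<temp city='{city}'>15</temp>")}

def materialize(root):
    # Replace each embedded service call by the fragment its invocation
    # returns, enriching the document in place.
    for parent in list(root.iter()):
        for call in list(parent.findall("sc")):
            fragment = services[call.get("service")](call.get("arg"))
            parent.remove(call)
            parent.append(fragment)

materialize(doc)
print(ET.tostring(doc, encoding="unicode"))
# <newspaper><title>Le Monde</title><temp city="Paris">15</temp></newspaper>
```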
Hajji, Hicham. "Gestion des risques naturels : une approche fondée sur l'intégration des données". Lyon, INSA, 2005. http://theses.insa-lyon.fr/publication/2005ISAL0039/these.pdf.
A huge amount of geographic data is available, many organizations having collected geographic data for centuries; some of it is still in the form of paper maps or traditional files and databases, while, with the emergence of new software and data-storage technologies, some has been digitized and is stored in modern GIS systems. Too often, however, reusing these data for new applications is a nightmare, due to the diversity of data sets and the heterogeneity of existing systems in terms of data modeling concepts, data encoding techniques, obscure data semantics, storage structures, access functionality, etc. Such difficulties are especially common in natural hazards information systems. In order to support advanced natural hazards management based on heterogeneous data, this thesis develops a new approach to the integration of semantically heterogeneous geographic information, capable of addressing both the spatial and the thematic aspects of geographic information. The approach is based on the OpenGIS standard, used as a common model for data integration. The proposed methodology takes into consideration a large number of the aspects involved in building and modeling a natural hazards management information system. Another issue addressed in this thesis is the design of an ontology for natural hazards. Ontology design has been extensively studied in recent years; throughout this work we propose an ontology to deal with the semantic heterogeneity existing between the different actors and to model the existing knowledge on this issue. The ontology contains the main concepts and the relationships between these concepts, expressed in the OWL language.
Mihaita, Adriana. "Approche probabiliste pour la commande orientée évènement des systèmes stochastiques à commutation". PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00770325.
Ahmad, Houda. "Une approche matérialisée basée sur les vues pour l'intégration de documents XML". PhD thesis, Grenoble 1, 2009. http://www.theses.fr/2009GRE10086.
Semi-structured data play an increasing role in the development of the Web through the use of XML. However, the management of semi-structured data poses specific problems because semi-structured data, contrary to classical databases, do not rely on a predefined schema. The schema of a document is contained in the document itself, and similar documents may be represented by different schemas. Consequently, the techniques and algorithms used for querying or integrating these data are more complex than those used for structured data. The objective of our work is the integration of XML data using the principles of Osiris, a prototype KB-DBMS in which views are a central concept. In this system, a family of objects is defined by a hierarchy of views, where a view is defined by its parent views and its own attributes and constraints. Osiris belongs to the family of Description Logics; the minimal view of a family of objects is assimilated to a primitive concept and its other views to defined concepts. An object of a family satisfies some of its views. For each family of objects, Osiris builds an n-dimensional classification space by analysing the constraints defined in all of its views. This space is used for object classification and indexing. In this thesis we study the contribution of the main features of Osiris - classification, indexing and semantic query optimization - to the integration of XML documents. For this purpose we produce a target schema (an abstract XML schema) which represents an Osiris schema; every document satisfying a source schema (a concrete XML schema) is rewritten in terms of the target schema before the values of its entities are extracted. The objects corresponding to these entities are then classified and indexed. The Osiris mechanism for semantic query optimization can then be used to extract the objects of interest of a query.
Ahmad, Houda. "Une approche matérialisée basée sur les vues pour l'intégration de documents XML". PhD thesis, Université Joseph Fourier (Grenoble), 2009. http://tel.archives-ouvertes.fr/tel-00957148.
Lacouture, Jérôme. "Ingénierie logicielle orientée service : une contribution à l'adaptation dynamique basée sur une approche mixte composant/agent". Pau, 2008. http://www.theses.fr/2008PAUU3011.
The evolution of distributed systems is taking on a new dimension with the development of new technologies (service-oriented architectures, grid computing, nomadic and ubiquitous computing). Within such environments, the software architecture of the system evolves at runtime, that is, during the exploitation phase of the development cycle. Consequently, the persistence of services and these dynamic aspects raise new challenges and lead to reconsidering the inherent problems of reusing existing services and adapting them. Adapting, integrating and coordinating the available services "on the fly", and reacting dynamically to the evolution of the systems, appear as central research concerns today. The objectives of the work we present around the CompAA approach fit into this context and propose a path toward contextual adaptation, relative to environmental conditions (quality of service, availability on the network), that is as dynamic and autonomous as possible, through the discovery of available services. To that end, our contributions are organized around two main propositions: 1) a model of adaptable components, relying on the principles of abstraction and variability, as well as on a semantic definition in terms of functional and non-functional properties that allows automatic interpretation by software agents; 2) a process of dynamic adaptation implementing the proposed model. The specified process covers the stages going from the analysis of needs to the adaptation of components, by way of component discovery and selection, and various policies allow an increased level of adaptability within the process. A dominant aspect emphasized in this thesis lies in the originality of an approach which aims at combining the known advantages of two paradigms: components and agents. For us, there is a real interest in specifying entities that possess the structuring and reuse qualities of software components while evolving in an autonomous and flexible way, in the manner of software agents. The field in which our propositions are tried out is e-learning, more particularly through our participation in the European project ELeGI (European Learning Grid Infrastructure). Through various learning situations, the participants evolve by sharing their knowledge in order to progress individually and collectively; in this context, the knowledge and the needs of each participant are in perpetual evolution. The CompAA model finds its natural place in this kind of activity and guarantees a certain transparency to users, as well as an optimal quality of service, by endowing the system with more autonomous and self-adaptable entities.
Mebarki, Nasser. "Une approche d'ordonnancement temps réel basée sur la sélection dynamique de règles de priorité". Lyon 1, 1995. http://www.theses.fr/1995LYO10043.
Pełny tekst źródłaAit, Brahim Amal. "Approche dirigée par les modèles pour l'implantation de bases de données massives sur des SGBD NoSQL". Thesis, Toulouse 1, 2018. http://www.theses.fr/2018TOU10025/document.
Pełny tekst źródłaLe résumé en anglais n'a pas été communiqué par l'auteur
Denoual, Franck. "Développement d'une plate-forme logicielle orientée objet pour la décompression et l'édition vidéo sur noyau temps-réel". Rennes 1, 2001. http://www.theses.fr/2001REN10107.
Marguin-Lortic, Marie-Claude. "Les Données radar en télédétection : approche théorique et application sur la région niortaise". Paris, EHESS, 1987. http://www.theses.fr/1987EHES0014.
Radar data, as yet not very well known, are rarely used for land systems and land cover studies. The objectives of this thesis are: 1) to familiarize remote sensing users with this new kind of data, 2) to increase the understanding of backscattering through the analysis and interpretation of a Seasat image (21.08.1978) in the south of the Deux-Sèvres department (France). The performance of the Seasat SAR is tested using three different approaches: the land systems, the land units, and the forestry units and cultivated fields. A comparison is made between the Seasat SAR and the Landsat MSS in order to show their complementarity.
Marcenac, Pierre. "Eddi : apports pour les environnements de développement de didacticiels : modélisation des stratégies tutorielles basée sur une approche structurelle des connaissances". Nice, 1990. http://www.theses.fr/1990NICE4422.
Benjamin, Catherine. "L'affectation du travail dans les exploitations agricoles : approche microéconomique et application sur données françaises". Paris 1, 1993. http://www.theses.fr/1993PA010051.
Pełny tekst źródłaNguyen, Thi Dieu Thu. "Une approche basée sur LD pour l'interrogation de données relationnelles dans le Web sémantique". Nice, 2008. http://www.theses.fr/2008NICE4007.
The Semantic Web is a new Web paradigm that provides a common framework for data to be shared and reused across applications, enterprises and community boundaries. The biggest problem we face right now is a way to "link" information coming from different sources that are often heterogeneous both syntactically and semantically. Today much information is stored in relational databases, so data integration from relational sources into the Semantic Web is in high demand. The objective of this thesis is to provide methods and techniques to address this problem. It proposes an approach based on a combination of ontology-based schema representation and description logics. Database schemas in the approach are designed using the ORM methodology; the stability and flexibility of ORM facilitate the maintenance and evolution of integration systems. A new web ontology language and its logical foundation are proposed in order to capture the semantics of relational data sources while still assuring decidable and automated reasoning over information from the sources. An automatic translation of ORM models into ontologies is introduced to capture the data semantics without laborious and error-prone manual work. This mechanism foresees the coexistence of other sources, such as hypertext, integrated into the Semantic Web environment. This thesis constitutes an advance in several fields, namely data integration, ontology engineering, description logics, and conceptual modeling, and is hoped to provide a foundation for further investigations of data integration from relational sources into the Semantic Web.
Ségura-Devillechaise, Marc. "Traitement par aspects des problèmes d'évolution logicielle dans les caches Web". Nantes, 2005. http://www.theses.fr/2005NANT2148.
This thesis addresses the problem of the number of intermediaries over the Internet. Internet performance motivates the need for intermediaries that reduce network latency, and the multiplication of services available over the Internet fuels their ever-growing number: every service provider deploys - or requests subcontractors to deploy - machines reducing the latency of the services he provides, and each specific service requires a particular replication strategy, consequently increasing the number of intermediaries. To solve this issue, we propose to build an adaptable Web cache. We propose to use aspect-oriented programming to turn a legacy Web cache, Squid, into an open Web cache. Aspects, woven on the fly, are used to build the interface between the cache and the adaptation. The main advantage of this approach is to delay the specification of the adaptation interface up to the time when programmers are ready to design it, that is, when they are programming the adaptation. In the absence of an appropriate aspect system capable of supporting our approach, we devised our own. According to our evaluation, Arachne, our aspect system, makes it possible to turn Squid into an open Web cache. We conclude that our approach could put an end to the multiplication of intermediaries over the Internet: speeding up the deployment of adaptations of existing replication strategies would increase Internet performance while sharing the investment in its infrastructure.
Al-Najdi, Atheer. "Une approche basée sur les motifs fermés pour résoudre le problème de clustering par consensus". Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4111/document.
Clustering is the process of partitioning a dataset into groups, so that the instances in the same group are more similar to each other than to instances in any other group. Many clustering algorithms have been proposed, but none of them proves to provide good-quality partitions in all situations. Consensus clustering aims to enhance the clustering process by combining different partitions obtained from different algorithms to yield a better-quality consensus solution. In this work, a new consensus clustering method, called MultiCons, is proposed. It uses the frequent closed itemset mining technique in order to discover the similarities between the different base clustering solutions. The identified similarities are presented in the form of clustering patterns, each of which defines the agreement between a set of base clusters in grouping a set of instances. By dividing these patterns into groups based on the number of base clusters that define the pattern, MultiCons generates a consensus solution from each group, resulting in multiple consensus candidates. These different solutions are presented in a tree-like structure, called ConsTree, that facilitates understanding the process of building the multiple consensuses, as well as the relationships between the data instances and their structuring in the data space. Five consensus functions are proposed in this work in order to build a consensus solution from the clustering patterns: approach 1 simply merges any intersecting clustering patterns, while approach 2 can either merge or split intersecting patterns based on a proposed measure, called the intersection ratio.
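To convey the consensus idea without the full closed-itemset machinery, here is a deliberately naive stand-in: instances are linked when a majority of base clusterings co-cluster them, and the connected components form the consensus (data and threshold are invented for illustration):

```python
from itertools import combinations

base = [  # three base clusterings of six instances (one label per instance)
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
]
n = len(base[0])

# Link two instances when a majority of base clusterings put them together.
adj = {i: set() for i in range(n)}
for i, j in combinations(range(n), 2):
    votes = sum(labels[i] == labels[j] for labels in base)
    if votes / len(base) > 0.5:
        adj[i].add(j)
        adj[j].add(i)

# Consensus clusters = connected components of the agreement graph.
seen, clusters = set(), []
for i in range(n):
    if i in seen:
        continue
    stack, comp = [i], set()
    while stack:
        u = stack.pop()
        if u not in comp:
            comp.add(u)
            stack.extend(adj[u] - comp)
    seen |= comp
    clusters.append(sorted(comp))
print(clusters)  # [[0, 1, 2], [3, 4, 5]]
```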
Monlon-Borrel, Jean-Louis. "Systèmes interactifs d'aide à la décision en agriculture : de l'intérêt d'un raisonnement basé sur les modèles et d'une approche orientée objets". Toulouse 1, 1992. http://www.theses.fr/1992TOU10010.
Data processing in agriculture is characterised by a wide range of software packages, but few of them are really decision support systems. Artificial intelligence and expert systems have tried to make up for this lack, especially in the domain of diagnosis, which is often the determining key of any decision support system. Knowledge representation with production rules allows a declarative way of programming, but this approach turns out to be inadequate and restrictive, to such an extent that most expert systems were never operational. From then on, several proposals have been made for a new kind of decision support system including a model-based reasoning method, a high-level user interface and an object-oriented approach. All these proposals were validated by the Rentagri system for financial diagnosis in farm management.
Chaari, Anis. "Nouvelle approche d'identification dans les bases de données biométriques basée sur une classification non supervisée". PhD thesis, Université d'Evry-Val d'Essonne, 2009. http://tel.archives-ouvertes.fr/tel-00549395.
Aoun-Allah, Mohamed. "Le forage distribué des données : une approche basée sur l'agrégation et le raffinement de modèles". Thesis, Université Laval, 2006. http://www.theses.ulaval.ca/2006/23393/23393.pdf.
With the pervasive use of computers in all spheres of activity in our society, we are faced nowadays with an explosion of electronic data. This is why we need automatic tools able to analyze the data and provide us with relevant, summarized information with respect to some query. For this task, data mining techniques are generally used. However, these techniques require considerable computing time to analyze a huge volume of data; moreover, if the data is geographically distributed, gathering it on the same site in order to create a model (a classifier, for instance) can be time-consuming. To solve this problem, we propose to build several models, that is, one classifier per site. The rules constituting these classifiers are then aggregated and filtered based on statistical measures, and a validation process is carried out on samples from each site. The resulting model, called a meta-classifier, is, on the one hand, a prediction tool for any new (unseen) instance and, on the other hand, an abstract view of the whole data set. We base our rule-filtering approach on a confidence measure associated with each rule, which is computed statistically and then validated using the data samples (one from each site). We considered several validation techniques, as will be discussed in this thesis.
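The filtering step can be sketched directly: each rule's confidence is measured on validation samples, and low-confidence rules are dropped from the meta-classifier (the rule format, the data and the 0.7 cut-off are illustrative assumptions):

```python
# Rules learned on remote sites: (condition over a record, predicted class).
site_rules = [
    (lambda r: r["age"] > 50, "high_risk"),
    (lambda r: r["age"] <= 50, "low_risk"),
    (lambda r: r["smoker"], "high_risk"),
]

# Small validation samples gathered from each site.
samples = [
    {"age": 63, "smoker": True,  "class": "high_risk"},
    {"age": 34, "smoker": False, "class": "low_risk"},
    {"age": 45, "smoker": True,  "class": "low_risk"},
    {"age": 71, "smoker": False, "class": "high_risk"},
]

def confidence(rule):
    # Fraction of covered samples whose class matches the rule's prediction.
    cond, predicted = rule
    covered = [r for r in samples if cond(r)]
    return (sum(r["class"] == predicted for r in covered) / len(covered)
            if covered else 0.0)

meta_classifier = [r for r in site_rules if confidence(r) >= 0.7]
print(len(meta_classifier), "rules kept out of", len(site_rules))  # 2 of 3
```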
Boudoin, Pierre. "L'interaction 3D adaptative : une approche basée sur les méthodes de traitement de données multi-capteurs". PhD thesis, Université d'Evry-Val d'Essonne, 2010. http://tel.archives-ouvertes.fr/tel-00553369.
Benhalima, Djamel-Eddine. "Contribution à la conception d'un système d'analyse expérimentale SICOPE fondée sur une approche orientée-objet : Application à la communication graphique". Valenciennes, 1995. https://ged.uphf.fr/nuxeo/site/esupversions/3eea74bf-26cc-473d-9ec6-11405b54fb6c.
Pełny tekst źródłaZendjebil, Iman mayssa. "Localisation 3D basée sur une approche de suppléance multi-capteurs pour la réalité augmentée mobile en milieu extérieur". Thesis, Evry-Val d'Essonne, 2010. http://www.theses.fr/2010EVRY0024/document.
The democratization of mobile devices such as smartphones, PDAs or tablet PCs makes it possible to use Augmented Reality systems in large-scale environments. However, in order to implement such systems, many issues must be addressed. Among them, 3D localization is one of the most important: the estimation of the position and orientation (also called pose) of the viewpoint (of the camera or the user) allows the virtual objects to be registered over the visible part of the real world. In this work, we present an original localization system for large-scale environments which uses a markerless vision-based approach to estimate the camera pose, relying on natural feature points extracted from images. Since this type of method is sensitive to brightness changes, occlusions and sudden motion, which are likely to occur in outdoor environments, we use two additional sensors to assist the vision process. We aim to demonstrate the feasibility of such an assistance scheme in a large-scale outdoor environment: the intent is to provide a fallback system for the vision in case of failure, as well as to reinitialize the vision system when needed. The complete localization system aims to be autonomous and adaptable to different situations. We present an overview of our system, its performance, and some results obtained from experiments performed in an outdoor environment under real conditions.
Shahzad, Atif. "Une Approche Hybride de Simulation-Optimisation Basée sur la fouille de Données pour les problèmes d'ordonnancement". PhD thesis, Université de Nantes, 2011. http://tel.archives-ouvertes.fr/tel-00647353.
Parakh, Ousman Yassine Zaralahy. "Une nouvelle approche pour la détection des spams se basant sur un traitement des données catégorielles". Mémoire, Université de Sherbrooke, 2012. http://hdl.handle.net/11143/5753.
Shahzad, Muhammad Atif. "Une approche hybride de simulation-optimisation basée sur la fouille de données pour les problèmes d'ordonnancement". Nantes, 2011. http://archive.bu.univ-nantes.fr/pollux/show.action?id=53c8638a-977a-4b85-8c12-6dc88d92f372.
A data mining based approach to discover previously unknown priority dispatching rules for the job shop scheduling problem is presented. The approach is based upon seeking the knowledge that is assumed to be embedded in the efficient solutions provided by an optimization module built using tabu search. The objective is to discover the scheduling concepts using data mining, and hence to obtain a set of rules capable of approximating the efficient solutions for a job shop scheduling problem (JSSP). A data mining based scheduling framework is presented and implemented for a job shop problem with maximum lateness and mean tardiness as the scheduling objectives. The results obtained are very promising.
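A priority dispatching rule of the kind such mining could output can be sketched as a scoring function applied whenever a machine frees up; the composite score below is invented for illustration, not a rule learned in the thesis:

```python
# Dispatch the waiting job with the best (lowest) priority score.

def priority(job, now):
    # Blend due-date slack with processing time, mimicking the shape of a
    # rule distilled from efficient tabu-search schedules.
    slack = job["due"] - now - job["proc"]
    return 0.7 * slack + 0.3 * job["proc"]

queue = [
    {"name": "J1", "proc": 5, "due": 20},
    {"name": "J2", "proc": 3, "due": 9},
    {"name": "J3", "proc": 8, "due": 40},
]
now = 0
while queue:
    job = min(queue, key=lambda j: priority(j, now))
    queue.remove(job)
    now += job["proc"]
    print(job["name"], "done at", now, "lateness", max(0, now - job["due"]))
```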
Chichignoud, Aurélien. "Approche méthodologique pour le maintien de la cohérence des données de conception des systèmes sur puce". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS069/document.
The development of highly complex products requires the maintenance of a huge set of inter-dependent documents in various formats. Unfortunately, no tool or methodology is available today to systematically maintain consistency between all these documents. Therefore, according to observations made at STMicroelectronics, when a document changes, stakeholders must manually propagate the change to the impacted set of dependent documents. For various reasons, they may propagate the change poorly, or not propagate it at all, so related documents diverge more and more over time. Realigning documents and making the very wide-ranging corpus of documents consistent dramatically impacts productivity. This thesis proposes a methodology to help stakeholders systematically maintain consistency between documents, based on the Architecture Description concept introduced by ISO 42010. First, a model is defined to describe formally and completely the correspondences between the Architecture Description Elements of documents. This model is designed to be independent of document formats, of the selected system development lifecycle and of the working methods of the industry. Second, these correspondences are analyzed whenever a document is modified, in order to help stakeholders maintain the consistency of the overall corpus. A prototype implementing the proposed approach has been developed to evaluate the methodology. Eighteen volunteer subjects performed two tests (with and without our methodology) involving the correction of inconsistencies introduced into a set of documents. These tests allowed us to measure two variables: the number of inconsistencies corrected and the average time to correct them. According to our study, using the approach helps to correct 5.5% more inconsistencies, in a time 3.3% lower.
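The correspondence analysis boils down to propagating a change through a dependency graph of description elements; the element names below are hypothetical:

```python
# element -> elements that depend on it (correspondences between documents)
correspondences = {
    "requirements.R12": ["design.D4", "testplan.T7"],
    "design.D4": ["code.module_a"],
}

def impacted(changed, graph):
    # Transitive closure: every element reachable from the changed one
    # must be flagged for review.
    stack, seen = [changed], set()
    while stack:
        for dep in graph.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(impacted("requirements.R12", correspondences)))
# ['code.module_a', 'design.D4', 'testplan.T7']
```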
Toure, Fadel. "Orientation de l'effort des tests unitaires dans les systèmes orientés objet : une approche basée sur les métriques logicielles". Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/27081.
Current software systems are large, complex and critical. The need for quality requires a lot of testing, which consumes a large amount of resources during the development and maintenance of systems. Different techniques are used to reduce the cost of testing activities, and our work fits in this context: it aims to guide the distribution of the unit testing effort onto the riskiest software components using source code attributes. We conducted several empirical analyses on different large object-oriented open-source software systems. We identified and studied several metrics that characterize the unit testing effort from different perspectives, and studied their relationships with software class metrics, including a quality indicator, a synthetic metric introduced in our previous work that captures control flow and various other software attributes. We explored different approaches for orienting the unit testing effort using source code attributes and machine learning algorithms. By grouping software metrics, we proposed an effort orientation approach based on the risk analysis of software classes. In addition to the significant relationships between testing metrics and source code attributes, the results we obtained suggest that source code metrics can be used for unit testing effort orientation.
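The metrics-to-effort orientation can be pictured with a tiny learned ranker; the metrics, the training data and the classifier choice are illustrative assumptions (scikit-learn is used here, not the thesis's tooling):

```python
from sklearn.ensemble import RandomForestClassifier  # pip install scikit-learn

# Per-class metrics: [lines of code, coupling, cyclomatic complexity].
X_train = [[120, 3, 8], [45, 1, 2], [800, 12, 40], [60, 2, 4], [400, 9, 25]]
y_train = [0, 0, 1, 0, 1]  # 1 = historically required substantial unit tests

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Rank new classes so the testing effort goes to the riskiest ones first.
new_classes = {"OrderService": [650, 10, 30], "StringUtils": [80, 1, 3]}
for name, metrics in new_classes.items():
    risk = model.predict_proba([metrics])[0][1]
    print(f"{name}: testing-effort priority {risk:.2f}")
```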
Zhang, Lei. "Sur une approche isogéométrique pour problèmes multi-champs couplés en grandes transformations". Thesis, Ecole centrale de Marseille, 2016. http://www.theses.fr/2016ECDM0012/document.
Recently proposed as a general-purpose numerical method, Isogeometric Analysis (IGA) offers great prospects for bridging the gap between CAD and CAE. IGA is closely related to the finite element method (FEM), as it is based on the same variational framework, and it has been shown in many circumstances to have better accuracy than the FEM (large mesh distortions, etc.). Our final aim in this work is to simulate complex multiphysics problems for industrial elastomer parts. The two main numerical issues in this context are the incompressibility/quasi-incompressibility of the material and the thermochemical coupling in Galerkin formulations. First, we propose a programming paradigm for IGA in an existing Java object-oriented hierarchy initially designed for solving multi-field coupled problems at finite strains. We develop an approach that takes full benefit of the original architecture to reduce developments for both FEM and IGA (a problem developed in FEM can be run in IGA and vice versa). Second, we investigate the volumetric locking issues that persist for low-order NURBS elements with the standard displacement formulation, as with finite elements. To cure the problem, we adopt a two-field mixed formulation (displacement/pressure) for the sake of simplicity, and aim at assessing the stability (inf-sup condition) of different discretizations. The basic idea, directly inspired by patch properties, is either to increase the multiplicity of internal knots or to subdivide the patch for the displacements; these ideas have been proposed in the literature for the Stokes problem and are extended here to large strains in solid mechanics. The comparison between the two-field mixed formulation and a strain projection method is carried out at small and large strains. Finally, we adopt a similar strategy, in an original way, for thermomechanical problems at small and large strains: in the context of a two-field displacement/temperature formulation, the LBB stability condition must be fulfilled to guarantee stability, and we therefore investigate the choice of patches for the displacement and temperature fields in IGA applied to thermoelasticity. Several numerical results for thermomechanical problems at small and finite strains, linear and nonlinear, are presented. Lastly, an incompressible viscous thermo-hyperelastic model is evaluated in the IGA framework with the proposed approach.
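For reference, the discrete inf-sup (LBB) condition invoked above takes, for a displacement/pressure pair of spaces (V_h, Q_h) and in standard notation (not the thesis's own), the form:

```latex
\inf_{q_h \in Q_h \setminus \{0\}} \;
\sup_{v_h \in V_h \setminus \{0\}}
\frac{\int_{\Omega} q_h \, (\nabla \cdot v_h) \, \mathrm{d}\Omega}
     {\| q_h \|_{L^2(\Omega)} \, \| v_h \|_{H^1(\Omega)}}
\;\geq\; \beta > 0
```

with a constant beta independent of the mesh size; discretization pairs that violate it exhibit the spurious pressure modes and volumetric locking the abstract refers to.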
Nguyen, Thu Thi Dieu. "Une approche basée sur la logique de description pour l'intégration de données relationnelles dans le web sémantique". PhD thesis, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00507482.
The objective of this thesis is to provide methods and techniques to solve this database integration problem. We propose an approach combining ontology-based schema representations and description logics. The database schemas are designed using the ORM methodology; the stability and flexibility of ORM ease the maintenance and evolution of integration systems. A new web ontology language and its logical foundations are proposed in order to capture the semantics of relational data sources while ensuring decidable, automated reasoning over the information coming from the sources. An automated translation of ORM models into ontologies is introduced to allow the semantics of the data to be extracted quickly and reliably. This mechanism provides for the coexistence of other information sources, such as hypertext, integrated into the Semantic Web environment.
This thesis constitutes an advance in a number of fields, notably data integration, ontology engineering, description logics, and conceptual modeling. This work can provide the foundations for further investigations into integrating data from relational sources into the Semantic Web.
Pinard, Hugo. "Imagerie électromagnétique 2D par inversion des formes d'ondes complètes : Approche multiparamètres sur cas synthétiques et données réelles". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAU041/document.
Ground Penetrating Radar (GPR) is a geophysical investigation method based on the propagation of electromagnetic waves in the underground. With frequencies ranging from 5 MHz to a few GHz and a high sensitivity to electrical properties, GPR provides reflectivity images in a wide variety of contexts and scales: civil engineering, geology, hydrogeology, glaciology, archeology. In some cases, however, a better understanding of subsurface processes requires a quantification of the physical parameters of the subsoil. For this purpose, full waveform inversion, a method initially developed for seismic exploration that exploits the entire recorded signal, can prove effective. In this thesis, I propose methodological developments using a multiparameter inversion approach (dielectric permittivity and conductivity) for two-dimensional transmission configurations; these developments are then applied to a real data set acquired between boreholes. In a first part, I present the numerical method used to model the propagation of electromagnetic waves in a heterogeneous 2D medium, a much-needed element of the imaging process. I then introduce and study the potential of standard local optimization methods (nonlinear conjugate gradient, l-BFGS, truncated Newton in its Gauss-Newton and exact-Newton versions) to fight the trade-off effects between dielectric permittivity and electrical conductivity. In particular, I show that effective decoupling is possible only with a sufficiently accurate initial model and the most sophisticated method (truncated Newton). As, in the general case, this initial model is not available, it is necessary to introduce a scaling factor which distributes the relative weight of each parameter class in the inversion. In a realistic medium and for a cross-hole acquisition configuration, I show that the different optimization methods give similar results in terms of parameter decoupling; it is eventually the l-BFGS method that is used for the application to the real data, because of its lower computation costs. In a second part, I applied the developed full waveform inversion methodology to a set of real data acquired between two boreholes located in carbonate formations in Rustrel (France, 84). This inversion is carried out together with a synthetic study using a model representative of the site and a similar acquisition configuration, which enables us to monitor and validate the observations and conclusions derived from the inversion of the real data. It shows that the reconstruction of the dielectric permittivity is very robust, whereas the conductivity estimation suffers from two major couplings: with the permittivity and with the amplitude of the estimated source. The derived results are successfully compared with independent data (surface geophysics and rock analysis on plugs) and provide a high-resolution image of the geological formation. A 3D analysis confirms that 3D structures presenting high property contrasts, such as the buried gallery present at our site, would require a 3D approach, notably to better explain the observed amplitudes.
Ané, Thierry. "Changement de temps, processus subordonnés et volatilité stochastique : une approche financière sur des données à haute fréquence". Paris 9, 1997. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1997PA090027.
The goal of this thesis is to validate mathematically the brilliant conjecture by Clark (1973), who chose the volume as the subordinating process t defining the economic time in which asset prices should be observed. Along the lines of the recent microstructure literature and using tick-by-tick data, we show, in agreement with the recent empirical results of Jones, Kaul and Lipson (1994), that it is in fact the number of trades which defines the economic time. We prove that, without any assumption on the distribution of the stochastic time t, we recover normality for asset price returns when using the number of trades as the "stochastic clock". We extract from a tick-by-tick database the empirical distribution of asset returns and use a parametric estimation procedure to compute the moments of the unknown distribution of the subordinator t; the moments of t coincide with the corresponding moments of the number of trades. Lastly, we explain how the issue of stochastic volatility can be embedded in the general framework of stochastic time changes, and what this implies for option pricing and hedging. The effectiveness of implied versus historical volatility in forecasting future volatility has recently been, with good reason, the subject of scrutiny among both academics and practitioners. It is common practice to use implied volatility as the market's forecast of future volatility. S&P 500 options and futures prices are used to show that implied volatility is a poor forecast of the realized volatility, and that the use of subordinated processes can help to construct a good forecast of it. Moreover, our time change, as well as our volatility forecast, highlights the role of the rate of information arrival, proxied by the number of trades.
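The subordination argument can be reproduced on synthetic data: per-trade increments are Gaussian, calendar-period returns aggregate a random number of trades and therefore show fat tails, while returns in "trade time" stay Gaussian (all numbers below are simulated, not the S&P 500 tick data used in the thesis):

```python
import random, statistics

random.seed(1)
calendar_returns, trade_returns = [], []
for _ in range(5000):
    n_trades = random.randint(1, 200)            # random activity per period
    increments = [random.gauss(0, 1) for _ in range(n_trades)]
    calendar_returns.append(sum(increments))     # return per calendar period
    trade_returns.extend(increments)             # return per trade

def excess_kurtosis(xs):
    m = statistics.fmean(xs)
    s2 = statistics.fmean([(x - m) ** 2 for x in xs])
    return statistics.fmean([(x - m) ** 4 for x in xs]) / s2 ** 2 - 3

print(excess_kurtosis(calendar_returns))  # clearly positive: fat tails
print(excess_kurtosis(trade_returns))     # near 0: Gaussian in trade time
```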
Kaswengi, Mbwiti Joseph. "L'influence du point de vente sur le capital d'une marque : une approche par les données du panel". Thesis, Orléans, 2012. http://www.theses.fr/2012ORLE0507.
Does store format quality generally influence brand equity? This is the main question we address in this research. Numerous studies have been published on brand equity drivers; however, little has been said about the role of distribution. In addition, much research has conceptualized store image as a global or one-dimensional concept, whereas, according to the majority of research, store image is a multidimensional construct. The purpose of this research is to investigate the relationship between distribution quality and brand equity. We develop a model that connects store image dimensions (price image, assortment variety, private label quality, product quality, service quality, and location) and brand equity, measured thanks to the intercepts, which are considered as a measure of the brand's incremental utility. The model controls for variables such as the product category. We adopt a dynamic factor model using panel data on 4500 households and 12 stores belonging to different chains in France over a period of five and a half years (2004-2009). The results show that store image effects on brand equity depend on the store name, store format, product categories, brands and consumer characteristics. From a theoretical perspective, this research identifies the most relevant store image dimensions as well as their efficiency conditions. From a methodological point of view, we use a dynamic factor model that has not yet been used in brand equity measurement. From a managerial standpoint, this research may help brand managers to better assess the impact of stores on the value of their brands.
Kaswengi, Joseph. "L'influence du point de vente sur le capital d'une marque : une approche par les données du panel". Electronic Thesis or Diss., Orléans, 2012. http://www.theses.fr/2012ORLE0507.
Olteanu, Ana-Maria. "Fusion de connaissances imparfaites pour l'appariement de données géographiques : proposition d'une approche s'appuyant sur la théorie des fonctions de croyance". PhD thesis, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00469407.
Pełny tekst źródłaTeguiak, Henry Valery. "Construction d'ontologies à partir de textes : une approche basée sur les transformations de modèles". Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2012. http://tel.archives-ouvertes.fr/docs/00/78/62/60/PDF/ISAE-ENSMA_2012-12-12_Thesis_TEGUIAK.pdf.
Since its emergence in the early 1990s, the notion of ontology has quickly spread into many areas of research. Given the promise of this concept, many studies focus on the use of ontologies in areas like information retrieval, electronic commerce, the Semantic Web, data integration, etc. The effectiveness of all this work rests on the assumption of the existence of a domain ontology that has already been built and can be used. However, designing such an ontology is particularly difficult if we want it to be built in a consensual way. While there are tools for editing ontologies that are supposed to be already designed, and several natural language processing platforms able to automatically analyze corpora of texts and annotate them syntactically and statistically, it is difficult to find a globally accepted procedure for developing a domain ontology in a progressive, explicit and traceable manner from a set of information resources of the domain. The goal of the ANR project DaFOE4App (Differential and Formal Ontology Editor for Application), within which our work belongs, was to promote the emergence of such a set of tools. Unlike other tools for ontology development, the DaFOE platform presented in this thesis does not propose a methodology based on a fixed number of steps with a fixed representation of these steps. Indeed, we generalize the ontology development process to any number of steps. The interest of such a generalization is, for example, to offer the possibility of refining the development process by inserting or modifying steps, or of removing steps to simplify it, while minimizing the impact of adding, deleting or modifying a step and maintaining the overall consistency of the development process. To achieve this, our approach uses Model Driven Engineering to characterize each step through a model, and then reduces the problem of switching from one step to another to a problem of model transformation. The mappings established between models are then used to semi-automate the ontology development process. As this whole process is stored in a database, we propose, for Model-Based Databases (MBDB), which can store both data and the models describing these data, an extension for handling mappings. We also propose a query language named MQL (Mapping Query Language) to hide the complexity of the MBDB structure. The originality of the MQL language lies in its ability, through syntactically compact queries, to explore the graph of mappings using their transitivity property when retrieving information.
Bouzillé, Guillaume. "Enjeux et place des data sciences dans le champ de la réutilisation secondaire des données massives cliniques : une approche basée sur des cas d’usage". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1B023/document.
The dematerialization of health data, which started several years ago, now generates a huge amount of data produced by all the actors of healthcare. These data have the characteristic of being very heterogeneous and of being produced at different scales and in different domains. Their reuse in the context of clinical research, public health or patient care involves developing appropriate approaches based on methods from data science. The aim of this thesis is to evaluate, through three use cases, what the current issues are, as well as the place of data science, regarding the reuse of massive health data. To meet this objective, the first section presents the characteristics of health big data and the technical aspects related to their reuse. The second section presents the organizational aspects of the exploitation and sharing of health big data. The third section describes the main methodological approaches of data science currently applied in the field of health. The fourth section then illustrates, through three use cases, the contribution of these methods in the following fields: syndromic surveillance, pharmacovigilance and clinical research. Finally, we discuss the limits and challenges of data science in the context of health big data.
Abbar, Sofiane. "Modèle d'accès personnalisé pour les plateformes de livraison de contenu : une approche basée sur les services". Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0053.
Access to relevant information adapted to a user's preferences and context is a challenge in many applications. In this thesis, we address personalization in the context of content delivery platforms, and propose a personalized access model (PAM) based on multidimensional models of the user's profile and context. The PAM provides a set of services that enable applications to take into account both the user's profile and context within personalization processes, hence delivering more accurate content. PAM services include, but are not limited to, an automatic approach for discovering contexts and contextual preferences, the projection of the user's profile onto his current context, and the matching of profiles and contents to provide user recommendations. We also show that PAM services allow a smooth integration of context within personalized applications without changing their inner processes. We thus instantiated the PAM to define context-aware recommender systems, which were used to evaluate our approach.