Dissertations / Theses on the topic 'Conception pour les données'
Consult the top 50 dissertations / theses for your research on the topic 'Conception pour les données.'
Mordan, Taylor. "Conception d'architectures profondes pour l'interprétation de données visuelles." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS270.
Nowadays, images are ubiquitous through the use of smartphones and social media. It thus becomes necessary to process them automatically, in order to analyze and interpret the large amount of available data. In this thesis, we are interested in object detection, i.e. the problem of identifying and localizing all objects present in an image. This can be seen as a first step toward a complete visual understanding of scenes. It is tackled with deep convolutional neural networks, under the Deep Learning paradigm. One drawback of this approach is the need for labeled data to learn from. Since precise annotations are time-consuming to produce, bigger datasets can be built with partial labels. We design global pooling functions to work with them and to recover latent information in two cases: learning spatially localized and part-based representations from image- and object-level supervision respectively. We address the issue of efficiency in end-to-end learning of these representations by leveraging fully convolutional networks. Besides, exploiting additional annotations on available images can be an alternative to having more images, especially in the data-deficient regime. We formalize this problem as a specific kind of multi-task learning with a primary objective to focus on, and design a way to effectively learn from this auxiliary supervision under this framework.
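As an illustration of the kind of global pooling such weakly supervised approaches rely on, here is a minimal log-sum-exp pooling function in Python. It is a standard smooth compromise between max and average pooling, and is only a sketch: the pooling functions actually designed in the thesis are not reproduced here.

```python
import numpy as np

def log_sum_exp_pool(score_map, r=1.0):
    """Global log-sum-exp pooling of a class score map.

    A smooth interpolation between max pooling (r -> inf) and
    average pooling (r -> 0), often used to aggregate spatial
    scores when only image-level labels are available.
    """
    s = score_map.ravel()
    m = s.max()  # subtract the max for numerical stability
    return m + np.log(np.mean(np.exp(r * (s - m)))) / r

# Example: a 7x7 map of class scores produced by a fully
# convolutional network; pooling yields one image-level score.
scores = np.random.randn(7, 7)
print(log_sum_exp_pool(scores, r=5.0))
```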
Abdelhédi, Fatma. "Conception assistée d’entrepôts de données et de documents XML pour l’analyse OLAP." Thesis, Toulouse 1, 2014. http://www.theses.fr/2014TOU10005/document.
Today, data warehouses are a major issue for business intelligence applications within companies. The sources of a warehouse, i.e. the origin of the data that feed it, are diverse and heterogeneous: sequential files, spreadsheets, relational databases, Web documents. The complexity is such that the software on the market only partially meets the needs of decision makers when they want to analyze the data. Our work therefore falls within the context of decision support systems that integrate all data types (mainly extracted from relational databases and XML document databases) for decision makers. They aim to provide models, methods and software tools to elaborate and manipulate data warehouses. Our work has specifically focused on two complementary issues: computer-aided data warehouse design and modelling, and OLAP analysis of XML documents.
Chelghoum, Kamel. "Un modèle de données sémantique pour la C. A. O." Lyon 1, 1989. http://www.theses.fr/1989LYO10173.
Tlili, Assed. "Structuration des données de la conception d'un bâtiment pour une utilisation informatique." Phd thesis, Ecole Nationale des Ponts et Chaussées, 1986. http://tel.archives-ouvertes.fr/tel-00529509.
Kahwati, Ghassan. "Conception et réalisation d'une interface pour l'interrogation d'une base de données documentaire." Grenoble 2, 1986. http://www.theses.fr/1986GRE21021.
This survey deals with the basic structure of query languages and their linguistic characteristics (i.e. abbreviations, vocabulary and syntax). Part of the work concerns the different types of users, in order to apprehend the human factors influencing their behaviour when facing an automated system. Next, the occasional user targeted by the interface is defined, and a command language (Lasydo) is adapted to him. The characteristics, functions and elements of this language are analysed here. A translator with a compiler structure transforms the user's request into a form accepted by the DBMS. For this purpose we use a software tool as a support for writing the translator; it takes into account the internal structure of Lasydo and that of the database. The translator is defined and realized. A comparative and synthetic study of data models has allowed the implementation of a database consistent with the relational model. By working out the properties of this model, we suggest two schemes for this base: a one-relation basis and a multiple-relation basis. Moreover, we study the representation of these schemes as a dynamic graph, and we express the deduction mechanism of the access path to the required information. At last, we study the implementation of the base and the interface under the Multics system of the HB68.
Serna, Encinas María Trinidad. "Entrepôts de données pour l'aide à la décision médicale : conception et expérimentation." Université Joseph Fourier (Grenoble), 2005. http://www.theses.fr/2005GRE10083.
Data warehouses integrate information coming from different data sources which are often heterogeneous and distributed. Their main goal is to provide a global view for analysts and managers to make decisions based on data sets and historical logs. The design and construction of a data warehouse are composed of three phases: extraction-integration, organisation and interrogation. In this thesis, we are interested in the latter two. For us, organisation is a complex and delicate task; hence we divide it into two parts: data structuring and data management. For structuring, we propose a multidimensional model composed of three classes: Cube, Dimension and Hierarchy. We also propose an algorithm for selecting the optimal set of materialized views. We consider that data management should include warehouse evolution. The concept of schema evolution was adapted here, and we propose to use bitemporal schema versions for the management, storage and visualization of current and historical data (intensional and extensional). Finally, we have implemented a graphic interface that allows semi-automatic query generation (indicators). These queries (for example, "number of patients by hospitals and diseases") are determined by the application domain. We had the opportunity to work in a medical project; it allowed us to verify and validate our proposition using real data.
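To make the three-class structure concrete, here is a minimal sketch of the Cube/Dimension/Hierarchy model in Python. The class names come from the abstract; the fields and the medical example values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Hierarchy:
    # ordered levels, from finest to coarsest, e.g. day < month < year
    name: str
    levels: list

@dataclass
class Dimension:
    name: str
    hierarchies: list = field(default_factory=list)

@dataclass
class Cube:
    name: str
    measures: list
    dimensions: list

# Example: a medical cube like the one evoked in the abstract.
time = Dimension("time", [Hierarchy("calendar", ["day", "month", "year"])])
hosp = Dimension("hospital", [Hierarchy("geo", ["hospital", "city", "region"])])
disease = Dimension("disease", [Hierarchy("icd", ["disease", "category"])])
cube = Cube("admissions", ["patient_count"], [time, hosp, disease])
```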
Mammar, Amel. "Un environnement formel pour le développement d'applications bases de données." Paris, CNAM, 2002. http://www.theses.fr/2002CNAM0437.
This work presents a formal approach for developing safe database applications. The approach consists of generating relational database implementations from formal specifications. We begin by designing the application with graphical notations such as UML, OMT, etc. Then an automatic process translates them into B formal specifications. Using the B refinement process, a set of refinement rules, acting on both data and operations (programs), is applied to the specifications. This refinement process is generally a manual and very costly task, especially in the proof phase. Thanks to the generic feature of the refinement rules, an assistant refiner can be elaborated, allowing the cost of the refinement process to be reduced.
Benadjaoud, Ghazi Nourdine. "Dee : Un environnement d'échange de données pour l'intégration des applications." Ecully, Ecole centrale de Lyon, 1996. http://www.theses.fr/1996ECDL0027.
Meziane, Madjid. "Développement d'une approche orientée objet actif pour la conception de systèmes d'information." Lyon, INSA, 1998. http://www.theses.fr/1998ISAL0124.
Information systems (IS) present two very dependent aspects: a structural (or static) aspect and a behavioural (or dynamic) one. Working separately on these two aspects makes information system analysis, design and evolution more complicated. Even object-oriented design methods, which partially integrate system behaviour at the structure level (through methods), cannot take into account the dynamic dimension of the IS. We observe that management rules (integrity constraints, derivation rules and active rules), which describe the IS activities and their execution conditions, are generally scattered across the multiple models of a method. Stated in the object approach context, we propose in this work the use of the active object concept as a modeling entity, because it constitutes an ideal support for describing not only the data and treatment parts of objects, but also the set of management rules. The active object concept eases IS design by effectively integrating the "Event-Condition-Action" mechanism, the key of active databases. The introduction of such a concept requires new models to describe and translate the passive and active behaviour of the IS. For that reason, we propose an extension of state diagrams. Nevertheless, the large number of rules produced at the conceptual level requires partitioning, which we realize by rule stratification. Finally, on the practical side, we had to add new functionality to CASE tools.
M'Sir, Mohamed El Amine. "Conception d'architectures rapides pour codes convolutifs en télécommunications : application aux turbo-codes." Metz, 2003. http://docnum.univ-lorraine.fr/public/UPV-M/Theses/2003/Msir.Mohamed.El.Amine.SMZ0315.pdf.
Barbar, Aziz. "Extraction de schémas objet pour la rétro-conception de bases de données relationnelles." Nice, 2002. http://www.theses.fr/2002NICE5762.
Petit, Jean-Marc. "Fondements pour un processus réaliste de rétro-conception de bases de données relationnelles." Lyon 1, 1996. http://www.theses.fr/1996LYO19004.
Bonneville, François. "Élaboration d'une base de données d'équipements pour la conception des systèmes d'assemblage réactifs." Besançon, 1994. http://www.theses.fr/1994BESA2034.
Veltri, Pierangelo. "Un système de vues pour les données XML du Web : conception et implantation." Paris 11, 2002. http://www.theses.fr/2002PA112146.
The thesis presents the design and implementation of a view mechanism to query a large and highly heterogeneous XML repository. XML documents can be queried using their structure (DTD). Nevertheless, to query many XML documents, users need to know the structure of all of them. We classify XML documents by domain (e.g., art, tourism, etc.), and we define an abstract DTD to represent each domain. A view definition consists of an abstract DTD and a set of mappings that map paths in the abstract DTD into paths in the actual documents of the domain. When a view is queried, the system translates the query into a union of queries against actual data, which the query processor evaluates. An important issue that we considered is the scalability of the system. To achieve high scalability and to allow efficient query translation, we distribute views over the machines of a distributed system. The view mechanism has been fully implemented in the Xyleme system, and patented by Xyleme S.A., the company that commercializes Xyleme.
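The mapping-based rewriting can be sketched in a few lines. The abstract paths, concrete paths and query form below are hypothetical simplifications of the mechanism described above:

```python
# Map a path in the abstract DTD to the concrete paths that
# realize it in each document family of the domain.
mappings = {
    "/museum/painting/title": [
        "/musee/oeuvre/titre",   # document family A
        "/gallery/item/name",    # document family B
    ],
}

def rewrite(abstract_path):
    """Rewrite a query on the abstract DTD into a union of
    queries against the actual documents."""
    concrete = mappings.get(abstract_path, [])
    return " UNION ".join(
        f"SELECT * FROM doc WHERE path = '{p}'" for p in concrete
    )

print(rewrite("/museum/painting/title"))
```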
Nauer, Emmanuel. "Principes de conception de systèmes hypertextes pour la fouille de données bibliographiques multibases." Nancy 1, 2001. http://www.theses.fr/2001NAN10008.
Information is essential in scientific and technical research and watch. The significant quantity of data currently available in a domain requires adapted tools to exploit it. The goal of this research is to provide an environment in which the data of a domain (bibliographical references and the Web) can be exploited for bibliographical search or domain analysis needs. In this framework, a general approach to build a hypertextual data mining system on bibliographical data is proposed. The use of hypertext capabilities favours explorative access to the data. Data mining functionalities (statistical information, classifications, rule extraction) may be available to analyze the data more precisely. The principal idea of this thesis is that data mining and information retrieval are two complementary approaches to access and analyse data: data mining guides information retrieval by using the knowledge extracted from the data; conversely, information retrieval guides the data mining process by taking the extracted knowledge into account. The data mining process also favours information access on the Web. Concretely, the knowledge extracted from bibliographical data provides help for query formulation and improves the precision of Web search engine answers. Building such a system requires the exploitation of different techniques, i.e. data mining, information retrieval and database management. From a technical point of view, the tools of these fields are combined thanks to a modular approach exploiting XML for the representation and exchange of the data, and data flow processing.
Paquet, Marie-France. "Une approche à simulation pour le traitement des données longitudinales incomplètes." Paris 1, 2001. http://www.theses.fr/2001PA010080.
Barkat, Okba. "Utilisation conjointe des ontologies et du contexte pour la conception des systèmes de stockage de données." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0001/document.
We are witnessing an era when any company is strongly interested in collecting and analyzing data from heterogeneous and varied sources. These sources also have another specificity, namely context awareness. Three complementary problems are identified: (i) the resolution of the heterogeneity of the sources, (ii) the construction of an integrating decisional system, and (iii) taking the context into account in this integration. To solve these problems, we are interested in this thesis in the design of contextual applications based on a domain ontology. To do this, we first propose a context model that integrates the main dimensions identified in the literature. Once built, it is linked to the ontology model. This approach increases flexibility in the design of advanced applications. Then, we propose two case studies: (1) the contextualization of semantic data sources, where we extend the OntoDB/OntoQL system to take the context into account, and (2) the design of a contextual data warehouse, where the context model is projected on the different phases of the design life cycle. To validate our proposal, we present a tool implementing the different phases of the proposed design approach.
Nguyen, Kim. "Langage de combinateurs pour XML : conception, typage, implantation." Paris 11, 2008. http://www.theses.fr/2008PA112071.
This thesis details the theoretical and practical study of a language of combinators for XML. XML documents, a de facto standard used to represent heterogeneous data in a structured and generic way so that they can be easily shared by many programs, are usually manipulated with general-purpose languages (Java, C, ...). Alongside these languages, one finds specialised languages, designed specifically to deal with XML documents (retrieving information from a document, transforming from one document format to another, ...). We focus on statically typed languages. It is indeed possible to specify the ''shape'' of a document (sets of tags, order, ...) by means of a schema. Statically typed languages perform a static analysis of the source code of the program to ensure that every operation is valid with respect to the schema of a processed document. The analysis is said to be static because it relies only on the source code of the program, not on any runtime information or document sample. This thesis presents the theoretical foundations of a language for manipulating XML documents in a statically typed way. It also features a practical study as well as an implementation of the formal language. Lastly, it presents many use cases of type-based optimisation in the context of XML processing (transformation, loading of a document in memory, ...).
Favre, Cécile. "Evolution de schémas dans les entrepôts de données : mise à jour de hiérarchies de dimension pour la personnalisation des analyses." Lyon 2, 2007. http://theses.univ-lyon2.fr/documents/lyon2/2007/favre_c.
In this thesis, we propose a solution to personalize analyses in data warehousing. This solution is based on schema evolution driven by users. More precisely, it consists in acquiring users' knowledge and integrating it in the data warehouse to build new analysis axes. To achieve that, we propose an evolving rule-based data warehouse formal model, in which the rules are named aggregation rules. To exploit this model, we propose an architecture that supports the personalization process. This architecture includes four modules: acquisition of users' knowledge in the form of if-then rules; integration of these rules in the data warehouse; schema evolution; and on-line analysis on the new schema. To realize this architecture, we propose an execution model in the relational context that carries out the global process. Besides, we are interested in the evaluation of our evolving model. To do that, we propose an incremental updating method of a given workload in response to the data warehouse schema evolution. To validate our proposals, we developed the WEDriK (data Warehouse Evolution Driven by Knowledge) platform. The problems evoked in this thesis come from the reality of the LCL bank.
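The if-then aggregation rules can be pictured as follows, in a minimal sketch assuming bank-agency members in the spirit of the LCL case study; the rule contents are invented for illustration:

```python
# Each if-then aggregation rule maps existing member values to a
# new, coarser analysis level supplied by the user.
rules = [
    (lambda agency: agency in {"Lyon-1", "Lyon-2"}, "Lyon area"),
    (lambda agency: agency in {"Paris-Opera"}, "Paris area"),
]

def classify(agency, default="Other"):
    """Apply the first matching rule to place a member in the new level."""
    for condition, target in rules:
        if condition(agency):
            return target
    return default

# Building the new analysis axis over existing dimension members.
agencies = ["Lyon-1", "Paris-Opera", "Lille-Centre"]
print({a: classify(a) for a in agencies})
```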
Mebarki, Abdelkrim. "Implantation de structures de données compactes pour les triangulations." Phd thesis, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00336178.
This thesis investigates how to represent triangulations compactly. To this end, two avenues are explored: modifying the internal memory representation of the geometric objects, and redefining the abstract types of the corresponding geometric objects. A first solution consists in using indices of arbitrary bit width instead of absolute references. The gains depend on the size of the triangulation and on the machine word size; the major handicap is the method's high cost in execution time. A second approach uses stable catalogs: the idea is to group triangles into micro-triangulations and to represent the triangulation as a set of these micro-triangulations. The number of multiple references to vertices and of reciprocal references between neighbours is then markedly reduced. The results are promising, given that execution time is not dramatically degraded by the modified triangle access methods. A third solution consists in decomposing the triangulation into several sub-triangulations, which allows references within a sub-triangulation to be coded on fewer bits than absolute references. The results of this technique are encouraging, and can be amplified by other techniques such as relative coding of references or sharing the geometric information of boundary vertices between sub-triangulations. The elaboration of compact structures still deserves attention, and several avenues remain to be explored to reach solutions that are more economical in memory.
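The first avenue, coding references on an arbitrary number of bits instead of full machine words, can be sketched as follows (the choice of K = 20 bits is arbitrary):

```python
# Pack triangle references into K-bit fields inside Python ints,
# instead of storing one machine word per reference.
K = 20                      # bits per reference (assumption)
MASK = (1 << K) - 1

def pack(refs):
    """Pack a list of small integer references into one integer."""
    word = 0
    for i, r in enumerate(refs):
        word |= (r & MASK) << (i * K)
    return word

def unpack(word, n):
    """Recover n references packed by pack()."""
    return [(word >> (i * K)) & MASK for i in range(n)]

neighbours = [123456, 7, 654321]   # three neighbour indices of a triangle
w = pack(neighbours)
assert unpack(w, 3) == neighbours
```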
Bogo, Gilles. "Conception d'applications pour systèmes transactionnels coopérants." Habilitation à diriger des recherches, Grenoble INPG, 1985. http://tel.archives-ouvertes.fr/tel-00315574.
Cantzler, Olivier. "Une architecture conceptuelle pour la pérennisation d'historiques globaux de conception de produits industriels complexes." Châtenay-Malabry, Ecole centrale de Paris, 1997. http://www.theses.fr/1997ECAP0665.
Naciri, Hanane. "Conception et réalisation d'outils pour l'interaction homme machine dans les environnements de démonstrations mathématiques." Nice, 2002. http://www.theses.fr/2002NICE5755.
Moumouni, Kairou. "Etude et conception d'un modèle mixte semiparamétrique stochastique pour l'analyse des données longitudinales environnementales." Phd thesis, Université Rennes 2, 2005. http://tel.archives-ouvertes.fr/tel-00012164.
In a second part, an extension of Cook's local influence method to the modified mixed model is proposed; it provides a sensitivity analysis that detects the effects of certain perturbations on the structural components of the model. Some asymptotic properties of the local influence matrix are exhibited.
Finally, the proposed model is applied to two real datasets: an analysis of nitrate concentration data from several measuring stations of a watershed, then an analysis of the bacteriological pollution of bathing waters.
Bassand, Agnès. "Proposition d'un modèle de données de référence pour la conception des systèmes de supervision." Cachan, Ecole normale supérieure, 1997. http://www.theses.fr/1997DENS0003.
Moumouni, Kairou. "Etude et conception d'un modèle mixte sémiparamétrique stochastique pour l'analyse des données longitudinales environnementales." Rennes 2, 2005. http://www.theses.fr/2005REN20052.
This thesis deals with the analysis of longitudinal data encountered in environmental studies. The general approach is based on the stochastic linear mixed model, which we extend using semiparametric techniques such as penalized cubic splines. First, estimation methods are developed for the semiparametric stochastic mixed model, and a simulation study is performed to measure the performance of the parameter estimates. In a second part, we propose an extension of Cook's local influence method, in order to produce a sensitivity analysis of our model and detect the effect of perturbations of the structural components of the model. Some asymptotic properties of the local influence matrix are exhibited. Finally, the proposed model is applied to two real datasets: first, the analysis of nitrate concentration measurements at different locations of a watershed; second, the analysis of bacteriological pollution of coastal bathing waters.
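For reference, a generic semiparametric stochastic mixed model of the family named in the abstract can be written as below. This is a common textbook form, not necessarily the exact specification used in the thesis:

```latex
% y_{ij}: response of subject i at time t_{ij}
% x_{ij}: covariates with fixed effects \beta
% f: smooth time trend, estimated by a penalized cubic spline
% b_i: random effects; U_i(t): stochastic process capturing serial correlation
y_{ij} = x_{ij}^{\top}\beta + f(t_{ij}) + z_{ij}^{\top} b_i + U_i(t_{ij}) + \varepsilon_{ij},
\qquad b_i \sim \mathcal{N}(0, D), \quad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)
```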
Wakim, Bernadette. "La Conception des bases de données orientées objet : Propositions pour la construction d'un AGL." Lyon, INSA, 1991. http://www.theses.fr/1991ISAL0028.
The recent apparition of object-oriented DBMSs requires an enhancement of classical information system design. The complexity of information systems is accompanied by the development of more sophisticated aid tools and by recourse to design methodologies. Traditional design methods are insufficient for the object approach: for example, methods based upon the Entity-Association model are not convenient for the design of applications developed on object-oriented DBMSs. New means must be explored to benefit as much as possible from such DBMSs. We propose some concepts for an object-oriented methodology. The proposed method, following an object-oriented approach, provides a static and dynamic representation of the applications. Our approach considers different aspects of the same object, depending on the viewpoint of each user. We then proceed to integrate all these views in a global conceptual scheme. View integration, tackled in some classical conceptual methods, raises new problems and highlights the complexity of the phenomena: we can mention, for example, inheritance conflicts, data semantics, synonymy and polysemy. The target DBMS which guides us is O2. We have developed a CASE tool.
Amin, Mohsin. "Conception d'une architecture journalisée tolérante aux fautes pour un processeur à pile de données." Thesis, Metz, 2011. http://www.theses.fr/2011METZ017S/document.
In this thesis, we propose a new approach to designing a fault-tolerant processor. The methodology addresses several goals, including a high level of protection against transient faults along with reasonable performance and area overhead trade-offs. The resulting fault-tolerant processor will be used as a building block in a fault-tolerant MPSoC (Multi-Processor System-on-Chip) architecture. The concepts used to achieve fault tolerance are based on concurrent detection and rollback error recovery techniques. The core elements in this architecture are a stack processor core from the MISC (Minimal Instruction Set Computer) class and a hardware journal in charge of preventing error propagation to the main memory (supposedly dependable) and limiting the impact of the rollback mechanism on time performance. The design methodology relies on modeling at different abstraction levels and simulation modes, developing dedicated software tools, and prototyping on FPGA technology. The results, obtained without seeking thorough optimization, clearly show the relevance of the proposed approach, offering a good compromise in terms of protection and performance. Indeed, fault tolerance, as revealed by several error injection campaigns, proves to be high, with 100% of errors being detected and recovered for single-bit error patterns, and about 60% and 78% for double- and triple-bit error patterns, respectively. Furthermore, the recovery rate is still acceptable for larger error patterns, with a recovery rate of 36% on 8-bit error patterns.
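The journal mechanism can be pictured with a toy sequential model: writes are buffered in a journal and reach the (trusted) main memory only at validation points, so a rollback simply discards the journal. This illustrates the principle only, not the hardware design:

```python
class JournaledMemory:
    """Toy model of journal-based rollback error recovery."""

    def __init__(self):
        self.memory = {}      # trusted main memory
        self.journal = []     # uncommitted (address, value) writes

    def write(self, addr, value):
        self.journal.append((addr, value))

    def read(self, addr):
        # the most recent journal entry shadows main memory
        for a, v in reversed(self.journal):
            if a == addr:
                return v
        return self.memory.get(addr, 0)

    def commit(self):
        """Validation point: no error detected, flush the journal."""
        for a, v in self.journal:
            self.memory[a] = v
        self.journal.clear()

    def rollback(self):
        """Error detected: discard the journal, memory is untouched."""
        self.journal.clear()

mem = JournaledMemory()
mem.write(0x10, 42)
mem.rollback()               # simulated detected error
assert mem.read(0x10) == 0   # the faulty write never reached memory
```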
Michel, Gabriel. "Contribution à la conception et réalisation d'un système de gestion de bases de données pour la conception assistée par ordinateur." Metz, 1988. http://docnum.univ-lorraine.fr/public/UPV-M/Theses/1988/Michel.Gabriel.SMZ8818.pdf.
Damier, Christophe. "Omega : un SGBD multimedia orienté objet pour les applications géographiques." Grenoble 1, 1989. https://theses.hal.science/tel-00333131.
Antoniu, Gabriel. "Contribution à la conception de services de partage de données pour les grilles de calcul." Habilitation à diriger des recherches, École normale supérieure de Cachan - ENS Cachan, 2009. http://tel.archives-ouvertes.fr/tel-00437324.
Pastor, Julien. "Conception d'une légende interactive et forable pour le SOLAP." Thesis, Université Laval, 2004. http://www.theses.ulaval.ca/2004/21994/21994.pdf.
Ferrettini, Gabriel. "Système adaptatif pour l'aide à la conception de processus d'analyse." Thesis, Toulouse 1, 2021. http://www.theses.fr/2021TOU10004.
Lahire, Philippe. "Conception et realisation d'un modele de persistance pour la langage eiffel." Nice, 1992. http://www.theses.fr/1992NICE4543.
Petit, Laurent. "Etude de la qualité des données pour la représentation des réseaux techniques urbains : applications au réseau d'assainissement." Artois, 1999. http://www.theses.fr/1999ARTO0203.
Ahmed, Bacha Adda Redouane. "Localisation multi-hypothèses pour l'aide à la conduite : conception d'un filtre "réactif-coopératif"." Thesis, Evry-Val d'Essonne, 2014. http://www.theses.fr/2014EVRY0051/document.
"When we use information from one source, it's plagiarism; when we use information from many, it's information fusion." This work presents an innovative collaborative data fusion approach for ego-vehicle localization. This approach, called the Optimized Kalman Particle Swarm (OKPS), is a data fusion and optimized filtering method. Data fusion is performed using data from a low-cost GPS, an INS, an odometer and a steering wheel angle encoder. This work proved that the approach is both more appropriate and more efficient for vehicle ego-localization under degraded sensor performance and in highly nonlinear situations. The most widely used vehicle localization methods are the Bayesian approaches represented by the EKF and its variants (UKF, DD1, DD2). The Bayesian methods suffer from sensitivity to noise and instability in highly nonlinear cases. Proposed to overcome the limitations of the Bayesian methods, multi-hypothesis (particle-based) approaches are used for ego-vehicle localization. Inspired by Monte Carlo simulation methods, the Particle Filter (PF) has performances that are strongly dependent on computational resources. Taking advantage of existing localization techniques and integrating the benefits of metaheuristic optimization, the OKPS is designed to deal with the vehicle's highly nonlinear dynamics, data noise and real-time requirements. For ego-vehicle localization, especially for highly dynamic on-road maneuvers, a filter needs to be robust and reactive at the same time. The OKPS filter is a new cooperative-reactive localization algorithm inspired by dynamic Particle Swarm Optimization (PSO) metaheuristic methods. It combines the advantages of PSO and two other filters: the Particle Filter (PF) and the Extended Kalman Filter (EKF). The OKPS is tested using real data collected with a vehicle equipped with embedded sensors. Its performance is compared with the EKF, the PF and the Swarm Particle Filter (SPF). The SPF is an interesting particle-based hybrid filter combining PSO and particle filtering advantages; it represents the first step of the OKPS development. The results show the efficiency of the OKPS for a highly dynamic driving scenario with damaged and low-quality GPS data.
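The PSO ingredient of such hybrid filters can be sketched as follows. This is a loose illustration in which particle-filter weights (assumed given) attract the swarm toward the best particle; it is not the OKPS algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(particles, weights, w=0.6, c=1.2):
    """One PSO-flavoured move: each particle drifts toward the
    best-weighted particle, plus random exploration noise."""
    g_best = particles[np.argmax(weights)]
    velocity = w * rng.normal(0, 0.05, particles.shape) \
             + c * rng.random((len(particles), 1)) * (g_best - particles)
    return particles + velocity

# 100 particles over a 2-D pose (x, y); the weights would come from
# the measurement likelihood of a particle filter (assumed given).
particles = rng.normal(0, 1, (100, 2))
weights = np.exp(-np.linalg.norm(particles - np.array([1.0, 2.0]), axis=1))
particles = pso_step(particles, weights)
```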
Schmitt, Gabriel. "Un système pour la description et la gestion de méthodologies et de données en conception." Montpellier 2, 1996. http://www.theses.fr/1996MON20207.
Pedraza, Linares Esperanza. "SGBD sémantiques pour un environnement bureautique : intégrité et gestion de transactions." Grenoble 1, 1988. http://tel.archives-ouvertes.fr/tel-00009437.
Gardès, Julien. "Étude et conception in silico d'amorces PCR pour l'identification des principaux pathogènes bactériens." Nice, 2011. http://www.theses.fr/2011NICE4061.
The detection of pathogens is a priority for medical research. Since 2000, the discipline has undergone a technological transition: molecular methods (e.g. PCR), faster and more accurate, are gradually replacing the traditional methods of cell culture and biochemical tests. However, the sensitivity and specificity of PCR depend on good primer design. It is generally accepted that primers are good if the annealing temperature exceeds 55 °C, they are specific to the target gene and species, and they hybridize to all known alleles of the gene. During my PhD, we built a semi-automatic procedure to collect, for each gene of a species, every sequence, information from different public databases and the literature, and every published primer. The efficiency of each primer was then estimated by checking its specificity, its sensitivity and its thermodynamic characteristics. This pipeline was applied to every annotated gene of several organisms of biodefense interest. The results were organized in the form of a website, www.pathogenes.org, to provide a turnkey system for biologists wishing to develop molecular tests for these species. In addition, our work showed, for the virulence genes of Vibrio cholerae, that only one third of published primers are "good" according to the criteria mentioned above, and that the publication date and citation counts of a primer do not permit an estimate of its quality.
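A first-pass annealing-temperature screen in the spirit of the 55 °C criterion can be illustrated with the classic Wallace rule; real pipelines, presumably including this one, use finer nearest-neighbour thermodynamics:

```python
def wallace_tm(primer):
    """Rough melting temperature by the Wallace rule:
    Tm = 2*(A+T) + 4*(G+C), valid for short oligos (~14-20 nt)."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def passes_tm_criterion(primer, threshold=55.0):
    """First-pass screen mirroring the abstract's >55 °C criterion."""
    return wallace_tm(primer) >= threshold

print(wallace_tm("ATGCGTACGTTAGCAT"), passes_tm_criterion("ATGCGTACGTTAGCAT"))
```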
Sahnouni, Belblidia Yasmine. "Modélisation des données dans le bâtiment pour le développement d'outils d'assistance à la conception technique : un modèle pour la simulation du cycle de la conception technique." Vandoeuvre-les-Nancy, INPL, 1999. http://www.theses.fr/1999INPL033N.
Tercinet, Fabrice. "Méthodes arborescentes pour la résolution des problèmes d'ordonnancement, conception d'un outil d'aide au développement." Tours, 2004. http://www.theses.fr/2004TOUR4034.
We present in this PhD thesis a study of tree-search-based methods (TSM) for scheduling problem solving, which leads us to define a global approach for helping the development of branch-and-bound algorithms. We have carried out an extensive study of TSMs from the literature, such as branch-and-bound algorithms, Recovering Beam Search methods, Limited Discrepancy Search methods and Branch-and-Greed algorithms. From this study, we have shown the need for an object-oriented approach to the design of tree nodes, so that they are reusable in future developments. We have thus identified from the literature a pool of branch-and-bound methods that are interesting for their branching rule and for the scheduling problem they solve, from which we have designed object-oriented node models. This work is integrated in the e-OCEA project. Furthermore, we have studied more precisely the P|ri,qi|Cmax problem, because it plays a central role in the solving of scheduling problems. This particular place has led many researchers to develop algorithms to improve the exact solving of this problem, such as upper bounds, lower bounds and satisfiability tests. We have developed new satisfiability tests based on energetic reasoning and a max-flow formulation. We have also designed a new branching rule and several truncated branch-and-bound methods.
Sayah, Marguerite. "Un environnement d'interrogation graphique de bases de données orientées objet (EIGOO) pour des utilisateurs non informaticiens." Lyon, INSA, 1998. http://www.theses.fr/1998ISAL0046.
Our work concerns the interrogation of object-oriented databases by non-computer specialists. The problems they face relate mainly to the complexity of the database schema and to the difficulty of textual query languages. In this context, we propose a graphical query environment (EIGOO) that uses the view technique to reduce schema complexity and offers a graphical query language to consult databases through the defined views. The view definition module of our environment proposes a graphical language and addresses users that are experts in the application domain. The views are defined for groups of end users; they are adapted to their working context, to their application needs and to their access rights. The second main module concerns database interrogation through views and addresses non-computer-specialist users. It offers a query language and guarantees the conversion of graphical queries into Object Query Language (OQL) in order to execute them under any ODMG-compliant DBMS. The graphical query language supports projection, selection, implicit join, explicit join, grouping and sorting operations. It also allows the specification of quantifiers and the elaboration of reflexive queries. The schema of the view is graphically visualized. Queries are directly formulated on the graph and are divided in two categories: implicit join queries and explicit join queries. Constructed queries can be saved and reused in order to create new queries. Concerning the conversion of graphical queries, a method is proposed for each category of queries.
Ravat, Franck. "Modèles et outils pour la conception et la manipulation de systèmes d'aide à la décision." Habilitation à diriger des recherches, Université des Sciences Sociales - Toulouse I, 2007. http://tel.archives-ouvertes.fr/tel-00379779.
For data warehouses, our objective was to provide solutions for modelling the evolution of decisional data (an object model extension) and for integrating textual data without fixing its schema a priori. For multidimensional databases, we proposed a core multidimensional model with several extensions answering decision-makers' needs. These extensions take into account the management of indicators and textual data, temporal evolution (versions), the consistency of the data and of its analyses (semantic constraints), the integration and capitalization of decision-makers' expertise (annotations), as well as the personalization of multidimensional schemas (weights). This work was completed by a design approach whose advantage is to take into account both the decision-makers' needs and the data sources; it models the static aspect (decisional data) as well as the dynamic aspect (the loading process of the decision support system).
From a data manipulation point of view, we proposed an algebra complemented by a decision-maker-oriented graphical language and a declarative language. Our proposals were validated through participation in various projects, as well as the co-supervision of five PhD theses and the supervision of several Master's research projects.
Baklouti, Fatma. "Algorithmes de construction du Treillis de Galois pour des contextes généralisés." Paris 9, 2006. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2006PA090003.
Our main concern in this thesis is concept (or Galois) lattices. As shown by previous works, concept lattices are an effective tool for data analysis and knowledge discovery, especially for classification, clustering, information retrieval and, more recently, association rule mining. Several algorithms have been proposed to generate concepts or concept lattices from a data context. They focus on binary data arrays, called contexts. However, in practice we need to deal with contexts which are large and not necessarily binary. We propose a fast Galois-lattice-building algorithm, called the ELL algorithm, for generating closed itemsets from objects having general descriptions, and we compare its performance with other existing algorithms. In order to obtain better performance and to treat bigger contexts, we also propose a distributed version of the ELL algorithm called SD-ELL.
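At the heart of any Galois lattice builder is the closure operator over a binary context. The sketch below shows that basic operator, not the optimizations specific to ELL or SD-ELL:

```python
def closure(attrs, context):
    """Galois closure of an attribute set in a binary context:
    take all objects having every attribute in attrs, then all
    attributes shared by those objects."""
    objects = {o for o, oa in context.items() if attrs <= oa}
    if not objects:
        # empty extent: the closure is the full attribute set
        return set.union(*context.values())
    return set.intersection(*(context[o] for o in objects))

# A tiny binary context: object -> set of attributes it has.
context = {
    "o1": {"a", "b"},
    "o2": {"a", "b", "c"},
    "o3": {"b", "c"},
}
print(closure({"a"}, context))   # {'a', 'b'}: every object with a also has b
```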
Hajjeh, Ibrahim. "Sécurité des échanges. Conception et validation d'un nouveau protocole pour la sécurisation des échanges." Paris, ENST, 2003. https://pastel.archives-ouvertes.fr/pastel-00001168.
Many security mechanisms have been proposed for wired and wireless networks. Although these mechanisms have answered some security requirements, they remain efficient only in a specific context, related to the assumptions and restrictive requirements emitted at the time of their design. Firstly, we define a list of security requirements that makes it possible to analyze the most widely deployed security solutions. Secondly, we propose to extend the SSL/TLS protocol with new services. SSL/TLS is a transparent security solution; thus, the security services provided to applications are the same. SSL/TLS does not meet needs specific to some classes of applications, such as Internet payment applications. We integrate the Internet Security Association and Key Management Protocol (ISAKMP) in SSL/TLS to provide, among others, identity protection and unification of security associations. In order to extend the use of SSL/TLS towards Internet payment systems, we integrate in SSL/TLS a generic signature module that generates a non-repudiation proof over all exchanged data. This module is interoperable with the SSL/TLS and TLS Extensions standards. However, all these proposals suffer from a lack of interoperability with their previous versions, which makes it impossible to satisfy all the security needs through one existing protocol. Thus, we propose to design, validate and develop a new security protocol which natively integrates the evolutions of security protocols in a powerful and elegant way. We call this protocol SEP, for Secure and Extensible Protocol.
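The idea of a generic signature module producing a non-repudiation proof over all exchanged data can be illustrated with an asymmetric signature over a session transcript. Ed25519 is an arbitrary choice here, and the message framing is invented; the module's actual algorithms are not reproduced:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Accumulate a transcript of everything exchanged in the session,
# then sign it once: the signature is a non-repudiation proof,
# since only the holder of the private key could have produced it.
transcript = b"".join([
    b"ClientHello...", b"ServerHello...", b"payment-order: 42 EUR",
])

key = Ed25519PrivateKey.generate()
proof = key.sign(transcript)

# Anyone holding the public key can verify the proof later;
# verify() raises InvalidSignature if the transcript was tampered with.
key.public_key().verify(proof, transcript)
print("non-repudiation proof verified")
```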
Pamba, Capo-Chichi Medetonhan Shambhalla Eugène William. "Conception d’une architecture hiérarchique de réseau de capteurs pour le stockage et la compression de données." Besançon, 2010. http://www.theses.fr/2010BESA2031.
Recent advances in various areas related to micro-electronics, computer science and wireless networks have resulted in the development of new research topics. Sensor networks are one of them. The particularity of this new research direction is the reduced performance of nodes in terms of computation, memory and energy. The purpose of this thesis is the definition of a new hierarchical architecture of sensor networks, usable in different contexts, that takes the sensors' constraints into account while providing high-quality data, such as multimedia, to end users. We present our hierarchical architecture with its different nodes and the wireless technologies that connect them. Because data transmission consumes much energy, we have developed two data compression algorithms in order to optimize the use of the channel by reducing the data transmitted. We also present a solution for storing large amounts of data on nodes by integrating the FAT16 file system under TinyOS-2.x.
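Generic delta encoding conveys the flavour of channel-saving compression on constrained nodes. The abstract does not detail the thesis's two algorithms, so the sketch below is only indicative:

```python
def delta_encode(samples):
    """Store the first sample, then only successive differences;
    slowly varying sensor readings yield many small values that
    fit in fewer bits on the radio."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

readings = [2051, 2052, 2052, 2054, 2053]   # e.g. raw ADC temperature values
encoded = delta_encode(readings)
assert delta_decode(encoded) == readings
print(encoded)   # [2051, 1, 0, 2, -1]
```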
Cuesta, Fernand. "Synthèse des ressources de communication pour la conception de systèmes embarqués temps réel flots de données." Nice, 2001. http://www.theses.fr/2001NICE5659.
Saïdi, Houssem Eddine. "Conception et évaluation de techniques d'interaction pour l'exploration de données complexes dans de larges espaces d'affichage." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30252/document.
Today's ever-growing data is becoming increasingly complex due to its large volume and high dimensionality: it thus becomes crucial to explore interactive visualization environments that go beyond the traditional desktop, in order to provide a larger display area and more efficient interaction techniques to manipulate the data. The main environments fitting this description are: large displays, i.e. an assembly of displays amounting to a single space; multi-display environments (MDEs), i.e. a combination of heterogeneous displays (monitors, smartphones/tablets/wearables, interactive tabletops...) spatially distributed in the environment; and immersive environments, i.e. systems where everything can be used as a display surface, without imposing any bound between displays and immersing the user within the environment. The objective of our work is to design and experiment with original and efficient interaction techniques well suited to each of these environments. First, we focused on interaction with large datasets on large displays. We specifically studied simultaneous interaction with multiple regions of interest of the displayed visualization. We implemented and evaluated an extension of the traditional overview+detail interface to tackle this problem: it consists of an overview+detail interface where the overview is displayed on a large screen and multiple detailed views are displayed on a tactile tablet. The interface allows the user to have up to four detailed views of the visualization at the same time. We studied its usefulness as well as the optimal number of detailed views that can be used efficiently. Second, we designed a novel touch-enabled device, TDome, to facilitate interactions in multi-display environments. The device is composed of a dome-like base and provides up to 6 degrees of freedom, a touchscreen and a camera that can sense the environment. [...]