Dissertations on the topic "Standards de données" (data standards)
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles.
Browse the top 39 dissertations on the topic "Standards de données" for your research.
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, where these are available in the metadata.
Browse dissertations from a wide range of disciplines and assemble your bibliography correctly.
Zenasni, Sarah. „Extraction d'information spatiale à partir de données textuelles non-standards“. Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS076/document.
Abstract: The extraction of spatial information from textual data has become an important research topic in the field of Natural Language Processing (NLP). It meets a crucial need in the information society, in particular to improve the efficiency of Information Retrieval (IR) systems for different applications (tourism, spatial planning, opinion analysis, etc.). Such systems require a detailed analysis of the spatial information contained in the available textual data (web pages, e-mails, tweets, SMS, etc.). However, the multitude and variety of these data, as well as the regular emergence of new forms of writing, make the automatic extraction of information from such corpora difficult. To meet these challenges, we propose in this thesis new text mining approaches allowing the automatic identification of variants of spatial entities and relations from textual data of mediated communication. These approaches rest on three main contributions that provide intelligent navigation methods. Our first contribution focuses on the recognition and identification of spatial entities in short-message corpora (SMS, tweets) characterized by weakly standardized modes of writing. The second contribution is dedicated to the identification of new forms/variants of spatial relations in these specific corpora. Finally, the third contribution concerns the identification of the semantic relations associated with the textual spatial information.
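To make the kind of entity identification described here concrete, the following is a minimal sketch (not the thesis's actual approach) of matching noisy place-name variants in short messages against a gazetteer with approximate string matching; the gazetteer, similarity cutoff, and sample message are illustrative assumptions.

```python
# Minimal sketch: recover spatial-entity candidates from weakly
# standardized text (SMS/tweets) by fuzzy-matching tokens against a
# gazetteer. Gazetteer, cutoff and message are illustrative assumptions.
import difflib
import re

GAZETTEER = ["montpellier", "toulouse", "marseille", "sete"]

def spatial_candidates(message, cutoff=0.8):
    """Return (token, matched place) pairs, tolerating misspellings."""
    tokens = re.findall(r"[a-zà-ÿ]+", message.lower())
    hits = []
    for tok in tokens:
        match = difflib.get_close_matches(tok, GAZETTEER, n=1, cutoff=cutoff)
        if match:
            hits.append((tok, match[0]))
    return hits

print(spatial_candidates("jsuis a montpelier ce soir"))
# [('montpelier', 'montpellier')]
```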
Colin, Clément. „Gestion et visualisation multiscalaire du territoire au bâtiment : Application à la Gestion et Maintenance assistée par Ordinateur“. Electronic Thesis or Diss., Lyon 2, 2024. http://www.theses.fr/2024LYO20010.
Abstract: Cities and the objects that make them up, such as buildings and water, electricity and road networks, have increasingly precise digital twins that play an important role in understanding territories. The growing use of Geographic Information Systems (GIS), Building Information Models (BIM) and City Information Models (CIM) has led to the creation of a large number of geospatial representations of these urban objects, made up of geometric and semantic data and structured by numerous standards. These representations provide a variety of thematic and spatial information describing what these objects are physically, functionally and operationally. A better understanding of these urban objects can be provided by applications enabling users to access, visualize and analyze them through these different representations. In this thesis, we focus on multiscalar interactive web navigation and visualization of multiple representations of the same object, considering various heterogeneous standards for representing the interior and exterior of a building and a city. Our first two contributions enable the creation of navigable, contextual views of these heterogeneous representations in a single web context, using approaches based on data integration methods. To this end, we propose a methodology and an open-source tool, Py3DTilers, for extracting, manipulating and visualizing the geometry of geospatial data, as well as a model-based semantic data integration methodology, to ensure that all the information present in these data can be conveyed to and understood by users. Our third contribution is the formalization of the concepts of Variant (an instance or set of instances representing the same entity) and Variant Identifier, used to reference and navigate through a set of representations of the same object. Finally, our last contribution focuses on the choice of the geometric representation of an object to be displayed, depending on the user's 3D context. We propose a study of the levels of detail described in different geospatial data standards, as well as a metric describing the complexity of a geometric representation to enable this choice. This thesis was carried out in partnership with Carl Software - Berger-Levrault, a publisher of computer-aided maintenance software and asset management solutions. Particular attention was paid to interoperability (use of standards), reusability (a shared software architecture based on open-source tools) and reproducibility of the proposed solutions. The thesis aims to improve the understanding of equipment in order to facilitate its maintenance and management, by allowing 3D visualization of equipment and the exploitation of the knowledge found in its various representations. This is achieved by establishing a natural link between the equipment representations existing in this domain and various geospatial data sources.
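The "complexity metric" contribution lends itself to a small illustration. Below is a minimal sketch, not Py3DTilers or the thesis's actual metric, of choosing which of several representations of the same object to display under a complexity budget; the Representation type, the source names and the vertex-count metric are illustrative assumptions.

```python
# Minimal sketch: pick the most detailed geometric representation of an
# object that fits a complexity budget (vertex count as a crude metric).
# Types, source names and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Representation:
    source: str          # e.g. "CityGML LoD2", "BIM/IFC"
    vertex_count: int    # proxy for geometric complexity

def pick_representation(reps, vertex_budget):
    affordable = [r for r in reps if r.vertex_count <= vertex_budget]
    if not affordable:                       # nothing fits: coarsest wins
        return min(reps, key=lambda r: r.vertex_count)
    return max(affordable, key=lambda r: r.vertex_count)

reps = [Representation("CityGML LoD1", 120),
        Representation("CityGML LoD2", 2400),
        Representation("BIM/IFC", 85000)]
print(pick_representation(reps, 10000).source)   # CityGML LoD2
```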
Langollf, Didier. „Analyse des standards de notation musicale en vue de leur utilisation pour la transcription de la musique en braille et l'étude d'une base de donnés musicales“. Toulouse 3, 2000. http://www.theses.fr/2000TOU30233.
Pekovic, Sanja. „Les déterminants et les effets des normes de qualité et d’environnement : analyses microéconométriques à partir de données françaises d’entreprises et de salariés“. Thesis, Paris Est, 2010. http://www.theses.fr/2010PEST3007/document.
Abstract: The scope and magnitude of the changes occurring in business today have led to great interest in, and widespread adoption of, Quality and Environmental Management approaches. However, despite their prevalence, efforts at understanding the motives for their adoption, as well as their effects on firm and employee outcomes, are still in their infancy. This PhD dissertation provides theoretical and empirical contributions to three research topics dealing with Quality and Environmental Management approaches in the French institutional framework: (1) the determinants of their adoption, (2) their impact on firm outcomes and (3) their impact on employee outcomes. These three aspects make up the three parts of the thesis. In Part I, we define and characterise quality and environmental approaches with a special focus on the ISO 9000 Quality Management standards and the ISO 14000 Environmental Management standards. Furthermore, we empirically examine the determinants of quality and environmental standards adoption. Our findings reveal that the determinants of quality standards differ significantly between manufacturing and service firms, particularly with respect to features of their internal strategy (quality improvement, cost reduction and innovation). However, we also obtain evidence indicating that firm characteristics (firm size, corporate status and previous experience with similar standards) and features of their external strategy (export levels and customer satisfaction) play a significant role in quality standards adoption across both the manufacturing and service sectors. Moreover, we empirically investigate the determinants of chemical firms' registration for the ISO 14001 standard or the Responsible Care program. We show that most determinants differ between the two systems analysed: while firm size, previous experience with similar standards, information disclosure requirements and customer location are major determinants of ISO 14001 registration, regulatory pressure, past environmental problems and future risks are the main drivers of Responsible Care registration. In Part II, we empirically investigate whether quality and environmental standards are related to better firm performance, using various sets of performance measures. The evidence indicates that quality standards positively influence turnover and specific indicators of innovation performance and productivity, but have no impact on profit and some other innovation performance measures. Based on our empirical findings, we conclude that while environmental standards improve turnover and the recruitment of both professional and non-professional employees, they have no effect on profit. Moreover, the research shows that implementing both quality and environmental standards is likely to enhance firm outcomes more than implementing only one standard. Part III focuses on the effect of quality and environmental standards on employee outcomes. The estimation results show that quality standards increase the risk of employee accidents, although they have no effect on working accidents that lead to sick leave. On the other hand, our results lead to the conclusion that environmental standards contribute significantly to the enhancement of working conditions via the reduction of accidents. Furthermore, the evidence shows that environmental standards seem to improve employee well-being. More precisely, employees working for firms certified to an environmental standard report greater feelings of usefulness about their job and declare that they are more often fairly valued in their jobs. The evidence also shows that employees working for environmentally certified firms do not claim to be significantly more involved in their jobs, but they are more likely, ceteris paribus, to work uncompensated supplementary hours than "non-green" workers.
Choquet, Rémy. „Partage de données biomédicales : modèles, sémantique et qualité“. Phd thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00824931.
Peoples, Bruce E. „Méthodologie d'analyse du centre de gravité de normes internationales publiées : une démarche innovante de recommandation“. Thesis, Paris 8, 2016. http://www.theses.fr/2016PA080023.
Abstract: "Standards make a positive contribution to the world we live in. They facilitate trade, spread knowledge, disseminate innovative advances in technology, and share good management and conformity assessment practices." There are a multitude of standards and standards-consortia organizations producing market-relevant standards, specifications, and technical reports in the domain of Information and Communication Technology (ICT). With the number of ICT-related standards and specifications numbering in the thousands, it is not readily apparent to users how these standards inter-relate to form the basis of technical interoperability. There is a need to develop and document a process to identify how standards inter-relate to form a basis of interoperability in multiple contexts: at a general horizontal technology level that covers all domains, and within specific vertical technology domains and sub-domains. By analyzing which standards inter-relate through normative referencing, key standards can be identified as technical centers of gravity, allowing identification of the specific standards that are required for the successful implementation of the standards that normatively reference them, and forming a basis for interoperability across horizontal and vertical technology domains. This thesis focuses on defining a methodology to analyze ICT standards and identify normatively referenced standards that form technical centers of gravity, utilizing Data Mining (DM) and Social Network Analysis (SNA) graph technologies as the basis of analysis. As a proof of concept, the methodology focuses on the International Standards (IS) published by the International Organization for Standardization/International Electrotechnical Commission Joint Technical Committee 1, Subcommittee 36, Learning, Education, and Training (ISO/IEC JTC1 SC36). The process is designed to be scalable to larger document sets within ISO/IEC JTC1, covering all JTC1 subcommittees, and possibly to other Standards Development Organizations (SDOs). Chapter 1 reviews the literature on previous standards-analysis projects and on the components used in this thesis, such as data mining and graph theory. Chapter 2 focuses on identifying a dataset of published International Standards for testing the developed methodology and on forming specific technology domains and sub-domains. Chapter 3 describes the specific methodology developed to analyze published International Standards documents and to create and analyze the graphs that identify technical centers of gravity. Chapter 4 presents the analysis of the data, which identifies the technical center-of-gravity standards for the ICT learning, education, and training standards produced in ISO/IEC JTC1 SC36. Conclusions are contained in Chapter 5, and recommendations for further research using the output of the developed methodology in Chapter 6.
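The center-of-gravity idea reduces to a graph computation, sketched minimally below with a toy edge list (not the SC36 dataset) and assuming the networkx library: standards are nodes, normative references are directed edges, and heavily referenced standards surface through centrality measures.

```python
# Minimal sketch: rank standards by how often they are normatively
# referenced. Edge (A, B) means "A normatively references B".
# The edge list is a fabricated toy example, not SC36 data.
import networkx as nx

references = [
    ("ISO/IEC 19788-2", "ISO/IEC 19788-1"),
    ("ISO/IEC 19788-3", "ISO/IEC 19788-1"),
    ("ISO/IEC 19796-3", "ISO/IEC 19796-1"),
    ("ISO/IEC 19788-1", "ISO/IEC 11179-3"),
]
G = nx.DiGraph(references)

# In-degree: how many standards depend on each one directly.
print(sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True)[:2])
# PageRank also credits standards referenced by important standards.
print(nx.pagerank(G))
```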
Etien-Gnoan, N'Da Brigitte. „L'encadrement juridique de la gestion électronique des données médicales“. Thesis, Lille 2, 2014. http://www.theses.fr/2014LIL20022/document.
Abstract: The electronic management of medical data encompasses both the simple automated processing of personal data and the sharing and exchange of health data. Its legal framework is provided both by the rules common to all automated processing of personal data and by those specific to the processing of medical data. Even though it is a source of savings, this management raises privacy protection issues, which the French government addresses with one of the strongest legal frameworks in the world in this field. However, major projects such as the personal health record are still waiting to be realized, and health law is continually outpaced and driven by technological advances. The development of e-health disrupts the relationship and dialogue between caregiver and patient. The extension of patients' rights, the sharing of responsibility, the increasing number of actors, and shared medical confidentiality pose new challenges that must now be reckoned with. Another crucial question is posed by the lack of harmonization of legislation, which increases the risks in the cross-border sharing of medical data.
Toussaint, Marion. „Une contribution à l'industrie 4.0 : un cadre pour sécuriser l'échange de données standardisées“. Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0121.
Abstract: The recent digital transformation of the manufacturing world has resulted in numerous benefits, from higher-quality products to enhanced productivity and shorter time to market. In this digital world, data has become a critical element in many decisions and processes within and across organizations, and data exchange is now a key process for organizations' communication, collaboration, and efficiency. Industry 4.0's adoption of modern communication technologies has made this data available and shareable at a quicker rate than we can consume or track it. This speed brings significant challenges: data interoperability and data traceability, two interdependent challenges that manufacturers face and must understand in order to address them. On the one hand, data interoperability challenges delay innovation and collaboration. The growing volume of data exchange is associated with an increased number of heterogeneous systems that need to communicate with and understand each other. Information standards are a proven solution, yet their long and complex development process impedes them from keeping up with the fast-paced environment they need to support and provide interoperability for, slowing down their adoption. This thesis proposes a transition from predictive to adaptive project management, using Agile methods to shorten development iterations and increase delivery velocity, thereby increasing standards adoption. While adaptive environments have shown themselves to be a viable way to align standards with the fast pace of industry innovation, most project requirements management solutions have not evolved to accommodate this change. This thesis therefore also introduces a model to support better requirements elicitation during standards development, with increased traceability and visibility. On the other hand, data-driven decisions are exposed to the speed at which tampered data can propagate through organizations and corrupt those decisions. With the mean time to identify (MTTI) and mean time to contain (MTTC) such a threat already close to 300 days, the constant growth of data produced and exchanged will only push the MTTI and MTTC upwards. While digital signatures have already proven their use in identifying such corruption, there is still a need for a formal data traceability framework to track data exchange across large and complex networks of organizations, in order to identify and contain the propagation of corrupted data. This thesis analyses existing cybersecurity frameworks and their limitations, and introduces a new standards-based framework, in the form of an extended NIST CSF profile, to prepare against, mitigate, manage, and track data manipulation attacks. The framework is accompanied by implementation guidance to facilitate its adoption by organizations of all sizes.
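The role of digital signatures in containing tampered data can be illustrated in a few lines of standard-library Python; this is a minimal sketch of the building block, not the framework proposed in the thesis, and the shared key and payload are illustrative assumptions.

```python
# Minimal sketch: attach a MAC to each exchanged payload so a recipient
# can detect manipulation before the data feeds a decision.
# Key handling and payload are illustrative assumptions; stdlib only.
import hashlib
import hmac

SHARED_KEY = b"replace-with-a-provisioned-secret"

def tag(payload: bytes) -> str:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, received_tag: str) -> bool:
    return hmac.compare_digest(tag(payload), received_tag)

record = b'{"part": "A-42", "tolerance_mm": 0.05}'
t = tag(record)
print(verify(record, t))                                     # True
print(verify(b'{"part": "A-42", "tolerance_mm": 0.50}', t))  # False
```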
Le, Lann Lucas. „Elaboration d'une procédure standardisée d'harmonisation des données de cytométrie en flux dans le cadre d'études multicentriques Multi-center harmonization of flow cytometers in the context of the European “PRECISESADS” project, in Autoimmunity Reviews 15(11), November 2016“. Thesis, Brest, 2019. http://www.theses.fr/2019BRES0048.
Abstract: The aim of this thesis is to ensure the comparability of flow cytometry data within the context of multi-center studies. The work is part of the PRECISESADS project, a European project that seeks to reclassify the systemic autoimmune diseases using "omics" data (genomics, proteomics, flow cytometry, and so on) to find useful biological signatures. The inclusion of numerous individuals in the project makes informatics tools essential for automating the analysis of the thousands of flow cytometry files obtained over the five-year period of the project. Cell population frequencies extracted from the files are comparable between centers, but that is not the case for the median fluorescence intensities (MFIs) of the studied molecules, despite a normalization step. This incomparability arises from the combination of a batch effect and a center effect, both of which can be corrected with specific coefficients. Normalization and correction of both effects, through newly developed R and Python scripts, allow the production of MFIs that are comparable between centers. Overall, this thesis work established a new standardized procedure suitable for any multi-center flow cytometry data analysis project.
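The batch/center correction can be pictured with a minimal sketch (not the PRECISESADS scripts): treat the effects as multiplicative and rescale each group's MFIs by a coefficient derived from medians. The column names and the multiplicative correction model are illustrative assumptions; pandas is assumed available.

```python
# Minimal sketch: remove multiplicative center and batch effects from MFI
# values by rescaling each group to the global median.
# Columns and the multiplicative model are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "center": ["A", "A", "A", "B", "B", "B"],
    "batch":  [1, 1, 2, 1, 1, 2],
    "mfi":    [520.0, 495.0, 610.0, 1040.0, 980.0, 1190.0],
})

global_median = df["mfi"].median()

def rescale(group):
    return group * (global_median / group.median())

df["mfi_center"] = df.groupby("center")["mfi"].transform(rescale)
df["mfi_corrected"] = (df.groupby(["center", "batch"])["mfi_center"]
                         .transform(rescale))
print(df)
```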
Abdallah, Nada. „Raisonnements standard et non-standard pour les systèmes décentralisés de gestion de données et de connaissances“. Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00536926.
Schnell, Michaël. „Using Case-Based Reasoning and Argumentation to Assist Medical Coding“. Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0168.
Abstract: The aim of the National Cancer Registry (NCR) in Luxembourg is to collect data about cancer and the quality of cancer treatment. To obtain high-quality data that can be compared with other registries or countries, the NCR follows international coding standards and rules, such as the International Classification of Diseases for Oncology (ICD-O). These standards are extensive and complex, which complicates the data collection process. The operators, i.e. the people in charge of this process, are often confronted with situations where data is missing or contradictory, preventing the application of the provided guidelines. To assist them, the coding experts of the NCR answer coding questions asked by operators, but this assistance is time-consuming for the experts. To help reduce this burden and to facilitate the operators' task, this project aims at implementing a coding assistant that answers coding questions. From a scientific point of view, this thesis tackles the problem of extracting information from a set of data sources under a given set of rules and guidelines. Case-based reasoning was chosen as the method for solving this problem, given its similarity to the reasoning process of the coding experts. The method designed relies on arguments provided by coding experts in the context of previously solved problems. This document presents how these arguments are used to identify similar problems and to explain the computed solution to both operators and coding experts. A preliminary evaluation assessed the designed method and highlighted key areas for improvement. While this work focused on cancer registries and medical coding, the method could be generalized to other domains.
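The retrieve step of such a case-based reasoner fits in a few lines; the sketch below is illustrative only (the cases, descriptors, suggested answers and the Jaccard similarity are all assumptions, not the NCR system).

```python
# Minimal sketch of CBR retrieval: represent past coding questions as
# descriptor sets and answer a new question with the most similar solved
# case. Cases, descriptors and answers are illustrative assumptions.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

case_base = [
    ({"site:lung", "histology:missing", "laterality:left"},
     "Query pathology before assigning a morphology code."),
    ({"site:breast", "report:contradictory"},
     "Prefer the pathology report over the clinical summary."),
]

def retrieve(problem: set):
    return max(case_base, key=lambda case: jaccard(case[0], problem))

features, suggestion = retrieve({"site:lung", "histology:missing"})
print(suggestion)   # Query pathology before assigning a morphology code.
```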
Gerl, Armin. „Modelling of a privacy language and efficient policy-based de-identification“. Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI105.
Abstract: The processing of personal information is omnipresent in our data-driven society, enabling personalized services that are regulated by privacy policies. Although privacy policies are strictly defined by the General Data Protection Regulation (GDPR), no systematic mechanism is in place to enforce them. Especially when data from several sources, each with its own associated privacy policy, is merged into a single data-set, managing and complying with all privacy requirements during processing is challenging. Privacy policies can vary because each source has its own policy or because individual users personalize theirs. Thus, there is a risk of negligent or malicious processing of personal data in defiance of privacy policies. To tackle this challenge, a privacy-preserving framework is proposed. Within this framework, privacy policies are expressed in the proposed Layered Privacy Language (LPL), which makes it possible to specify legal privacy policies and privacy-preserving de-identification methods. The policies are enforced by a Policy-based De-identification (PD) process. The PD process enables efficient simultaneous compliance with various privacy policies, applying pseudonymization, personal privacy anonymization and privacy models for de-identification of the data-set. Thus, the privacy requirements of each individual privacy policy are enforced, filling the gap between legal privacy policies and their technical enforcement.
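A minimal sketch of the de-identification step is given below; the policy format is an illustrative assumption and not LPL syntax, and the keyed-hash pseudonymization and age generalization stand in for the richer methods the PD process supports.

```python
# Minimal sketch: apply per-field de-identification rules from a policy.
# Policy format is an illustrative assumption, not LPL syntax.
import hashlib
import hmac

SECRET = b"pseudonymization-key"   # assumption: provisioned out of band
policy = {"name": "pseudonymize", "age": "generalize", "diagnosis": "keep"}

def pseudonym(value: str) -> str:
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:12]

def deidentify(record: dict) -> dict:
    out = {}
    for field, value in record.items():
        rule = policy.get(field, "drop")       # unknown fields are dropped
        if rule == "keep":
            out[field] = value
        elif rule == "pseudonymize":
            out[field] = pseudonym(value)
        elif rule == "generalize":             # 10-year age bands
            low = (value // 10) * 10
            out[field] = f"{low}-{low + 9}"
    return out

print(deidentify({"name": "Alice Martin", "age": 34,
                  "diagnosis": "J45", "phone": "0612345678"}))
```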
Goëta, Samuel. „Instaurer des données, instaurer des publics : une enquête sociologique dans les coulisses de l'open data“. Electronic Thesis or Diss., Paris, ENST, 2016. http://www.theses.fr/2016ENST0045.
Abstract: As more than fifty countries have launched an open data policy, this doctoral dissertation investigates the emergence and implementation of such policies. It is based on the analysis of public sources and an ethnographic inquiry conducted in seven French local authorities and institutions. By retracing six defining moments of the "open data principles" and their implementation by a French institution, Etalab, this work shows how open data has brought attention to data, particularly in their raw form, considered an untapped resource, the "new oil" lying beneath organisations. The inquiry shows that the process of opening generally begins with a phase of identification marked by progressive and uncertain explorations, through which management files are progressively instantiated as data. Their circulation provokes frictions: to leave the sociotechnical network of their organisations, data generally go through validation circuits and chains of treatment. Moreover, data must often undergo substantial processing before opening in order to become intelligible to machines as well as humans. This thesis also shows that data publics are themselves instantiated, as they are expected to visualize, inspect and process the data; they are brought into being through various tools, which compose another area of the invisible work of open data projects. Finally, it appears from this work that a possible legal requirement to open data raises a fundamental question: "what is data?" Rather than reducing data to a relational category that would apply to any informational material, the cases studied show that the term generally applies when data are the starting point of sociotechnical networks dedicated to their circulation, exploitation and visibility.
Jaillot, Vincent. „3D, temporal and documented cities : formalization, visualization and navigation“. Thesis, Lyon, 2020. http://www.theses.fr/2020LYSE2026.
Abstract: The study and understanding of the evolution of cities is an important societal issue, particularly for improving quality of life in increasingly dense cities. Digital technology, and in particular 3D city models, can be part of the answer. Their manipulation is, however, sometimes complex due to their thematic, geometric and topological dimensions and their hierarchical structure. In this thesis, we focus on integrating the temporal dimension into these 3D city models and on enriching them with multimedia documents, with the goal of web-based visualization and navigation. We take a particular interest in interoperability (relying on standards), reusability (with a shared software architecture and open-source components) and reproducibility (making our experiments durable). Our first contribution is a formalization of the temporal dimension of cities for interactive navigation and visualization on the web. For this, we propose a conceptual model of existing standards for the visualization of cities on the web, which we extend with a formalization of the temporal dimension, together with a logical model and a technical specification of these proposals. Our second contribution allows the integration of multimedia documents into city models for spatial, temporal and thematic visualization and navigation on the web. We propose a conceptual model for the integration of heterogeneous and multidimensional geospatial data, which we then use for the integration of multimedia documents and 3D city models. Finally, this thesis took place in a multidisciplinary context via the Fab-Pat project of the LabEx IMU, which focuses on the sharing and shaping of cultural heritage. In this framework, a contribution combining social sciences and computer science led to the design of DHAL, a methodology for the comparative analysis of devices for sharing heritage through digital technology.
Dumas, Menjivar Marlon. „TEMPOS : une plate-forme pour le développement d'applications temporelles au dessus de SGBD à objets“. Phd thesis, Université Joseph Fourier (Grenoble), 2000. http://tel.archives-ouvertes.fr/tel-00006741.
Fourmanoit, N. „Analyse des 5 ans de données de l'expérience SuperNova Legacy Survey“. Phd thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00587450.
Fouchez, Dominique. „Etude de canaux de physique non-standard au LHC : analyse des données de test d'un calorimètre plomb/fibres scintillantes“. Aix-Marseille 2, 1993. http://www.theses.fr/1993AIX22003.
Der volle Inhalt der QuelleAbidi, Karima. „La construction automatique de ressources multilingues à partir des réseaux sociaux : application aux données dialectales du Maghreb“. Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0274.
Abstract: Automatic language processing relies on language resources such as corpora, dictionaries, sentiment lexicons, morpho-syntactic analyzers, taggers, etc. For well-resourced languages these resources are often available, but for under-resourced languages there is often a lack of tools and data. In this thesis, we are interested in some of the vernacular forms of Arabic used in the Maghreb. These forms are known as dialects and can be classified as under-resourced languages. Apart from raw text, generally extracted from social networks, few resources exist for processing Arabic dialects. Compared to other under-resourced languages, they have several specificities that make them harder to process. In particular, there are no writing rules for these dialects, so users write without following strict conventions and the same word can have several spellings. Words in Arabic dialect can be written in the Arabic script and/or the Latin script (arabizi). The Arabic dialects of the Maghreb are also particularly influenced by foreign languages such as French and English: in addition to words borrowed from these languages, automatic dialect processing must take into account the phenomenon of code-switching, known in linguistics as diglossia. It gives free rein to the user, who can write in several languages in the same sentence: a sentence can start in Arabic dialect and switch mid-way to French, English or Modern Standard Arabic. On top of this, there are several dialects in the same country and, a fortiori, several different dialects across the Arab world. It is therefore clear that classic NLP tools developed for Modern Standard Arabic cannot be used directly to process dialects. The main objective of this thesis is to propose methods to automatically build resources for Arabic dialects in general, and for Maghreb dialects in particular; this represents our contribution to the effort made by the community working on Arabic dialects. We have thus produced methods for building comparable corpora and lexical resources containing the different forms of an entry and their polarity. In addition, we developed methods for processing Modern Standard Arabic on Twitter data and on transcripts from an automatic speech recognition system operating on Arabic videos extracted from Arab television channels such as Al Jazeera, France24, Euronews, etc. We compared the opinions of automatic transcriptions from different multilingual video sources related to the same subject by developing a method based on the linguistic theory called Appraisal.
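The polarity resource described here can be pictured with a minimal sketch: each entry carries several surface forms (Arabic script, arabizi, borrowings), and scoring simply tolerates code-switched input. The entries, glosses and scores below are illustrative assumptions, not the thesis's lexicon.

```python
# Minimal sketch: a polarity lexicon whose entries list several spellings
# (Arabic script, arabizi, French borrowings), applied to code-switched
# text by plain lookup. Entries and scores are illustrative assumptions.
LEXICON = {
    ("mlih", "mli7", "مليح"): +1,    # "good" (Maghrebi dialect)
    ("khayb", "5ayb", "خايب"): -1,   # "bad"
    ("magnifique",): +1,             # French borrowing
}

FORM_TO_POLARITY = {form: pol
                    for forms, pol in LEXICON.items()
                    for form in forms}

def polarity(text: str) -> int:
    return sum(FORM_TO_POLARITY.get(tok, 0) for tok in text.lower().split())

print(polarity("le film mli7 mais la fin خايب"))   # +1 - 1 = 0
```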
Braci, Sofiane. „Etude des méthodes de dissimulation informées de données appliquées aux supports multimédias“. Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00866202.
Cazalens, Sylvie. „Formalisation en logique non standard de certaines méthodes de raisonnement pour fournir des réponses coopératives, dans des systèmes de bases de données et de connaissances“. Toulouse 3, 1992. http://www.theses.fr/1992TOU30172.
Der volle Inhalt der QuelleNitoglia, Elisa. „Gravitational-wave data analysis for standard and non-standard sources of compact binary coalescences in the third LIGO-Virgo observing run“. Electronic Thesis or Diss., Lyon 1, 2023. http://www.theses.fr/2023LYO10143.
Abstract: This PhD thesis presents a comprehensive investigation into the detection of gravitational-wave signals from compact binary mergers, with a specific focus on the analysis of data from the third observing run of the LIGO-Virgo Collaboration. The manuscript begins with an introduction to the fundamental principles of the theory of General Relativity, including the prediction of the existence of gravitational waves and an overview of the astrophysical sources that generate them. It also provides a detailed description of interferometers, the instruments used in gravitational-wave observatories, and their basic functioning. Subsequently, the manuscript focuses on the advanced data analysis techniques developed to extract gravitational-wave signals from the detector noise. Special attention is given to the Multi-Band Template Analysis (MBTA) pipeline, to which the author actively contributes as part of the MBTA team; its functioning and methodology are described in detail, highlighting its role in the detection and analysis of gravitational-wave signals. The manuscript then presents the results of the standard analysis conducted to search for signals originating from the coalescence of binary black holes, binary neutron stars, and black hole-neutron star binaries in the data collected during the third observing run, including a comprehensive examination of the observed signals, their properties, and the astrophysical implications of the detected mergers. Additionally, the manuscript explores the latest advancements in the search for gravitational waves emitted by sub-solar-mass binaries, i.e. binary systems comprising at least one object with a mass below that of the Sun, providing an in-depth investigation into the methodology and results of the sub-solar-mass search during the third observing run. Through this investigation, the manuscript aims to contribute to the advancement of gravitational-wave astronomy, encompassing the main achievements of the third observing run in both the standard and sub-solar-mass searches.
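At the core of template pipelines such as MBTA is matched filtering, sketched minimally below with numpy: correlate a known waveform template against noisy data and look for a peak in the signal-to-noise ratio. The toy chirp, white unit-variance noise and time-domain correlation are simplifying assumptions (real pipelines whiten the data and filter banks of templates in the frequency domain).

```python
# Minimal sketch of matched filtering: slide a template over noisy data
# and locate the correlation peak. Toy chirp + white-noise assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 4096                                  # sample rate (Hz)
t = np.arange(0, 4.0, 1 / fs)              # 4 s of data
tt = t[:fs]                                # 1 s template support

template = np.sin(2 * np.pi * (30 + 40 * tt) * tt) * np.hanning(fs)
data = rng.normal(0.0, 1.0, t.size)        # unit-variance "whitened" noise
data[2 * fs:3 * fs] += 0.3 * template      # bury a weak signal at t = 2 s

corr = np.correlate(data, template, mode="valid")
snr = corr / np.sqrt(np.sum(template ** 2))   # SNR for unit-variance noise
peak = int(np.argmax(np.abs(snr)))
print(f"peak |SNR| ~ {abs(snr[peak]):.1f} at t = {peak / fs:.2f} s")
```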
Martin, Dit Latour Bertrand. „Mesure de la section efficace de production de paires de quarks top dans l'état final di-électron avec les données collectées par l'expérience D0 au Run IIa“. Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00330351.
Abstract: The reconstruction and identification of electrons and jets are essential for this analysis, and were studied in a topology where a Z boson is produced in association with one or more jets. The Z+jets process is indeed the dominant physics background for top-antitop production in the dielectron final state.
The main goal of the cross-section measurement is to test the predictions of the Standard Model. In this manuscript, the result is also interpreted to extract the top-quark mass indirectly. Moreover, the cross-section measurement is sensitive to possible manifestations of new physics, such as the existence of a charged Higgs boson. The selection established to measure the top-antitop production cross-section was used to search for an H+ boson lighter than the top quark, which could then decay into a W+ or H+ boson and a b quark. In the model studied, the H+ boson decays exclusively into a tau lepton and a neutrino.
Noumon, Allini Elie. „Caractérisation, évaluation et utilisation du jitter d'horloge comme source d'aléa dans la sécurité des données“. Thesis, Lyon, 2020. http://www.theses.fr/2020LYSES019.
Abstract: This thesis, funded by the DGA, is motivated by the problem of evaluating true random number generators (TRNGs) for applications with a very high level of security. As current standards such as AIS-31 are not sufficient for these types of applications, the DGA proposes a complementary procedure, validated on TRNGs based on ring oscillators (ROs), which aims to characterize the source of randomness of a TRNG in order to identify the electronic noises present in it. These noises manifest themselves in digital circuits as the clock jitter generated in the ROs. They can be characterized by their power spectral density, related to the time-domain Allan variance, which, unlike the standard variance still widely used, makes it possible to discriminate between the different types of noise (mainly thermal and flicker noise). This study served as a basis for estimating the proportion of jitter due to thermal noise used in stochastic models describing the output of TRNGs. In order to illustrate and validate the DGA certification approach on TRNG principles other than ROs, we propose a characterization of PLLs as a source of randomness. We modeled the PLL in terms of transfer functions; this modeling led to the identification of the source of noise at the output of the PLL, as well as its nature as a function of the physical parameters of the PLL, and allowed us to propose recommendations on the choice of parameters to ensure maximum entropy. To help in the design of this type of TRNG, we also propose a tool that searches for the non-physical parameters of the generator ensuring the best compromise between security and throughput.
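The Allan variance mentioned here has a compact definition: for samples averaged over a window tau, AVAR(tau) = (1/2) E[(y_{k+1} - y_k)^2], and its slope versus tau separates noise types. Below is a minimal numpy sketch, with a simulated white-noise jitter series as an illustrative assumption.

```python
# Minimal sketch: non-overlapping Allan variance. For white (thermal)
# noise AVAR falls as 1/m; flicker noise makes it flatten, which is what
# lets the two be discriminated. The simulated jitter is an assumption.
import numpy as np

def allan_variance(y, m):
    """Allan variance of series y at averaging factor m (tau = m * T0)."""
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(1)
jitter = rng.normal(0.0, 1e-12, 100_000)    # white-noise-like jitter (s)

for m in (1, 10, 100, 1000):
    print(f"m={m:5d}  AVAR={allan_variance(jitter, m):.3e}")
# Each decade in m drops AVAR ~10x here, the signature of white noise.
```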
Zeman, Martin. „Measurement of the Standard Model W⁺W⁻ production cross-section using the ATLAS experiment on the LHC“. Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112263/document.
Abstract: Measurements of di-boson production cross-sections are an important part of the physics programme at the CERN Large Hadron Collider. These analyses provide the opportunity to probe the electroweak sector of the Standard Model at the TeV scale and could also indicate the existence of new particles or probe physics beyond the Standard Model. The excellent performance of the LHC throughout 2011 and 2012 allowed for very competitive measurements. This thesis provides a comprehensive overview of the experimental considerations and methods used in the measurement of the W⁺W⁻ production cross-section in proton-proton collisions at √s = 7 TeV and 8 TeV. It covers the material in great detail, starting with an introduction to the theoretical framework of the Standard Model, followed by an extensive discussion of the methods implemented in recording and reconstructing physics events in an experiment of this magnitude, including the associated online and offline software tools. The relevant experiments are covered, including a very detailed section on the ATLAS detector. The final chapter contains a detailed description of the analysis of W-pair production in the leptonic decay channels using the datasets recorded by the ATLAS experiment during 2011 and 2012 (Run I). The analyses use 4.60 fb⁻¹ recorded at √s = 7 TeV and 20.28 fb⁻¹ recorded at 8 TeV. The experimentally measured cross-section for W-pair production at the ATLAS experiment is consistently enhanced compared to the predictions of the Standard Model at centre-of-mass energies of 7 TeV and 8 TeV. The thesis concludes with the presentation of differential cross-section measurement results.
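Behind such a measurement sits a simple counting relation, sigma = (N_obs - N_bkg) / (epsilon * L), with epsilon the signal acceptance times efficiency (times branching fraction) and L the integrated luminosity; the numbers in the sketch below are illustrative assumptions, not ATLAS results.

```python
# Minimal sketch of cross-section extraction from a counting experiment.
# All numbers are illustrative assumptions, not ATLAS results.
n_observed = 1325.0     # selected candidate events
n_background = 368.0    # estimated background events
efficiency = 0.0040     # acceptance x efficiency x branching fraction
luminosity_fb = 4.6     # integrated luminosity (fb^-1)

sigma_fb = (n_observed - n_background) / (efficiency * luminosity_fb)
print(f"sigma ~ {sigma_fb / 1000:.0f} pb")   # fb -> pb
```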
Vömel, Christof. „Contributions à la recherche en calcul scientifique haute performance pour les matrices creuses“. Toulouse, INPT, 2003. http://www.theses.fr/2003INPT003H.
Der volle Inhalt der QuelleCaillaud, Johann. „Le standard pratiqué : une nouvelle voie de standardisation des processus métier ouverte par une recherche-action“. Thesis, Paris 9, 2013. http://www.theses.fr/2013PA090040/document.
Abstract: Business processes undergo standardization, achieved through domination, confrontation and incorporation, means that have their origins in methods such as Taylorism, reengineering, or the implementation of tools such as ERP systems. Prescription and standardization of business processes, however, create problems for organizations at the strategic, functional and operating levels. Our research attempts to uncover, on the one hand, novel ways of standardizing processes and, on the other, the conditions facilitating the emergence of these new ways. Convinced that change can no longer be defined as the imposition of an a priori model or a promulgated standard, we investigate how work practices may contribute to the creation of standards and result in "practiced" standards. To address the problems met with current ways of standardizing, we propose a model that places practice at the heart of a spiral of organizational knowledge creation. Through an action research project, we analyze the effects of implementing this model in two different settings requiring different conditions for change, namely a public banking institution and a national press conglomerate. Our findings, which differ considerably from one case to the other, highlight how the "practiced" standard emerges as a novel way of standardizing. First, we notice that the "practiced" standard feeds on the promulgated standard to anchor business processes in the whole organization. Second, the emergence and development of the "practiced" standard bring to light specific processes operating in the organization: a process of sensemaking, the support of a power structure parallel to the official one, and a process of organizational innovation.
Lambert, Pascal. „Sismologie solaire et stellaire“. Phd thesis, Université Paris-Diderot - Paris VII, 2007. http://tel.archives-ouvertes.fr/tel-00140766.
Der volle Inhalt der QuelleHe, Shuang. „Production et visualisation de niveaux de détail pour les modèles 3D urbains“. Ecole centrale de Nantes, 2012. http://portaildocumentaire.citechaillot.fr/search.aspx?SC=theses&QUERY=+marie+Durand#/Detail/%28query:%28Id:%270_OFFSET_0%27,Index:1,NBResults:1,PageRange:3,SearchQuery:%28CloudTerms:!%28%29,ForceSearch:!t,Page:0,PageRange:3,QueryString:%27Shuang%20he%27,ResultSize:10,ScenarioCode:theses,ScenarioDisplayMode:display-standard,SearchLabel:%27%27,SearchTerms:%27Shuang%20he%27,SortField:!n,SortOrder:0,TemplateParams:%28Scenario:%27%27,Scope:%27%27,Size:!n,Source:%27%27,Support:%27%27%29%29%29%29.
Abstract: 3D city models have been increasingly used in a variety of urban applications as platforms to integrate and visualize diverse information. This work addresses level-of-detail (LoD) production and visualization for 3D city models, towards their use in GIS applications. It proposes a hybrid solution to LoD production combining several techniques: extrusion, integration, generalization, and procedural modeling. The prerequisite for using the solution is to have data (such as 2D cadastral buildings) for generating city-coverage volumetric building models, and data (such as road networks and administrative divisions) for dividing the city into meaningful units. The solution can also take advantage of as many other accessible 2D and 3D models of city objects as possible. Because such requirements can be fulfilled at low cost, the solution may be easily adopted. The main focus of this work is on generalization techniques and algorithms. A generalization framework is proposed based on the divide-and-conquer concept, realized through land-cover subdivision and cell-based generalization. The framework enables city-scale generalization towards more abstract city models, and facilitates local-scale generalization by grouping city objects according to city cells. An implementation of city-scale generalization is realized based on the framework, and a footprint-based approach is developed for generalizing 3D building groups at medium LoD, which can be used for local-scale generalization. Moreover, using the LoD 3D city models produced by the proposed solution, three visualization examples demonstrate some of the potential uses: multi-scale, focus + context, and view-dependent visualization.
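Footprint-based generalization can be pictured with a minimal sketch: simplify a building footprint before extruding it at a coarser LoD. The coordinates and tolerance are illustrative assumptions, and shapely's Douglas-Peucker simplify stands in for the thesis's own algorithm.

```python
# Minimal sketch: simplify a footprint prior to extrusion at a coarse LoD.
# Coordinates/tolerance are illustrative; shapely's simplify stands in
# for the thesis's own footprint-generalization algorithm.
from shapely.geometry import Polygon

footprint = Polygon([
    (0, 0), (10, 0), (10, 4), (10.2, 4.1), (10, 8),
    (6, 8), (6, 7.9), (0, 8),
])

simplified = footprint.simplify(0.5, preserve_topology=True)
print(len(footprint.exterior.coords), "->", len(simplified.exterior.coords))
# A coarser LoD would then extrude `simplified` with one height value.
```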
Lorenzo, Martinez Narei. „Observation of a Higgs boson and measurement of its mass in the diphoton decay channel with the ATLAS detector at the LHC“. Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00924105.
Der volle Inhalt der QuelleChen, Yan. „Traitement transactionnel dans un environnement OSI“. Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb376126461.
Der volle Inhalt der QuelleJousse, Anne-Laure. „Modèle de structuration des relations lexicales fondé sur le formalisme des fonctions lexicales“. Thèse, Paris 7, 2010. http://hdl.handle.net/1866/4347.
Abstract: This thesis proposes a model for structuring lexical relations, based on the concept of lexical functions (LFs) proposed in Meaning-Text Theory [Mel'cuk, 1997]. The lexical relations taken into account include semantic derivations and collocations as defined within this theoretical framework, known as Explanatory and Combinatorial Lexicology [Mel'cuk et al., 1995]. Starting from the assumption that lexical relations are neither encoded nor made available in lexical databases in an entirely satisfactory manner, we argue for the necessity of designing a new model for structuring them. First of all, we justify the relevance of devising a system of lexical functions rather than a simple classification. Next, we present the four perspectives developed in the system: a semantic perspective, a combinatorial one, another targeting the parts of speech of the elements involved in a lexical relation, and a last one emphasizing which element of the relation is focused on. This system covers all LFs, even non-standard ones, for which we propose a normalized encoding. The system has already been implemented in the DiCo relational database, and we propose three further applications that can be developed from it. First, it can be used to build browsing interfaces for lexical databases such as the DiCo. It can also be consulted directly as a tool to assist lexicographers in encoding lexical relations by means of lexical functions. Finally, it constitutes a reference for computing lexicographic information which, in future work, will be implemented in order to automatically fill in some fields within the entries of lexical databases.
Thesis completed under a joint supervision agreement (cotutelle) with Université Paris Diderot (Paris 7).
Todorova-Nova, Sharka. „Mesure de la masse des bosons W± au LEP à l'aide du détecteur DELPHI“. Strasbourg 1, 1998. http://www.theses.fr/1998STR13108.
Der volle Inhalt der QuelleBert, Denis. „Systemes d'annuaire osi : specifications de mise en oeuvre pour la messagerie et l'administration de reseau“. Paris 6, 1988. http://www.theses.fr/1988PA066075.
Der volle Inhalt der QuelleGolden, Boris. „Un formalisme unifié pour l'architecture des systèmes complexes“. Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00827107.
Der volle Inhalt der QuelleMachet, Martina. „Higgs boson production in the diphoton decay channel with CMS at the LHC : first measurement of the inclusive cross section in 13 TeV pp collisions, and study of the Higgs coupling to electroweak vector bosons“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS254/document.
Abstract: This document presents two analyses of the properties of the Higgs boson in the diphoton decay channel with the CMS experiment at the LHC (Large Hadron Collider). It starts with a theoretical introduction to the Standard Model and Higgs boson physics, followed by a detailed description of the CMS detector. Photon reconstruction and identification algorithms are then presented, with a particular focus on the differences between the first and second runs of the LHC: the first run (Run1) took place from 2010 to 2012 with a centre-of-mass energy of 7 and then 8 TeV, while the second run (Run2) started in 2015 with a centre-of-mass energy of 13 TeV. The performance of the Run1 and Run2 reconstructions is compared from the photon-identification point of view. The photon identification algorithm for the H->γγ analysis, optimised for Run2 using a multivariate analysis method, is then presented; the performance of photon identification at 13 TeV is studied and a data-simulation validation is performed. Afterwards, the H->γγ analysis using the first Run2 data is presented. The analysis is performed on a dataset corresponding to an integrated luminosity of 2.7/fb. An event classification is performed to maximize the signal significance and to study specific Higgs boson production modes. The observed significance for the Standard Model Higgs boson is 1.7 sigma, while a significance of 2.7 sigma is expected. Finally, a feasibility study aimed at constraining the anomalous couplings of the Higgs boson to the vector bosons is presented. This analysis is performed using the data collected at 8 TeV during Run1, corresponding to an integrated luminosity of 19.5/fb, and exploits the production of the Higgs boson through vector boson fusion (VBF), with the Higgs decaying to two photons. The kinematic distributions of the dijet and diphoton systems, which depend on the spin-parity hypothesis, are used to build discriminants able to distinguish between different spin-parity hypotheses. These discriminants make it possible to define different regions of phase space, each enriched in a given spin-parity process. The Higgs boson signal yield is extracted in each region from a fit to the diphoton mass, allowing the contributions of the different processes to be determined and the production of a pseudo-scalar (spin-parity 0⁻) Higgs boson to be constrained.
Sow, Aboubakry Moussa. „Classification, réduction de dimensionnalité et réseaux de neurones : données massives et science des données“. Thèse, 2020. http://depot-e.uqtr.ca/id/eprint/9600/1/eprint9600.pdf.
Diouf, Jean Noël Dibocor. „Classification, apprentissage profond et réseaux de neurones : application en science des données“. Thèse, 2020. http://depot-e.uqtr.ca/id/eprint/9555/1/eprint9555.pdf.