Dissertations / Theses on the topic 'Réutilisation des données'
Consult the top 29 dissertations / theses for your research on the topic 'Réutilisation des données.'
Mehira, Djamel. "Bases de données images : application à la réutilisation en synthèse d'images." La Rochelle, 1998. http://www.theses.fr/1998LAROS017.
Bilasco, Ioan Marius. "Une approche sémantique pour la réutilisation et l’adaptation de données 3D." Grenoble 1, 2007. http://www.theses.fr/2007GRE10262.
The number of 3D-enabled applications and access devices increases steadily. This trend encourages the development of tools for the retrieval, reuse and adaptation of 3D data. Usually, such tools are developed in the narrow context of a specific application, partly because 3D data are usually defined only by their geometry and appearance and are difficult to handle in contexts independent of a specific application domain. We address these issues by proposing a solution that enhances 3D data descriptions with semantic annotations and organizes them into a model (3DSEAM) according to the dimension described: geometry, appearance, topology, semantics and media profile. To accommodate the variability of the model in terms of semantic descriptions, we propose an extension of OQL with 3DSEAM-specific constructs. A platform (3DAF) federates the instantiation and interrogation of such descriptions. On top of this platform, we build a platform focusing on the reuse of 3D data and their semantics (3DSDL). In addition, we adopt an approach for adapting 3D data based on semantic rules interpreted and executed by an adaptation platform (Adapt3D). The reuse and adaptation processes employ an XML standard for 3D data, X3D, and the transformations applied to the data are implemented using XSLT. An application concerning the management of 3D urban scenes is proposed in order to validate the various formalisms introduced by our approach. The application is based on an instantiation of the description model using MPEG-7. The query process is supported by XQuery.
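The adaptation pipeline described above (semantic rules driving XSLT transformations over X3D scenes) can be illustrated outside the thesis's platforms. Below is a minimal sketch in Python using lxml: an identity stylesheet plus one rule that strips Material appearance nodes, standing in for an Adapt3D-style adaptation. The stylesheet and scene are invented examples, not artifacts from the thesis.

```python
# Minimal sketch: adapt an X3D scene with an XSLT rule (assumed example).
from lxml import etree

xslt_doc = etree.XML(b"""\
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Identity template: copy every node as-is by default -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- Adaptation rule: drop Material appearance nodes for a low-end profile -->
  <xsl:template match="Material"/>
</xsl:stylesheet>""")

scene = etree.XML(b"""\
<X3D><Scene><Shape>
  <Appearance><Material diffuseColor="1 0 0"/></Appearance>
  <Box size="2 2 2"/>
</Shape></Scene></X3D>""")

transform = etree.XSLT(xslt_doc)
adapted = transform(scene)
print(etree.tostring(adapted, pretty_print=True).decode())
```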
Hadji, Brahim. "Utilisation et réutilisation des données d'un système d'information clinique : application aux données de pilotage à l'hôpital européen Georges Pompidou." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066039/document.
Information and communication technologies (ICT) have been developed in all economic sectors. In the healthcare field, and particularly in hospitals with the introduction of clinical information systems (CIS), investments have increased dramatically. The rationale for these investments is the improvement of both hospital efficiency and the quality of care delivered to patients after the deployment of a fully integrated CIS. To validate these relationships, adapted methodologies need to be designed and implemented. This thesis concentrates on the relationship between CIS maturity and hospital efficiency. Material for testing the hypothesis comes from several CIS evaluations performed at HEGP and from data extracted from the decision-analytics tools of the Assistance Publique - Hôpitaux de Paris (AP-HP). After a review of the literature on the evaluation of CIS use and satisfaction, the first part of the thesis is organized around two main studies. A longitudinal study carried out between 2004 and 2014 analyzes the evolution of use and satisfaction and their determinants within a multi-professional group of users, using multiple regression techniques and structural equation methods. In early post-adoption (4 years), CIS use, CIS quality, and CIS perceived usefulness (PU) explain 53% of the variance in user satisfaction. In the very late post-adoption phase (> 10 years), the effect of use on user satisfaction is no longer significant. In contrast, CIS quality, the confirmation of expectations, and PU are the best determinants of satisfaction, explaining 86% of its variance. In a second study focused on continuance intention, satisfaction and PU appear to be the best determinants of continuance intention, with a strong indirect influence of CIS quality. A unified model is proposed and compared to the main models of the literature. The measurement of hospital efficiency was achieved with an econometric approach. The indicators entered in the econometric model were selected on the basis of a systematic literature review. Three categories of input indicators and three categories of output indicators are considered. The relationship between the input and output indicators is analyzed through a stochastic frontier analysis model. An overall decrease in the efficiency of the 20 short-stay hospitals of the AP-HP over the 2009-2014 period is observed and its possible causes are discussed. The development and validation of a CIS use-satisfaction evaluation model, combined with the analysis of the evolution of hospital efficiency over time, could be the first phase of a more global evaluation of the complex influence of IT introduction on hospital efficiency and the quality of care delivered to patients.
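As a reading aid, the variance-explained figures above come from regression-style models. A toy sketch of that analysis pattern (ordinary least squares over synthetic stand-ins for the survey constructs, reporting R²) might look as follows; nothing here reproduces the HEGP data or the exact structural equation models.

```python
# Illustrative sketch only: synthetic stand-ins for the survey constructs
# (CIS quality, perceived usefulness, use), not the HEGP data themselves.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
quality = rng.normal(size=n)                      # perceived CIS quality
usefulness = 0.6 * quality + rng.normal(scale=0.8, size=n)
use = 0.3 * usefulness + rng.normal(scale=0.9, size=n)
satisfaction = (0.5 * quality + 0.4 * usefulness + 0.1 * use
                + rng.normal(scale=0.6, size=n))

X = np.column_stack([quality, usefulness, use])
model = LinearRegression().fit(X, satisfaction)
# Share of satisfaction variance explained by the three determinants
print("explained variance (R^2):", round(model.score(X, satisfaction), 2))
print("coefficients:", model.coef_.round(2))
```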
Hajri, Hiba. "Personnalisation des MOOC par la réutilisation de Ressources Éducatives Libres." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC046/document.
For many years now, personalization in TEL has been a major subject of intensive research. With the spread of Massive Open Online Courses (MOOC), the personalization issue becomes more acute. Any MOOC can be followed by thousands of learners with different educational levels, learning styles, preferences, etc. It is therefore necessary to present pedagogical contents that take their heterogeneous profiles into account, so that they can maximize the benefit of following the MOOC. At the same time, the amount of Open Educational Resources (OER) available on the web is permanently growing, and these OER should be reused in contexts different from the ones for which they were initially created. Indeed, producing quality OER is costly and requires a lot of time. Different metadata schemas are used to describe OER; however, the use of these schemas has led to isolated repositories of heterogeneous descriptions which are not interoperable. To address this problem, a solution adopted in the literature is to apply Linked Open Data (LOD) principles to OER descriptions. In this thesis, we are interested in MOOC personalization and OER reuse. We design a recommendation technique which computes a set of OER adapted to the profile of a learner attending a given MOOC; the recommended OER are also adapted to the MOOC's specificities. To find OER, we target those whose metadata respect LOD principles and are stored in repositories available on the web that offer standardized means of access. Our recommender system is implemented in the MOOC platform Open edX and assessed using a micro-jobs platform.
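The retrieval step described above, querying LOD-compliant repositories through standardized access, typically means a SPARQL endpoint. A hedged sketch with Python's SPARQLWrapper is shown below; the endpoint URL and the Dublin Core properties queried are illustrative assumptions, not the repositories or vocabulary used in the thesis.

```python
# Sketch: fetch OER candidates on a topic from a hypothetical LOD endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://example.org/sparql")  # hypothetical endpoint
endpoint.setQuery("""
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?oer ?title WHERE {
  ?oer dcterms:title ?title ;
       dcterms:subject ?subject .
  FILTER regex(str(?subject), "linear algebra", "i")
} LIMIT 10
""")
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["oer"]["value"], "-", row["title"]["value"])
```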
Rochoy, Michaël. "Recherche de facteurs associés à la maladie d’Alzheimer par réutilisation de base de données massives." Thesis, Lille 2, 2019. http://www.theses.fr/2019LIL2S001/document.
INTRODUCTION. Severe neurocognitive disorders, or dementias, are defined by ICD-10 and DSM-5. They encompass a broad nosographic framework: Alzheimer's dementia, vascular dementia, Lewy body dementia, frontotemporal lobar degeneration, etc. Each type of dementia has its own diagnostic criteria and partially identified risk factors. Identifying cognitive disorders in large databases is a complex issue, which must take into account changes in knowledge. Our first objective was to describe the evolution of dementia coding in the national database of the Medicalization of Information Systems Program (PMSI) for short stays, as diagnostic criteria evolved. Our second objective was to summarize the main known factors associated with Alzheimer's disease. Our third objective was to determine the factors associated with the onset of Alzheimer's disease in the national short-stay PMSI database. METHODS. For the first study, we used the main diagnoses on the ScanSanté site for the short-stay PMSI from 2007 to 2017. For the second, we synthesized literature reviews and meta-analyses using the PubMed and LiSSa search engines. For the third, we conducted an analytical study by data mining in the national short-stay PMSI database for patients aged 55 years or older in 2014: we selected 137 potential explanatory variables in 2008; the dependent variable was Alzheimer's disease or dementia in 2014. RESULTS. Our first study on the identification of dementias shows a decrease in inpatient stays with a main diagnosis of Alzheimer's disease or dementia, with a shift towards other organic mental disorders; stability of inpatient stays with a main diagnosis of vascular dementia but with a change in sub-diagnoses (a decrease in main diagnoses of multi-infarct dementia and an increase in all other subtypes); a significant increase in inpatient stays with a main diagnosis of dementia or other persistent or late cognitive disorders related to alcohol consumption; and a homogeneous evolution throughout the French territory. These results support a coding practice that follows the evolution of the literature. Our next two studies on the identification of at-risk populations identify several factors associated with Alzheimer's disease or dementia, including age, gender, diabetes mellitus, depression, undernutrition, bipolar, psychotic and anxiety disorders, low education, excess alcohol, epilepsy, falls after age 75 and intracranial hypertension. These associated factors may be risk factors, or early, revealing or precipitating symptoms. CONCLUSION. Identifying cognitive disorders in large databases requires a good understanding of the evolution of dementia coding, which seems to follow the evolution of knowledge. The identification of patients with factors associated with dementia allows more focused early identification, and then proper identification of the etiological diagnosis necessary for appropriate management.
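The third objective's design (137 candidate variables observed in 2008, outcome observed in 2014) is the classic setting for a regression-based data-mining pass. The sketch below illustrates it on synthetic binary variables with scikit-learn; the variable names, effect sizes, dimensions and model choice are assumptions for illustration only, not the study's actual method.

```python
# Minimal sketch of the study design: predict a 2014 dementia flag from
# binary diagnostic variables observed in 2008. Synthetic data; the real
# study screened 137 candidate variables in the national PMSI database.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, p = 5000, 20                       # stand-in for 137 candidate variables
X = rng.integers(0, 2, size=(n, p))   # 2008 diagnoses coded 0/1
logit = -3 + 1.2 * X[:, 0] + 0.8 * X[:, 1]     # e.g. age class, diabetes
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 2014 dementia outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Odds ratios indicate which 2008 variables are associated with the outcome
print("odds ratios:", np.exp(clf.coef_[0][:5]).round(2))
```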
Ferret, Laurie. "Anticoagulants oraux, réutilisation de données hospitalières informatisées dans une démarche de soutien à la qualité des soins." Thesis, Lille 2, 2015. http://www.theses.fr/2015LIL2S016/document.
Introduction: Oral anticoagulants raise major issues in terms of bleeding risk and appropriate use. The computerization of medical records offers the ability to access large databases that can be explored automatically. The objective of this work is to show how routinely collected data can be reused to study issues related to anticoagulants, in an approach supporting quality of care. Methods: This work was carried out on the electronic data (97,355 records) of a community hospital. For each inpatient stay we have diagnostic, biological, drug and administrative data, and the discharge letters. The work is organized around three axes. Axis I: evaluate the accuracy of the detection of factors that may increase the anticoagulant effect of vitamin K antagonists (VKA), using rules developed in the PSIP European project (grant agreement no. 216130); a one-year case review enabled the calculation of the positive predictive value and sensitivity of the rules. Axis II: a cohort study on data from 2007 to 2012 to determine the major factors involved in raising the bleeding risk related to VKA in clinical reality; cases were stays with an elevation of the INR beyond 5, controls were stays without. Axis III: data reuse in the service of a study of prescription quality; on the one hand we assessed compliance with recommendations for the treatment of thromboembolic risk in atrial fibrillation (AF) in the elderly, on the other hand we investigated the prescription of direct oral anticoagulants. Results: Axis I: the positive predictive value of the rules intended to detect the factors favoring INR elevation under VKA treatment is 22.4%, and their sensitivity is 84.6%; the main contributive rules are those intended to detect an infectious syndrome and amiodarone. Axis II: the major factors increasing the INR under VKA treatment highlighted by the cohort study are infectious syndrome, cancer, hepatic insufficiency and hypoprotidemia. Axis III: the rate of compliance with recommendations in atrial fibrillation in the elderly is 47.8%; only 45% of patients receive oral anticoagulants, 22.9% do not receive antithrombotic treatment at all and 32.1% receive platelet aggregation inhibitors. Direct oral anticoagulants are prescribed at inadequate dosages in 15% and 31.4% of patients, respectively, for dabigatran and rivaroxaban; these errors are mainly underdosages in the elderly with atrial fibrillation (82.6%). Discussion: The computerization of medical records has led to the creation of large medical databases, which can be used for various purposes, as we show in this work. In the first axis we showed that rule-based decision-support systems detect the contributing factors for VKA overdose with good sensitivity but a low positive predictive value. The second axis shows that the data can be used for exploratory purposes to identify factors associated with increased INR in patients receiving VKA in real-life practice. The third axis shows that rule-based systems can also be used to identify inappropriate prescribing with the aim of improving the quality of care. In the field of anticoagulation, this work opens innovative perspectives for improving the quality of care.
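Axis I rests on two ingredients: executable detection rules and their evaluation against a case review (positive predictive value, sensitivity). A minimal sketch of both is given below; the rule shown (VKA co-prescribed with amiodarone, a potentiator named in the results) and the data structures are illustrative assumptions, not the PSIP rule base.

```python
# Sketch: one detection rule plus its evaluation against reviewed stays.
def rule_vka_amiodarone(stay):
    """Flag stays combining a VKA with amiodarone (assumed drug names)."""
    drugs = set(stay["drugs"])
    has_vka = bool(drugs & {"warfarin", "fluindione", "acenocoumarol"})
    return has_vka and "amiodarone" in drugs

def ppv_sensitivity(stays, rule):
    tp = sum(1 for s in stays if rule(s) and s["inr_over_5"])
    fp = sum(1 for s in stays if rule(s) and not s["inr_over_5"])
    fn = sum(1 for s in stays if not rule(s) and s["inr_over_5"])
    ppv = tp / (tp + fp) if tp + fp else 0.0
    sens = tp / (tp + fn) if tp + fn else 0.0
    return ppv, sens

stays = [  # toy case-review data, not hospital records
    {"drugs": ["warfarin", "amiodarone"], "inr_over_5": True},
    {"drugs": ["warfarin"], "inr_over_5": False},
    {"drugs": ["fluindione", "amiodarone"], "inr_over_5": False},
]
print(ppv_sensitivity(stays, rule_vka_amiodarone))  # (0.5, 1.0)
```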
Fruchart, Mathilde. "Réutilisation des données de soins premiers : spécificités, standardisation et suivi de la prise en charge dans les Maisons de Santé Pluridisciplinaires." Electronic Thesis or Diss., Université de Lille (2022-....), 2024. http://www.theses.fr/2024ULILS040.
Context: Reusing healthcare data beyond its initial use helps to improve patient care, facilitate research, and optimize the management of healthcare organizations. To achieve this, data is extracted from healthcare software, transformed and stored in a data warehouse through an extract-transform-load (ETL) process. Common data models, such as the OMOP model, exist to store data in a homogeneous, source-independent format. Healthcare claims data centralized in the national database (SNDS), hospital data, social networks and forums, and primary care are different data sources representative of the patient care pathway. The last of these sources has not been fully exploited. Objective: The aim of this thesis was to take into account the specificities of primary care data reuse in order to implement a data warehouse, while highlighting the contribution of primary care to the field of research. Methods: The first step was to extract the primary care data of a multidisciplinary health center (MHC) from the WEDA care software. A primary care data warehouse was implemented using an ETL process. Structural transformation (harmonization of the database structure) and semantic transformation (harmonization of the vocabulary used in the data) were implemented to align the data with the common OMOP data model. A process-generalization tool was developed to integrate general practitioner (GP) data from multiple care structures and tested on four MHCs. Subsequently, an algorithm for assessing persistence with a prescribed treatment and dashboards were developed. Thanks to the use of the OMOP model, these tools can be shared with other MHCs. Finally, retrospective studies were conducted on the diabetic population of the four MHCs. Results: Over a period of more than 20 years, data of 117,005 patients from four MHCs were loaded into the OMOP model using our ETL process optimization tool. These data include biological results from laboratories and GP consultation data. The vocabulary specific to primary care was aligned with the standard concepts of the model. An algorithm for assessing persistence with treatment prescribed by the GP, as well as a dashboard for monitoring performance indicators (ROSP) and practice activity, were developed. Based on the data warehouses of the four MHCs, we described the follow-up of diabetic patients. These studies use biological results, consultation and drug prescription data in OMOP format; their scripts and the tools developed can be shared. Conclusion: Primary care data offer a potential for reusing data for research purposes and improving the quality of care. They complement existing databases (hospital, national and social networks) by integrating clinical data from the community. The use of a common data model facilitates the development of tools and the conduct of studies, while enabling their sharing; studies can be replicated in different centers to compare results.
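The structural-transformation step of such an ETL process amounts to mapping source rows onto OMOP CDM tables. A minimal sketch follows; the source field names are hypothetical stand-ins for an export from practice software, while the target columns follow the OMOP visit_occurrence table (9202 being the standard concept commonly used for outpatient visits).

```python
# ETL sketch: map source consultation rows to OMOP-style visit_occurrence.
import pandas as pd

source = pd.DataFrame([  # hypothetical export from primary-care software
    {"patient_id": 1, "date": "2023-03-14", "practitioner": "GP-07"},
    {"patient_id": 2, "date": "2023-03-15", "practitioner": "GP-02"},
])

OUTPATIENT_VISIT = 9202  # OMOP standard concept for an outpatient visit

visit_occurrence = pd.DataFrame({
    "person_id": source["patient_id"],
    "visit_concept_id": OUTPATIENT_VISIT,
    "visit_start_date": pd.to_datetime(source["date"]),
    "visit_end_date": pd.to_datetime(source["date"]),
    "provider_id": source["practitioner"]
        .str.replace("GP-", "", regex=False).astype(int),
})
print(visit_occurrence)
```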
Djaffardjy, Marine. "Pipelines d'Analyse Bioinformatiques : solutions offertes par les Systèmes de Workflows, Cadre de représentation et Étude de la Réutilisation." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG059.
Bioinformatics is a multidisciplinary field combining biology, computer science, and statistics, which aims to gain a better understanding of living mechanisms. It relies primarily on the analysis of biological data. Major technological improvements, especially sequencing technologies, gave rise to an exponential increase of data, raising new challenges in data analysis and management. In order to analyze this data, bioinformaticians use pipelines, which chain computational tools and processes. However, the reproducibility crisis in scientific research highlights the necessity of making analyses reproducible and reusable by others. Scientific workflow systems have emerged as a solution to make pipelines more structured, understandable, and reproducible. Workflows describe procedures with multiple coordinated steps involving tasks and their data dependencies. These systems assist bioinformaticians in designing and executing workflows, facilitating their sharing and reuse. In bioinformatics, the most popular workflow systems are Galaxy, Snakemake, and Nextflow. However, workflow reuse faces challenges, including the heterogeneity of workflow systems, the limited accessibility of workflows, and the need for public workflow databases. Additionally, indexing workflows and developing workflow search engines are necessary to facilitate workflow discovery and reuse. In this study, we developed an analysis method for workflow specifications in order to extract several representative characteristics from a dataset of workflows. The goal was to propose a standardized representation framework independent of the specification language. We then selected a set of workflow characteristics and indexed them into a relational database and a structured semantic format. Finally, we established an approach to detect similarity between workflows and between processors, enabling us to observe the reuse practices adopted by workflow developers.
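One simple way to make the similarity-detection idea concrete is to compare workflows by the sets of processors they chain, for instance with a Jaccard index. The sketch below shows that baseline on invented tool names; the thesis's actual similarity approach is richer than this.

```python
# Baseline sketch: similarity between workflows seen as processor sets.
def jaccard(a: set, b: set) -> float:
    """Size of intersection over size of union (0.0 for two empty sets)."""
    return len(a & b) / len(a | b) if a | b else 0.0

wf_a = {"fastqc", "trimmomatic", "bwa-mem", "samtools-sort"}
wf_b = {"fastqc", "cutadapt", "bwa-mem", "samtools-sort"}
print(f"processor-level similarity: {jaccard(wf_a, wf_b):.2f}")  # 0.60
```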
Bouzillé, Guillaume. "Enjeux et place des data sciences dans le champ de la réutilisation secondaire des données massives cliniques : une approche basée sur des cas d’usage." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1B023/document.
The dematerialization of health data, which started several years ago, now generates a huge amount of data produced by all actors of health. These data have the characteristics of being very heterogeneous and of being produced at different scales and in different domains. Their reuse in the context of clinical research, public health or patient care requires the development of appropriate approaches based on methods from data science. The aim of this thesis is to evaluate, through three use cases, the current issues as well as the place of data science in the reuse of massive health data. To meet this objective, the first section exposes the characteristics of health big data and the technical aspects related to their reuse. The second section presents the organizational aspects of the exploitation and sharing of health big data. The third section describes the main methodological approaches in data science currently applied in the field of health. The fourth section illustrates, through three use cases, the contribution of these methods in the following fields: syndromic surveillance, pharmacovigilance and clinical research. Finally, we discuss the limits and challenges of data science in the context of health big data.
Eyssautier-Bavay, Carole. "Modèles, langage et outils pour la réutilisation de profils d'apprenants." Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00327198.
At present, no technical solution exists for reusing these heterogeneous profiles. This thesis therefore seeks to propose models and tools that allow learner profiles created by others to be reused by the various actors.
In our work, we propose the REPro profile management process model (Reuse of External Profiles). To allow the reuse of heterogeneous profiles, we propose rewriting them in a common formalism that takes the form of a profile modeling language, PMDL (Profiles MoDeling Language). We then define a set of operators allowing the transformation of the harmonized profiles, or of their structure, such as adding elements to a profile or creating a group profile from individual profiles. These proposals were implemented within the EPROFILEA environment of the PERLEA project (Profils d'Élèves Réutilisés pour L'Enseignant et l'Apprenant), before being tested with teachers in the laboratory.
Ficheur, Grégoire. "Réutilisation de données hospitalières pour la recherche d'effets indésirables liés à la prise d'un médicament ou à la pose d'un dispositif médical implantable." Thesis, Lille 2, 2015. http://www.theses.fr/2015LIL2S015/document.
Introduction: The adverse events associated with drug administration or the placement of an implantable medical device should be sought systematically once commercialisation begins. Studies conducted in this phase are observational studies that can be performed from hospital databases. The objective of this work is to study the interest of re-using hospital data for the identification of such adverse events. Materials and methods: Two hospital databases covering the years 2007 to 2013 were re-used. The first contains 171 million inpatient stays, including diagnostic codes, procedures and demographic data, linked by a single patient identifier; the second contains the same kinds of information for 80,000 stays, plus the laboratory results and drug administrations for each inpatient stay. Four studies were conducted on these data to identify adverse drug events and adverse events following the placement of an implantable medical device. Results: The first study demonstrates the ability of a set of detection rules to automatically identify adverse drug events involving hyperkalaemia. The second study describes the variation of a laboratory result associated with the presence of a frequent sequential pattern composed of drug administrations and laboratory results. The third piece of work resulted in a web tool for exploring on the fly the reasons for rehospitalisation of patients with an implantable medical device. The fourth and final study estimates the thrombotic and bleeding risks following a total hip replacement. Conclusion: The re-use of hospital data in a pharmacoepidemiological perspective allows the identification of adverse events associated with drug administration or the placement of an implantable medical device. The value of these data lies in the statistical power they bring, as well as in the types of associations they make it possible to analyse.
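The third study's tool relies on the single patient identifier to link an index stay (device placement) with later stays. A pandas sketch of that linkage, on synthetic rows with assumed column names, is given below; the real PMSI layout differs.

```python
# Sketch: find rehospitalisations within 90 days of a hip-replacement stay.
import pandas as pd

stays = pd.DataFrame([  # toy rows; column names are assumptions
    {"patient": "A", "admit": "2012-01-10", "procedure": "hip_replacement"},
    {"patient": "A", "admit": "2012-02-03", "procedure": "none",
     "diagnosis": "pulmonary_embolism"},
    {"patient": "B", "admit": "2012-05-01", "procedure": "hip_replacement"},
])
stays["admit"] = pd.to_datetime(stays["admit"])

index_stays = stays[stays["procedure"] == "hip_replacement"]
merged = stays.merge(index_stays[["patient", "admit"]],
                     on="patient", suffixes=("", "_index"))
rehosp = merged[(merged["admit"] > merged["admit_index"]) &
                (merged["admit"] <= merged["admit_index"]
                 + pd.Timedelta(days=90))]
print(rehosp[["patient", "admit", "diagnosis"]])
```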
Saurel, Claire. "Contribution aux systèmes experts : développement d'un cas concret et étude du problème de la génération d'explications négatives." Toulouse, ENSAE, 1987. http://www.theses.fr/1987ESAE0008.
Lebis, Alexis. "Capitaliser les processus d'analyse de traces d'apprentissage : modélisation ontologique & assistance à la réutilisation." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS523.
This thesis in computer science focuses on the problem of capitalizing on the analysis processes of e-learning traces within the Learning Analytics (LA) community. The aim is to allow these analysis processes to be shared, adapted and reused. Currently, this capitalization is limited by two important factors: the analysis processes are dependent on the analysis tools that implement them (their technical context) and on the pedagogical context for which they are conducted. This prevents them from being shared, and even from being simply reused outside their original contexts, even when the new contexts are similar. The objective of this thesis is to provide models and methods for the capitalization of analysis processes of e-learning traces, and to assist the various actors involved in the analysis, particularly during the reuse phase. To do this, we answer the following three scientific questions: how to share and combine analysis processes implemented in different analysis tools; how to reuse an existing analysis process to meet another analysis need; and how to assist the different actors in the development and exploitation of analysis processes. Our first contribution, resulting from a synthesis of the state of the art, is the formalization of a cycle of elaboration and exploitation of analysis processes, in order to define the different stages, the different actors and their different roles. This formalization is accompanied by a definition of capitalization and its properties. Our second contribution responds to the first barrier, the technical dependence of current analysis processes, and to their sharing. We propose a meta-model that allows analysis processes to be described independently of the analysis tools. This meta-model formalizes the description of the operations used in the analysis processes, the processes themselves and the traces used, in order to avoid the technical constraints caused by these tools. This formalism, common to all analysis processes, also makes it possible to consider their sharing. It has been implemented and evaluated in one of our prototypes. Our third contribution deals with the second barrier, the reuse of analysis processes. We propose an ontological framework for analysis processes, which allows semantic elements to be introduced directly, in a structured way, into the description of analysis processes. This narrative approach enriches the previous formalism and makes it possible to satisfy the properties of understanding, adaptation and reuse necessary for capitalization. This ontological approach was implemented and evaluated in another of our prototypes. Finally, our last contribution responds to the last barrier identified and concerns new forms of assistance to actors, in particular a new method of searching for analysis processes, based on our previous proposals. We use the ontological framework of the narrative approach to define inference rules and heuristics in order to reason about the analysis processes as a whole (e.g. steps, configurations) during the search. We also use the semantic network underlying this ontological modeling to strengthen assistance to actors by providing them with inspection and understanding tools during the search. This assistance was implemented in one of our prototypes and empirically evaluated.
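To give a feel for searching semantically indexed analysis processes, the sketch below stores toy process descriptions as RDF triples with rdflib and retrieves those containing a given step via SPARQL. The vocabulary is invented for illustration; it is not the thesis's ontological framework or its inference rules.

```python
# Toy sketch: index analysis processes as RDF and search by step.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/analysis#")  # hypothetical vocabulary
g = Graph()
g.add((EX.proc1, EX.hasStep, EX.sessionFiltering))
g.add((EX.proc1, EX.hasStep, EX.clustering))
g.add((EX.proc2, EX.hasStep, EX.clustering))

results = g.query("""
    PREFIX ex: <http://example.org/analysis#>
    SELECT ?proc WHERE { ?proc ex:hasStep ex:clustering . }
""")
for row in results:
    print(row.proc)  # both proc1 and proc2 use a clustering step
```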
Poirier, Canelle. "Modèles statistiques pour les systèmes d'aide à la décision basés sur la réutilisation des données massives en santé : application à la surveillance syndromique en santé publique." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1B019.
Over the past few years, the Big Data concept has been widely developed. In order to analyse and explore all this data, it was necessary to develop new methods and technologies. Today, Big Data also exists in the health sector. Hospitals in particular are involved in data production through the adoption of electronic health records. The objective of this thesis was to develop statistical methods reusing these data in order to contribute to syndromic surveillance and to provide decision-making support. This study has four major axes. First, we showed that hospital Big Data were highly correlated with signals from traditional surveillance networks. Secondly, we showed that hospital data allowed us to obtain more accurate real-time estimates than web data, with SVM and Elastic Net models having similar performances. Then, we applied methods developed in the United States, reusing hospital data, web data (Google and Twitter) and climatic data, to predict influenza incidence rates for all French regions up to 2 weeks ahead. Finally, the methods developed were applied to the 3-week-ahead forecasting of gastroenteritis cases at the national, regional and hospital levels.
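To make the modelling setup concrete, the sketch below shows a nowcasting-style Elastic Net regressing incidence on current proxy signals at a 2-week horizon, one of the model families named above. All series are synthetic; the thesis's actual features (hospital, Google, Twitter, climate data) and tuning are not reproduced.

```python
# Sketch: Elastic Net forecasting of a weekly incidence series.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(7)
weeks = 200
incidence = (50 + 30 * np.sin(np.arange(weeks) * 2 * np.pi / 52)
             + rng.normal(scale=5, size=weeks))       # seasonal signal
hospital = incidence + rng.normal(scale=8, size=weeks)   # ED visit counts
web = incidence + rng.normal(scale=15, size=weeks)       # search volumes

horizon = 2  # predict incidence 2 weeks ahead from current proxies
X = np.column_stack([hospital, web])[:-horizon]
y = incidence[horizon:]
split = 150  # train on the first 150 weeks, test on the rest
model = ElasticNet(alpha=0.5).fit(X[:split], y[:split])
print("test R^2:", round(model.score(X[split:], y[split:]), 2))
```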
Devogele, Thomas. "Processus d'intégration et d'appariement de Bases de Données Géographiques; Application à une base de données routières multi-échelles." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 1997. http://tel.archives-ouvertes.fr/tel-00085113.
Following this production process, several representations of real-world phenomena are available, from different points of view and at distinct scales. These multiple representations are needed for very diverse applications: multi-scale electronic mapping, propagation of updates, navigation support.
The objective of this thesis is therefore to define a process for integrating geographic databases (GDBs) on a single site, restricted to two-dimensional vector data. It extends a classical three-phase integration process [Spaccapietra et al. 92] (pre-integration, declaration of correspondences, integration). The extension is based on a taxonomy of integration conflicts between GDBs and on the addition of a geometric and topological matching process. This process was applied to the three main IGN databases (BD TOPO®, BD CARTO® and GEOROUTE®) for the road theme in the Lagny area (about 900 km of road sections).
Given the complexity of geographic phenomena, several interpretations, and therefore several models, of the same phenomena can be defined. The taxonomy of GDB integration conflicts structures these differences: class definition conflicts (classification conflicts, fragmentation conflicts, specification conflicts), heterogeneity conflicts, description conflicts, etc. Six categories of conflicts are handled in the integration process.
Some conflicts are taken into account in the pre-integration phase. Others receive specific treatment: an extension of the correspondence declaration language, and the addition of operations for resolving the conflict. Moreover, the integration phase must follow a strategy, which determines the choice of operations and sets the objective of the integration. Based on our experimental databases, two integration strategies (and their associated integration operations) are presented.
The matching process consists in identifying the data that represent the same real-world phenomenon, and allows information to be grouped together. This step is valuable because it enriches GDBs with inter-representation operations, which are necessary for multi-representation applications.
A matching process was developed for road data at different scales. The results show a matching rate of about 90%. A generic process was derived from it to guide the design of matching processes for other types of data.
This thesis thus provides a general and detailed framework for GDB integration, and contributes to the development of multi-representation applications and of interoperability between GDBs by adapting these processes to GDBs distributed over a network.
Cecchinel, Cyril. "DEPOSIT : une approche pour exprimer et déployer des politiques de collecte sur des infrastructures de capteurs hétérogènes et partagées." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4094/document.
Sensing infrastructures are classically used in the IoT to collect data. However, deep knowledge of sensing infrastructures is needed to interact properly with the deployed systems. For software engineers, targeting these systems is tedious. First, the specificities of the platforms composing the infrastructure compel them to work with few abstractions and heterogeneous devices, which can lead to code that badly exploits the network infrastructure. Moreover, by being infrastructure-specific, these applications cannot easily be reused across different systems. Secondly, the deployment of an application is outside the domain expertise of a software engineer, as she needs to identify the required platform(s) to support her application. Lastly, the sensing infrastructure might not be designed to support the concurrent execution of various applications, leading to redundant deployments when a new application is contemplated. In this thesis we present an approach that supports (i) the definition of data collection policies at a high level of abstraction with a focus on their reuse, (ii) their deployment over a heterogeneous infrastructure driven by models designed by a network expert, and (iii) the automatic composition of policies on top of the heterogeneous sensing infrastructures. Based on these contributions, a software engineer can exploit sensor networks without knowing the associated details, while reusing architectural abstractions available off-the-shelf in their policies. The network is also shared automatically between the policies.
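The separation of concerns described above (policies written against logical sensors, deployment resolved by an expert-provided infrastructure model) can be sketched in a few lines. The class and mapping below are invented stand-ins, not the DEPOSIT language itself.

```python
# Sketch: a collection policy declared against logical sensor names,
# resolved to concrete platforms by a network expert's deployment map.
from dataclasses import dataclass

@dataclass
class Policy:
    sensor: str      # logical sensor name, not a device address
    period_s: int    # sampling period in seconds
    predicate: str   # filter pushed down to the platform when possible

policies = [
    Policy(sensor="room_temperature", period_s=60, predicate="value > 30"),
    Policy(sensor="door_state", period_s=5, predicate="value == 'open'"),
]

# Infrastructure model: which platform hosts each logical sensor
deployment_map = {"room_temperature": "arduino-node-3",
                  "door_state": "raspberry-gw-1"}

for p in policies:
    print(f"deploy [{p.predicate}] every {p.period_s}s "
          f"on {deployment_map[p.sensor]}")
```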
Bentounsi, Mohamed el Mehdi. "Les processus métiers en tant que services - BPaaS : sécurisation des données et des services." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCB156/document.
Cloud computing has become one of the fastest growing segments of the IT industry. In such open distributed computing environments, security is of paramount concern. This thesis aims at developing protocols and techniques for the private and reliable outsourcing of design and compute-intensive tasks on cloud computing infrastructures. It enables clients with limited processing capabilities to use dynamic, cost-effective and powerful cloud computing resources, while having guarantees that their confidential data and services, and the results of their computations, will not be compromised by untrusted cloud service providers. The thesis contributes to the general area of cloud computing security by working in three directions. First, design by selection is a new capability that permits the design of business processes by reusing fragments in the cloud. For this purpose, we propose an anonymization-based protocol to secure the design of business processes by hiding the provenance of reused fragments. Second, we study two different cases of fragment sharing: biometric authentication and complex event processing. For this purpose, we propose techniques where the client only does work that is linear in the size of its inputs, while the cloud bears all of the super-linear computational burden. Moreover, the cloud's computational burden has the same time complexity as the best known solution to the problem being outsourced, which prevents secure outsourcing from placing a huge additional overhead on the cloud servers. This thesis was carried out at Université Paris Descartes (LIPADE - diNo research group) in collaboration with SOMONE under a CIFRE contract; the convergence of the research fields of these teams led to the development of this manuscript.
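To give a feel for the anonymization idea, the toy sketch below publishes reusable process fragments under salted pseudonyms, so consumers can reuse them without learning which organisation contributed them. This only illustrates the general notion of provenance hiding; it is not the thesis's protocol.

```python
# Toy sketch of provenance hiding for shared process fragments.
import hashlib
import os

def pseudonymize(owner_id: str, salt: bytes) -> str:
    """Derive an unlinkable pseudonym from an owner identifier."""
    return hashlib.sha256(salt + owner_id.encode()).hexdigest()[:16]

salt = os.urandom(16)  # kept secret by the repository operator
fragments = [  # invented example fragments
    {"owner": "hospital-A", "fragment": "validate-insurance-claim"},
    {"owner": "bank-B", "fragment": "kyc-identity-check"},
]
published = [{"owner": pseudonymize(f["owner"], salt),
              "fragment": f["fragment"]} for f in fragments]
print(published)  # owners replaced by pseudonyms before sharing
```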
Butoianu, Valentin. "Share and reuse of context metadata resulting from interactions between users and heterogeneous web-based learning environments." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2004/.
Interest in the observation, instrumentation, and evaluation of online educational systems has become more and more important within the Technology Enhanced Learning community in the last few years. The conception and development of Adaptive Web-based Learning Environments (AdWLE), in order to facilitate the process of re-engineering, help understand users' behavior, or support the creation of Intelligent Tutoring Systems, represent a major concern today. These systems handle their adaptation process on the basis of detailed information reflecting the context in which students evolve while learning: consulted resources, mouse clicks, chat messages, forum discussions, visited URLs, quiz selections, and so on. The work presented in this document is intended to overcome some issues of current systems by providing a privacy-enabled framework dedicated to the collection, sharing and reuse of context represented at two abstraction levels: raw context (resulting from direct interactions between users and applications) and inferred context (calculated on the basis of raw context). The framework is based on an open standard dedicated to system, network and application management, where the context specific to heterogeneous tools is represented as a unified and extensible structure and stored in a central repository. To facilitate access to this context repository, we introduce a middleware layer composed of a set of tools. Some of them allow users and applications to define, collect, share and search for the context data they are interested in, while others are dedicated to the design, calculation and delivery of inferred context. To validate our approach, an implementation of the suggested framework manages context data provided by three systems: two Moodle servers (one running at the Paul Sabatier University of Toulouse, the other hosting the CONTINT project funded by the French National Research Agency) and a local instantiation of the Ariadne Finder. Based on the collected context, relevant indicators have been calculated for each of these environments. Furthermore, two applications which reuse the encapsulated context have been developed on top of the framework: a personalized system for recommending learning objects to students, and a visualization application which uses multi-touch technologies to facilitate navigation among collected context entities.
Ngo, Thanh Nghi. "Une approche PLM pour supporter les collaborations et le partage des connaissances dans le secteur médical : Application aux processus de soins par implantation de prothèses." Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0013/document.
The medical sector is a dynamic domain that requires continuous improvement of its business processes and assistance to the actors involved. This research focuses on the medical treatment process requiring prosthesis implantation. The specificity of such a process is that it connects two lifecycles belonging to the medical and engineering domains respectively. This implies several collaborative actions between stakeholders from heterogeneous disciplines. However, several problems of communication and knowledge sharing may occur because of the variety of semantics used and the specific business practices of each domain. In this context, this PhD work is interested in the potential of knowledge engineering and product lifecycle management approaches to cope with the above problems. To do so, a conceptual framework is proposed for the analysis of links between the disease (medical domain) and the prosthesis (engineering domain) lifecycles. Based on this analysis, a semantic ontology model for the medical domain is defined as part of a global knowledge-based PLM approach. The application of the proposal is demonstrated through the implementation of useful functions in the AUDROS PLM software.
Mefteh, Wafa. "Approche ontologique pour la modélisation et le raisonnement sur les trajectoires : prise en compte des aspects thématiques, temporels et spatiaux." Thesis, La Rochelle, 2013. http://www.theses.fr/2013LAROS405/document.
The evolution of systems that capture data on moving objects has given birth to new generations of applications in various fields. The captured data, commonly called "trajectories", are at the heart of applications that analyze and monitor road, maritime and air traffic, or that optimize public transport. They are also used in video games, movies, sports and field biology, through motion capture systems, to study animal behavior. Today, the data produced by these sensors are raw spatio-temporal records hiding information that is semantically rich and meaningful to a domain expert. The objective of this thesis is therefore to automatically associate spatio-temporal data with descriptions or concepts related to the behavior of moving objects, interpretable by humans but also by machines. Based on this observation, we propose a process that goes from the experience of real-world moving objects, including ships and planes, to a generic ontological trajectory model. We present some applications of interest to experts in the field and show the inability to use the trajectories in their raw state. Indeed, the analysis of these queries identified three types of semantic components: thematic, spatial and temporal. These components must be attached to the trajectory data, which leads us to introduce an annotation process that transforms raw trajectories into semantic trajectories. To exploit semantic trajectories, we construct a high-level ontology for the trajectory domain which models the raw data and their annotations. Given the need for complete reasoning with spatial and temporal concepts and operators, we propose to reuse existing time and space ontologies. In this thesis, we also present our results from a collaboration with a research team that focuses on the analysis and understanding of the behavior of marine mammals in their natural environment. We describe the process used, covering the first two stages, which goes from raw data representing the movement of seals to an ontological seal-trajectory model. We pay particular attention to the contribution of the upper ontology, defined in a contextual framework, to the application ontology. Finally, this thesis presents the difficulty of implementation on real data of large size (hundreds of thousands of records) when reasoning through inference mechanisms using business rules.
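The raw-to-semantic annotation step can be illustrated with a toy segmentation of a track into "stop" and "move" episodes from point speeds, one common building block of semantic-trajectory construction. The threshold and data layout below are assumptions for illustration, not the thesis's annotation process.

```python
# Sketch: annotate a raw track as alternating 'stop'/'move' episodes.
def annotate(points, stop_speed=0.5):
    """points: list of (t_seconds, speed_m_per_s); returns episode dicts."""
    episodes, current = [], None
    for t, speed in points:
        label = "stop" if speed < stop_speed else "move"
        if current and current["label"] == label:
            current["end"] = t          # extend the running episode
        else:
            current = {"label": label, "start": t, "end": t}
            episodes.append(current)    # open a new episode
    return episodes

track = [(0, 0.1), (60, 0.2), (120, 3.4), (180, 4.0), (240, 0.0)]
for ep in annotate(track):
    print(ep)  # e.g. {'label': 'stop', 'start': 0, 'end': 60} ...
```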
Assouroko, Ibrahim. "Gestion de données et dynamiques des connaissances en ingénierie numérique : contribution à l'intégration de l'ingénierie des exigences, de la conception mécanique et de la simulation numérique." Compiègne, 2012. http://www.theses.fr/2012COMP2030.
Over the last twenty years, the deep changes observed in the field of product development have led to methodological changes in the field of design. These changes have benefited from the significant development of Information and Communication Technologies (ICT), such as the PLM systems dedicated to product lifecycle management, and from collaborative engineering approaches, which play a key role in the improvement of the product development process (PDP). In the current PLM market, PLM solutions from different vendors still present strong heterogeneities and rely on proprietary technologies and formats for competitiveness and profitability reasons, which does not ease communication and sharing between the various ICTs contributing to the PDP. Our research work focuses on the PDP and aims to contribute to the improvement of the integrated management of mechanical design and numerical simulation data in a PLM context. The contribution proposes an engineering knowledge capitalization solution based on a product semantic relationship management approach, organized as follows: (1) a data structuring approach driven by so-called semi-structured entities, whose structure is able to evolve along the PDP; (2) a conceptual model describing the fundamental concepts of the proposed approach; (3) a methodology that facilitates and improves the management and reuse of engineering knowledge within design projects; and (4) a knowledge capitalization approach based on the management of the semantic relationships that exist or may exist between engineering entities within the product development process.
Claude, Grégory. "Modélisation de documents et recherche de points communs - Proposition d'un framework de gestion de fiches d'anomalie pour faciliter les maintenances corrective et préventive." Phd thesis, Université Paul Sabatier - Toulouse III, 2012. http://tel.archives-ouvertes.fr/tel-00701752.
Full textMancosu, Giorgio. "La transparence publique à l'ère de l'Open Data. Etude comparée Italie-France." Thesis, Paris 2, 2016. http://www.theses.fr/2016PA020010.
The objects, media, sources, governance, content, actors, purposes and forms of public transparency are experiencing a rapid and profound evolution, which transcends national borders and depends on the interaction between political, technological, legal and socio-cultural drivers. This is notably the case when transparency exploits Open Government Data and falls under the Open Government framework. Through the Italian and French legal systems, this thesis aims to highlight recent advances in public transparency. At first, we will look at the interplay between the concepts of transparency and openness, to identify the legal issues raised by the disclosure of public data. Subsequently, we will turn to the supranational context, which plays a key role in developing guidelines, standards and recommendations; a special place will be reserved for the law (and policies) of the European Union. In the second part, we will analyse the above-mentioned legal systems, which are actively engaged in the wider reform of their public information acts, within the framework of multi-stakeholder initiatives such as the Open Government Partnership. On the whole, we will see how the shift from "transparency through documents" to "transparency through data" challenges models of public action.
Ait, Mouhoub Louali Nadia. "Le service public à l’heure de l’Open Data." Thesis, Paris 2, 2018. http://www.theses.fr/2018PA020022.
The public service has experienced a massive opening of public data, known as "Open Data". This phenomenon has developed with the emergence of new information technologies in public administrations, becoming an important factor in the renewal and modernization of the public service. This new trend, which the world has been exploring for a few years, aims to share and reuse the public data held by the public service, while keeping as an objective democratic transparency, in response to the requirement to be accountable to citizens, to fight against corruption and to promote Open Government in favor of citizen involvement. In this respect, the Open Data concept leads us to question the importance of opening up data in the public service, the degree to which adapting to this opening is an obligation, the consequences of Open Data's intrusion into the sphere of the public service, and the limits that Open Data may encounter. To answer these questions, we focus on the emergence and development of Open Data in the public service, with a depiction of its impact on the evolution of democracy and its eminent role in the creation of new public services, such as the public data service in France. The best angle from which to study the opening of public data in the public service is comparative public law, which allows us to analyze the practice of Open Data in the pioneer countries in this field and in the Maghreb countries, which recently adopted this new way of working. This study also aims to show the benefits of Open Data for the administration and the citizen.
Troussier, Nadège. "Contribution à l'intégration du calcul mécanique dans la conception de produits techniques : proposition méthodologique pour l'utilisation et la réutilisation." Université Joseph Fourier (Grenoble), 1999. http://www.theses.fr/1999GRE10218.
Full textEl, Ghosh Mirna. "Automatisation du raisonnement et décision juridiques basés sur les ontologies." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR16/document.
This thesis analyses the problem of building well-founded domain ontologies for reasoning and decision-support purposes. Specifically, it discusses the building of legal ontologies for rule-based reasoning. In fact, building well-founded legal domain ontologies is considered a difficult and complex process, due to the complexity of the legal domain and the lack of methodologies. For this purpose, a novel middle-out approach called MIROCL is proposed. MIROCL aims to enhance the building process of well-founded domain ontologies by incorporating several support processes, such as reuse, modularization, integration and learning. MIROCL is a novel modular middle-out approach for building well-founded domain ontologies. By applying the modularization process, a multi-layered modular architecture of the ontology is outlined; the intended ontology is composed of four modules located at different abstraction levels. These modules are, from the most abstract to the most specific, UOM (Upper Ontology Module), COM (Core Ontology Module), DOM (Domain Ontology Module) and DSOM (Domain-Specific Ontology Module). The middle-out strategy is composed of two complementary strategies, top-down and bottom-up. The top-down strategy applies ODCM (Ontology-Driven Conceptual Modeling) and ontology reuse, starting from the most abstract categories, for building UOM and COM. Meanwhile, the bottom-up strategy starts from textual resources, applying an ontology learning process, in order to extract the most specific categories for building DOM and DSOM. After building the different modules, an integration process composes the whole ontology. The MIROCL approach is applied in the criminal domain for modeling legal norms, yielding a well-founded legal domain ontology called CriMOnto (Criminal Modular Ontology). CriMOnto has then been used for modeling the procedural aspect of legal norms through integration with a logic rule language (SWRL). Finally, a hybrid approach is applied for building a rule-based system called CORBS, grounded on CriMOnto and the set of formalized rules.
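The procedural layer described above pairs the ontology with rules (SWRL) executed by a rule-based system. As a reading aid, here is a toy forward-chaining sketch in Python with invented predicates; it mimics the condition-conclusion style of such rules without reproducing CriMOnto or CORBS.

```python
# Toy forward chaining over condition/conclusion rules on case facts.
rules = [  # invented legal predicates for illustration
    ({"committed_theft", "used_violence"}, "aggravated_theft"),
    ({"aggravated_theft"}, "liable_to_imprisonment"),
]

def infer(facts: set) -> set:
    """Apply rules until no new conclusion can be derived (a fixpoint)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

case = {"committed_theft", "used_violence"}
print(infer(case) - case)  # {'aggravated_theft', 'liable_to_imprisonment'}
```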
Rieu, Dominique. "Ingénierie des systèmes d'information : bases de données, bases de connaissances et méthodes de conception." Habilitation à diriger des recherches, 1999. http://tel.archives-ouvertes.fr/tel-00004846.
Full textBilasco, Ioan Marius. "Une approche sémantique pour la réutilisation et l'adaptation de donnée 3D." Phd thesis, 2007. http://tel.archives-ouvertes.fr/tel-00206220.
We propose a solution that completes the description of 3D data with semantic annotations and organizes them within a model (3DSEAM) according to the dimension described (geometry, appearance, topology, semantics and media profile). To accommodate the variability of the model in terms of semantic descriptions, we propose an extension of OQL specific to the 3DSEAM model. A platform (3DAF) federates the means of instantiating and querying the descriptions. On top of 3DAF, we build a platform (3DSDL) for the reuse of 3D data and their semantics. We adopt a rule-based approach to 3D data adaptation, with rules interpreted and executed by an adaptation platform (Adapt3D). Reuse and adaptation rely on an XML standard for representing 3D data, X3D; the transformations applied to the data are described using XSLT.
An application for managing 3D urban scenes is proposed to validate the various formalisms introduced by our approach. The application relies on an MPEG-7 representation of the description repository; querying is handled by XQuery.