Selected scientific literature on the topic "OMOP common data model"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "OMOP common data model".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read the online abstract of the work, if one is available in the metadata.

Journal articles on the topic "OMOP common data model"

1

Kang, Mengjia, Jose A. Alvarado-Guzman, Luke V. Rasmussen, and Justin B. Starren. "Evolution of a Graph Model for the OMOP Common Data Model". Applied Clinical Informatics 15, no. 05 (October 2024): 1056–65. https://doi.org/10.1055/s-0044-1791487.

Full text
Abstract:
Objective: Graph databases for electronic health record (EHR) data have become a useful tool for clinical research in recent years, but there is a lack of published methods for transforming relational databases to a graph database schema. We developed a graph model for the Observational Medical Outcomes Partnership (OMOP) common data model (CDM) that can be reused across research institutions.
Methods: We created and evaluated four models, representing two different strategies, for converting the standardized clinical and vocabulary tables of OMOP into a property graph model within the Neo4j graph database. Taking the Successful Clinical Response in Pneumonia Therapy (SCRIPT) and Collaborative Resource for Intensive care Translational science, Informatics, Comprehensive Analytics, and Learning (CRITICAL) cohorts as test datasets of different sizes, we compared two of the resulting graph models with respect to database performance, including database building time, query complexity, and runtime, for both cohorts.
Results: Utilizing a graph schema that was optimized for storing critical information as topology rather than attributes resulted in a significant improvement in both data creation and querying. The graph database for our larger cohort, CRITICAL, can be built within 1 hour for 134,145 patients, with a total of 749,011,396 nodes and 1,703,560,910 edges.
Discussion: To our knowledge, this is the first generalized solution for converting the OMOP CDM to a graph-optimized schema. Despite being developed for studies at a single institution, the modeling method can be applied to other OMOP CDM v5.x databases. Our evaluation with the SCRIPT and CRITICAL cohorts and comparison between the current and previous versions show advantages in code simplicity, database building, and query speed.
Conclusion: We developed a method for converting OMOP CDM databases into graph databases. Our experiments revealed that the final model outperformed the initial relational-to-graph transformation in both code simplicity and query efficiency, particularly for complex queries.
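As a loose illustration of the strategy this abstract describes (storing links as topology rather than attributes), the sketch below turns OMOP-style relational rows into node and edge tuples. All table, field, and relationship names here are illustrative; they are not the authors' actual Neo4j schema.

```python
# Sketch: convert OMOP-style rows into property-graph nodes and edges.
# Table/field/label names are illustrative, not the paper's actual schema.

def rows_to_graph(persons, condition_occurrences):
    """Return (nodes, edges); each foreign key becomes an edge, not an attribute."""
    nodes, edges = [], []
    for p in persons:
        nodes.append(("Person", p["person_id"], {"year_of_birth": p["year_of_birth"]}))
    for c in condition_occurrences:
        nodes.append(("Condition", c["condition_occurrence_id"],
                      {"concept_id": c["condition_concept_id"]}))
        # The person_id foreign key becomes a HAS_CONDITION edge (topology),
        # so graph traversals replace relational joins at query time.
        edges.append(("HAS_CONDITION", c["person_id"], c["condition_occurrence_id"]))
    return nodes, edges

persons = [{"person_id": 1, "year_of_birth": 1980}]
conds = [{"condition_occurrence_id": 10, "person_id": 1, "condition_concept_id": 201826}]
nodes, edges = rows_to_graph(persons, conds)
```

In a real conversion these tuples would be bulk-loaded into Neo4j; the point of the sketch is only the relational-to-topology mapping step.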
2

Maier, Christian, Lorenz A. Kapsner, Sebastian Mate, Hans-Ulrich Prokosch, and Stefan Kraus. "Patient Cohort Identification on Time Series Data Using the OMOP Common Data Model". Applied Clinical Informatics 12, no. 01 (January 2021): 057–64. http://dx.doi.org/10.1055/s-0040-1721481.

Full text
Abstract:
Background: The identification of patient cohorts for recruiting patients into clinical trials requires an evaluation of study-specific inclusion and exclusion criteria. These criteria are specified in terms of corresponding clinical facts. Some of these facts may not be present in the clinical source systems and need to be calculated either in advance or at cohort query runtime (a so-called feasibility query).
Objectives: We use the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) as the repository for our clinical data. However, Atlas, the graphical user interface of OMOP, does not offer the functionality to perform calculations on facts data. Therefore, we searched for a different approach. The objective of this study is to investigate whether the Arden Syntax can be used for feasibility queries on the OMOP CDM to enable on-the-fly calculations at query runtime, eliminating the need to precalculate data elements involved in researchers' criteria specifications.
Methods: We implemented a service that reads the facts from the OMOP repository and provides them in a form that an Arden Syntax Medical Logic Module (MLM) can process. We then implemented an MLM that applies the eligibility criteria to every patient data set and outputs the list of eligible cases (i.e., performs the feasibility query).
Results: The study resulted in an MLM-based feasibility query that identifies cases of overventilation as an example of how an on-the-fly calculation can be realized. The algorithm is split into two MLMs to make the approach reusable.
Conclusion: We found that MLMs are a suitable technology for feasibility queries on the OMOP CDM. Our method of performing on-the-fly calculations can be employed with any OMOP instance without touching existing infrastructure such as the Extract, Transform, and Load pipeline. We therefore consider it a well-suited method for performing on-the-fly calculations on OMOP.
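The on-the-fly calculation idea can be illustrated with a small sketch: the derived fact (here, "overventilation") is computed per patient at query time from raw measurements instead of being precalculated in the ETL. The threshold, sampling assumption, and field names below are hypothetical, not the paper's actual MLM logic.

```python
# Sketch of an on-the-fly feasibility check in the spirit of an Arden Syntax MLM.
# Threshold, grace parameters, and the hourly-sampling assumption are illustrative.

def is_overventilated(tidal_volumes_ml_per_kg, threshold=8.0, min_hours=2):
    """Derived eligibility fact: sustained tidal volume above a threshold."""
    above = [v for v in tidal_volumes_ml_per_kg if v > threshold]
    return len(above) >= min_hours  # assume one measurement per hour

def feasibility_query(patients):
    """Return IDs of eligible cases without precomputing anything in the ETL."""
    return [pid for pid, values in patients.items() if is_overventilated(values)]

cohort = {
    "p1": [6.5, 7.0, 6.8],       # never above threshold
    "p2": [9.1, 9.4, 8.7, 7.9],  # three hourly values above 8.0
}
eligible = feasibility_query(cohort)
```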
3

Chechulina, Anna, Jasmin Carus, Philipp Breitfeld, Christopher Gundler, Hanna Hees, Raphael Twerenbold, Stefan Blankenberg, Frank Ückert, and Sylvia Nürnberg. "Semi-Automated Mapping of German Study Data Concepts to an English Common Data Model". Applied Sciences 13, no. 14 (13 July 2023): 8159. http://dx.doi.org/10.3390/app13148159.

Full text
Abstract:
The standardization of data from medical studies and hospital information systems to a common data model such as the Observational Medical Outcomes Partnership (OMOP) model can help make large datasets available for analysis using artificial intelligence approaches. Commonly, automatic mapping without intervention from domain experts delivers poor results. Further challenges arise from the need for translation of non-English medical data. Here, we report the establishment of a mapping approach which automatically translates German data variable names into English and suggests OMOP concepts. The approach was set up using study data from the Hamburg City Health Study. It was evaluated against the current standard, refined, and tested on a separate dataset. Furthermore, different types of graphical user interfaces for the selection of suggested OMOP concepts were created and assessed. Compared to the current standard our approach performs slightly better. Its main advantage lies in the automatic processing of German phrases into English OMOP concept suggestions, operating without the need for human intervention. Challenges still lie in the adequate translation of nonstandard expressions, as well as in the resolution of abbreviations into long names.
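The translate-then-suggest idea can be sketched in a few lines: translate a German variable name (here via a toy lookup table; the real pipeline would use a proper translation step) and rank candidate concept names by string similarity. The dictionaries and concept names below are illustrative, not the authors' actual vocabulary.

```python
# Toy sketch of suggesting OMOP concepts for German variable names.
# GERMAN_TO_ENGLISH and OMOP_CONCEPTS are illustrative stand-ins.
from difflib import get_close_matches

GERMAN_TO_ENGLISH = {"blutdruck": "blood pressure", "herzfrequenz": "heart rate"}
OMOP_CONCEPTS = ["Systolic blood pressure", "Heart rate", "Body weight"]

def suggest_concepts(german_term, n=2):
    english = GERMAN_TO_ENGLISH.get(german_term.lower(), german_term)
    # Case-insensitive similarity ranking against the concept vocabulary;
    # a human mapper would pick from these suggestions in the GUI.
    matches = get_close_matches(english.lower(),
                                [c.lower() for c in OMOP_CONCEPTS],
                                n=n, cutoff=0.5)
    return [c for c in OMOP_CONCEPTS if c.lower() in matches]

suggestions = suggest_concepts("Herzfrequenz")
```

The human-in-the-loop selection step the paper evaluates would sit on top of such a ranked suggestion list.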
4

Garneau, William, Benjamin Martin, Kelly Gebo, Paul Nagy, Danielle Boyce, Michael Cook, and Matthew Robinson. "76 Lessons learned during implementation of OMOP common data model across multiple health systems". Journal of Clinical and Translational Science 8, s1 (April 2024): 20. http://dx.doi.org/10.1017/cts.2024.77.

Full text
Abstract:
OBJECTIVES/GOALS: Adoption of the Observational Medical Outcomes Partnership (OMOP) common data model promises to transform large-scale observational health research. However, coordinating centers throughout the US face diverse challenges in operationalizing OMOP in terms of interoperability and technical skills.
METHODS/STUDY POPULATION: A team from the Critical Path Institute (C-Path) collaborated with the informatics team at Johns Hopkins to provide technical support to participating sites as part of the Extract, Transform, and Load (ETL) process linking existing concepts to OMOP concepts. Health systems met regularly via teleconference to review challenges and progress in the ETL process. Sites were responsible for performing the local ETL process with assistance and for securely provisioning de-identified data as part of the CURE ID program.
RESULTS/ANTICIPATED RESULTS: More than twenty health systems participated in the CURE ID effort. Laboratory measures, basic demographics, disease diagnoses, and problem lists were more easily mapped to OMOP concepts by CURE ID partner institutions. Outcomes, social determinants of health, medical devices, and specific treatments were less easily characterized as part of the project. Concepts within the medical record presented very different technical challenges in terms of representation. There is a lack of standardization in OMOP implementation even among centers using the same electronic medical health record. Readiness to adopt OMOP varied across the participating institutions. Health systems achieved variable levels of coverage of OMOP medical concepts as part of the initiative.
DISCUSSION/SIGNIFICANCE: Adoption of OMOP relies on local stakeholder knowledge and implementation. The variable complexity of health concepts contributed to variable coverage. Documentation and support require extensive time and effort. Open-source software can be technically challenging. Interoperability of secure data systems presents unique problems.
5

Lamer, Antoine, Osama Abou-Arab, Alexandre Bourgeois, Adrien Parrot, Benjamin Popoff, Jean-Baptiste Beuscart, Benoît Tavernier, and Mouhamed Djahoum Moussa. "Transforming Anesthesia Data Into the Observational Medical Outcomes Partnership Common Data Model: Development and Usability Study". Journal of Medical Internet Research 23, no. 10 (29 October 2021): e29259. http://dx.doi.org/10.2196/29259.

Full text
Abstract:
Background: Electronic health records (EHRs, such as those created by an anesthesia management system) generate a large amount of data that can notably be reused for clinical audits and scientific research. The sharing of these data and tools is generally hampered by the lack of system interoperability. To overcome these issues, Observational Health Data Sciences and Informatics (OHDSI) developed the Observational Medical Outcomes Partnership (OMOP) common data model (CDM) to standardize EHR data and promote large-scale observational and longitudinal research. Anesthesia data have not previously been mapped into the OMOP CDM.
Objective: The primary objective was to transform anesthesia data into the OMOP CDM. The secondary objective was to provide vocabularies, queries, and dashboards that might promote the exploitation and sharing of anesthesia data through the CDM.
Methods: Using our local anesthesia data warehouse, a group of 5 experts from 5 different medical centers identified local concepts related to anesthesia. The concepts were then matched with standard concepts in the OHDSI vocabularies. We performed structural mapping between the design of our local anesthesia data warehouse and the OMOP CDM tables and fields. To validate the implementation of anesthesia data into the OMOP CDM, we developed a set of queries and dashboards.
Results: We identified 522 concepts related to anesthesia care. They were classified as demographics, units, measurements, operating room steps, drugs, periods of interest, and features. After semantic mapping, 353 (67.7%) of these anesthesia concepts were mapped to OHDSI concepts. A further 169 (32.3%) concepts related to periods and features were added to the OHDSI vocabularies. Then, 8 OMOP CDM tables were implemented with anesthesia data and 2 new tables (EPISODE and FEATURE) were added to store secondarily computed data. We integrated data from 572,609 operations and provided the code for a set of 8 queries and 4 dashboards related to anesthesia care.
Conclusions: Generic data concerning demographics, drugs, units, measurements, and operating room steps were already available in OHDSI vocabularies. However, most of the intraoperative concepts (the duration of specific steps, an episode of hypotension, etc.) were not present in OHDSI vocabularies. The OMOP mapping provided here enables anesthesia data reuse.
6

Ward, Roger, Christine Mary Hallinan, David Ormiston-Smith, Christine Chidgey, and Dougie Boyle. "The OMOP common data model in Australian primary care data: Building a quality research ready harmonised dataset". PLOS ONE 19, no. 4 (18 April 2024): e0301557. http://dx.doi.org/10.1371/journal.pone.0301557.

Full text
Abstract:
Background: The use of routinely collected health data for secondary research purposes is increasingly recognised as a methodology that advances medical research, improves patient outcomes, and guides policy. This secondary data, as found in electronic medical records (EMRs), can be optimised through conversion into a uniform data structure to enable analysis alongside other comparable health metric datasets. This can be achieved with the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM), which employs a standardised vocabulary to facilitate systematic analysis across various observational databases. The concept behind the OMOP-CDM is the conversion of data into a common format through the harmonisation of terminologies, vocabularies, and coding schemes within a unique repository. The OMOP model enhances research capacity through the development of shared analytic and prediction techniques; pharmacovigilance for the active surveillance of drug safety; and 'validation' analyses across multiple institutions across Australia, the United States, Europe, and the Asia Pacific. In this research, we aim to investigate the use of the open-source OMOP-CDM in the PATRON primary care data repository.
Methods: We used standard structured query language (SQL) to construct extract, transform, and load scripts to convert the data to the OMOP-CDM. The process of mapping distinct free-text terms extracted from various EMRs presented a substantial challenge, as many terms could not be automatically matched to standard vocabularies through direct text comparison. This resulted in a number of terms that required manual assignment. To address this issue, we implemented a strategy in which our clinical mappers were instructed to focus only on terms that appeared with sufficient frequency. We established a specific threshold value for each domain, ensuring that more than 95% of all records were linked to an approved vocabulary such as SNOMED once appropriate mapping was completed. To assess the data quality of the resultant OMOP dataset we utilised the OHDSI Data Quality Dashboard (DQD) to evaluate the plausibility, conformity, and comprehensiveness of the data in the PATRON repository according to the Kahn framework.
Results: Across three primary care EMR systems we converted data on 2.03 million active patients to version 5.4 of the OMOP common data model. The DQD assessment involved a total of 3,570 individual evaluations. Each evaluation compared the outcome against a predefined threshold. A 'FAIL' occurred when the percentage of non-compliant rows exceeded the specified threshold value. In this assessment of the primary care OMOP database described here, we achieved an overall pass rate of 97%.
Conclusion: The OMOP-CDM's widespread international use, support, and training provides a well-established pathway for data standardisation in collaborative research. Its compatibility allows the sharing of analysis packages across local and international research groups, which facilitates rapid and reproducible data comparisons. A suite of open-source tools, including the OHDSI Data Quality Dashboard (Version 1.4.1), supports the model. Its simplicity and standards-based approach facilitate adoption and integration into existing data processes.
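The pass/fail logic described for the Data Quality Dashboard assessment can be sketched as follows: each check compares the share of non-compliant rows against a per-check threshold. The check names, counts, and thresholds below are made up for illustration; they are not DQD's actual check definitions.

```python
# Sketch of DQD-style threshold checks; names and thresholds are illustrative.

def run_checks(checks):
    """checks: list of (name, violating_rows, total_rows, threshold_pct)."""
    results = {}
    for name, violating, total, threshold_pct in checks:
        pct_violating = 100.0 * violating / total
        # FAIL when the percentage of non-compliant rows exceeds the threshold.
        results[name] = "PASS" if pct_violating <= threshold_pct else "FAIL"
    return results

checks = [
    ("plausible_birth_year", 2, 1000, 1.0),         # 0.2% violating -> PASS
    ("mapped_to_standard_concept", 80, 1000, 5.0),  # 8.0% violating -> FAIL
]
results = run_checks(checks)
pass_rate = 100.0 * sum(r == "PASS" for r in results.values()) / len(results)
```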
7

Lamer, Antoine, Nicolas Depas, Matthieu Doutreligne, Adrien Parrot, David Verloop, Marguerite-Marie Defebvre, Grégoire Ficheur, Emmanuel Chazard, and Jean-Baptiste Beuscart. "Transforming French Electronic Health Records into the Observational Medical Outcome Partnership's Common Data Model: A Feasibility Study". Applied Clinical Informatics 11, no. 01 (January 2020): 013–22. http://dx.doi.org/10.1055/s-0039-3402754.

Full text
Abstract:
Background: Common data models (CDMs) enable data to be standardized and facilitate data exchange, sharing, and storage, particularly when the data have been collected via distinct, heterogeneous systems. Moreover, CDMs provide tools for data quality assessment, integration into models, visualization, and analysis. The Observational Medical Outcomes Partnership (OMOP) provides a CDM for organizing and standardizing databases. Common data models not only facilitate data integration but also (especially in the OMOP model) extend the range of available statistical analyses.
Objective: This study aimed to evaluate the feasibility of implementing French national electronic health records in the OMOP CDM.
Methods: The OMOP's specifications were used to audit the source data, specify the transformation into the OMOP CDM, implement an extract-transform-load process to feed data from the French health care system into the OMOP CDM, and evaluate the final database.
Results: Seventeen vocabularies corresponding to the French context were added to the OMOP CDM's concepts. Three French terminologies were automatically mapped to standardized vocabularies. We loaded nine tables from the OMOP CDM's "standardized clinical data" section, and three tables from the "standardized health system data" section. Outpatient and inpatient data from 38,730 individuals were integrated. The median (interquartile range) number of outpatient and inpatient stays per patient was 160 (19–364).
Conclusion: Our results demonstrated that data from the French national health care system can be integrated into the OMOP CDM. One of the main challenges was the use of international OMOP concepts to annotate data recorded in a French context. The use of local terminologies was an obstacle to conceptual mapping; with the exception of an adaptation of the International Classification of Diseases, 10th Revision, the French health care system does not use international terminologies. It would be interesting to extend our present findings to the 65 million people registered in the French health care system.
8

Lee, Geun Hyeong, Jonggul Park, Jihyeong Kim, Yeesuk Kim, Byungjin Choi, Rae Woong Park, Sang Youl Rhee, and Soo-Yong Shin. "Feasibility Study of Federated Learning on the Distributed Research Network of OMOP Common Data Model". Healthcare Informatics Research 29, no. 2 (30 April 2023): 168–73. http://dx.doi.org/10.4258/hir.2023.29.2.168.

Full text
Abstract:
Objectives: Since protecting patients' privacy is a major concern in clinical research, there has been a growing need for privacy-preserving data analysis platforms. For this purpose, a federated learning (FL) method based on the Observational Medical Outcomes Partnership (OMOP) common data model (CDM) was implemented, and its feasibility was demonstrated.
Methods: We implemented an FL platform on FeederNet, a distributed clinical data analysis platform based on the OMOP CDM in Korea. We trained an artificial neural network (ANN) using data from patients who received steroid prescriptions or injections, with the aim of predicting the occurrence of side effects depending on the prescribed dose. The ANN was trained using the FL platform with the OMOP CDMs of Kyung Hee University Medical Center (KHMC) and Ajou University Hospital (AUH).
Results: The areas under the receiver operating characteristic curve (AUROCs) for predicting bone fracture, osteonecrosis, and osteoporosis using only data from each hospital were 0.8426, 0.6920, and 0.7727 for KHMC and 0.7891, 0.7049, and 0.7544 for AUH, respectively. In contrast, when using FL, the corresponding AUROCs were 0.8260, 0.7001, and 0.7928 for KHMC and 0.7912, 0.8076, and 0.7441 for AUH, respectively. In particular, FL led to a 14% improvement in performance for osteonecrosis at AUH.
Conclusions: FL can be performed with the OMOP CDM, and it often shows better performance than using only a single institution's data. Research using the OMOP CDM has therefore expanded from statistical analysis to machine learning, allowing researchers to conduct more diverse research.
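The core of such a setup can be sketched as federated averaging: each site trains locally and shares only model weights, which a coordinator averages weighted by site sample counts, so patient-level data never leaves a site. This is a generic FedAvg sketch with made-up weights and cohort sizes, not the FeederNet implementation.

```python
# Minimal federated-averaging sketch (hypothetical weights and site sizes).

def federated_average(site_updates):
    """site_updates: list of (weights, n_samples); returns averaged weights."""
    total = sum(n for _, n in site_updates)
    dim = len(site_updates[0][0])
    # Weight each site's parameters by its number of training samples.
    return [sum(w[i] * n for w, n in site_updates) / total for i in range(dim)]

# Two hypothetical hospital sites with different cohort sizes.
site_a = ([0.2, 0.4], 100)
site_b = ([0.6, 0.8], 300)
global_weights = federated_average([site_a, site_b])
```

In a full FL round this averaging step would alternate with local training at each site.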
9

Hallinan, Christine Mary, Roger Ward, Graeme K. Hart, Clair Sullivan, Nicole Pratt, Ashley P. Ng, Daniel Capurro et al. "Seamless EMR data access: Integrated governance, digital health and the OMOP-CDM". BMJ Health & Care Informatics 31, no. 1 (February 2024): e100953. http://dx.doi.org/10.1136/bmjhci-2023-100953.

Full text
Abstract:
Objectives: In this overview, we describe the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM) and the established governance processes employed in EMR data repositories, and demonstrate how OMOP-transformed data provides a lever for more efficient and secure access to electronic medical record (EMR) data by health service providers and researchers.
Methods: Through pseudonymisation and common data quality assessments, the OMOP-CDM provides a robust framework for converting complex EMR data into a standardised format. This allows for the creation of shared end-to-end analysis packages without the need for direct data exchange, thereby enhancing data security and privacy. By securely sharing de-identified and aggregated data and conducting analyses across multiple OMOP-converted databases, patient-level data is securely firewalled within its respective local site.
Results: By simplifying data management processes and governance, and through the promotion of interoperability, the OMOP-CDM supports a wide range of clinical, epidemiological, and translational research projects, as well as health service operational reporting.
Discussion: Adoption of the OMOP-CDM internationally and locally enables conversion of vast amounts of complex and heterogeneous EMR data into a standardised structured data model, simplifies governance processes, and facilitates rapid, repeatable cross-institution analysis through shared end-to-end analysis packages, without the sharing of data.
Conclusion: The adoption of the OMOP-CDM has the potential to transform health data analytics by providing a common platform for analysing EMR data across diverse healthcare settings.
10

Bardenheuer, Kristina, Alun Passey, Maria d'Errico, Barbara Millier, Carine Guinard-Azadian, Johan Aschan, and Michel van Speybroeck. "Honeur (Heamatology Outcomes Network in Europe): A Federated Model to Support Real World Data Research in Hematology". Blood 132, Supplement 1 (29 November 2018): 4839. http://dx.doi.org/10.1182/blood-2018-99-111093.

Full text
Abstract:
Introduction: The Haematology Outcomes Network in EURope (HONEUR) is an interdisciplinary initiative aimed at improving patient outcomes by analyzing real world data across hematological centers in Europe. Its overarching goal is to create a secure network which facilitates the development of a collaborative research community and allows access to big data tools for analysis of the data. The central paradigm in the HONEUR network is a federated model whereby the data stays at the respective sites and the analysis is executed at the local data sources. To allow for a uniform data analysis, the common data model 'OMOP' (Observational Medical Outcomes Partnership) was selected and extended to accommodate specific hematology data elements.
Objective: To demonstrate the feasibility of the OMOP common data model for the HONEUR network.
Methods: In order to validate the architecture of the HONEUR network and the applicability of the OMOP common data model, data from the EMMOS registry (NCT01241396) were used. This registry is a prospective, non-interventional study that was designed to capture real world data regarding treatments and outcomes for multiple myeloma at different stages of the disease. Data were collected between Oct 2010 and Nov 2014 on more than 2,400 patients across 266 sites in 22 countries. Data were mapped to the OMOP common data model version 5.3. New concepts beyond standard OMOP were provided to preserve the semantic mapping quality and reduce the potential loss of granularity. Following the mapping process, a quality analysis was performed to assess the completeness and accuracy of the mapping to the common data model. Specific critical concepts in multiple myeloma needed to be represented in OMOP. This applies in particular to concepts like treatment lines, cytogenetic observations, disease progression, and risk scales (in particular ISS and R-ISS). To accommodate these concepts, existing OMOP structures were used with the definition of new concepts and concept relationships.
Results: Several elements of mapping data from the EMMOS registry to the OMOP common data model (CDM) were evaluated via integrity checks. Core entities from the OMOP CDM were reconciled against the source data. This was applied to the following entities: person (profile of year of birth and gender), drug exposure (profile of number of drug exposures per drug, at ATC code level), conditions (profile of number of occurrences of conditions per condition code, converted to SNOMED), measurement (profile of number of measurements and value distribution per (lab) measurement, converted to LOINC), and observation (profile of number of observations per observation concept). Figure 1 shows the histogram of year of birth distribution between the EMMOS registry and the OMOP CDM. No discernible differences exist, except for subjects which were not included in the mapping to the OMOP CDM due to lacking confirmation of a diagnosis of multiple myeloma. As an additional part of the architecture validation, the occurrence of the top 20 medications in the EMMOS registry and the OMOP CDM were compared, with 100% concordance for the drug codes, as shown in Figure 2. In addition to the reconciliation against the different OMOP entities, a comparison was also made against 'derived' data, in particular 'time to event' analysis. Overall survival was plotted from calculated variables in the analysis-level data from the EMMOS registry and derived variables in the OMOP CDM. The probability of overall survival over time was virtually identical, with only one day's difference in median survival and 95% confidence intervals overlapping identically over the period of measurement (Figure 3).
Conclusions: The concordance of year of birth, drug code mapping, and overall survival between the EMMOS registry and the OMOP common data model indicates the reliability of the mapping potential in HONEUR, especially where auxiliary methods have been developed to handle outcomes and treatment data in a way that can be harmonized across platform datasets.
Disclosures: No relevant conflicts of interest to declare.
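The reconciliation idea described above (profiling an entity in the source registry and in the CDM, then comparing the profiles) can be sketched as a count comparison per code. The codes and records below are made up for illustration.

```python
# Sketch of a source-vs-CDM integrity check: compare per-code record counts.
# Codes and records are illustrative, not EMMOS registry data.
from collections import Counter

def profile(records, key):
    """Profile an entity as a per-code record count."""
    return Counter(r[key] for r in records)

def concordance(source_profile, cdm_profile):
    """Share of source codes whose counts match exactly in the CDM."""
    matched = sum(1 for code, n in source_profile.items()
                  if cdm_profile.get(code) == n)
    return matched / len(source_profile)

source = [{"atc": "L04AX06"}, {"atc": "L04AX06"}, {"atc": "J05AB01"}]
cdm    = [{"atc": "L04AX06"}, {"atc": "L04AX06"}, {"atc": "J05AB01"}]
rate = concordance(profile(source, "atc"), profile(cdm, "atc"))
```

A concordance below 1.0 would flag drug codes whose counts diverged during the ETL, which is the kind of discrepancy the integrity checks are meant to surface.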

Theses on the topic "OMOP common data model"

1

Lang, Lukas [Verfasser], Hans-Ulrich [Akademischer Betreuer] Prokosch, and Hans-Ulrich [Gutachter] Prokosch. "Mapping eines deutschen, klinischen Datensatzes nach OMOP Common Data Model / Lukas Lang ; Gutachter: Hans-Ulrich Prokosch ; Betreuer: Hans-Ulrich Prokosch". Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2020. http://d-nb.info/1220911135/34.

Full text
2

Fruchart, Mathilde. "Réutilisation des données de soins premiers : spécificités, standardisation et suivi de la prise en charge dans les Maisons de Santé Pluridisciplinaires". Electronic Thesis or Diss., Université de Lille (2022-....), 2024. http://www.theses.fr/2024ULILS040.

Full text
Abstract:
Context: Reusing healthcare data beyond its initial use helps to improve patient care, facilitate research, and optimize the management of healthcare organizations. To achieve this, data are extracted from healthcare software, transformed, and stored in a data warehouse through an extract-transform-load (ETL) process. Common data models, such as the OMOP model, exist to store data in a homogeneous, source-independent format. Healthcare claims centralized in the national database (SNDS), hospital records, social networks and forums, and primary care are different data sources representative of the patient care pathway. The last of these sources has not been fully exploited. Objective: The aim of this thesis was to incorporate the specificities of primary care data reuse into the implementation of a data warehouse while highlighting the contribution of primary care to the field of research. Methods: The first step was to extract the primary care data of a multidisciplinary health center (MHC) from the WEDA care software. A primary care data warehouse was implemented using an ETL process. Structural transformation (harmonization of the database structure) and semantic transformation (harmonization of the vocabulary used in the data) were implemented to align the data with the common OMOP data model. A process generalization tool was developed to integrate general practitioner (GP) data from multiple care structures and tested on four MHCs. Subsequently, an algorithm for assessing the persistence of a prescribed treatment and dashboards were developed. Thanks to the use of the OMOP model, these tools can be shared with other MHCs. Finally, retrospective studies were conducted on the diabetic population of the four MHCs. Results: Over a period of more than 20 years, data of 117,005 patients from four MHCs were loaded into the OMOP model using our ETL process optimization tool. These data include biological results from laboratories and GP consultation data.
The vocabulary specific to primary care was aligned with the standard concepts of the model. An algorithm for assessing persistence with treatment prescribed by the GP, together with a dashboard for monitoring performance indicators (ROSP) and practice activity, was developed. Based on the data warehouses of the four MHCs, we described the follow-up of diabetic patients. These studies use biological results, consultation data, and drug prescriptions in OMOP format. The scripts of these studies and the tools developed can be shared. Conclusion: Primary care data represent a potential for reusing data for research purposes and improving the quality of care. They complement existing databases (hospital, national, and social networks) by integrating ambulatory clinical data. The use of a common data model facilitates the development of tools and the conduct of studies, while enabling their sharing. Studies can be replicated in different centers to compare results.
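The persistence algorithm summarized above can be illustrated with a refill-gap rule, a common way to operationalize treatment persistence from prescription records such as those in the OMOP drug_exposure table. The function below is a minimal sketch under assumed conventions (a fixed grace period and (start_date, days_supply) pairs), not the thesis's actual algorithm:

```python
from datetime import date, timedelta

def persistence_end(prescriptions, grace_days=30):
    """Return the date at which the patient stops being 'persistent': the
    end of the covered period before the first refill gap longer than the
    grace period. `prescriptions` is a list of (start_date, days_supply)
    tuples, as could be derived from OMOP drug_exposure records."""
    ordered = sorted(prescriptions)
    start, supply = ordered[0]
    covered_until = start + timedelta(days=supply)
    for start, supply in ordered[1:]:
        if (start - covered_until).days > grace_days:
            return covered_until           # discontinuation: gap too large
        covered_until = max(covered_until, start + timedelta(days=supply))
    return covered_until
```

With a 30-day grace period, a patient whose refills leave a gap longer than the grace period is considered to have discontinued at the end of the last covered interval.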
Gli stili APA, Harvard, Vancouver, ISO e altri
3

Kovács, Zsolt. "The integration of product data with workflow management systems through a common data model". Thesis, University of Bristol, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312062.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
4

Davis, Duane T. "Design, implementation and testing of a common data model supporting autonomous vehicle compatibility and interoperability". Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FDavis%5FPhD.pdf.

Testo completo
Abstract (sommario):
Dissertation (PhD. in Computer Science)--Naval Postgraduate School, September 2006.
Dissertation Advisor(s): Don Brutzman. "September 2006." Includes bibliographical references (p. 317-328). Also available in print.
Gli stili APA, Harvard, Vancouver, ISO e altri
5

Hodges, Glenn A. "Designing a common interchange format for unit data using the Command and Control information exchange data model (C2IEDM) and XSLT". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sep%5FHodges.pdf.

Testo completo
Abstract (sommario):
Thesis (M.S. in Modeling Virtual Environments and Simulation (MOVES))--Naval Postgraduate School, Sept. 2004.
Thesis advisor(s): Curtis Blais, Don Brutzman. Includes bibliographical references (p. 95-98). Also available online.
Gli stili APA, Harvard, Vancouver, ISO e altri
6

Neto, Mario Barreto de Moura. "Application of IEC 61970 for data standardization and smart grid interoperability". Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=11627.

Testo completo
Abstract (sommario):
Coordenação de Aperfeiçoamento de Nível Superior
In the context of the modernization process that electrical power systems are currently undergoing, the concept of Smart Grids and their foundations serve as guidelines. In the search for interoperability, communication between heterogeneous systems has been the subject of constant and increasing development. Under this scenario, the work presented in this dissertation focuses primarily on the study and application of the data model contained in the IEC 61970 series of standards, best known as the Common Information Model (CIM). With this purpose, the general aspects of the standard are presented, assisted by the concepts of UML (Unified Modeling Language) and XML (eXtensible Markup Language), which are essential for a complete understanding of the model. Certain features of the CIM, such as its extensibility and generality, are emphasized, which qualify it as an ideal data model for establishing interoperability. In order to exemplify the use of the model, a case study was performed in which a medium-voltage electrical distribution network was modeled so as to make it suitable for integration with a multi-agent system in a standardized format and, consequently, suitable for interoperability. The complete process of modeling an electrical network using the CIM is shown. Finally, the development of an interface is proposed as a mechanism that enables human intervention in the data flow between the integrated systems. The use of PHP with a MySQL database is justified by their suitability across diverse usage environments. Together, the interface, the electrical network simulator, and the multi-agent system for automatic service restoration constituted a system whose information was fully integrated.
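The CIM exchange format described in this abstract is conventionally serialized as RDF/XML. The snippet below parses a small CIM-style fragment with the Python standard library; the namespace URI, class, and attribute names follow common CIM conventions but the exact schema version varies, so treat them as illustrative placeholders rather than the dissertation's actual payload:

```python
import xml.etree.ElementTree as ET

# Namespace URIs follow the usual CIM RDF/XML conventions; the exact CIM
# schema URI differs between model versions, so treat it as a placeholder.
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
CIM = "http://iec.ch/TC57/2010/CIM-schema-cim15#"

SAMPLE = f"""<rdf:RDF xmlns:rdf="{RDF}" xmlns:cim="{CIM}">
  <cim:Breaker rdf:ID="BRK_1">
    <cim:IdentifiedObject.name>Feeder breaker</cim:IdentifiedObject.name>
  </cim:Breaker>
</rdf:RDF>"""

def equipment_names(xml_text):
    """Map each rdf:ID in a CIM-style document to its IdentifiedObject.name."""
    root = ET.fromstring(xml_text)
    names = {}
    for element in root:
        ident = element.get(f"{{{RDF}}}ID")
        name_el = element.find(f"{{{CIM}}}IdentifiedObject.name")
        names[ident] = None if name_el is None else name_el.text
    return names
```

Keeping the parsing generic over `rdf:ID` and `IdentifiedObject.name` mirrors how CIM tools treat any equipment class uniformly through its inherited identity attributes.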
Gli stili APA, Harvard, Vancouver, ISO e altri
7

Lanman, Jeremy Thomas. "A governance reference model for service-oriented architecture-based common data initialization a case study of military simulation federation systems". Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4516.

Testo completo
Abstract (sommario):
Military simulation and command and control federations have become large, complex distributed systems that integrate with a variety of legacy and current simulations, and with real command and control systems both locally and globally. As these systems continue to become increasingly complex, so does the data that initializes them. This increased complexity has introduced a major problem in data initialization coordination, which organizations have handled in various ways. Service-oriented architecture (SOA) solutions have been introduced to promote easier data interoperability through the use of standards-based reusable services and common infrastructure. However, current SOA-based solutions do not incorporate formal governance techniques to drive the architecture in providing reliable, consistent, and timely information exchange. This dissertation identifies the need to establish governance for common data initialization service development oversight, presents current research and applicable solutions that address some aspects of SOA-based federation data service governance, and proposes a governance reference model for the development of SOA-based common data initialization services in military simulation and command and control federations.
ID: 029094323; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (Ph.D.)--University of Central Florida, 2010; Includes bibliographical references (p. 253-261).
Ph.D.
Doctorate
Department of Modeling and Simulation
Engineering and Computer Science
Gli stili APA, Harvard, Vancouver, ISO e altri
8

Nguyen, Huu Du. "System Reliability : Inference for Common Cause Failure Model in Contexts of Missing Information". Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS530.

Testo completo
Abstract (sommario):
The effective operation of an entire industrial system is sometimes strongly dependent on the reliability of its components. A failure of one of these components can lead to the failure of the system, with consequences that can be catastrophic, especially in the nuclear or aeronautics industries. To reduce this risk of catastrophic failures, a redundancy policy, consisting in duplicating the sensitive components in the system, is often applied. When one of these components fails, another takes over and the normal operation of the system can be maintained. However, some situations that lead to simultaneous failures of components in the system can be observed. They are called common cause failures (CCF). Analyzing, modeling, and predicting this type of failure event are therefore important issues and are the subject of the work presented in this thesis. We investigate several methods to deal with the statistical analysis of CCF events. Different algorithms to estimate the parameters of the models and to make predictive inference based on various types of missing data are proposed. We treat confounded data using a BFR (Binomial Failure Rate) model. An EM algorithm is developed to obtain the maximum likelihood estimates (MLE) of the parameters of the model. We introduce the modified-Beta distribution to develop a Bayesian approach. The alpha-factor model is considered to analyze uncertainties in CCF. We suggest a new formalism to describe uncertainty and consider Dirichlet distributions (nested, grouped) to make a Bayesian analysis. Recording of CCF cause data leads to incomplete contingency tables. For a Bayesian analysis of this type of table, we propose an algorithm relying on the inverse Bayes formula (IBF) and the Metropolis-Hastings algorithm. We compare our results with those obtained with the alpha-decomposition method, a method recently proposed in the literature.
Prediction of catastrophic events is addressed, and mapping strategies are described to suggest upper bounds of prediction intervals with the pivotal method and Bayesian techniques. Recent events have highlighted the importance of the reliability of redundant systems, and we hope that our work will contribute to a better understanding and prediction of the risks of major CCF events.
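The alpha-factor model mentioned above is amenable to a conjugate Bayesian update: with event counts n_k (events involving exactly k components) and a Dirichlet prior, the posterior is again Dirichlet. The sketch below shows only this textbook update, not the thesis's IBF/Metropolis-Hastings machinery:

```python
def alpha_factor_posterior(event_counts, prior=None):
    """Posterior mean of the alpha-factor parameters under a Dirichlet prior.
    event_counts[k-1] is the number of observed failure events involving
    exactly k components; with a Dirichlet(a_1..a_m) prior the posterior is
    Dirichlet(a_k + n_k), whose mean is (a_k + n_k) / (sum(a) + sum(n))."""
    m = len(event_counts)
    if prior is None:
        prior = [1.0] * m                 # uniform (Laplace) prior
    total = sum(event_counts) + sum(prior)
    return [(n + a) / total for n, a in zip(event_counts, prior)]

# Illustrative counts: many single-component events, a few CCF events.
posterior = alpha_factor_posterior([36, 2, 1, 1])
```

The conjugacy is what makes the alpha-factor parameterization convenient for sparse CCF data: prior pseudo-counts play the same role as observed events.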
Gli stili APA, Harvard, Vancouver, ISO e altri
9

Kang, Heechan. "Essays on methodologies in contingent valuation and the sustainable management of common pool resources". Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1141240444.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
10

McMorran, Alan Walter. "Using the Common Information Model for power systems as a framework for applications to support network data interchange for operations and planning". Thesis, University of Strathclyde, 2006. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=21648.

Testo completo
Abstract (sommario):
The Common Information Model (CIM) is an object-oriented representation of a power system used primarily as a data exchange format for power system operational control systems and as a common semantic model to facilitate enterprise application integration. The CIM has the potential to be used as much more than an intermediary exchange language and this thesis explores the use of the CIM as the core of a power systems toolkit for storing, processing, extracting and exchanging data directly as CIM objects. This thesis looks at the evolving nature of the CIM standard and proposes a number of extensions to support the use of the CIM in the UK power industry while maintaining, where possible, backwards compatibility with the IEC standard. The challenges in storing and processing large power system network models as native objects without sacrificing reliability and robustness are discussed and solutions proposed. A number of applications of this CIM software framework are described in this thesis aimed at facilitating the use of the CIM for exchanging data for network planning and operations. The development of novel algorithms is described that use the underlying CIM class structure to convert power system network data in a CIM format to the native, proprietary format of an external analysis application. The problem of validating CIM data against pre-defined profiles and the deficiencies of existing validation techniques is discussed. A novel validation system based on the CIM software framework is proposed that provides a means of performing a level of validation beyond any existing tools. Algorithms to allow the integration of independent power system network models in a CIM format are proposed that allow the automatic identification and removal of overlapping areas and integration of neighbouring networks. 
The development of an application to dynamically generate network diagrams of power system network models in CIM format via the novel application of existing, generic data visualisation tools is described. The use of web application technologies to create a remotely-accessible tool for creating power system network models in CIM format is described. Each of these applications supports a stage of the planning process allowing both planning and operational engineers to create, exchange and use data in the CIM format by providing tools with a native CIM architecture that can adapt to the evolving CIM standard.
Gli stili APA, Harvard, Vancouver, ISO e altri

Libri sul tema "OMOP common data model"

1

Gavin, William T. A common model approach to macroeconomics: Using panel data to reduce sampling error. [St. Louis, Mo.]: Federal Reserve Bank of St. Louis, 2003.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
2

Grassini, Maurizio, e Rossella Bardazzi, a cura di. Structural changes, international trade and multisectoral modelling. Florence: Firenze University Press, 2008. http://dx.doi.org/10.36253/978-88-8453-740-9.

Testo completo
Abstract (sommario):
In September 2007 the national team members of the International Inforum (Interindustry Forecasting Project at the University of Maryland) group held the XV annual World Conference in Trujillo, Spain. Such conferences offer participants the opportunity to report their achievements in the different fields concerning the macroeconomic multisectoral modeling approach and data development. The national partners build their country models on a common input-output accounting structure and a similar econometric modeling approach for sectoral and macroeconomic variables. In each conference, the contributions span the wide spectrum of research activities carried on within the Inforum system of models.
Gli stili APA, Harvard, Vancouver, ISO e altri
3

Jaworski, Barbara, Josef Rebenda, Reinhard Hochmuth, Stephanie Thomas, Michèle Artigue, Inés Gómez-Chacón, Sarah Khellaf et al. Inquiry in University Mathematics Teaching and Learning. Brno: Masaryk University Press, 2021. http://dx.doi.org/10.5817/cz.muni.m210-9983-2021.

Testo completo
Abstract (sommario):
The book presents developmental outcomes from an EU Erasmus+ project involving eight partner universities in seven countries in Europe. Its focus is the development of mathematics teaching and learning at university level to enhance the learning of mathematics by university students. Its theoretical focus is inquiry-based teaching and learning. It bases all activity on a three-layer model of inquiry: (1) Inquiry in mathematics and in the learning of mathematics in lecture, tutorial, seminar or workshop, involving students and teachers; (2) Inquiry in mathematics teaching involving teachers exploring and developing their own practices in teaching mathematics; (3) Inquiry as a research process, analysing data from layers (1) and (2) to advance knowledge in the field. As required by the Erasmus+ programme, it defines Intellectual Outputs (IOs) that will develop in the project. PLATINUM has six IOs: The Inquiry-based developmental model; Inquiry communities in mathematics learning and teaching; Design of mathematics tasks and teaching units; Inquiry-based professional development activity; Modelling as an inquiry process; Evaluation of inquiry activity with students. The project has developed Inquiry Communities, in each of the partner groups, in which mathematicians and educators work together in supportive collegial ways to promote inquiry processes in mathematics learning and teaching. Through involving students in inquiry activities, PLATINUM aims to encourage students' own in-depth engagement with mathematics, so that they develop conceptual understandings which go beyond memorisation and the use of procedures. Indeed, the eight partners together have formed an inquiry community, working together to achieve PLATINUM goals within the specific environments of their own institutions and cultures.
Together we learn from what we are able to achieve with respect to both common goals and diverse environments, bringing a richness of experience and learning to this important area of education. Inquiry communities enable participants to address the tensions and issues that emerge in developmental processes and to recognise the critical nature of the developmental process. Through engaging in inquiry-based development, partners are enabled and motivated to design activities for their peers, and for newcomers to university teaching of mathematics, to encourage their participation in new forms of teaching, design of teaching, and activities for students. Such professional development design is an important outcome of PLATINUM. One important area of inquiry-based activity is that of “modelling” in mathematics. Partners have worked together across the project to investigate the nature of modelling activities and their use with students. Overall, the project evaluates its activity in these various parts to gain insights into the success of inquiry-based teaching, learning and development, as well as the issues and tensions that are faced in putting its aims and goals into practice.
Gli stili APA, Harvard, Vancouver, ISO e altri
4

Designing a Common Interchange Format for Unit Data Using the Command and Control Information Exchange Data Model (C2IEDM) and XSLT. Storming Media, 2004.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
5

Munn, Michael, Sara Robinson e Valliappa Lakshmanan. Machine Learning Design Patterns: Solutions to Common Challenges in Data Preparation, Model Building, and MLOps. O'Reilly Media, Incorporated, 2020.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
6

Chance, Kelly, e Randall V. Martin. Data Fitting. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199662104.003.0011.

Testo completo
Abstract (sommario):
This chapter explores several of the most common and useful approaches to atmospheric data fitting, as well as the process of using air mass factors to produce vertical atmospheric column abundances from the line-of-sight slant columns determined by data fitting. An atmospheric spectrum or other type of atmospheric sounding is usually fitted to a parameterized physical model by minimizing a cost function, usually chi-squared. Linear fitting, where the model of the measurements is linear in the model parameters, is described first, followed by the more common nonlinear case. For nonlinear fitting, the standard Levenberg-Marquardt method is described, followed by the use of optimal estimation, one of several retrieval methods that make use of a priori information to provide regularization for the solution. In the context of optimal estimation, weighting functions, contribution functions, and averaging kernels are described. The Twomey-Tikhonov regularization procedure is presented. Correlated parameters, with the important example of Earth's atmospheric ozone, are discussed.
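The Levenberg-Marquardt idea described here, damping the Gauss-Newton normal equations and adapting the damping according to whether chi-squared decreased, can be sketched in a few lines. This is a bare illustration on synthetic data, not a production retrieval code:

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, n_iter=100, lam=1e-3):
    """Minimize chi-squared = sum(residual(p)**2) with a basic
    Levenberg-Marquardt loop: damped Gauss-Newton steps, with the damping
    parameter relaxed after a successful step and inflated after a failure."""
    p = np.asarray(p0, dtype=float)
    cost = np.sum(residual(p) ** 2)
    for _ in range(n_iter):
        r, J = residual(p), jac(p)
        A = J.T @ J + lam * np.eye(len(p))      # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        p_trial = p + step
        cost_trial = np.sum(residual(p_trial) ** 2)
        if cost_trial < cost:                   # accept: move, relax damping
            p, cost, lam = p_trial, cost_trial, lam * 0.3
        else:                                   # reject: increase damping
            lam *= 10.0
    return p

# Fit a two-parameter exponential to noise-free synthetic "measurements".
x = np.linspace(0.0, 4.0, 30)
y = 2.5 * np.exp(-1.3 * x)
res = lambda p: p[0] * np.exp(-p[1] * x) - y
jacobian = lambda p: np.column_stack([np.exp(-p[1] * x),
                                      -p[0] * x * np.exp(-p[1] * x)])
p_hat = levenberg_marquardt(res, jacobian, [1.0, 1.0])
```

Large damping makes the step behave like a short gradient-descent move; small damping recovers the fast Gauss-Newton step near the minimum, which is exactly the trade-off the chapter describes.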
Gli stili APA, Harvard, Vancouver, ISO e altri
7

Cheng, Russell. Embedded Model Problem. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0005.

Testo completo
Abstract (sommario):
This chapter introduces embedded models. This is a special case of a parametric model which cannot be obtained simply by setting the parameters to particular values in a simple way. An example is the regression function y = b[1−exp(−ax)], which is always curved when a and b have fixed values. But letting a tend to zero and b tend to infinity simultaneously, whilst keeping ab = c fixed, yields y = cx, a straight-line special case. When this is the true model, fitting the original two-parameter model leads to very unstable and individually meaningless estimates of a and b. Such embedded models are actually very common in the literature, leading to confusion in interpretation of results when undetected. In this chapter, embeddedness is defined and a large number of regression embedded model examples given. Detection and removal of embeddedness by reparametrization is discussed. Two real data numerical examples are given.
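The instability described here is easy to demonstrate numerically: holding ab = c fixed while a tends to zero, the two-parameter curve collapses onto the straight line y = cx, so data generated near this limit cannot pin down a and b individually. A small sketch:

```python
import math

def curve(a, b, x):
    """Two-parameter regression function y = b * (1 - exp(-a * x))."""
    return b * (1.0 - math.exp(-a * x))

# Hold ab = c fixed and let a -> 0: the curve flattens onto y = c * x,
# so near this limit the data determine c = a*b but not a and b separately.
c, x0 = 2.0, 1.5
approx = [curve(a, c / a, x0) for a in (1.0, 0.1, 0.001)]  # tends to c * x0
```

For a = 0.001 the curve is already within a fraction of a percent of the straight line at x = 1.5, even though b = 2000 is enormous, which is why fitted values of a and b are individually meaningless near the embedded model.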
Gli stili APA, Harvard, Vancouver, ISO e altri
8

Brazier, John, Julie Ratcliffe, Joshua A. Salomon e Aki Tsuchiya. Design and analysis of health state valuation data for model-based economic evaluations and for economic evaluations alongside clinical trials. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198725923.003.0009.

Testo completo
Abstract (sommario):
This chapter focuses upon the needs of two approaches, economic evaluations based on decision analytic models, and those alongside clinical trials in terms of the collection and analysis of health state values. The first section of the chapter presents requirements that are likely to be common to any study in which health state values are collected from patients and/or members of the general population, including: who to ask, mode of administration, timing of assessments, sample size, and handling uncertainty. The second section of the chapter considers issues specific to trial-based economic evaluations, and the final section considers issues specific to the design and analysis of health state valuation data for economic models.
Gli stili APA, Harvard, Vancouver, ISO e altri
9

Lattman, Eaton E., Thomas D. Grant e Edward H. Snell. Shape Reconstructions from Small Angle Scattering Data. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199670871.003.0004.

Testo completo
Abstract (sommario):
This chapter discusses recovering shape or structural information from SAXS data. Key to any such process is the ability to generate a calculated intensity from a model, and to compare this curve with the experimental one. Models for the particle scattering density can be approximated as pure homogenenous geometric shapes. More complex particle surfaces can be represented by spherical harmonics or by a set of close-packed beads. Sometimes structural information is known for components of a particle. Rigid body modeling attempts to rotate and translate structures relative to one another, such that the resulting scattering profile calculated from the model agrees with the experimental SAXS data. More advanced hybrid modelling procedures aim to incorporate as much structural information as is available, including modelling protein dynamics. Solutions may not always contain a homogeneous set of particles. A common case is the presence of two or more conformations of a single particle or a mixture of oligomeric species. The method of singular value decomposition can extract scattering for conformationally distinct species.
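The singular value decomposition approach to mixtures mentioned at the end of this abstract can be sketched on synthetic data: a series of scattering curves that are all linear combinations of two species has exactly two singular values above the numerical noise floor. The profiles below are simple stand-ins for real SAXS curves:

```python
import numpy as np

q = np.linspace(0.01, 0.5, 200)              # scattering vector grid
# Two synthetic "species" profiles and a titration-like series of mixtures
# with varying fractions of each.
s1 = np.exp(-(q * 8.0) ** 2)
s2 = np.exp(-(q * 3.0) ** 2)
fractions = np.linspace(0.1, 0.9, 12)
data = np.column_stack([f * s1 + (1.0 - f) * s2 for f in fractions])

# The number of singular values well above the numerical noise floor
# estimates how many independent species underlie the mixture series.
singular_values = np.linalg.svd(data, compute_uv=False)
n_species = int(np.sum(singular_values > 1e-8 * singular_values[0]))
```

With real, noisy data the cutoff must be chosen against the measured noise level rather than machine precision, but the rank argument is the same.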
Gli stili APA, Harvard, Vancouver, ISO e altri
10

Cai, Zongwu. Functional Coefficient Models for Economic and Financial Data. A cura di Frédéric Ferraty e Yves Romain. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199568444.013.6.

Testo completo
Abstract (sommario):
This article discusses the use of functional coefficient models for economic and financial data analysis. It first provides an overview of recent developments in the nonparametric estimation and testing of functional coefficient models, with particular emphasis on the kernel local polynomial smoothing method, before considering misspecification testing as an important econometric question when fitting a functional (varying) coefficient model or a trending time-varying coefficient model. It then describes two major real-life applications of functional coefficient models in economics and finance: the first deals with the use of functional coefficient instrumental-variable models to investigate the empirical relation between wages and education in a random sample of young Australian female workers from the 1985 wave of the Australian Longitudinal Survey, and the second is concerned with the use of functional coefficient beta models to analyze the common stock price of Microsoft stock (MSFT) during the year 2000 using the daily closing prices.
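The kernel smoothing idea behind functional coefficient estimation can be sketched as a locally weighted least-squares fit of y = beta(z)·x at a point z0. The sketch below uses a local constant fit with a Gaussian kernel; the chapter's local polynomial estimator additionally fits slope terms:

```python
import numpy as np

def local_beta(z0, z, x, y, h):
    """Kernel-weighted least-squares estimate of beta(z0) in the functional
    coefficient model y = beta(z) * x + noise (local constant fit with a
    Gaussian kernel of bandwidth h in the smoothing variable z)."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)
    return np.sum(w * x * y) / np.sum(w * x * x)

# Noise-free synthetic data: the coefficient of x varies smoothly with z.
z = np.linspace(0.0, 1.0, 2000)
x = 1.0 + 0.5 * np.sin(7.0 * z)
y = (1.0 + z ** 2) * x                 # true beta(z) = 1 + z^2
beta_half = local_beta(0.5, z, x, y, h=0.02)   # near beta(0.5) = 1.25
```

The bandwidth h controls the usual bias-variance trade-off: small h tracks the curvature of beta(z) closely but, with noisy data, at the cost of higher variance.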
Gli stili APA, Harvard, Vancouver, ISO e altri

Capitoli di libri sul tema "OMOP common data model"

1

Martínez Casas, David, Sebastián Villarroya Fernández, Moisés Vilar Vidal, José Manuel Cotos Yáñez, José Ramón Ríos Viqueira e José Angel Taboada González. "Common Data Model in AmI Environments". In Ubiquitous Computing and Ambient Intelligence. Personalisation and User Adapted Services, 212–15. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13102-3_35.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
2

Pearson, Mitchell, Brian Knight, Devin Knight e Manuel Quintana. "Common Data Services and Model-Driven Apps". In Pro Microsoft Power Platform, 61–70. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6008-1_8.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
3

Oberhofer, Walter, e Klaus Haagen. "Common Factor Model Stochastic Model, Data Analysis Technique or What?" In Advances in GLIM and Statistical Modelling, 151–58. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4612-2952-0_24.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
4

Cao, Jie. "Common Business Big Data Management and Decision Model". In E-Commerce Big Data Mining and Analytics, 125–80. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3588-8_8.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
5

Curry, Edward, Andreas Metzger, Arne J. Berre, Andrés Monzón e Alessandra Boggio-Marzet. "A Reference Model for Big Data Technologies". In The Elements of Big Data Value, 127–51. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68176-0_6.

Testo completo
Abstract (sommario):
The Big Data Value (BDV) Reference Model has been developed with input from technical experts and stakeholders along the whole big data value chain. The BDV Reference Model may serve as a common reference framework to locate big data technologies on the overall IT stack. It addresses the main technical concerns and aspects to be considered for big data value systems. The BDV Reference Model enables the mapping of existing and future data technologies within a common framework. Within this chapter, we detail the reference model in more detail and show how it can be used to manage a portfolio of research and innovation projects.
Gli stili APA, Harvard, Vancouver, ISO e altri
6

Lange, Christoph, Jörg Langkau e Sebastian Bader. "The IDS Information Model: A Semantic Vocabulary for Sovereign Data Exchange". In Designing Data Spaces, 111–27. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93975-5_7.

Testo completo
Abstract (sommario):
The Information Model of the International Data Spaces (IDS-IM) is the central integration enabler for the semantic interoperability in any IDS ecosystem. It contains the terms and relationships to describe the IDS components, their interactions, and conditions under which data exchange and usage is possible. It thus presents the common denominator for the IDS and the foundation for any IDS communication. As such, its evolution cycles are deeply related with the maturity process of the IDS itself. This chapter makes the following contributions related to the IDS Information Model: a brief overview of the vocabulary, its guiding principles, and general features is supplied. Based on these explanations, several upcoming aspects are discussed that reflect the latest state of discussions about the declaration and cryptographic assurance of identities and decentralized identifiers, and how these need to be treated to ensure compliance with the IDS principles. In addition, we explain the latest developments around the IDS Usage Contract Language, the module of the IDS-IM that expresses Usage Contracts, and data restrictions. These definitions are further implemented with infrastructure components, in particular the presented, newly specified Policy Information Point and the Participant Information Service of the IDS.
Gli stili APA, Harvard, Vancouver, ISO e altri
7

Bol Raap, Wouter, Maria-Eugenia Iacob, Marten van Sinderen e Sebastian Piest. "An Architecture and Common Data Model for Open Data-Based Cargo-Tracking in Synchromodal Logistics". In On the Move to Meaningful Internet Systems: OTM 2016 Conferences, 327–43. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48472-3_19.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
8

Voigt, Hannes, e Wolfgang Lehner. "Flexible Relational Data Model – A Common Ground for Schema-Flexible Database Systems". In Advances in Databases and Information Systems, 25–38. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10933-6_3.

9

Ostler, David V., John J. Harrington and Gisle Hannemyr. "A Common Reference Model for Healthcare Data Exchange: P1157 MEDIX System Architecture". In Computers and Medicine, 130–39. New York, NY: Springer New York, 1994. http://dx.doi.org/10.1007/978-1-4612-2698-7_8.

10

Fedorov, S. S., S. D. Kazakov, Vu Ngoc Tuyen and M. I. Safin. "Automating the Process of Organizing a Common Data Environment for Information Model". In Building Life-cycle Management. Information Systems and Technologies, 49–57. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96206-7_5.


Conference papers on the topic "OMOP common data model"

1

Li, Jianbin, Yonglu Han and ZiLong Yin. "A Data-Model Cycle-Driven Data Anomaly Detection Method Based on Microgrid Point of Common Coupling". In 2024 International Conference on Artificial Intelligence and Power Systems (AIPS), 177–81. IEEE, 2024. http://dx.doi.org/10.1109/aips64124.2024.00046.

2

Gavrilovski, Alek, Kyle Collins and Dimitri Mavris. "Model-Enhanced Analysis of Flight Data for Helicopter Flight Operations Quality Assurance". In Vertical Flight Society 72nd Annual Forum & Technology Display, 1–14. The Vertical Flight Society, 2016. http://dx.doi.org/10.4050/f-0072-2016-11502.

Abstract (summary):
Helicopter Flight Operations Quality Assurance (HFOQA) systems promise safety improvements in flight operations through the use of on-board data from regular flights. Because they track the manner in which vehicles are operated, HFOQA systems can provide data pertaining to many types of accidents in which human factors have been implicated. For helicopters, most implementations of such systems rely on experts to determine preset limits on combinations of flight parameters; these limits are also known as "safety events". A common practical problem in HFOQA systems is the need for sufficient knowledge of a condition before events can be defined and used proactively. There has been recent interest in alternative approaches to detecting faults and unsafe events in aviation that could overcome this inherent limitation of HFOQA. In this work, a model-based approach is taken to extend the capabilities of traditional HFOQA analysis, particularly in the definition and detection of monitored conditions. For localized conditions, the use of simple models in place of traditional safety events is investigated and demonstrated. Detection based on model evaluation corresponds well to the flight condition it was designed to capture, while incurring minimal additional computation. The model-based boundaries account for changes in vehicle parameters and operating conditions, whereas traditional safety events must be modified through an iterative process. For a more general approach, a dynamic model was considered: by evaluating the vehicle's response to a set of control inputs spanning a range about the trim state, it was possible to determine the boundaries of safe input. Inputs falling outside this boundary were also assessed for risk based on the time required to reach a critical state.
In both cases, the results offer improvements over the current state of practice and can be deployed in a data-monitoring system directly or after intermediate post-processing steps.
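The abstract above contrasts fixed "safety event" limits with model-based boundaries. As a rough illustration of that distinction (this is not code from the cited paper; the parameter names, limits, and the toy model below are all hypothetical), a fixed threshold fires on preset limits, while a model-based check lets the allowable envelope vary with vehicle state:

```python
# Toy comparison of the two detection styles discussed in the abstract.
# All parameter names, numeric limits, and the linear "model" are invented
# for illustration only.

def fixed_threshold_event(airspeed_kts: float, bank_deg: float,
                          limit_kts: float = 140.0,
                          limit_bank: float = 45.0) -> bool:
    """Traditional HFOQA-style safety event: fire when preset limits are exceeded."""
    return airspeed_kts > limit_kts or abs(bank_deg) > limit_bank


def model_based_event(airspeed_kts: float, bank_deg: float,
                      weight_lbs: float) -> bool:
    """Model-based limit: the allowable bank angle shrinks as airspeed and
    weight rise (a toy stand-in for evaluating a vehicle model instead of
    using fixed limits)."""
    allowed_bank = 60.0 - 0.05 * airspeed_kts - 0.001 * (weight_lbs - 5000.0)
    return abs(bank_deg) > allowed_bank


if __name__ == "__main__":
    # Same flight state, two verdicts: inside the fixed limits, but outside
    # the model-based boundary once the aircraft's weight is considered.
    print(fixed_threshold_event(airspeed_kts=130.0, bank_deg=44.0))   # False
    print(model_based_event(airspeed_kts=130.0, bank_deg=44.0,
                            weight_lbs=15000.0))                      # True
```

Here the same flight state passes the fixed limits but violates the model-based boundary, which is the kind of state-dependent condition the authors argue fixed events miss.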
3

Khan, Umair, Huzaifa Kothari, Aditya Kuchekar and Reeta Koshy. "Common Data Model for Healthcare data". In 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT). IEEE, 2018. http://dx.doi.org/10.1109/rteict42901.2018.9012520.

4

Khan, Umair M., Huzaifa Kothari, Aditya Kuchekar and Reeta Koshy. "Common Data Model for Healthcare Data". In 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). IEEE, 2018. http://dx.doi.org/10.1109/icccnt.2018.8493901.

5

Cerasoli, Caramen, Wiley Zhao, John J. Santapietro, R. E. McAlinden, B. F. Smith and P. A. Jacyk. "Common data link (CDL) interference model". In AeroSense 2002, edited by Nickolas L. Faust, James L. Kurtz and Robert Trebits. SPIE, 2002. http://dx.doi.org/10.1117/12.488299.

6

Zhang, Qing-Hua. "The Model of Common Data Security Access". In 2008 International Conference on Apperceiving Computing and Intelligence Analysis (ICACIA 2008). IEEE, 2008. http://dx.doi.org/10.1109/icacia.2008.4769993.

7

Klímek, Jakub, and Martin Nečaský. "Integration and evolution of XML data via common data model". In the 1st International Workshop. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1754239.1754283.

8

Amato, Flora, Valentina Casola, Andrea Gaglione and Antonino Mazzeo. "A Common Data Model for Sensor Network Integration". In 2010 International Conference on Complex, Intelligent and Software Intensive Systems (CISIS). IEEE, 2010. http://dx.doi.org/10.1109/cisis.2010.124.

9

"YEARBOOK DATA INTEGRATION BASED ON COMMON WAREHOUSE MODEL". In Special Session on Project Management and Service Science. SciTePress - Science and Technology Publications, 2011. http://dx.doi.org/10.5220/0003586205690573.

10

Niu, Gengtian, Feng Zhu, Zhong Chen and Yanjie Liu. "Efficient Visualization System Construction Using Common Data Model". In 2020 IEEE 2nd International Conference on Civil Aviation Safety and Information Technology (ICCASIT). IEEE, 2020. http://dx.doi.org/10.1109/iccasit50869.2020.9368519.


Organization reports on the topic "OMOP common data model"

1

Anderson, Alexander, Eric Stephan and Thomas McDermott. Enabling Data Exchange and Data Integration with the Common Information Model. Office of Scientific and Technical Information (OSTI), March 2022. http://dx.doi.org/10.2172/1922947.

2

Huynh, Giap, and Yansen Wang. Implementing Network Common Data Form (netCDF) for the 3DWF Model. Fort Belvoir, VA: Defense Technical Information Center, February 2016. http://dx.doi.org/10.21236/ad1005366.

3

Barguil, S., and Q. Wu. A Common YANG Data Model for Layer 2 and Layer 3 VPNs. Edited by O. Gonzalez de Dios and M. Boucadair. RFC Editor, February 2022. http://dx.doi.org/10.17487/rfc9181.

4

Gavin, William T., and Athena T. Theodorou. A Common Model Approach to Macroeconomics: Using Panel Data to Reduce Sampling Error. Federal Reserve Bank of St. Louis, 2003. http://dx.doi.org/10.20955/wp.2003.045.

5

Loomis, Mary. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 28. Data Aggregators Development Specification. Fort Belvoir, VA: Defense Technical Information Center, November 1985. http://dx.doi.org/10.21236/ada181711.

6

Althoff, J. L., and W. J. Bradley. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 29. Data Aggregators Product Specification. Fort Belvoir, VA: Defense Technical Information Center, November 1985. http://dx.doi.org/10.21236/ada182015.

7

Apicella, M. L., and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 28. Data Aggregators Development Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada252455.

8

Apicella, M., J. Slaton and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 29. Data Aggregators Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada252531.

9

Rollins, D., M. Loomis, J. Hogan and B. Leifeste. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 1. CDM Administrator's Manual. Fort Belvoir, VA: Defense Technical Information Center, November 1985. http://dx.doi.org/10.21236/ada181577.

10

Althoff, J. L., M. L. Apicella, M. P. Bernier, S. Singh and D. B. Thompson. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 7. NDDL User's Guide. Fort Belvoir, VA: Defense Technical Information Center, November 1985. http://dx.doi.org/10.21236/ada181955.
