
Dissertations / Theses on the topic "OMOP common data model"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

See the top 26 dissertations (master's and doctoral theses) for research on the topic "OMOP common data model".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication in .pdf format and read the abstract (summary) of the work online, if it is included in the metadata.

Browse dissertations from many scientific disciplines and compile an accurate bibliography.

1

Lang, Lukas. "Mapping eines deutschen, klinischen Datensatzes nach OMOP Common Data Model / Lukas Lang ; Gutachter: Hans-Ulrich Prokosch ; Betreuer: Hans-Ulrich Prokosch". Erlangen: Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2020. http://d-nb.info/1220911135/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fruchart, Mathilde. "Réutilisation des données de soins premiers : spécificités, standardisation et suivi de la prise en charge dans les Maisons de Santé Pluridisciplinaires". Electronic Thesis or Diss., Université de Lille (2022-....), 2024. http://www.theses.fr/2024ULILS040.

Full text
Abstract:
Context: Reusing healthcare data beyond its initial use helps to improve patient care, facilitate research, and optimize the management of healthcare organizations. To achieve this, data are extracted from healthcare software, transformed, and stored in a data warehouse through an extract-transform-load (ETL) process. Common data models, such as the OMOP model, exist to store data in a homogeneous, source-independent format. Healthcare claims data centralized in the French national database (SNDS), hospital data, social network and forum data, and primary care data are data sources representative of the patient care pathway. The last of these has not yet been fully exploited. Objective: The aim of this thesis was to incorporate the specificities of primary care data reuse to implement a data warehouse, while highlighting the contribution of primary care to the field of research. Methods: The first step was to extract the primary care data of a multidisciplinary health center (MHC) from the WEDA care software. A primary care data warehouse was implemented using an ETL process. Structural transformation (harmonization of the database structure) and semantic transformation (harmonization of the vocabulary used in the data) were implemented to align the data with the common OMOP data model. A process generalization tool was developed to integrate general practitioner (GP) data from multiple care structures and tested on four MHCs. Subsequently, an algorithm for assessing persistence with a prescribed treatment and several dashboards were developed. Thanks to the use of the OMOP model, these tools can be shared with other MHCs. Finally, retrospective studies were conducted on the diabetic population of the four MHCs. Results: Over a period of more than 20 years, data of 117,005 patients from four MHCs were loaded into the OMOP model using our ETL process optimization tool. These data include biological results from community laboratories and GP consultation data. The vocabulary specific to primary care was aligned with the standard concepts of the model. An algorithm for assessing persistence with treatment prescribed by the GP, as well as a dashboard for monitoring performance indicators (ROSP) and practice activity, were developed. Based on the data warehouses of the four MHCs, we described the follow-up of diabetic patients. These studies use biological results, consultation data, and drug prescriptions in OMOP format. The scripts of these studies and the tools developed can be shared. Conclusion: Primary care data represent a potential for reusing data for research purposes and for improving the quality of care. They complement existing databases (hospital, national, and social networks) by integrating community clinical data. The use of a common data model facilitates the development of tools and the conduct of studies, while enabling their sharing. Studies can be replicated in different centers to compare results.
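The ETL flow this abstract describes, structural reshaping plus semantic mapping to standard concepts, can be illustrated with a minimal sketch. The snippet below loads one hypothetical lab result into an OMOP CDM MEASUREMENT table: the source column names and the concept ID are illustrative assumptions, not WEDA's actual schema or a verified OHDSI vocabulary mapping, while the OMOP column names follow CDM v5.

```python
import sqlite3

# Hypothetical source row as extracted from the primary-care software
# (column names are illustrative, not WEDA's real schema).
source_row = {"patient_id": 42, "test_name": "HbA1c", "value": 6.2,
              "unit": "%", "date": "2021-03-15"}

# Semantic transformation: map the local vocabulary to OMOP standard
# concept IDs (this concept_id is a placeholder; real mappings come
# from the OHDSI vocabularies).
concept_map = {"HbA1c": 3004410}

conn = sqlite3.connect("omop_cdm.db")
conn.execute("""CREATE TABLE IF NOT EXISTS measurement (
    measurement_id INTEGER PRIMARY KEY,
    person_id INTEGER,
    measurement_concept_id INTEGER,
    measurement_date TEXT,
    value_as_number REAL,
    unit_source_value TEXT,
    measurement_source_value TEXT)""")

# Structural transformation: reshape the source row into the OMOP
# MEASUREMENT table layout.
conn.execute(
    "INSERT INTO measurement (person_id, measurement_concept_id, "
    "measurement_date, value_as_number, unit_source_value, "
    "measurement_source_value) VALUES (?, ?, ?, ?, ?, ?)",
    (source_row["patient_id"], concept_map[source_row["test_name"]],
     source_row["date"], source_row["value"], source_row["unit"],
     source_row["test_name"]))
conn.commit()
```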
APA, Harvard, Vancouver, ISO, and other styles
3

Kovács, Zsolt. "The integration of product data with workflow management systems through a common data model". Thesis, University of Bristol, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Davis, Duane T. "Design, implementation and testing of a common data model supporting autonomous vehicle compatibility and interoperability". Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FDavis%5FPhD.pdf.

Full text
Abstract:
Dissertation (Ph.D. in Computer Science)--Naval Postgraduate School, September 2006.
Dissertation Advisor(s): Don Brutzman. Includes bibliographical references (p. 317-328). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
5

Hodges, Glenn A. "Designing a common interchange format for unit data using the Command and Control information exchange data model (C2IEDM) and XSLT". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sep%5FHodges.pdf.

Full text
Abstract:
Thesis (M.S. in Modeling Virtual Environments and Simulation (MOVES))--Naval Postgraduate School, Sept. 2004.
Thesis advisor(s): Curtis Blais, Don Brutzman. Includes bibliographical references (p. 95-98). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
6

Neto, Mario Barreto de Moura. "Application of IEC 61970 for data standardization and smart grid interoperability". Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=11627.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Nível Superior
In the context of the current modernization process that electrical power systems are going through, the concept of Smart Grids and its foundations serve as guidelines. In the search for interoperability, communication between heterogeneous systems has been the subject of constant and increasing development. Against this background, the work presented in this dissertation focuses primarily on the study and application of the data model contained in the IEC 61970 series of standards, better known as the Common Information Model (CIM). To this end, the general aspects of the standard are presented, supported by the concepts of UML and XML, which are essential for a complete understanding of the model. Certain features of the CIM, such as its extensibility and generality, are emphasized, which qualify it as an ideal data model for establishing interoperability. To exemplify the use of the model, a case study was performed in which a medium-voltage electrical distribution network was modeled so as to make it suitable for integration with a multi-agent system in a standardized format and, consequently, suitable for interoperability. The complete process of modeling an electrical network using the CIM is shown. Finally, the development of an interface is proposed as a mechanism that enables human intervention in the data flow between the integrated systems. The use of PHP with a MySQL database is justified by their suitability for various usage environments. Together, the interface, the electrical network simulator, and the multi-agent system for automatic service restoration formed a system whose information was fully integrated.
APA, Harvard, Vancouver, ISO, and other styles
7

Lanman, Jeremy Thomas. "A governance reference model for service-oriented architecture-based common data initialization a case study of military simulation federation systems". Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4516.

Full text
Abstract:
Military simulation and command and control federations have become large, complex distributed systems that integrate with a variety of legacy and current simulations, and with real command and control systems, both locally and globally. As these systems grow increasingly complex, so does the data that initializes them. This increased complexity has introduced a major problem in data initialization coordination, which has been handled by many organizations in various ways. Service-oriented architecture (SOA) solutions have been introduced to promote easier data interoperability through the use of standards-based reusable services and common infrastructure. However, current SOA-based solutions do not incorporate formal governance techniques to drive the architecture in providing reliable, consistent, and timely information exchange. This dissertation identifies the need to establish governance for common data initialization service development oversight, presents current research and applicable solutions that address some aspects of SOA-based federation data service governance, and proposes a governance reference model for the development of SOA-based common data initialization services in military simulation and command and control federations.
Thesis (Ph.D.)--University of Central Florida, 2010. Includes bibliographical references (p. 253-261).
Ph.D.
Doctorate
Department of Modeling and Simulation
Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
8

Nguyen, Huu Du. "System Reliability : Inference for Common Cause Failure Model in Contexts of Missing Information". Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS530.

Full text
Abstract:
The effective operation of an entire industrial system sometimes depends strongly on the reliability of its components. A failure of one of these components can lead to failure of the whole system, with consequences that can be catastrophic, especially in the nuclear or aeronautics industries. To reduce this risk of catastrophic failure, a redundancy policy, consisting in duplicating the sensitive components in the system, is often applied. When one of these components fails, another takes over and the normal operation of the system can be maintained. However, situations that lead to simultaneous failures of components in the system can be observed; these are called common cause failures (CCF). Analyzing, modeling, and predicting this type of failure event is therefore an important issue and is the subject of the work presented in this thesis. We investigate several methods for the statistical analysis of CCF events. Different algorithms are proposed to estimate the parameters of the models and to make predictive inference based on various types of missing data. We treat confounded data using a BFR (Binomial Failure Rate) model. An EM algorithm is developed to obtain the maximum likelihood estimates (MLE) of the model parameters. We introduce the modified-Beta distribution to develop a Bayesian approach. The alpha-factors model is considered to analyze uncertainties in CCF. We suggest a new formalism to describe uncertainty and consider Dirichlet distributions (nested, grouped) for a Bayesian analysis. Recording of CCF cause data leads to incomplete contingency tables. For a Bayesian analysis of this type of table, we propose an algorithm relying on the inverse Bayes formula (IBF) and the Metropolis-Hastings algorithm. We compare our results with those obtained with the alpha-decomposition method, a recent method proposed in the literature. Prediction of catastrophic events is addressed, and mapping strategies are described to suggest upper bounds of prediction intervals with pivotal and Bayesian techniques. Recent events have highlighted the importance of the reliability of redundant systems, and we hope that our work will contribute to a better understanding and prediction of the risks of major CCF events.
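For readers unfamiliar with the BFR model mentioned above, a minimal sketch may help. Under the Binomial Failure Rate model, each component fails independently at rate lambda, and common-cause shocks arrive at rate mu, with each of the n components failing in a shock independently with probability p. The snippet below computes the resulting rates of events in which exactly k components fail; the parameter values are illustrative, not estimates from the thesis.

```python
from math import comb

def bfr_event_rates(n, lam, mu, p):
    """Rates of simultaneous failures of exactly k of n components under the
    Binomial Failure Rate model: independent failures at rate lam per
    component, plus common-cause shocks at rate mu in which each component
    fails independently with probability p."""
    rates = {}
    for k in range(1, n + 1):
        shock = mu * comb(n, k) * p**k * (1 - p) ** (n - k)
        # Only single failures also receive the independent contribution.
        rates[k] = shock + (n * lam if k == 1 else 0.0)
    return rates

# Example: a 4-component group with illustrative parameter values.
print(bfr_event_rates(n=4, lam=1e-3, mu=1e-4, p=0.5))
```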
APA, Harvard, Vancouver, ISO, and other styles
9

Kang, Heechan. "Essays on methodologies in contingent valuation and the sustainable management of common pool resources". Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1141240444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

McMorran, Alan Walter. "Using the Common Information Model for power systems as a framework for applications to support network data interchange for operations and planning". Thesis, University of Strathclyde, 2006. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=21648.

Full text
Abstract:
The Common Information Model (CIM) is an object-oriented representation of a power system used primarily as a data exchange format for power system operational control systems and as a common semantic model to facilitate enterprise application integration. The CIM has the potential to be used as much more than an intermediary exchange language and this thesis explores the use of the CIM as the core of a power systems toolkit for storing, processing, extracting and exchanging data directly as CIM objects. This thesis looks at the evolving nature of the CIM standard and proposes a number of extensions to support the use of the CIM in the UK power industry while maintaining, where possible, backwards compatibility with the IEC standard. The challenges in storing and processing large power system network models as native objects without sacrificing reliability and robustness are discussed and solutions proposed. A number of applications of this CIM software framework are described in this thesis aimed at facilitating the use of the CIM for exchanging data for network planning and operations. The development of novel algorithms is described that use the underlying CIM class structure to convert power system network data in a CIM format to the native, proprietary format of an external analysis application. The problem of validating CIM data against pre-defined profiles and the deficiencies of existing validation techniques is discussed. A novel validation system based on the CIM software framework is proposed that provides a means of performing a level of validation beyond any existing tools. Algorithms to allow the integration of independent power system network models in a CIM format are proposed that allow the automatic identification and removal of overlapping areas and integration of neighbouring networks. The development of an application to dynamically generate network diagrams of power system network models in CIM format via the novel application of existing, generic data visualisation tools is described. The use of web application technologies to create a remotely-accessible tool for creating power system network models in CIM format is described. Each of these applications supports a stage of the planning process allowing both planning and operational engineers to create, exchange and use data in the CIM format by providing tools with a native CIM architecture that can adapt to the evolving CIM standard.
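CIM network models of the kind exchanged by the applications described here are conventionally serialized as RDF/XML (the IEC 61970-552 "CIM XML" format). The sketch below builds a one-breaker document with Python's standard library; the exact CIM namespace URI varies by schema version, so the one used here is an illustrative assumption.

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
# CIM schema namespace (version-specific; this URI is illustrative).
CIM = "http://iec.ch/TC57/2009/CIM-schema-cim14#"
ET.register_namespace("rdf", RDF)
ET.register_namespace("cim", CIM)

root = ET.Element(f"{{{RDF}}}RDF")
# One Breaker instance identified by an rdf:ID, as in CIM XML exchanges.
breaker = ET.SubElement(root, f"{{{CIM}}}Breaker", {f"{{{RDF}}}ID": "_BRK1"})
# The name attribute is inherited from IdentifiedObject in the CIM hierarchy.
name = ET.SubElement(breaker, f"{{{CIM}}}IdentifiedObject.name")
name.text = "Feeder 1 breaker"

print(ET.tostring(root, encoding="unicode"))
```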
APA, Harvard, Vancouver, ISO, and other styles
11

Kong, Xiangjun. "An approach to open virtual commissioning for component-based automation". Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/13572.

Full text
Abstract:
Increasing market demands for highly customised products with shorter time-to-market and at lower prices are forcing manufacturing systems to be built and operated in more efficient ways. In order to overcome some of the limitations of traditional methods of automation system engineering, this thesis focuses on the creation of a new approach to Virtual Commissioning (VC). In current VC approaches, virtual models are driven by pre-programmed PLC control software. These approaches are still time-consuming and heavily reliant on control expertise, as the required programming and debugging activities are mainly performed by control engineers. Another current limitation is that virtual models validated during VC are difficult to reuse due to a lack of tool-independent data models. Therefore, in order to maximise the potential of VC, there is a need for new VC approaches and tools to address these limitations. The main contributions of this research are: (1) to develop a new approach and the related engineering tool functionality for directly deploying PLC control software based on component-based VC models and reusable components; and (2) to build tool-independent common data models for describing component-based virtual automation systems in order to enable data reusability.
APA, Harvard, Vancouver, ISO, and other styles
12

Lukmanova, Elizaveta, e Gabriele Tondl. "Macroeconomic Imbalances and Business Cycle Synchronization. Why Common Economic Governance is Imperative for the Eurozone". WU Vienna University of Economics and Business, 2016. http://epub.wu.ac.at/5087/1/wp229.pdf.

Full text
Abstract:
This paper investigates a new category of influential factors on business cycle synchronization (BCS), so far hardly considered in the BCS literature: it provides an empirical assessment of the impact of macroeconomic imbalances, as monitored by the European Commission through the scoreboard indicators since 2011, on BCS in the Eurozone. We use a quarterly data set covering the period 2002-2012 and estimate the direct and indirect effects of macroeconomic imbalances in the pre- and post-crisis periods in a simultaneous equations model. Business cycle correlation between euro area (EA) members is measured by the recently proposed dynamic conditional correlation of Engle (2002), which can better identify synchronous and asynchronous behaviour of business cycles than the commonly used measures. We find that emerging differences between EA members in the current account, in government deficit and public debt, and in private debt and unit labor cost developments have reduced BCS in the EA, even more in the post-crisis period than before. Moreover, these explanatory factors of BCS generally reinforce each other and are also influenced by other critical macro imbalances. Since BCS is essential in a monetary union, this paper provides clear support that stronger, common economic governance would be important for the functioning and survival of the Eurozone. (authors' abstract)
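The dynamic conditional correlation measure referred to above has a compact form: given residuals standardized by their univariate volatilities, Engle's (2002) recursion updates a quasi-correlation matrix Q_t from the long-run correlation S and the latest residual outer product, then rescales Q_t to a proper correlation matrix each period. A minimal sketch follows; the parameter values a and b are illustrative placeholders, not estimates from the paper's data.

```python
import numpy as np

def dcc_correlations(eps, a=0.05, b=0.90):
    """Dynamic conditional correlations (Engle, 2002) from standardized
    residuals eps of shape (T, N); a and b are the DCC parameters
    (in practice estimated by quasi-maximum likelihood)."""
    T, N = eps.shape
    S = np.corrcoef(eps, rowvar=False)  # unconditional correlation target
    Q = S.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)       # rescale Q to a correlation matrix
        # Update Q with the period-t shock for use at t + 1.
        Q = (1 - a - b) * S + a * np.outer(eps[t], eps[t]) + b * Q
    return R

# Example with simulated residuals for two business cycles.
rng = np.random.default_rng(0)
R = dcc_correlations(rng.standard_normal((200, 2)))
print(R[-1])  # conditional correlation matrix in the last quarter
```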
Series: Department of Economics Working Paper Series
APA, Harvard, Vancouver, ISO, and other styles
13

Sivakumar, Krish. "CAD feature development and abstraction for process planning". Ohio : Ohio University, 1994. http://www.ohiolink.edu/etd/view.cgi?ohiou1180038784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Eberhardt, Markus. "Modelling technology in agriculture and manufacturing using cross-country panel data". Thesis, University of Oxford, 2009. http://ora.ox.ac.uk/objects/uuid:d60f62f5-43e2-4473-b899-f4358d758e1e.

Full text
Abstract:
Why do we observe such dramatic differences in labour productivity across countries in the macro data? This thesis argues that the growth empirics literature oversimplifies the complexity of the production process across countries and neglects data cross-section and time-series properties, leading to bias in the empirical estimates. Chapter 1 presents two general empirical frameworks for cross-country productivity analysis and demonstrates that they encompass the growth empirics literature of the past decades. We introduce our central argument of cross-country heterogeneity in the impact of observables and unobservables on output and develop this against the background of the pertinent time-series and cross-section properties of macro panel data. Chapter 2 uses data from 48 countries to estimate manufacturing production functions. We discuss standard and novel estimators, focusing on their treatment of parameter heterogeneity and data time-series and cross-section properties. We develop the Augmented Mean Group (AMG) estimator and show its similarity to the Pesaran (2006) Common Correlated Effects (CCE) approach. Our results confirm parameter heterogeneity across countries in the impact of observable inputs on output. We check the robustness of this finding and highlight its implications for empirical measures of TFP. Chapter 3 investigates the heterogeneity of agricultural production technology using data for 128 countries. We develop an extension to the CCE estimators which allows us to suggest that TFP is structured such that countries with similar agro-climatic environment are influenced by the same unobserved factors. This finding offers a possible explanation for the failure of technology-transfer from advanced countries of the temperate 'North' to developing countries of the arid/equatorial 'South'. Our Monte Carlo simulations in Chapter 4 investigate the performance of the AMG, CCE and standard (micro-)panel estimators. Failure to account for cross-section dependence is shown to result in serious distortion of the empirical estimates. We highlight scenarios in which the AMG is biased and offer simple remedies.
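The CCE idea this abstract builds on admits a short sketch: each country's regression is augmented with cross-sectional averages of the dependent variable and the regressors, which proxy the unobserved common factors, and the mean-group estimate averages the country-specific slopes. The snippet below is a minimal illustration of the Pesaran (2006) estimator with a single regressor and simulated data, not the thesis's AMG implementation.

```python
import numpy as np

def cce_mean_group(y, x):
    """Common Correlated Effects mean-group estimator (Pesaran, 2006) for a
    panel y[i, t], x[i, t] with one regressor: each unit's regression is
    augmented with cross-sectional averages of y and x to absorb unobserved
    common factors; the slope on x is then averaged across units."""
    N, T = y.shape
    ybar, xbar = y.mean(axis=0), x.mean(axis=0)  # cross-sectional averages
    slopes = []
    for i in range(N):
        X = np.column_stack([np.ones(T), x[i], ybar, xbar])
        beta = np.linalg.lstsq(X, y[i], rcond=None)[0]
        slopes.append(beta[1])                   # coefficient on x[i]
    return np.mean(slopes)

# Simulated panel: a common factor loads heterogeneously on each country.
rng = np.random.default_rng(1)
N, T = 48, 40
f = rng.standard_normal(T)                       # unobserved common factor
x = rng.standard_normal((N, T)) + f
y = 0.5 * x + rng.standard_normal((N, 1)) * f + 0.1 * rng.standard_normal((N, T))
print(cce_mean_group(y, x))                      # should be close to 0.5
```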
APA, Harvard, Vancouver, ISO, and other styles
15

Nordström, Lars. "Use of the CIM framework for data management in maintenance of electricity distribution networks". Doctoral thesis, KTH, Industriella informations- och styrsystem, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3985.

Full text
Abstract:
Aging infrastructure and personnel, combined with stricter financial constraints, have put maintenance, or more popularly Asset Management, at the top of the agenda for most power utilities. At the same time, the industry reports that this area is not properly supported by information systems. Today's power utilities have very comprehensive and complex portfolios of information systems that serve many different purposes. A common problem in such heterogeneous system architectures is data management, e.g. data in the systems do not represent the true status of the equipment in the power grid, or several sources of data are contradictory. The research presented in this thesis concerns how this industrial problem can be better understood and approached by novel use of the ontology standardized in the Common Information Model defined in IEC standards 61970 & 61968. The theoretical framework for the research is that of data management using ontology-based frameworks. This notion is not new, but is receiving renewed attention due to emerging technologies, e.g. Service Oriented Architectures, that support implementation of such ontological frameworks. The work presented is empirical in nature and takes its origin in the ontology available in the Common Information Model. The scope of the research is the applicability of the CIM ontology, not as it was intended, i.e. in systems integration, but for analysis of business processes, legacy systems, and data. The work has involved significant interaction with power distribution utilities in Sweden in order to validate the framework developed around the CIM ontology. Results from the research have been published continuously; this thesis consists of an introduction and summary and papers describing the main contribution of the work. The main contribution of the work presented in this thesis is the validation of the proposition to use the CIM ontology as a basis for the analysis of existing legacy systems. By using the data models defined in the standards and combining them with established modeling techniques, we propose a framework for information system management. The framework is appropriate for analyzing data quality problems related to power systems maintenance at power distribution utilities. As part of validating the results, the proposed framework has been applied in a case study involving medium voltage overhead line inspection. In addition to the main contribution, a classification of the state of practice in system support for power system maintenance at utilities has been created. Second, the work includes an analysis and classification of how high-performance wide-area communication technologies can be used to improve power system maintenance, including improving data quality.
APA, Harvard, Vancouver, ISO, and other styles
16

Reynolds, Toby J. "Bayesian modelling of integrated data and its application to seabird populations". Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/1635.

Full text
Abstract:
Integrated data analyses are becoming increasingly popular in studies of wild animal populations where two or more separate sources of data contain information about common parameters. Here we develop an integrated population model using abundance and demographic data from a study of common guillemots (Uria aalge) on the Isle of May, southeast Scotland. A state-space model for the count data is supplemented by three demographic time series (productivity and two mark-recapture-recovery (MRR)), enabling the estimation of prebreeder emigration rate - a parameter for which there is no direct observational data, and which is unidentifiable in the separate analysis of MRR data. A Bayesian approach using MCMC provides a flexible and powerful analysis framework. This model is extended to provide predictions of future population trajectories. Adopting random effects models for the survival and productivity parameters, we implement the MCMC algorithm to obtain a posterior sample of the underlying process means and variances (and population sizes) within the study period. Given this sample, we predict future demographic parameters, which in turn allows us to predict future population sizes and obtain the corresponding posterior distribution. Under the assumption that recent, unfavourable conditions persist in the future, we obtain a posterior probability of 70% that there is a population decline of >25% over a 10-year period. Lastly, using MRR data we test for spatial, temporal and age-related correlations in guillemot survival among three widely separated Scottish colonies that have varying overlap in nonbreeding distribution. We show that survival is highly correlated over time for colonies/age classes sharing wintering areas, and essentially uncorrelated for those with separate wintering areas. These results strongly suggest that one or more aspects of winter environment are responsible for spatiotemporal variation in survival of British guillemots, and provide insight into the factors driving multi-population dynamics of the species.
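The prediction step described above reduces to a simple posterior predictive computation: sample annual growth rates from the posterior, project the population forward, and read off the probability of a >25% decline. The sketch below shows that logic; the posterior mean and standard deviation and the population size are illustrative assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
draws = 10_000           # posterior draws (illustrative)
mu, sigma = -0.02, 0.08  # assumed posterior for the annual log growth rate
n0 = 20_000              # assumed current breeding population size

# For each draw, project 10 years of stochastic growth:
# N_{t+1} = N_t * exp(r_t), with r_t ~ Normal(mu, sigma).
r = rng.normal(mu, sigma, size=(draws, 10))
n10 = n0 * np.exp(r.sum(axis=1))

# Posterior probability of a >25% decline over the 10-year horizon.
print((n10 < 0.75 * n0).mean())
```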
APA, Harvard, Vancouver, ISO, and other styles
17

COSTANTINO, Salvatore. "A Spatial Origin-Destination Analysis of International Tourism Demand. The Case of Italian Provinces". Doctoral thesis, Università degli Studi di Palermo, 2021. http://hdl.handle.net/10447/499047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Hurdal, Monica Kimberly. "Mathematical and computer modelling of the human brain with reference to cortical magnification and dipole source localisation in the visual cortex". Thesis, Queensland University of Technology, 1998.

Search for the full text
APA, Harvard, Vancouver, ISO, and other styles
19

PILLAI, Vinoshene. "Intravital two photon calcium imaging of glioblastoma mouse models". Doctoral thesis, Scuola Normale Superiore, 2021. http://hdl.handle.net/11384/109211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Scarlato, Michele. "Sicurezza di rete, analisi del traffico e monitoraggio". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3223/.

Full text
Abstract:
The work is divided into three macro-areas. The first is a theoretical analysis of how intrusions work, which software is used to carry them out, and how to defend against them (using the devices generically known as firewalls). The second macro-area analyses an intrusion carried out from the outside against sensitive servers of a LAN. This analysis is conducted on the files captured by the two network interfaces configured in promiscuous mode on a probe located in the LAN; two interfaces are used so that the probe can attach to two LAN segments with different subnet masks. The attack is analysed with several tools. A third part of the work can in fact be identified: the part in which the files captured by the two interfaces are analysed, first with software that handles full-content data, such as Wireshark, then with software that handles session data, processed with Argus, and finally statistical data, processed with Ntop. The penultimate chapter, the one before the conclusions, covers the installation of Nagios and its configuration to monitor, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.
APA, Harvard, Vancouver, ISO, and other styles
21

Chen, M. C., and 陳明昌. "Common-pole/zero Model on HRTFs Data Sets". Thesis, 1998. http://ndltd.ncl.edu.tw/handle/05392849249597034841.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Communication Engineering
Academic year 86 (ROC calendar)
HRTFs are impulse responses from a sound source to both ears. They are widely used in 3-D sound applications. However, the huge size of an HRTF data set makes real-time processing difficult, so reducing the HRTF data has been a central issue in 3-D sound processing. Due to the resonant characteristics of the pinna structure, it is believed that the HRTFs at different positions may share common resonant frequencies. In this thesis, therefore, we adopt a common-pole/zero model to fit the HRTF data set, instead of fitting every HRTF with a pole/zero model individually. The determination of the common poles and zeros is based on three methods: Prony, Shanks, and iterative prefiltering. In addition, the HRTFs are clustered before applying the common-pole/zero model in order to lower the model error. Simulations of HRTF synthesis are presented to verify these approaches, and listening tests are conducted to confirm their validity for human hearing.
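Prony's method, the first of the three fitting methods named above, reduces to two least-squares steps, and the common-pole variant simply stacks the linear-prediction equations of all HRTFs so that one denominator is shared. The sketch below illustrates both under the assumption q >= p - 1 (so all sample indices are valid); it is a generic illustration, not the thesis's exact formulation.

```python
import numpy as np

def prony(h, p, q):
    """Fit H(z) = B(z)/A(z) (numerator order q, denominator order p)
    to an impulse response h via Prony's method."""
    h = np.asarray(h, dtype=float)
    # Denominator: least-squares linear prediction over the tail samples.
    X = np.array([h[n - np.arange(1, p + 1)] for n in range(q + 1, len(h))])
    a = np.concatenate(([1.0], np.linalg.lstsq(X, -h[q + 1:], rcond=None)[0]))
    # Numerator: the first q + 1 samples of conv(a, h).
    return np.convolve(a, h)[: q + 1], a

def common_pole_zero(hrtfs, p, q):
    """Fit one shared denominator (common poles) across a set of HRTFs by
    stacking their linear-prediction equations, then one numerator each."""
    hrtfs = [np.asarray(h, dtype=float) for h in hrtfs]
    X = np.vstack([np.array([h[n - np.arange(1, p + 1)]
                             for n in range(q + 1, len(h))]) for h in hrtfs])
    rhs = np.concatenate([-h[q + 1:] for h in hrtfs])
    a = np.concatenate(([1.0], np.linalg.lstsq(X, rhs, rcond=None)[0]))
    return [np.convolve(a, h)[: q + 1] for h in hrtfs], a
```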
APA, Harvard, Vancouver, ISO, and other styles
22

Lin-Yu, Cheng, and 林右晟. "Panel Data Analysis Of PPP Cointegration In The Consideration Of Common Factor Model". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/76155831873506013185.

Full text
Abstract:
Master's thesis
Fu Jen Catholic University
Master's Program, Department of Economics
Academic year 103 (ROC calendar)
In this paper we use cointegration analysis to examine purchasing power parity (PPP) with panel data. Studies in this line of research usually assume that cross-sections are independent; however, panel cointegration test statistics may suffer size distortions when cross-sections are dependent. We therefore employ the PANIC approach proposed by Bai and Ng (2010), which takes cross-sectional dependence into consideration via a factor approach. We use monthly data on nominal exchange rates and the consumer price index from 1974/01 to 1998/12 for the eleven countries of the euro area. When we assume cross-sectional independence, the results indicate cointegration (PPP holds); when we allow for cross-sectional dependence, the results indicate no cointegration (PPP does not hold). Keywords: purchasing power parity, panel data, unit root test, cointegration test, cross-sectional dependence.
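The PANIC idea referenced above can be sketched in a few lines: difference the panel, extract common factors by principal components, then cumulate the idiosyncratic residuals and test them for unit roots, so that the common and idiosyncratic sources of nonstationarity are examined separately. The snippet is a simplified illustration with default ADF settings, not Bai and Ng's full pooled test.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def panic_idiosyncratic_pvalues(X, r=1):
    """Simplified PANIC defactoring: X is a T x N panel (e.g. real exchange
    rates). Difference, extract r principal-component factors, cumulate the
    idiosyncratic residuals, and ADF-test each series for a unit root."""
    dX = np.diff(X, axis=0)
    dX = dX - dX.mean(axis=0)
    U, s, Vt = np.linalg.svd(dX, full_matrices=False)
    factors = U[:, :r] * s[:r]            # differenced common factors
    loadings = Vt[:r].T
    resid = dX - factors @ loadings.T     # differenced idiosyncratic parts
    e = np.cumsum(resid, axis=0)          # re-integrate before testing
    return [adfuller(e[:, i])[1] for i in range(e.shape[1])]

# Example with a simulated panel sharing one nonstationary common factor.
rng = np.random.default_rng(0)
T, N = 300, 11
common = np.cumsum(rng.standard_normal(T))  # I(1) common factor
X = np.outer(common, rng.uniform(0.5, 1.5, N)) + rng.standard_normal((T, N))
print(panic_idiosyncratic_pvalues(X, r=1))  # small p-values: stationary e
```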
APA, Harvard, Vancouver, ISO, and other styles
23

Slavic, Aida, e Maria Inês Cordeiro. "Sharing and re-use of classification systems: the need for a common data model". 2005. http://hdl.handle.net/10150/105132.

Full text
Abstract:
Classifications can help to overcome difficulties in information retrieval from heterogeneous and multilingual collections for which linguistic and free-text searching is not sufficient or applicable. However, there are problems in the machine readability of classification systems that do not facilitate their wider use and full exploitation. The authors focus on issues in the automation of analytico-synthetic classification systems such as the Universal Decimal Classification (UDC), Bliss Bibliographic Classification (BC2), and Broad System of Ordering (BSO). 'Analytico-synthetic' here means classification systems that offer the possibility of building compound index/search terms and that lend themselves to post-coordinate searching.
APA, Harvard, Vancouver, ISO, and other styles
24

Hsiao-Wen, Lee, and 李筱雯. "Compromise Method for Conditional Regression Analysis of Repeated Event Data Under a Common Baseline Hazard Model". Thesis, 2000. http://ndltd.ncl.edu.tw/handle/14939021208745529600.

Full text
Abstract:
Master's thesis
National Taiwan University
Institute of Epidemiology
Academic year 88 (ROC calendar)
Recurrent event data are commonly encountered in longitudinal studies. Such data arise in various areas such as reliability, medicine, economics, and sociology. For example, in a clinical study, people with cancer may experience multiple tumor recurrences, and in industrial studies the breakdown of a machine is also a recurrent event. In this study, we review various regression models that describe event recurrences in relation to various factors. A conditional hazards model generalized from Cox's semiparametric hazards model is considered. This model includes two types of effects, global common effects and episode-specific effects, and the hazards are assumed to be the same for each episode of events. The aim of the study is to develop a more efficient estimating method for the global effects, together with estimation of the cumulative common baseline hazard function. Under this conditional model, there are two methods to estimate the effects. The first is based on a partial likelihood stratified by episodes of events; the second is based on an unstratified profile likelihood that pools all events. Each has its own advantage: the first (stratified) method has smaller bias, and the second (unstratified) method has smaller variance. To obtain a more efficient estimator, we consider a new method, the compromise method, which balances the advantages of these two methods. The estimating methods are compared through simulation studies. In the analysis of real data, the plot of the estimated cumulative episode-specific baseline hazard functions against time can be used to select a more appropriate compromise method.
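The two competing estimators contrasted above can be reproduced with standard survival software. The sketch below uses the lifelines library on toy gap-time data: stratifying the Cox fit by episode gives each episode its own baseline hazard (the stratified method), while pooling all episodes under one baseline corresponds to the unstratified method. The data and column names are illustrative assumptions.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy recurrent-event data in gap-time form: one row per episode at risk,
# with a global treatment covariate (all values illustrative).
df = pd.DataFrame({
    "gap_time": [5.0, 3.2, 8.1, 2.4, 6.3, 4.7, 7.5, 1.9],
    "event":    [1,   1,   0,   1,   1,   0,   1,   1],
    "treat":    [1,   1,   1,   0,   0,   0,   1,   0],
    "episode":  [1,   2,   3,   1,   2,   3,   1,   1],
})

# Stratified method: a separate baseline hazard per episode (smaller bias).
stratified = CoxPHFitter().fit(df, duration_col="gap_time",
                               event_col="event", strata=["episode"])
# Unstratified method: one common baseline hazard pooled over episodes
# (smaller variance), matching the common-baseline model of the thesis.
pooled = CoxPHFitter().fit(df.drop(columns="episode"),
                           duration_col="gap_time", event_col="event")
print(stratified.params_["treat"], pooled.params_["treat"])
```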
APA, Harvard, Vancouver, ISO, and other styles
25

Yu, Shuo. "Probabilistic Assessment of Common Cause Failures in Nuclear Power Plants". Thesis, 2013. http://hdl.handle.net/10012/7547.

Full text
Abstract:
Common cause failures (CCF) are a significant contributor to risk in complex technological systems, such as nuclear power plants. Many probabilistic parametric models have been developed to quantify systems subject to CCF. Existing models include the beta factor model, the multiple Greek letter model, the basic parameter model, the alpha factor model, and the binomial failure rate model. These models are often only capable of providing a point estimate when limited CCF data are available. Some recent studies have proposed a Bayesian approach to quantify the uncertainties in CCF modeling, but they are limited to addressing the uncertainty in the common failure factors only. This thesis presents a multivariate Poisson model for CCF modeling, which combines the modeling of individual and common cause failures into one process. The key idea of the approach is that failures in a common cause component group of n components are decomposed into a superposition of k (>n) independent Poisson processes. An empirical Bayes method is utilized for simultaneously estimating the independent and common cause failure rates, which are mutually exclusive. In addition, the conventional CCF parameters can be evaluated using the outcomes of the new approach, and the uncertainties in CCF modeling can be addressed in an integrated manner. The failure rate is estimated as the mean value of the posterior density function, while the variance of the posterior represents the variation of the estimate. A MATLAB program using Monte Carlo simulation was developed to check the behavior of the proposed multivariate Poisson (MVP) model, and its superiority over the traditional CCF models is illustrated. Furthermore, due to the rarity of CCF events observed at any one nuclear power plant, data from the target plant alone are insufficient to produce reliable estimates of the failure rates. Data mapping has been developed to make use of data from source plants of different sizes. In this thesis, data mapping is combined with the EB approach to partially assimilate information from source plants while also respecting the data of the target plant. Two case studies are presented using different databases. The results are compared with the empirical values provided by the United States Nuclear Regulatory Commission (USNRC).
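As a point of comparison for the thesis's multivariate Poisson approach, the conventional alpha-factor parameters it mentions have a simple conjugate Bayesian estimate: with a Dirichlet prior over the alpha vector, the posterior mean shifts the observed event-count fractions toward the prior. A minimal sketch with illustrative counts:

```python
import numpy as np

def alpha_factor_posterior(counts, prior=None):
    """Posterior mean alpha factors for a common cause group: counts[k-1]
    is the number of observed events in which exactly k components failed.
    With a Dirichlet prior (uniform by default), the posterior mean is
    (n_k + theta_k) / (n + theta_0), a standard conjugate update."""
    counts = np.asarray(counts, dtype=float)
    prior = np.ones_like(counts) if prior is None else np.asarray(prior, float)
    return (counts + prior) / (counts.sum() + prior.sum())

# Example: a 3-component group with 40 single, 4 double, and 1 triple failure.
print(alpha_factor_posterior([40, 4, 1]))
```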
APA, Harvard, Vancouver, ISO, and other styles
26

Conradie, Pieter Wynand. "A semi-formal comparison between the Common Object Request Broker Architecture (CORBA) and the Distributed Component Object Model (DCOM)". Diss., 2000. http://hdl.handle.net/10500/17924.

Full text
Abstract:
The way in which application systems and software are built has changed dramatically over the past few years. This is mainly due to advances in hardware technology and programming languages, as well as the requirement to build better software application systems in less time. The importance of worldwide communication between systems is also growing exponentially: people use network-based applications daily, communicating not only locally but also globally. The Internet, the global network, therefore plays a significant role in the development of new software. Distributed object computing is one of the computing paradigms that promise to address the need to develop client/server application systems that communicate over heterogeneous environments. This study, of limited scope, concentrates on one crucial element without which distributed object computing cannot be implemented: the communication software, also called middleware, which allows objects situated on different hardware platforms to communicate over a network. Two of the most important middleware standards for distributed object computing today are the Common Object Request Broker Architecture (CORBA) from the Object Management Group and the Distributed Component Object Model (DCOM) from Microsoft Corporation. Each of these standards is implemented in commercially available products, allowing distributed objects to communicate over heterogeneous networks. In studying each of the middleware standards, a formal way of comparing CORBA and DCOM is presented, namely meta-modelling. For each of these two distributed object infrastructures (middleware), meta-models are constructed. Based on this uniform and unbiased approach, a comparison of the two distributed object infrastructures is then performed. The results are given as a set of tables in which the differences and similarities of each distributed object infrastructure are exhibited. By adopting this approach, errors caused by misunderstanding or misinterpretation are minimised. Consequently, an accurate and unbiased comparison between CORBA and DCOM is made possible, which constitutes the main aim of this dissertation.
Computing
M. Sc. (Computer Science)
APA, Harvard, Vancouver, ISO, and other styles