Dissertations / Theses on the topic 'Data driven model'




Consult the top 50 dissertations / theses for your research on the topic 'Data driven model.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Aboulsamh, Mohammed A. "Model-driven data migration." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:44ddbf8b-a6a0-4830-baeb-13b2c746802f.

Full text
Abstract:
Information systems often hold data of considerable value. Their continuing development or maintenance will often necessitate evolution of the system and migration of the data from one version to the next: a process that may be expensive, time-consuming, and prone to error. That such a process remains a source of challenges is recognized by both academia and industry. In current practice, data migration is often considered only in the later stages of development, leaving critical data to be transformed and loaded by hand-written scripts, long after the design process has been completed. The advent of model-driven engineering offers an opportunity to consider the question of information system evolution and data migration earlier in the development process. A precise account of the proposed changes to an existing system model can be used to predict the consequences for existing data, and to generate the necessary data migration implementation. This dissertation shows how automatic data migration can be achieved by extending the definition of a data modeling language to include model-level operations, each of which corresponds to the addition, modification, or deletion of a model component. Using the Unified Modeling Language (UML) notation as an example, we show how the specification of these operations may be translated into an abstract program in the Abstract Machine Notation (AMN), employed in the B-method, and then formally checked for consistency and applicability prior to translation into a concrete programming notation, such as Structured Query Language (SQL).
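To make the idea concrete, the following is a minimal, hypothetical sketch (in Python, not the dissertation's UML/AMN tooling) of a model-level edit operation that can check its own applicability and generate the SQL that migrates existing data; all names and operations are invented for illustration.

```python
# Sketch: a model-level edit operation carries enough information to generate
# its own data-migration SQL. This is an illustrative stand-in, not the
# operation catalogue or formal AMN checking described in the dissertation.
from dataclasses import dataclass

@dataclass
class AddAttribute:
    entity: str        # UML class / table affected
    attribute: str     # new attribute / column
    sql_type: str      # concrete type in the target schema
    default: str       # value used to populate existing rows

    def precondition(self, schema: dict) -> bool:
        # Applicability check (the AMN analogue would be proved, not just tested):
        # the entity exists and the attribute does not.
        return self.entity in schema and self.attribute not in schema[self.entity]

    def to_sql(self) -> list:
        # Concrete migration for existing data.
        return [
            f"ALTER TABLE {self.entity} ADD COLUMN {self.attribute} {self.sql_type};",
            f"UPDATE {self.entity} SET {self.attribute} = {self.default};",
        ]

schema = {"Customer": {"id", "name"}}
op = AddAttribute("Customer", "loyalty_points", "INTEGER", "0")
if op.precondition(schema):
    print("\n".join(op.to_sql()))
```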
2

Matusik, Wojciech 1973. "A data-driven reflectance model." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87454.

Full text
Abstract:
I present a data-driven model for isotropic bidirectional reflectance distribution functions (BRDFs) based on acquired reflectance data. Instead of using analytic reflectance models, each BRDF is represented as a dense set of measurements. This representation allows interpolation and extrapolation in the space of acquired BRDFs to create new BRDFs. Each acquired BRDF is treated as a single high-dimensional vector taken from the space of all possible BRDFs. Both linear (subspace) and non-linear (manifold) dimensionality reduction tools are applied in an effort to discover a lower-dimensional representation that characterizes the acquired BRDFs. To complete the model, users are provided with the means for defining perceptually meaningful parametrizations that allow them to navigate in the reduced-dimension BRDF space. On the low-dimensional manifold, movement along these directions produces novel, but valid, BRDFs. By analyzing a large collection of reflectance data, I also derive two novel reflectance sampling procedures that require fewer total measurements than standard uniform sampling approaches. First, using densely sampled measurements, the general surface reflectance function is analyzed to determine the local signal variation at each point in the function's domain. Wavelet analysis is used to derive a common basis for all of the acquired reflectance functions, as well as a non-uniform sampling pattern that corresponds to all non-zero wavelet coefficients. Second, I show that the reflectance of an arbitrary material can be represented as a linear combination of the surface reflectance functions. Furthermore, this analysis specifies a reduced set of sampling points that permits the robust estimation of the coefficients of this linear combination. These procedures dramatically shorten the acquisition time for isotropic reflectance measurements.
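A rough sketch of the linear (subspace) part of this idea is given below, using random arrays as stand-ins for measured BRDF vectors; it illustrates PCA-style reduction and interpolation, not the thesis's actual pipeline or data.

```python
# Treat each measured BRDF as one high-dimensional vector, find a low-dimensional
# basis by PCA (via SVD), and synthesise a new BRDF as a point in the reduced space.
import numpy as np

rng = np.random.default_rng(0)
n_brdfs, n_samples = 100, 4096          # 100 materials, 4096 angular bins each
X = rng.random((n_brdfs, n_samples))    # placeholder for acquired BRDF vectors

mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 10                                  # reduced dimension
basis = Vt[:k]                          # k principal directions in BRDF space
coords = (X - mean) @ basis.T           # each BRDF as a k-vector

# Interpolate between two acquired materials in the reduced space and map back.
new_coords = 0.5 * (coords[0] + coords[1])
new_brdf = mean + new_coords @ basis
print(new_brdf.shape)                   # (4096,)
```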
3

Safie, Lily Suryani Binti. "A software component model that is both control-driven and data-driven." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/a-software-component-model-that-is-both-controldriven-and-datadriven(ce21c34b-7257-4b8f-aa79-f6456b49a3a0).html.

Full text
Abstract:
A software component model is the cornerstone of any Component-based Software Development (CBSD) methodology. Such a model defines the modelling elements for constructing software systems. In software system modelling, it is necessary to capture the three elements of a system's behaviour: (i) control, (ii) computation and (iii) data. Within a system, computations are performed according to the flow of control or the flow of data, depending on whether computations are control-driven or data-driven. Computations are function evaluations, assignments, etc., which transform data when invoked by control or data flow. Therefore a component model should be able to model control flow and data flow as well as computations. Current component models all model computations but, besides computations, tend to model either control flow only or data flow only, not both. In this thesis, we present a new component model which can model both control flow and data flow. It contains modelling elements that capture control flow and data flow explicitly. Furthermore, the modelling of control flow is separate from that of data flow; this enables the modelling of both control-driven and data-driven computations. The feasibility of the model is shown by means of an implementation of the model, in the form of a prototype tool. The usefulness of the model is then demonstrated for a specific domain, the embedded systems domain, as well as a generic domain. For the embedded systems domain, unlike current models, our model can be used to construct systems that are both control-driven and data-driven. In a generic domain, our model can be used to construct domain models, by constructing control flows and data flows which together define a domain model.
4

Elbekai, Ali Sayeh. "Generic model for application driven XML data processing." Thesis, Northumbria University, 2006. http://nrl.northumbria.ac.uk/55/.

Full text
Abstract:
XML technology has emerged during recent years as a popular choice for representing and exchanging semi-structured data on the Web. It integrates seamlessly with web-based applications. If data is stored and represented as XML documents, then it should be possible to query the contents of these documents in order to extract, synthesize and analyze their contents. This thesis presents an experimental study of a Web architecture for data processing based on semantic mapping of XML Schema. The thesis involves complex methods and tools for specification, algorithmic transformation and online processing of semi-structured data over the Web in XML format, with persistent storage into relational databases. The main focus of the research is preserving the structure of the original data for data reconciliation during database updates, and also combining different technologies for XML data processing, such as storing (SQL), transforming (XSL processors), presenting (HTML), querying (XQuery) and transporting (Web services), within a common framework which is both theoretically and technologically well grounded. The experimental implementation of the discussed architecture requires a Web server (Apache), a Java container (Tomcat) and an object-relational DBMS (Oracle 9) equipped with a Java engine and the corresponding libraries for parsing and transformation of XML data (Xerces and Xalan). Furthermore, the central idea behind the research is to use a single theoretical model of the data to be processed by the system (XML algebra), controlled by one standard metalanguage specification (XML Schema), for solving a class of problems (generic architecture). The proposed work combines theoretical novelty and technological advancement in the field of Internet computing. This thesis introduces a generic approach, since both our model (XML algebra) and our problem solver (the architecture of the integrated system) are XML Schema-driven. Starting with the XML Schema of the data, we first develop a domain-specific XML algebra suitable for data processing of the specific data and then use it for implementing the main offline components of the system for data processing.
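As a very small illustration of the kind of XML-to-relational shredding such an architecture performs (not the thesis's Oracle/Xerces/Xalan implementation; element and column names are invented), consider:

```python
# Shred an XML fragment into a relational table while preserving enough
# structure (ids, attribute/element mapping) to reconstruct the document.
import sqlite3
import xml.etree.ElementTree as ET

xml_doc = """
<orders>
  <order id="1"><customer>Alice</customer><total>42.50</total></order>
  <order id="2"><customer>Bob</customer><total>13.00</total></order>
</orders>
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

# One row per <order>, attributes and child elements mapped to columns.
for order in ET.fromstring(xml_doc).findall("order"):
    conn.execute(
        "INSERT INTO orders VALUES (?, ?, ?)",
        (int(order.get("id")), order.findtext("customer"), float(order.findtext("total"))),
    )

for row in conn.execute("SELECT * FROM orders"):
    print(row)
```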
5

Boruvka, Audrey. "Data-driven estimation for Aalen's additive risk model." Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/489.

Full text
6

Syed, Mofazzal. "Data driven modelling for environmental water management." Thesis, Cardiff University, 2007. http://orca.cf.ac.uk/54592/.

Full text
Abstract:
Management of water quality is generally based on physically-based equations or hypotheses describing the behaviour of water bodies. In recent years, models built on the basis of the availability of larger amounts of collected data have been gaining popularity. This modelling approach can be called data driven modelling. Observational data represent specific knowledge, whereas a hypothesis represents a generalization of this knowledge that implies and characterizes all such observational data. Traditionally, deterministic numerical models have been used for predicting flow and water quality processes in inland and coastal basins. These models generally take a long time to run and cannot be used as on-line decision support tools that would enable imminent threats to public health and flooding to be predicted. In contrast, data driven models are data intensive, and there are some limitations to this approach. The extrapolation capability of data driven methods is a matter of conjecture. Furthermore, collecting the extensive data required for building a data driven model can be time- and resource-consuming, and in the case of predicting the impact of a future development the data is unlikely to exist. The main objective of the study was to develop an integrated approach for rapid prediction of bathing water quality in estuarine and coastal waters. Faecal Coliforms (FC) were used as a water quality indicator, and two of the most popular data mining techniques, namely Genetic Programming (GP) and Artificial Neural Networks (ANNs), were used to predict the FC levels in a pilot basin. In order to provide enough data for training and testing the neural networks, a calibrated hydrodynamic and water quality model was used to generate input data for the neural networks. A novel non-linear data analysis technique, called the Gamma Test, was used to determine the data noise level and the number of data points required for developing smooth neural network models. Details are given of the data driven models, numerical models and the Gamma Test. Details are also given of a series of experiments undertaken to test data driven model performance for different numbers of input parameters and time lags. The response time of the receiving water quality to the input boundary conditions obtained from the hydrodynamic model has been shown to be useful knowledge for developing accurate and efficient neural networks. It is known that a natural phenomenon like bacterial decay is affected by a whole host of parameters which cannot be captured accurately using deterministic models alone. Therefore, the data-driven approach has been investigated using field survey data collected in Cardiff Bay to investigate the relationship between bacterial decay and other parameters. Both the GP and ANN models gave similar, if not better, predictions of the field data in comparison with the deterministic model, with the added benefit of almost instant prediction of the bacterial levels for this recreational water body. The models have also been investigated using idealised and controlled laboratory data for the velocity distributions along compound channel reaches, with idealised rods located on the floodplain to replicate large vegetation (such as mangrove trees).
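The data-driven half of such an approach can be sketched very simply: a small neural network trained on time-lagged boundary inputs to predict the water-quality indicator. The snippet below uses synthetic data and scikit-learn as an assumed stand-in; the thesis's lag selection relies on the Gamma Test and the hydrodynamic model's response time, neither of which is reproduced here.

```python
# Train an MLP on lagged boundary inputs to predict faecal coliform (FC) levels.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
flow = rng.random(n)                      # stand-ins for boundary conditions
irradiance = rng.random(n)
lag = 3                                   # assumed response time, in time steps
fc = 0.6 * np.roll(flow, lag) - 0.4 * irradiance + 0.05 * rng.standard_normal(n)

X = np.column_stack([np.roll(flow, lag), irradiance])[lag:]
y = fc[lag:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```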
7

Bugtai, Nilo T. "Fixturing information models in data model-driven product design and manufacture." Thesis, Loughborough University, 2002. https://dspace.lboro.ac.uk/2134/34654.

Full text
Abstract:
In order to ensure effective decisions are made at each stage in the design and manufacture process, it is important that software tools should provide sufficient information to support the decision making of both designers and manufacturing engineers. This requirement can be applied to fixturing where research to date has typically focused on narrow functional support issues in fixture design and planning. The research reported in this thesis has explored how models of fixturing information can be defined, within an integrated information environment, and utilised across product design as well as manufacture. The work has focused on the definition of fixturing information within the context of a wide-ranging model that can capture the full capability of a manufacturing facility.
8

Kis, Filip. "Prototyping with Data : Opportunistic Development of Data-Driven Interactive Applications." Doctoral thesis, KTH, Medieteknik och interaktionsdesign, MID, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-196851.

Full text
Abstract:
There is a growing amount of digital information available from Open-Data initiatives, Internet-of-Things technologies, and web APIs in general. At the same time, an increasing amount of technology in our lives is creating a desire to take advantage of the generated data for personal or professional interests. Building interactive applications that would address this desire is challenging since it requires advanced engineering skills that are normally reserved for professional software developers. However, more and more interactive applications are prototyped outside of enterprise environments, in more opportunistic settings. For example, knowledge workers apply end-user development techniques to solve their tasks, or groups of friends get together for a weekend hackathon in the hope of becoming the next big startup. This thesis focuses on how to design prototyping tools that support opportunistic development of interactive applications that take advantage of the growing amount of available data. In particular, the goal of this thesis is to understand what are the current challenges of prototyping with data and to identify important qualities of tools addressing these challenges. To accomplish this, declarative development tools were explored, while keeping focus on what data and interaction the application should afford rather than on how they should be implemented (programmed). The work presented in this thesis was carried out as an iterative process which started with a design exploration of Model-based UI Development, followed by observations of prototyping practices through a series of hackathon events and an iterative design of Endev – a prototyping tool for data-driven web applications. Formative evaluations of Endev were conducted with programmers and interaction designers.  The main results of this thesis are the identified challenges for prototyping with data and the key qualities required of prototyping tools that aim to address these challenges. The identified key qualities that lower the threshold for prototyping with data are: declarative prototyping, familiar and setup-free environment, and support tools. Qualities that raise the ceiling for what can be prototyped are: support for heterogeneous data and for advanced look and feel.
9

Malatesta, William, and Clay Fink. "MEASUREMENT-CENTRIC DATA MODEL FOR INSTRUMENTATION CONFIGURATION." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604525.

Full text
Abstract:
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
CTEIP has launched the integrated Network Enhanced Telemetry (iNET) project to foster advances in networking and telemetry technology to meet emerging needs of major test programs. In the past these programs have been constrained by vendor proprietary equipment configuration utilities that force a significant learning curve on the part of instrumentation personnel to understand hardware idiosyncrasies and require significant human interaction and manipulation of data to be exchanged between different components of the end-to-end test system. This paper describes an ongoing effort to develop a measurement-centric data model of airborne data acquisition systems. The motivation for developing such a model is to facilitate hardware and software interoperability and to alleviate the need for vendor-specific knowledge on the part of the instrumentation engineer. This goal is driven by requirements derived from scenarios collected by the iNET program. This approach also holds the promise of decreased human interaction with and manipulation of data to be exchanged between system components.
10

Koc, Birgul. "Numerical Analysis for Data-Driven Reduced Order Model Closures." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103202.

Full text
Abstract:
This dissertation contains work that addresses both theoretical and numerical aspects of reduced order models (ROMs). In an under-resolved regime, the classical Galerkin reduced order model (G-ROM) fails to yield accurate approximations. Thus, we propose a new ROM, the data-driven variational multiscale ROM (DD-VMS-ROM), built by adding a closure term to the G-ROM, aiming to increase the numerical accuracy of the ROM approximation without decreasing the computational efficiency. The closure term is constructed based on the variational multiscale framework. To model the closure term, we use data-driven modeling. In other words, by using the available data, we find ROM operators that approximate the closure term. To present the closure term's effect on the ROMs, we numerically compare the DD-VMS-ROM with other standard ROMs. In numerical experiments, we show that the DD-VMS-ROM is significantly more accurate than the standard ROMs. Furthermore, to understand the closure term's physical role, we present a theoretical and numerical investigation of the closure term's role in long-time integration. We theoretically prove and numerically show that there is energy exchange from the most energetic modes to the least energetic modes in the closure terms under long-time averaging. One of the promising contributions of this dissertation is providing the numerical analysis of the data-driven closure model, which has not been studied before. At both the theoretical and the numerical levels, we investigate what conditions guarantee that a small difference between the data-driven closure model and the full order model (FOM) closure term implies that the approximated solution is close to the FOM solution. In other words, we perform theoretical and numerical investigations to show that the data-driven model is verifiable. Apart from studying the ROM closure problem, we also investigate the setting in which the G-ROM converges optimally. We explore the ROM error bounds' optimality by considering the difference quotients (DQs). We theoretically prove and numerically illustrate that both the ROM projection error and the ROM error are suboptimal without the DQs, and optimal if the DQs are used.
In many realistic applications, obtaining an accurate approximation to a given problem can require a tremendous number of degrees of freedom. Solving these large systems of equations can take days or even weeks on standard computational platforms. Thus, lower-dimensional models, i.e., reduced order models (ROMs), are often used instead. The ROMs are computationally efficient and accurate when the underlying system has dominant and recurrent spatial structures. Our contribution to reduced order modeling is adding a data-driven correction term, which carries important information and yields better ROM approximations. This dissertation's theoretical and numerical results show that the new ROM equipped with a closure term yields more accurate approximations than the standard ROM.
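The phrase "we find ROM operators that approximate the closure term" can be illustrated with a very small least-squares fit; the sketch below recovers a linear closure operator from snapshot data and is only a schematic analogue of the DD-VMS-ROM construction (the quadratic term and the variational multiscale machinery are omitted).

```python
# Given snapshots of the ROM coefficients a(t) and of the exact (projected FOM)
# closure term tau(t), fit a linear closure operator A by least squares so that
# A a approximates tau. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(2)
r, n_snap = 6, 200                       # ROM dimension and number of snapshots
A_true = 0.1 * rng.standard_normal((r, r))
a = rng.standard_normal((n_snap, r))     # stand-in for ROM coefficient snapshots
tau = a @ A_true.T + 0.01 * rng.standard_normal((n_snap, r))   # "exact" closure data

# Least-squares fit: minimise || a A^T - tau ||_F over A.
A_fit, *_ = np.linalg.lstsq(a, tau, rcond=None)
A_fit = A_fit.T

print("relative error in fitted closure operator:",
      np.linalg.norm(A_fit - A_true) / np.linalg.norm(A_true))
```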
11

Nguyen, Hoang-Phuong. "Model-based and data-driven prediction methods for prognostics." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASC021.

Full text
Abstract:
Degradation is an unavoidable phenomenon that affects engineering components and systems, and which may lead to their failures with potentially catastrophic consequences depending on the application. The motivation of this Thesis is trying to model, analyze and predict failures with prognostic methods that can enable a predictive management of asset maintenance. This would allow decision makers to improve maintenance planning, thus increasing system availability and safety by minimizing unexpected shutdowns. To this aim, research during the Thesis has been devoted to the tailoring and use of both model-based and data-driven approaches to treat the degradation processes that can lead to different failure modes in industrial components, making use of different information and data sources for performing predictions on the degradation evolution and estimating the Remaining Useful Life (RUL). The Ph.D. work has addressed two specific prognostic applications: model-based prognostics for fatigue crack growth prediction and data-driven prognostics for multi-step ahead predictions of time series data of Nuclear Power Plant (NPP) components. Model-based prognostics relies on the choice of the adopted Physics-of-Failure (PoF) models. However, each degradation model is appropriate only to certain degradation processes under certain operating conditions, which are often not precisely known. To generalize this, ensembles of multiple degradation models have been embedded in the model-based prognostic method in order to take advantage of the different accuracies of the models specific to different degradations and conditions. The main contributions of the proposed ensemble-of-models-based prognostic approaches are the integration of filtering approaches, including recursive Bayesian filtering and Particle Filtering (PF), and novel weighted ensemble strategies considering the accuracies of the individual models in the ensemble at the previous time steps of prediction. The proposed methods have been validated by case studies of fatigue crack growth simulated with time-varying operating conditions. As for multi-step ahead prediction, it remains a difficult task of Prognostics and Health Management (PHM) because prediction uncertainty tends to increase with the time horizon of the prediction. Large prediction uncertainty has limited the development of multi-step ahead prognostics in applications. To address the problem, novel multi-step ahead prediction models based on Long Short-Term Memory (LSTM), a deep neural network developed for dealing with long-term dependencies in time series data, have been developed in this Thesis. For realistic practical applications, the proposed methods also address the additional issues of anomaly detection, automatic hyperparameter optimization and prediction uncertainty quantification. Practical case studies have been considered, concerning time series data collected from Steam Generators (SGs) and Reactor Coolant Pumps (RCPs) of NPPs.
12

Howe, Bill. "Gridfields: Model-Driven Data Transformation in the Physical Sciences." PDXScholar, 2006. https://pdxscholar.library.pdx.edu/open_access_etds/2676.

Full text
Abstract:
Scientists' ability to generate and store simulation results is outpacing their ability to analyze them via ad hoc programs. We observe that these programs exhibit an algebraic structure that can be used to facilitate reasoning and improve performance. In this dissertation, we present a formal data model that exposes this algebraic structure, then implement the model, evaluate it, and use it to express, optimize, and reason about data transformations in a variety of scientific domains. Simulation results are defined over a logical grid structure that allows a continuous domain to be represented discretely in the computer. Existing approaches for manipulating these gridded datasets are incomplete. The performance of SQL queries that manipulate large numeric datasets is not competitive with that of specialized tools, and the up-front effort required to deploy a relational database makes them unpopular for dynamic scientific applications. Tools for processing multidimensional arrays can only capture regular, rectilinear grids. Visualization libraries accommodate arbitrary grids, but no algebra has been developed to simplify their use and afford optimization. Further, these libraries are data dependent—physical changes to data characteristics break user programs. We adopt the grid as a first-class citizen, separating topology from geometry and separating structure from data. Our model is agnostic with respect to dimension, uniformly capturing, for example, particle trajectories (1-D), sea-surface temperatures (2-D), and blood flow in the heart (3-D). Equipped with data, a grid becomes a gridfield. We provide operators for constructing, transforming, and aggregating gridfields that admit algebraic laws useful for optimization. We implement the model by analyzing several candidate data structures and incorporating their best features. We then show how to deploy gridfields in practice by injecting the model as middleware between heterogeneous, ad hoc file formats and a popular visualization library. In this dissertation, we define, develop, implement, evaluate and deploy a model of gridded datasets that accommodates a variety of complex grid structures and a variety of complex data products. We evaluate the applicability and performance of the model using datasets from oceanography, seismology, and medicine and conclude that our model-driven approach offers significant advantages over the status quo.
13

Barry, Timothy John. "A data driven approach to constrained control." RMIT University. Electrical and Computer Engineering, 2004. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20091214.161712.

Full text
Abstract:
This thesis presents a data-driven approach to constrained control in the form of a subspace-based state-space system identification algorithm integrated into a model predictive controller. Generally this approach has been termed model-free predictive control in the literature. Previous research into this area focused on the system identification aspects resulting in an omission of many of the features that would make such a control strategy attractive to industry. These features include constraint handling, zero-offset setpoint tracking and non-stationary disturbance rejection. The link between non-stationary disturbance rejection in subspace-based state-space system identification and integral action in state-space based model predictive control was shown. Parameterization with Laguerre orthonormal functions was proposed for the reduction in computational load of the controller. Simulation studies were performed using three real-world systems demonstrating: identification capabilities in the presence of white noise and non-stationary disturbances; unconstrained and constrained control; and the benefits and costs of parameterization with Laguerre polynomials.
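A compressed sketch of the combination described above (a model identified purely from input/output data feeding a constrained receding-horizon controller) is given below. Ordinary least-squares ARX identification stands in for subspace identification, simple input bounds stand in for the full constraint handling, and the Laguerre parameterisation is omitted; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
a_true, b_true = 0.9, 0.5                 # "unknown" first-order plant y+ = a*y + b*u
u = rng.uniform(-1, 1, 200)
y = np.zeros(201)
for k in range(200):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.standard_normal()

# Identification step: fit (a, b) to the recorded input/output data.
Phi = np.column_stack([y[:-1], u])
(a_hat, b_hat), *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)

# Receding-horizon step: choose the next 10 inputs, subject to |u| <= 0.3,
# to track a setpoint of 1.0 with a small input penalty.
def cost(u_seq, y0, ref):
    yk, J = y0, 0.0
    for uk in u_seq:
        yk = a_hat * yk + b_hat * uk
        J += (yk - ref) ** 2 + 0.01 * uk ** 2
    return J

res = minimize(cost, np.zeros(10), args=(y[-1], 1.0), bounds=[(-0.3, 0.3)] * 10)
print("identified (a, b):", round(a_hat, 3), round(b_hat, 3),
      "| first control move:", round(res.x[0], 3))
```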
14

Van den Bergh, F., M. A. van Wyk, B. J. van Wyk, and G. Udahemuka. "A comparison of data-driven and model-driven approaches to brightness temperature diurnal cycle interpolation." SAIEE Africa Research Journal, 2007. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1001082.

Full text
Abstract:
This paper presents two new schemes for interpolating missing samples in satellite diurnal temperature cycles (DTCs). The first scheme, referred to here as the cosine model, is an improvement of the model proposed in [2] and combines a cosine and exponential function for modelling the DTC. The second scheme uses the notion of a Reproducing Kernel Hilbert Space (RKHS) interpolator [1] for interpolating the missing samples. The application of RKHS interpolators to the DTC interpolation problem is novel. Results obtained by means of computer experiments are presented.
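The first scheme's idea can be sketched as follows: fit a combined cosine/exponential diurnal temperature cycle (DTC) model to the available samples and evaluate it at the missing times. The functional form and numbers below are a simplified stand-in rather than the exact model of the paper, and the RKHS interpolator is not shown.

```python
import numpy as np
from scipy.optimize import curve_fit

def dtc(t, T0, Ta, tm, ts, tau):
    # Day part: cosine centred on the time of maximum tm; night part (t >= ts):
    # exponential relaxation towards the residual temperature T0.
    amp_at_ts = Ta * np.cos(np.pi / 12.0 * (ts - tm))
    day = T0 + Ta * np.cos(np.pi / 12.0 * (t - tm))
    night = T0 + amp_at_ts * np.exp(-np.maximum(t - ts, 0.0) / tau)
    return np.where(t < ts, day, night)

hours = np.arange(0.0, 24.0, 0.5)
truth = dtc(hours, 290.0, 12.0, 13.0, 18.0, 4.0)
observed = truth + 0.3 * np.random.default_rng(4).standard_normal(hours.size)
keep = np.ones(hours.size, dtype=bool)
keep[20:28] = False                        # pretend these samples are missing

params, _ = curve_fit(dtc, hours[keep], observed[keep],
                      p0=[285.0, 10.0, 12.0, 17.0, 3.0])
filled = dtc(hours[~keep], *params)        # interpolated values for the gap
print(np.round(filled, 2))
```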
15

Silva, Josildo Pereira da. "A Data-Driven Approach for Mass-Spring Model Parametrization Based on Continuous Models." Instituto de Matemática, 2015. http://repositorio.ufba.br/ri/handle/ri/22848.

Full text
Abstract:
Nowadays, the behavior simulation of deformable objects plays important roles in several fields such as computer graphics, computer-aided design, computer-aided surgery and robotics. The two main categories of deformable models are those based on continuum mechanics, such as the Finite Element Method (FEM) or Isogeometric Analysis (IGA), and those using discrete representations, such as the Mass-Spring Model (MSM). FEM methods are known for their high computational cost and precision, while MSM methods, although simple and affordable for real-time applications, are difficult to parameterize. There is no general physically based or systematic method in the literature to determine the mesh topology or MSM parameters from a known material. Therefore, in this thesis, we propose a methodology to parametrize the MSM based on continuous models, with a focus on the simulation of deformable objects in real time for application in virtual environments. We developed two data-driven approaches to the parametrization of the MSM, using FEM and IGA models as references for derivation with higher-order elements. Based on experimental results, the precision achieved by these new methodologies is higher than that of other approaches in the literature. In particular, our proposal achieves excellent results in the parametrization of the MSM with higher-order elements, which does not occur with other methodologies.
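The data-driven parametrization idea can be illustrated on a toy problem: choose mass-spring parameters so that the MSM reproduces displacements taken from a reference model. In the sketch below a known 1-D chain plays the role of the FEM/IGA reference, and the spring compliances are recovered by least squares; this illustrates the principle only, not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5                                     # springs in a serial chain, node 0 fixed
c_true = rng.uniform(0.5, 2.0, n)         # true compliances 1/k_i (unknown to the fit)

def displacements(c, nodal_forces):
    # Tension in spring i is the sum of forces applied at or beyond node i;
    # node m moves by the accumulated extensions of springs 1..m.
    tension = np.cumsum(nodal_forces[::-1])[::-1]
    return np.cumsum(c * tension)

# Several static load cases, "measured" on the reference model.
load_cases = [rng.uniform(0.0, 1.0, n) for _ in range(4)]
u_ref = [displacements(c_true, f) for f in load_cases]

# Assemble the linear system u = A c and solve for the compliances.
rows, rhs = [], []
for f, u in zip(load_cases, u_ref):
    tension = np.cumsum(f[::-1])[::-1]
    for m in range(n):
        row = np.zeros(n)
        row[: m + 1] = tension[: m + 1]
        rows.append(row)
        rhs.append(u[m])
c_fit, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print("max relative error in recovered stiffness:",
      float(np.max(np.abs(1 / c_fit - 1 / c_true) / (1 / c_true))))
```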
16

Kunnamkumarath, Dhinu Johnson. "A model driven data gathering algorithm for Wireless Sensor Networks." Thesis, Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/540.

Full text
17

Lega, Joceline, and Heidi E. Brown. "Data-driven outbreak forecasting with a simple nonlinear growth model." ELSEVIER SCIENCE BV, 2016. http://hdl.handle.net/10150/622814.

Full text
Abstract:
Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. (C) 2016 The Authors. Published by Elsevier B.V.
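A generic illustration of the kind of fit such a method exploits: cumulative case counts from the early part of an outbreak fitted with a simple logistic growth curve, whose parameters give order-of-magnitude estimates of the final size and the timing of peak incidence. This is not the authors' EpiGro code; data and parameters are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # Cumulative cases C(t) for logistic growth with final size K.
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(0, 30, dtype=float)                    # days since the first report
cases = logistic(t, 5000.0, 0.35, 25.0)              # synthetic cumulative counts
cases += np.random.default_rng(6).normal(0.0, 30.0, t.size)

(K, r, t0), _ = curve_fit(logistic, t, cases, p0=[2 * cases[-1], 0.2, 30.0])
print(f"projected final size ~{K:.0f} cases, peak incidence near day {t0:.0f}")
```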
18

Nesterko, Sergiy O. "Respondent-Driven Sampling and Homophily in Network Data." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10378.

Full text
Abstract:
Data that can be represented as a network, where there are measurements both on units and on pairs of units, are becoming increasingly prevalent in the social sciences and public health. Homophily in network data, or the tendency of units to connect based on similar nodal attribute values (i.e. income, HIV status) more often than expected by chance is receiving strong attention from researchers in statistics, medicine, sociology, public health and others. Respondent-Driven Sampling (RDS) is a link-tracing network sampling strategy heavily used in public health worldwide that is cost efficient and allows us to survey populations inaccessible by conventional techniques. Via extensive simulation we study the performance of existing methods of estimating population averages, and show that they have poor performance if there is homophily on the quantity surveyed. We propose the first model-based approach for this setting and show its superiority as a point estimator and in terms of uncertainty intervals coverage rates, and demonstrate its application to a real life RDS-based survey. We study how the strength of homophily effects can be estimated and compared across networks and different binary attributes under several network sampling schemes. We give a proof that homophily can be effectively estimated under RDS and propose a new homophily index. This work moves towards a deeper understanding of network structure as a function of nodal attributes and network sampling under homophily.
19

Chang, Kerry Shih-Ping. "A Spreadsheet Model for Using Web Services and Creating Data-Driven Applications." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/769.

Full text
Abstract:
Web services have made many kinds of data and computing services available. However, to use web services often requires significant programming efforts and thus limits the people who can take advantage of them to only a small group of skilled programmers. In this dissertation, I will present a tool called Gneiss that extends the spreadsheet model to support four challenging aspects of using web services: programming two-way data communications with web services, creating interactive GUI applications that use web data sources, using hierarchical data, and using live streaming data. Gneiss contributes innovations in spreadsheet languages, spreadsheet user interfaces and interaction techniques to allow programming tasks that currently require writing complex, lengthy code to instead be done using familiar spreadsheet mechanisms. Spreadsheets are arguably the most successful and popular data tools among people of all programming levels. This work advances the use of spreadsheets to new domains and could benefit a wide range of users from professional programmers to end-user programmers.
20

Drobek, Marc. "Data-driven system dynamics modelling : model formulation and KPI prediction in data-rich environments." Thesis, Queen's University Belfast, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.725834.

Full text
Abstract:
System Dynamics (SD) is a key methodology for analysing complex, highly non-linear feedback systems. The SD modelling procedure is traditionally based on domain expert knowledge, manual modelling tasks and a parameter estimation and equation formulation process. These tasks are, however, heavily manual and complex, since the required information was not expected to be available from written and numerical data sources. In recent years, we have seen an explosion in monitored and tracked system data that became known as the Big Data paradigm shift. This change has not yet found its way into the SD domain. Within this thesis, a novel data-driven SD modelling methodology for data-rich environments is proposed to address this paradigm shift. The research work carried out in this thesis explores the potential of utilising massively available data sources for the SD modelling process. Based on these data sources, a modelling methodology (Fexda) is presented that supports the SD modeller in a systematic fashion whilst preserving the key principles of SD modelling. Unlike traditional SD modelling, Fexda, as a data-driven approach, is highly sensitive to changes in the given data, which enables a continuous evolution and optimisation of the computed SD models and their parameters and equations. These contributions are based on advances in other domains, such as econometric modelling, data mining and machine learning, which are incorporated in a novel way for Fexda. A detailed evaluation of the proposed Fexda methodology is further provided against a business use-case scenario to demonstrate the technical feasibility of the approach and to provide comparative results with traditional approaches. The evaluation clearly shows that Fexda can be employed to produce reliable and accurate SD models and provide insightful simulation results. The proposed Fexda methodology is the groundwork towards data-driven SD modelling. A range of potential future research directions is proposed to further strengthen Fexda. The thesis concludes by presenting a revised version of the traditional information sources model that caters for the reality of the Big Data paradigm shift.
21

Lukes, Laura. "Analysis of Model-driven vs. Data-driven Approaches to Engaging Student Learning in Introductory Geoscience Laboratories." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/9906.

Full text
Abstract:
Increasingly, teachers are encouraged to use data resources in their classrooms, which are becoming more widely available on the web through organizations such as Digital Library for Earth System Education, National Science Digital Library, Project Kaleidoscope, and the National Science Teachers Association. As "real" data becomes readily accessible, studies are needed to assess and describe how to effectively use data to convey both content material and the nature of scientific inquiry and discovery. In this study, we created two introductory undergraduate physical geology lab modules for calculating plate motion. One engages students with a model-driven approach using contrived data. Students are taught a descriptive model and work with a set of contrived data that supports the model. The other lab exercise uses a data-driven approach with real data. Students are given the real data and are asked to make sense of it. They must use the data to create a descriptive model. Student content knowledge and understanding of the nature of science were assessed in a pretest-posttest experimental design using a survey containing 11 Likert-like scale questions covering the nature of science and 9 modified true/false format questions covering content knowledge. Survey results indicated that students gained content knowledge and increased their understanding of the nature of science with both approaches. Lab observations and written interviews indicate these gains resulted from students experiencing different pedagogical approaches used in each of the two labs.
22

Jomaa, Diala. "A data driven approach for automating vehicle activated signs." Doctoral thesis, Högskolan Dalarna, Datateknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:du-21504.

Full text
Abstract:
Vehicle activated signs (VAS) display a warning message when drivers exceed a particular threshold. VAS are often installed on local roads to display a warning message depending on the speed of the approaching vehicles. VAS are usually powered by electricity; however, battery and solar powered VAS are also commonplace. This thesis investigated the development of an automatic trigger speed for vehicle activated signs in order to influence driver behaviour, the effect of which has been measured in terms of reduced mean speed and low standard deviation. A comprehensive understanding of the effectiveness of the trigger speed of the VAS on driver behaviour was established by systematically collecting data. Specifically, data on time of day, speed, length and direction of the vehicle have been collected for the purpose, using Doppler radar installed at the road. A data driven calibration method for the radar used in the experiment has also been developed and evaluated. Results indicate that the trigger speed of the VAS had a variable effect on drivers' speed at different sites and at different times of the day. It is evident that the optimal trigger speed should be set near the 85th percentile speed to be able to lower the standard deviation. In the case of battery and solar powered VAS, trigger speeds between the 50th and 85th percentile offered the best compromise between safety and power consumption. Results also indicate that different classes of vehicles report differences in mean speed and standard deviation; on a highway, the mean speed of cars differs slightly from the mean speed of trucks, whereas a significant difference was observed between the classes of vehicles on local roads. A differential trigger speed was therefore investigated for the sake of completeness. A data driven approach using Random Forest was found to be appropriate in predicting trigger speeds with respect to types of vehicles and traffic conditions. The fact that the predicted trigger speed was found to be consistently around the 85th percentile speed justifies the choice of the automatic model.
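The predictive step mentioned at the end of the abstract can be sketched with a random forest trained on time-of-day, vehicle-class and site features to suggest a trigger speed near the 85th percentile speed; the features and data below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 1000
hour = rng.integers(0, 24, n)                     # time of day
is_truck = rng.integers(0, 2, n)                  # simple vehicle class flag
site_limit = rng.choice([30, 50, 70], n)          # posted speed limit at the site

# Synthetic "observed" 85th percentile speeds for each record's conditions.
p85 = (site_limit + 5 - 4 * is_truck
       - 2 * ((hour < 6) | (hour > 22)) + rng.normal(0.0, 1.5, n))

X = np.column_stack([hour, is_truck, site_limit])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, p85)

# Suggested trigger speed for a car approaching a 50 km/h site at 17:00.
print(np.round(model.predict([[17, 0, 50]]), 1))
```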
23

Kondeti, Yashwanth Reddy. "Enhancing the Verification-Driven Learning Model for Data Structures with Visualization." ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/461.

Full text
Abstract:
The thesis aims at teaching various data structures algorithms using the Visualization Learning tool. The main objective of the work is to provide a learning opportunity for novice computer science students to gain a broader exposure towards data structure programming. The visualization learning tool is based on the Verification-Driven Learning model developed for software engineering. The tool serves as a platform for demonstrating visualizations of various data structures algorithms. All the visualizations are designed to emphasize the important operational features of various data structures. The learning tool encourages students into learning data structures by designing Learning Cases. The Learning Cases have been carefully designed to systematically implant bugs in a properly functioning visualization. Students are assigned the task of analyzing the code and also identify the bugs through quizzing. This provides students with a challenging hands-on learning experience that complements students’ textbook knowledge. It also serves as a significant foundation for pursuing future courses in data structures.
24

Essaidi, Moez. "Model-Driven Data Warehouse and its Automation Using Machine Learning Techniques." Paris 13, 2013. http://scbd-sto.univ-paris13.fr/secure/edgalilee_th_2013_essaidi.pdf.

Full text
Abstract:
This thesis aims at proposing an end-to-end approach which allows the automation of the process of model transformations for the development of data warehousing components. The main idea is to reduce as much as possible the intervention of human experts by reusing the traces of transformations produced on similar projects. The goal is to use supervised learning techniques to handle concept definitions with the same expressive level as the manipulated data. The nature of the manipulated data leads us to choose relational languages for the description of examples and hypotheses. These languages have the advantage of being expressive, giving the possibility to express relationships between the manipulated objects, but they have the major disadvantage of lacking algorithms that scale to industrial applications. To solve this problem, we have proposed an architecture that makes the best possible use of the knowledge obtained from transformation invariants between models and metamodels. This way of proceeding has highlighted the dependencies between the concepts to learn and has led us to propose a learning paradigm called dependent-concept learning. Finally, this thesis presents various aspects that may influence the next generation of data warehousing platforms. In particular, it proposes an architecture for business intelligence as a service based on the most recent and promising industrial standards and technologies.
25

Aldherwi, Aiman. "Conceptualising a Procurement 4.0 Model for a truly Data Driven Procurement." Thesis, KTH, Hållbar produktionsutveckling (ML), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297583.

Full text
Abstract:
Purpose - Procurement is an integrated part of the supply chain and crucial for the success of manufacturing. Many organisations have already started the digitalisation of their manufacturing processes using Industry 4.0 technologies and are consequently trying to understand how this would impact the procurement function. The research purpose is to conceptualize a Procurement 4.0 model for a truly data-driven procurement. Two research questions were proposed to address the model from digital-capability and sustainability perspectives. Design/Methodology/Approach - This study is based on a systematic literature review: a method of reviewing the literature and the current research for the purpose of conceptualizing a Procurement 4.0 model. Findings - The findings from the literature review contributed to the development of a proposed Procurement 4.0 model based on Industry 4.0 technologies, applications, mathematical algorithms and procurement process automation. The model contributes to the research field by addressing the gap in the literature about the lack of visualization and conceptualization of Procurement 4.0. Originality/Value - The current literature discusses the advantages, implementation and impact of individual or groups of Industry 4.0 technologies and applications on procurement but lacks visualization of the transformation process of combining the technologies to enable a truly data-driven procurement. This research supports the creation of knowledge in this area. Practical Implementation / Managerial Implications - The proposed model can help managers and digital consultants gain practical knowledge from an academic perspective in the area of Procurement 4.0. The knowledge from the literature and the systematic literature review is used to create knowledge on Procurement 4.0 applications and analytics, taking into consideration the importance of visibility, transparency, optimization and the automation of the procurement function and its sustainability.
Syfte - Upphandling är en integrerad del av supply chain och avgörande för tillverkningens framgång. Många organisationer har redan börjat digitalisera sina tillverkningsprocesser med hjälp av Industry 4.0-teknologier och försöker därför förstå hur detta skulle påverka upphandlingsfunktionen. Målet med studien är att konceptualisera en upphandling av 4.0-modellen för en verkligt datadriven upphandling. Två forskningsfrågor föreslogs för att ta itu med modellen från digital kapacitet och hållbarhet. Design / metod / tillvägagångssätt - Denna studie baseras på en systematisk litteraturstudie. En metod för att granska litteraturen och den aktuella forskningen för att föreslå konceptualisering av en upphandlings 4.0-modell. Resultat - Resultaten från litteraturstudien bidrog till utvecklingen av en föreslagen upphandlings 4.0-modell baserad på Industry 4.0-teknologier, applikationer, matematiska algoritmer och automatisering av upphandlingsprocesser. Modellen bidrar till forskningsområdet genom att ta itu med klyftan i litteraturen om bristen på visualisering och konceptualisering av upphandling 4.0. Originalitet / värde - Den nuvarande litteraturen diskuterar fördelarna, implementeringen och effekten av individer eller en grupp av industri 4.0-teknologier och applikationer på upphandling men saknar visualisering av transformationsprocessen för att kombinera teknologierna för att skapa en verklig datadriven upphandling. Denna forskning stöder skapandet av kunskap inom detta område. Praktisk implementering / chefsimplikationer - Den föreslagna modellen kan stödja chefer och digitala konsulter att ha praktisk kunskap ur ett akademiskt perspektiv inom området upphandling 4.0. Kunskapen från litteraturen och den systematiska litteraturstudien används för att skapa kunskap om inköp 4.0 applikationer och analyser med beaktande av vikten av synlighet, transparens, optimering och automatisering av upphandlingsfunktionen och dess hållbarhet.
APA, Harvard, Vancouver, ISO, and other styles
26

Karlsson, Axel, and Bohan Zhou. "Model-Based versus Data-Driven Control Design for LEACH-based WSN." Thesis, KTH, Maskinkonstruktion (Inst.), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272197.

Full text
Abstract:
In relation to the increasing interest in implementing smart cities, the deployment of widespread wireless sensor networks (WSNs) has become a hot topic. Among the application's greatest challenges, there is still progress to be made concerning energy consumption and quality of service. Consequently, this project aims to explore a series of feasible solutions for improving the energy efficiency of data aggregation in a WSN, by strategically adjusting the position of the receiving base station and the packet rate of the WSN nodes. Additionally, the low-energy adaptive clustering hierarchy (LEACH) protocol is coupled with the WSN state of charge (SoC). For this thesis, a WSN was defined as a two-dimensional area containing sensor nodes and a mobile sink, i.e. a movable base station. Following a rigorous analysis of WSN data clustering principles and system-wide dynamics, two development strategies, model-based and data-driven design, were employed to develop two corresponding control approaches for WSN energy management: model predictive control and reinforcement learning. To test their performance, a simulation environment including the extended LEACH protocol was developed in Python. The amount of data transmitted per unit of energy is adopted as the index for estimating control performance. The simulation results show that the model-based controller was able to aggregate over 22% more bits than the LEACH protocol alone, whereas the data-driven controller performed worse than the LEACH network but showed potential for smaller WSNs containing fewer nodes. Nonetheless, the extension of the LEACH protocol did not give rise to an obvious improvement in energy efficiency, owing to a wide range of differing results.
I samband med det ökande intresset för att implementera så kallade smart cities, har användningen av utbredda trådlösa sensor nätverk (WSN) blivit ett intresseområde. Bland applikationens största utmaningar, finns det fortfarande förbättringar med avseende på energiförbrukning och servicekvalité. Därmed så inriktar sig detta projekt på att utforska en mängd möjliga lösningar för att förbättra energieffektiviteten för dataaggregation inom WSN. Detta gjordes genom att strategiskt justera positionen av den mottagande basstationen samt paketfrekvensen för varje nod. Dessutom påbyggdes low-energy adaptive clustering hierarchy (LEACH) protokollet med WSN:ets laddningstillstånd. För detta examensarbete definierades ett WSN som ett två dimensionellt plan som innehåller sensor noder och en mobil basstation, d.v.s. en basstation som går att flytta. Efter rigorös analys av klustringsmetoder samt dynamiken av ett WSN, utvecklades två kontrollmetoder som bygger på olika kontrollstrategier. Dessa var en modelbaserad MPC kontroller och en datadriven reinforcement learning kontroller som implementerades för att förbättra energieffektiviteten i WSN. För att testa prestandan på dom två kontrollmetoderna, utvecklades en simulations platform baserat på Python, tillsamans med påbyggnaden av LEACH protokollet. Mängden data skickat per energienhet användes som index för att approximera kontrollprestandan. Simuleringsresultaten visar att den modellbaserade kontrollern kunde öka antalet skickade datapacket med 22% jämfört med när LEACH protokollet användes. Medans den datadrivna kontrollern hade en sämre prestanda jämfört med när enbart LEACH protokollet användes men den visade potential för WSN med en mindre storlek. Påbyggnaden av LEACH protokollet gav ingen tydlig ökning med avseende på energieffektiviteten p.g.a. en mängd avvikande resultat.
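As a rough, self-contained illustration of the protocol this thesis extends (not the authors' actual Python simulator), the sketch below shows the standard LEACH cluster-head election rule and the bits-per-energy index used to compare the controllers; the node count, cluster-head fraction and the final numbers are assumed values.

```python
import random

P_CH = 0.05          # desired cluster-head fraction per round (assumed value)
N_NODES = 100        # number of sensor nodes in the network (assumed value)
EPOCH = int(1.0 / P_CH)

def leach_threshold(round_idx, was_ch_this_epoch):
    """Standard LEACH threshold T(n): nodes that already served as cluster
    head in the current epoch are excluded until the epoch ends."""
    if was_ch_this_epoch:
        return 0.0
    return P_CH / (1.0 - P_CH * (round_idx % EPOCH))

def elect_cluster_heads(round_idx, recent_ch):
    """Each node draws a uniform number and becomes cluster head if it falls
    below the LEACH threshold for this round."""
    return [n for n in range(N_NODES)
            if random.random() < leach_threshold(round_idx, n in recent_ch)]

def bits_per_joule(total_bits, total_energy_j):
    """Performance index used to compare controllers: data delivered per unit
    of energy spent by the network."""
    return total_bits / total_energy_j

if __name__ == "__main__":
    recent_ch = set()
    for r in range(5):
        if r % EPOCH == 0:
            recent_ch.clear()           # a new epoch makes all nodes eligible again
        heads = elect_cluster_heads(r, recent_ch)
        recent_ch.update(heads)
        print(f"round {r}: {len(heads)} cluster heads")
    # illustrative totals, not simulation output
    print("index:", bits_per_joule(total_bits=2.2e6, total_energy_j=180.0), "bits/J")
```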
APA, Harvard, Vancouver, ISO, and other styles
27

Zhao, Kaiyu. "A Model-driven Visual Analytic Framework for Local Pattern Analysis." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-dissertations/446.

Full text
Abstract:
The ultimate goal of any visual analytic task is to make sense of the data and gain insights. Unfortunately, the process of discovering useful information is becoming more challenging nowadays due to the growing data scale. In particular, human cognitive capabilities remain constant whereas the scale and complexity of data do not. Meanwhile, visual analytics relies heavily on a human analyst in the loop, which challenges the traditional human-driven workflow: it is almost impossible to show every detail to the user while diving into local regions of the data to explain the phenomena hidden there. For example, while exploring data subsets, it is important to determine which partitions of the data contain more important information. Determining a suitable subset of features is likewise vital before further analysis. Furthermore, modeling these subsets of data locally can yield great findings but also introduces bias. In this work, a model-driven visual analytic framework is proposed to help identify interesting local patterns from the above three aspects. This dissertation tackles these subproblems in three topics: model-driven data exploration, model-driven feature analysis and local model diagnosis. First, model-driven data exploration focuses on modeling subsets of data to identify the co-movement of time-series data within certain time partitions, an important application in a number of domains such as medical science, finance, business and engineering. Second, model-driven feature analysis discovers important subsets of interesting features while analyzing local feature similarities; within a financial risk dataset collected by a domain expert, we discover that the feature correlations in different data partitions (i.e., small and large companies) are very different. Third, local model diagnosis provides a tool for identifying interesting local regression models in local regions of the data space, which makes it possible for analysts to model the whole data space with a set of local models while knowing their strengths and weaknesses. Together, the three tools provide an integrated solution for identifying interesting patterns within local subsets of data.
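A minimal sketch of the local-model idea described above, using synthetic data and scikit-learn rather than the dissertation's framework: a single global regression fits the whole data space poorly, while one regression per partition reveals the differing local trends (the partition rule and the coefficients are invented for illustration).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic data: two latent partitions with different local trends
# (stand-ins for e.g. "small" vs "large" companies in a risk dataset).
X = rng.uniform(0, 10, size=(400, 1))
partition = (X[:, 0] > 5).astype(int)
y = np.where(partition == 0, 2.0 * X[:, 0], -1.5 * X[:, 0] + 20) + rng.normal(0, 1, 400)

global_model = LinearRegression().fit(X, y)
print(f"global R^2: {global_model.score(X, y):.2f}")

# Local model diagnosis: fit one regression per partition and compare fits.
for p in (0, 1):
    mask = partition == p
    local = LinearRegression().fit(X[mask], y[mask])
    print(f"partition {p}: R^2={local.score(X[mask], y[mask]):.2f}, "
          f"slope={local.coef_[0]:+.2f}")
```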
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Baixi. "Gaussian Process Regression-Based Data-Driven Material Models for Stochastic Structural Analysis." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/28827.

Full text
Abstract:
Data-driven material models have attracted many researchers recently because they can use material data directly. However, few previous data-driven models account for material uncertainty. This thesis proposes a new Gaussian Process Regression (GPR)-based approach to capture material behaviour and the associated material uncertainty from a dataset. The GPR approach is first used for nonlinear elastic behaviour, and the resulting GPR-based model is verified against material datasets. Then an improved GPR model, the Heteroscedastic Sparse Gaussian Process Regression (HSGPR) model, is applied to plastic flow behaviour; the flow stress predicted by the HSGPR model also agrees with experiments. As a new data-driven material model is introduced, the associated frameworks, which implement the GPR-based and HSGPR-based models in the finite element method for structural reliability analysis, are developed. A frame problem demonstrates the GPR-based model in elastic stochastic structural analysis, while beam and punch problems validate the HSGPR-based model in plastic stochastic structural analysis. It is concluded that the GPR-based approach can accurately identify both elastic and plastic stochastic structural responses. To account for the possible correlation of stochastic material behaviours, a novel GPR-based approach combining the HSGPR model with the Proper Orthogonal Decomposition (POD) algorithm is proposed. Two case studies, on metal strength and on rock joint behaviour, demonstrate that the correlation between material behaviours can be effectively retained in the POD-HSGPR-based model. As indicated by its application to a rock slope problem, considering the correlation of material properties is critical for an accurate evaluation of structural reliability.
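As a hedged illustration of the general approach (ordinary homoscedastic GPR from scikit-learn, not the HSGPR or POD-HSGPR models developed in the thesis), the following sketch fits a Gaussian process to synthetic stress-strain data and reports the predictive mean with an uncertainty band; the material curve and noise level are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic "material dataset": noisy stress measurements over strain,
# standing in for the experimental data used in the thesis.
strain = rng.uniform(0.0, 0.05, size=80).reshape(-1, 1)
stress = 200e3 * strain[:, 0] * (1 - 8.0 * strain[:, 0]) + rng.normal(0, 50, 80)

# The RBF kernel captures the smooth nonlinear response; the WhiteKernel
# absorbs the scatter, so the posterior spread reflects material uncertainty.
kernel = 1.0 * RBF(length_scale=0.01) + WhiteKernel(noise_level=2500.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(strain, stress)

strain_query = np.linspace(0.0, 0.05, 5).reshape(-1, 1)
mean, std = gpr.predict(strain_query, return_std=True)
for e, m, s in zip(strain_query[:, 0], mean, std):
    print(f"strain={e:.3f}  stress={m:8.1f} +/- {2 * s:6.1f} (95% band)")
```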
APA, Harvard, Vancouver, ISO, and other styles
29

Wiigh, Oscar. "Visualizing partitioned data in Audience Response Systems : A design-driven approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280847.

Full text
Abstract:
Meetings and presentations are often monological in nature, creating a barrier to productivity in workplaces around the world. By utilizing modern technologies such as a web-based Audience Response System (ARS), meetings and presentations can be transformed into interactive exercises where the audience's views, opinions and answers can be expressed. Visualizing these audience responses, and relating question-specific partitioned answers to each other through visualization structures, was the topic of this report. The thesis project was carried out in collaboration with Mentimeter, creator of a web-based ARS and online presentation tool. The Double Diamond design process model was used to investigate and ground the design and development process. To guide the implementation of the prototypes, a focus group was held with four visualization and design professionals knowledgeable about ARSs, to gather feedback on high-fidelity sketches. The final prototypes were evaluated with the extended Technology Acceptance Model (TAM) for information visualization to survey end-users' attitudes and willingness to adopt the visualization structures. Eight end-users tested the final web-based prototypes. The findings of the user tests indicate that both visualization prototypes show promise for visualizing partitioned data in novel ways for ARSs, with an emphasis on a circle-cluster visualization, as it allowed for the desired exploration. The results further imply that there is value to be gained by presenting partitioned data in ways that allow for exploration, and that audiences would likely adopt a full implementation of the visualizations given some added functionality and adjustments. Future research should focus on fully implementing and testing the visualizations in front of a live audience, as well as investigating other contemporary visualization structures and their capabilities for visualizing partitioned ARS data.
Möten och presentationer är ofta sedda som ett produktivitetshinder på arbetsplatser runtom i världen på grund av deras monologiska natur. Genom att använda moderna tekniska lösningar såsom webbaserade Audience Response Systems (ARS) så kan möten och presentationer omvandlas till interaktiva moment där en publiks perspektiv, åsikter och svar kan uttryckas. Att visualisera en publiks svar och relatera frågespecifika partitionerade svar mellan varandra, genom visualiseringar, var denna rapports huvudämne. Projektet utfördes i samarbete med Mentimeter, skapare av ett webbaserat ARS och digitalt presentationsverktyg. Double Diamond-modellen användes för att undersöka och förankra design- och utvecklingsarbetet i projektet. För att guida utvecklingsarbetet, och få feedback på designförslag, genomfördes en fokusgrupp med fyra visualiserings- och designexperter som besatt kunskap om ARS. De framtagna prototyperna utvärderas genom den utökade Technology Acceptance Model (TAM) för att undersöka slutanvändares inställning och villighet att använda visualiseringarna. Totalt testade åtta slutanvändare de framtagna webbaserade prototyperna. Resultatet av användartesterna indikerade att båda visualiseringsprototyperna har potential att visualisera partitionerad data på nya sätt i ARS, men att en klustervisualisering var överlägsen från en utforskningssynpunkt. Resultaten innebär vidare att det finns ett värde i att presentera partitionerad data på sätt som möjliggör utforskning av publikens svar, och att publiken troligen kommer att anta en fullständig implementering av visualiseringarna förutsatt några extra funktioner och justeringar. Framtida forskning bör fokusera på att fullständigt implementera och testa visualiseringarna framför en faktiskt publik, samt undersöka andra samtida visualiseringsstrukturer och deras möjligheter att visualisera partitionerad ARS-data.
APA, Harvard, Vancouver, ISO, and other styles
30

Grimm, Alexander Rudolf. "Parametric Dynamical Systems: Transient Analysis and Data Driven Modeling." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/83840.

Full text
Abstract:
Dynamical systems are a commonly used and studied tool for simulation, optimization and design. In many applications, such as inverse problems, optimal control, shape optimization and uncertainty quantification, these systems depend on a parameter. The need for high fidelity in the modeling stage leads to large-scale parametric dynamical systems, and since these models must be simulated for a variety of parameter values, their computational burden becomes increasingly prohibitive. To address these issues, parametric reduced models have gained popularity in recent years. We are interested in constructing parametric reduced models that represent the full-order system accurately over a range of parameters. First, we define a global joint error measure in the frequency and parameter domain to assess the accuracy of the reduced model. Then, by assuming a rational form for the reduced model with poles in both the frequency and parameter domain, we derive necessary conditions for an optimal parametric reduced model in this joint error measure. Similar to the nonparametric case, Hermite interpolation conditions at the reflected images of the poles characterize the optimal parametric approximant. This result extends the well-known interpolatory H2 optimality conditions by Meier and Luenberger to the parametric case. We also develop a numerical algorithm to construct locally optimal reduced models. The theory and algorithm are data-driven, in the sense that only function evaluations of the parametric transfer function are required, not access to the internal dynamics of the full model. While this first framework operates on the continuous function level, assuming repeated transfer function evaluations are available, in some cases only frequency samples are given, with no option to re-evaluate the transfer function at desired points; in other words, the function samples in parameter and frequency are fixed. In this case, we construct a parametric reduced model that minimizes a discretized least-squares error over the finite set of measurements. Towards this goal, we extend Vector Fitting (VF) to the parametric case, solving a global least-squares problem in both frequency and parameter. The output of this approach may be a reduced model of moderate size; in that case, we perform a post-processing step that further reduces the output of the parametric VF approach using H2-optimal model reduction for a special parametrization. The final model inherits the parametric dependence of the intermediate model but is of smaller order. A special case of a parameter in a dynamical system is a delay in the model equation, arising for example from a feedback loop, reaction time, delayed response and various other physical phenomena. Modeling such a delay brings several challenges for the mathematical formulation, analysis and solution. We address the issue of transient behavior for scalar delay equations. Besides the choice of an appropriate measure, we analyze the impact of the coefficients of the delay equation on the finite-time growth, which can be arbitrarily large purely through the influence of the delay.
Ph. D.
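The data-driven flavour of this work can be sketched for the simplest nonparametric, single-input single-output case: the snippet below builds the Loewner and shifted Loewner matrices purely from transfer-function samples of a toy plant, and the numerical rank of the combined matrix reveals the order of a minimal interpolant. The plant and interpolation points are illustrative, and the parametric extension of the thesis is not shown.

```python
import numpy as np

def transfer_function(s):
    """Toy SISO transfer function standing in for frequency samples of the
    full-order model; only its evaluations are used (data-driven)."""
    return 1.0 / (s**2 + 0.2 * s + 1.0)

# Disjoint left/right interpolation points on the imaginary axis.
left = 1j * np.linspace(0.1, 2.0, 6)
right = 1j * np.linspace(2.1, 4.0, 6)
v = transfer_function(left)    # left samples
w = transfer_function(right)   # right samples

# Loewner and shifted Loewner matrices built purely from the samples.
L = (v[:, None] - w[None, :]) / (left[:, None] - right[None, :])
Ls = (left[:, None] * v[:, None] - right[None, :] * w[None, :]) / (
    left[:, None] - right[None, :])

# For exact data, the rank of [L, Ls] equals the order of a minimal
# realization (here 2 for the second-order toy plant).
rank = np.linalg.matrix_rank(np.hstack([L, Ls]), tol=1e-10)
print("estimated minimal order:", rank)
```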
APA, Harvard, Vancouver, ISO, and other styles
31

Kim, Jee Yun. "Data-driven Methods in Mechanical Model Calibration and Prediction for Mesostructured Materials." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/85210.

Full text
Abstract:
Mesoscale design involving control of the material distribution pattern can create a statistically heterogeneous material system, which has shown increased adaptability to complex mechanical environments involving highly non-uniform stress fields. Advances in multi-material additive manufacturing can aid in this mesoscale design, providing voxel-level control of material properties. This vast freedom in the design space also unlocks possibilities for optimizing the material distribution pattern. The optimization problem can be divided into a forward problem focusing on accurate prediction and an inverse problem focusing on an efficient search for the optimal design. In the forward problem, the physical behavior of the material can be modeled from fundamental mechanics laws and simulated through finite element analysis (FEA). A major limitation in modeling is the unknown parameters in the constitutive equations that describe the constituent materials; determining these parameters via conventional single-material testing has proven insufficient, which necessitates novel and effective approaches to calibration. A calibration framework based on Bayesian inference, which integrates data from simulations and physical experiments, has been applied to a study of a mesostructured material fabricated by fused deposition modeling. Calibration results provide insights into the values these parameters converge to, as well as which material parameters the model output depends on most strongly, while accounting for sources of uncertainty introduced during the modeling process. Additionally, this statistical formulation is able to provide quick predictions of the physical system by implementing a surrogate and a discrepancy model. The surrogate model is a statistical representation of the simulation results, circumventing issues arising from computational load, while the discrepancy model accounts for the difference between the simulation output and physical experiments. In this thesis, this Bayesian calibration framework is applied to a material bending problem, where in-situ mechanical characterization data and FEA simulations based on constitutive modeling are combined to produce updated values of the unknown material parameters with uncertainty.
Master of Science
A material system obtained by applying a pattern of multiple materials has proven its adaptability to complex practical conditions. The layer-by-layer process of additive manufacturing allows for this type of design because of its control over where material is deposited. This possibility then raises the question of how the design of a multi-material system can be optimized for a given application. In this research, we focus mainly on the problem of accurately predicting the response of the material when subjected to stimuli. Conventionally, simulations aided by finite element analysis (FEA) are relied upon for prediction; however, this presents many issues, such as long run times and uncertainty in context-specific inputs to the simulation. We instead adopt a framework using advanced statistical methodology that is able to combine both experimental and simulation data to significantly reduce run times and quantify the various uncertainties associated with running simulations.
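A minimal sketch of the Bayesian calibration idea, assuming a toy one-parameter simulator in place of the FEA model and omitting the surrogate and discrepancy terms used in the thesis: noisy "measurements" are combined with simulator predictions through a random-walk Metropolis sampler to obtain a posterior over the unknown parameter.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulator(theta, x):
    """Stand-in for the FEA model: predicted bending response as a function
    of load x and the unknown material parameter theta (both hypothetical)."""
    return theta * x / (1.0 + 0.1 * x)

# Hypothetical "in-situ" measurements generated with a true theta of 3.0.
x_obs = np.linspace(1.0, 10.0, 15)
y_obs = simulator(3.0, x_obs) + rng.normal(0.0, 0.2, x_obs.size)
sigma = 0.2  # assumed measurement noise

def log_post(theta):
    """Gaussian likelihood with a flat prior on (0, 10]."""
    if theta <= 0.0 or theta > 10.0:
        return -np.inf
    resid = y_obs - simulator(theta, x_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior over theta.
samples, theta = [], 1.0
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[1000:])               # discard burn-in
print(f"posterior mean {post.mean():.2f}, 95% interval "
      f"[{np.percentile(post, 2.5):.2f}, {np.percentile(post, 97.5):.2f}]")
```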
APA, Harvard, Vancouver, ISO, and other styles
32

Ibañez, Pinillo Ruben. "Advanced physics-based and data-driven strategies." Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0021.

Full text
Abstract:
Les sciences de l'ingénieur basées sur la simulation (Simulation Based Engineering Science, SBES) ont apporté des améliorations majeures dans l'optimisation, le contrôle et l'analyse inverse, menant toutes à une meilleure compréhension de nombreux processus se produisant dans le monde réel. Ces percées notables sont présentes dans une grande variété de secteurs tels que l'aéronautique ou l'automobile, les télécommunications mobiles ou la santé, entre autres. Néanmoins, les SBES sont actuellement confrontées à plusieurs difficultés pour fournir des résultats précis dans des problèmes industriels complexes. Outre les coûts de calcul élevés associés aux applications industrielles, les erreurs introduites par la modélisation constitutive deviennent de plus en plus importantes lorsqu'il s'agit de nouveaux matériaux. Parallèlement, un intérêt sans cesse croissant pour des concepts tels que les données massives (big data), l'apprentissage machine ou l'analyse de données a été constaté. En effet, cet intérêt est intrinsèquement motivé par un développement exhaustif des systèmes d'acquisition et de stockage de données. Par exemple, un avion peut produire plus de 500 Go de données au cours d'un seul vol. Ce panorama apporte une opportunité parfaite aux systèmes d'application dynamiques pilotés par les données (Dynamic Data Driven Application Systems, DDDAS), dont l'objectif principal est de fusionner de manière dynamique des algorithmes de simulation classiques avec des données provenant de mesures expérimentales. Dans ce scénario, les données et les simulations ne seraient plus découplées, mais une symbiose à exploiter permettrait d'envisager des situations jusqu'alors inconcevables. En effet, les données ne seront plus comprises comme un étalonnage statique d'un modèle constitutif donné mais plutôt comme une correction dynamique du modèle dès que les données expérimentales et les simulations auront tendance à diverger. Plusieurs algorithmes numériques seront présentés tout au long de ce manuscrit dont l'objectif principal est de renforcer le lien entre les données et la mécanique computationnelle. La première partie de la thèse est principalement axée sur l'identification des paramètres, les techniques d'analyse des données et les techniques de complétion de données. La deuxième partie est axée sur les techniques de réduction de modèle (MOR), car elles constituent un allié fondamental pour satisfaire les contraintes temps réel découlant du cadre DDDAS
Simulation Based Engineering Science (SBES) has brought major improvements in optimization, control and inverse analysis, all leading to a deeper understanding of many processes occurring in the real world. These noticeable breakthroughs are present in a vast variety of sectors such as the aeronautic and automotive industries, mobile telecommunications and healthcare, among many other fields. Nevertheless, SBES currently confronts several difficulties in providing accurate results for complex industrial problems. Apart from the high computational costs associated with industrial applications, the errors introduced by constitutive modeling become more and more important when dealing with new materials. Concurrently, an unceasingly growing interest in concepts such as Big Data, Machine Learning and Data Analytics has been experienced. Indeed, this interest is intrinsically motivated by exhaustive development in both data-acquisition and data-storage systems. For instance, an aircraft may produce over 500 GB of data during a single flight. This panorama brings a perfect opportunity to the so-called Dynamic Data Driven Application Systems (DDDAS), whose main objective is to merge classical simulation algorithms with data coming from experimental measurements in a dynamic way. Within this scenario, data and simulations would no longer be uncoupled; rather, exploiting their symbiosis would achieve milestones that were inconceivable until now. Indeed, data will no longer be understood as a static calibration of a given constitutive model; rather, the model will be corrected dynamically as soon as experimental data and simulations tend to diverge. Several numerical algorithms are presented throughout this manuscript, whose main objective is to strengthen the link between data and computational mechanics. The first part of the thesis is mainly focused on parameter identification, data-driven techniques and data completion techniques. The second part is focused on Model Order Reduction (MOR) techniques, since they constitute a fundamental ally in meeting the real-time constraints arising from the DDDAS framework.
APA, Harvard, Vancouver, ISO, and other styles
33

Gutierrez, Arturo M. "A manufacturing model to support data-driven applications for design and manufacture." Thesis, Loughborough University, 1995. https://dspace.lboro.ac.uk/2134/7218.

Full text
Abstract:
This thesis is primarily concerned with conceptual work on the Manufacturing Model: an information model which describes the manufacturing capability of an enterprise. To achieve general applicability, the model consists of the entities that are relevant and important for any type of manufacturing firm, namely manufacturing resources (e.g. machines, tools, fixtures, machining cells, operators), manufacturing processes (e.g. injection moulding, machining processes) and manufacturing strategies (i.e. how these resources and processes are used and organized). The Manufacturing Model is a four-level model based on a de facto standard (Factory, Shop, Cell, Station) which represents the functionality of the manufacturing facility of any firm. In the course of the research, the concept of data-driven applications has emerged in response to the need for integrated and flexible computer environments to support design and manufacturing activities. These data-driven applications require different information models to capture and represent the company's information and knowledge; one of these information models is the Manufacturing Model. The value of this research work is highlighted by two case studies, one concerning the representation of a single machining station and the other the representation of a multi-cellular manufacturing facility of a high-performance company.
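Purely as an illustration of how the four-level hierarchy described above might be encoded for a data-driven application (the class and attribute names below are invented, not taken from the thesis), a minimal Python data model could look like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    name: str
    kind: str                     # e.g. "machine", "tool", "fixture", "operator"

@dataclass
class Process:
    name: str                     # e.g. "milling", "injection moulding"
    resources: List[Resource] = field(default_factory=list)

@dataclass
class Station:
    name: str
    processes: List[Process] = field(default_factory=list)

@dataclass
class Cell:
    name: str
    stations: List[Station] = field(default_factory=list)

@dataclass
class Shop:
    name: str
    cells: List[Cell] = field(default_factory=list)

@dataclass
class Factory:
    name: str
    shops: List[Shop] = field(default_factory=list)

    def capabilities(self):
        """Flatten the hierarchy into the set of processes the factory offers,
        which is what a data-driven design application would query."""
        return {p.name
                for shop in self.shops for cell in shop.cells
                for station in cell.stations for p in station.processes}

mill = Station("M1", [Process("3-axis milling", [Resource("VMC-01", "machine")])])
factory = Factory("Plant A", [Shop("Machining", [Cell("Cell-1", [mill])])])
print(factory.capabilities())
```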
APA, Harvard, Vancouver, ISO, and other styles
34

Limam, Lyes. "Usage-driven unified model for user profile and data source profile extraction." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0058/document.

Full text
Abstract:
La problématique traitée dans la thèse s’inscrit dans le cadre de l’analyse d’usage dans les systèmes de recherche d’information. En effet, nous nous intéressons à l’utilisateur à travers l’historique de ses requêtes, utilisées comme support d’analyse pour l’extraction d'un profil d’usage. L’objectif est de caractériser l’utilisateur et les sources de données qui interagissent dans un réseau afin de permettre des comparaisons utilisateur-utilisateur, source-source et source-utilisateur. Selon une étude que nous avons menée sur les travaux existants sur les modèles de profilage, nous avons conclu que la grande majorité des contributions sont fortement liés aux applications dans lesquelles ils étaient proposés. En conséquence, les modèles de profils proposés ne sont pas réutilisables et présentent plusieurs faiblesses. Par exemple, ces modèles ne tiennent pas compte de la source de données, ils ne sont pas dotés de mécanismes de traitement sémantique et ils ne tiennent pas compte du passage à l’échelle (en termes de complexité). C'est pourquoi, nous proposons dans cette thèse un modèle d’utilisateur et de source de données basé sur l’analyse d’usage. Les caractéristiques de ce modèle sont les suivantes. Premièrement, il est générique, permettant de représenter à la fois un utilisateur et une source de données. Deuxièmement, il permet de construire le profil de manière implicite à partir de l’historique de requêtes de recherche. Troisièmement, il définit le profil comme un ensemble de centres d’intérêts, chaque intérêt correspondant à un cluster sémantique de mots-clés déterminé par un algorithme de clustering spécifique. Et enfin, dans ce modèle le profil est représenté dans un espace vectoriel. Les différents composants du modèle sont organisés sous la forme d’un Framework, la complexité de chaque composant y est évaluée. Le Framework propose : - une méthode pour la désambigüisation de requêtes; - une méthode pour la représentation sémantique des logs sous la forme d’une taxonomie ; - un algorithme de clustering qui permet l’identification rapide et efficace des centres d’intérêt représentés par des clusters sémantiques de mots clés ; - une méthode pour le calcul du profil de l’utilisateur et du profil de la source de données à partir du modèle générique. Le Framework proposé permet d'effectuer différentes tâches liées à la structuration d’un environnement distribué d’un point de vue usage. Comme exemples d’application, le Framework est utilisé pour la découverte de communautés d’utilisateurs et la catégorisation de sources de données. Pour la validation du Framework, une série d’expérimentations est menée en utilisant des logs du moteur de recherche AOL-search, qui ont démontrées l’efficacité de la désambigüisation sur des requêtes courtes, et qui ont permis d’identification de la relation entre le clustering basé sur une fonction de qualité et le clustering basé sur la structure
This thesis addresses a problem related to usage analysis in information retrieval systems. We exploit the history of search queries as a support for analysis to extract a profile model. The objective is to characterize the users and the data sources that interact in a system, to allow different types of comparison (user-to-user, source-to-source, user-to-source). According to our study of previous work on profile models, the large majority of contributions are strongly tied to the applications within which they were proposed. As a result, the proposed profile models are not reusable and suffer from several weaknesses. For instance, these models do not consider the data source, they lack semantic mechanisms and they do not deal with scalability (in terms of complexity). Therefore, we propose a generic model of user and data source profiles with the following characteristics. First, it is generic, being able to represent both the user and the data source. Second, it enables profiles to be constructed implicitly from histories of search queries. Third, it defines the profile as a set of topics of interest, each topic corresponding to a semantic cluster of keywords extracted by a specific clustering algorithm. Finally, the profile is represented according to the vector space model. The model is composed of several components organized in the form of a framework, in which we assess the complexity of each component. The main components of the framework are: a method for disambiguating keyword queries; a method for semantically representing search query logs in the form of a taxonomy; a clustering algorithm that allows fast and efficient identification of topics of interest as semantic clusters of keywords; and a method to identify user and data source profiles according to the generic model. This framework enables, in particular, various tasks related to the usage-based structuring of a distributed environment. As examples of application, the framework is used for the discovery of user communities and the categorization of data sources. To validate the proposed framework, we conduct a series of experiments on real logs from the search engine AOL search, which demonstrate the efficiency of the disambiguation method on short queries and show the relation between quality-based clustering and structure-based clustering.
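A rough sketch of the profiling pipeline described above, assuming hypothetical query histories and off-the-shelf scikit-learn components (TF-IDF, KMeans and cosine similarity) in place of the thesis' disambiguation, taxonomy and clustering methods:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical search-query histories for two users (stand-ins for AOL logs).
history = {
    "user_a": ["cheap flights paris", "paris hotels", "python pandas tutorial",
               "flight deals rome", "numpy array slicing"],
    "user_b": ["marathon training plan", "running shoes review",
               "paris marathon registration", "interval training tips"],
}

vectorizer = TfidfVectorizer()
all_queries = [q for qs in history.values() for q in qs]
X = vectorizer.fit_transform(all_queries)

# Topics of interest as clusters of queries (plain KMeans here, whereas the
# thesis uses its own semantic clustering algorithm).
topics = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for label in range(3):
    members = [q for q, l in zip(all_queries, topics.labels_) if l == label]
    print(f"topic {label}: {members}")

# Profile = average TF-IDF vector of a user's queries (vector space model),
# so user-to-user comparison reduces to cosine similarity.
profiles = {u: vectorizer.transform(qs).mean(axis=0).A
            for u, qs in history.items()}
sim = cosine_similarity(profiles["user_a"], profiles["user_b"])[0, 0]
print(f"user_a vs user_b similarity: {sim:.2f}")
```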
APA, Harvard, Vancouver, ISO, and other styles
35

Larsson, Olsson Christoffer, and Erik Svensson. "Early Warning Leakage Detection for Pneumatic Systems on Heavy Duty Vehicles : Evaluating Data Driven and Model Driven Approach." Thesis, KTH, Mekatronik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-261207.

Full text
Abstract:
Modern Heavy Duty Vehicles consist of a multitude of components and operate in various conditions. As there is value in the goods transported, there is an incentive to avoid unplanned breakdowns, and for this, condition-based maintenance can be applied. This thesis presents a study comparing the applicability of the data-driven Consensus Self-Organizing Models (COSMO) method and the model-driven patent series introduced by Fogelstrom, applied to the air processing system for leakage detection on Scania Heavy Duty Vehicles. The two methods are compared using the Area Under Curve value given by the Receiver Operating Characteristics curves for features in order to reach a verdict. For this purpose, three criteria were investigated. First, the effects of the hyper-parameters were explored to determine the fleet size and time period required for COSMO to function. The second experiment examined whether environmental factors impact the predictability of the method, and finally the effect on predictability in the case of non-identical vehicles was determined. The results indicate that the number of representations ought to be at least 60, preferably with a larger set of vehicles in the fleet rather than a larger window size, and that the vehicles should be close to identical at a component level and be used in comparable ambient conditions. In cases where the vehicle fleet is heterogeneous, a physical model of each system is preferable, as this produces more stable results than the COSMO method.
Moderna tunga fordon består av ett stort antal komponenter och används i många olika miljöer. Då värdet för tunga fordon ofta består i hur mycket gods som transporteras uppstår ett incitament till att förebygga oplanerade stopp. Detta görs med fördel med hjälp av tillståndsbaserat underhåll. Denna avhandling undersöker användbarheten av den data-drivna metoden Consensus SelfOrganizing Models (COSMO) kontra en modellbaserad patentserie för att upptäcka läckage på luftsystem i tunga fordon. Metoderna ställs mot varandra med hjälp av Area Under Curve-värdet som kommer från Receiver Operating Characteristics-kurvor från beskrivande signaler. Detta gjordes genom att utvärdera tre kriterier. Dels hur hyperparametrar influerar COSMOmetoden för att avgöra en rimlig storlek på fordonsflottan, dels huruvida omgivningsförhållanden påverkar resultatet och slutligen till vilken grad metoden påverkas av att fordonsflottan inte är identisk. Slutsatsen är att COSMO-metoden med fördel kan användas sålänge antalet representationer överstiger 60 och att fordonen inom flottan är likvärdiga och har använts inom liknande omgivningsförhållanden. Om fordonsflottan är heterogen så föredras en fysisk modell av systemet då detta ger ett mer stabilt resultat jämfört med COSMO-metoden.
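The Area Under Curve comparison used in this thesis can be illustrated with a small synthetic example; the labels and deviation features below are invented, and the snippet only shows how an informative feature separates faulty from healthy vehicles better than a noisy one.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Labels: 1 = vehicle later confirmed to have an air-system leakage (assumed).
y = rng.integers(0, 2, size=500)

# Two candidate deviation-level features produced by a detection method, one
# informative (shifted for faulty vehicles) and one close to random.
informative = rng.normal(0.0, 1.0, 500) + 1.2 * y
noisy = rng.normal(0.0, 1.0, 500) + 0.1 * y

# The Area Under the ROC Curve scores how well each feature separates
# faulty from healthy vehicles (0.5 = chance, 1.0 = perfect).
for name, feature in [("informative", informative), ("noisy", noisy)]:
    print(f"{name:12s} AUC = {roc_auc_score(y, feature):.2f}")
```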
APA, Harvard, Vancouver, ISO, and other styles
36

Sidhu, Bobjot Singh. "Exploring Data Driven Models of Transit Travel Time and Delay." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1601.

Full text
Abstract:
Transit travel time and operating speed influence service attractiveness, operating cost, system efficiency and sustainability. The Tri-County Metropolitan Transportation District of Oregon (TriMet) provides public transportation service in the tri-county Portland metropolitan area. TriMet was one of the first transit agencies to implement a Bus Dispatch System (BDS) as part of its overall service control and management system, and it has had the foresight to fully archive the BDS automatic vehicle location and automatic passenger count data for all bus trips at the stop level since 1997. More recently, the BDS was upgraded to provide stop-level data plus 5-second-resolution bus positions between stops. Rather than relying on prediction tools to determine bus trajectories (including stops and delays) between stops, the higher-resolution data presents actual bus positions along each trip, from which bus travel speeds and intersection signal/queuing delays may be determined. This thesis examines the potential applications of higher-resolution transit operations data for a bus route in Portland, Oregon: TriMet Route 14. BDS and 5-second-resolution data from all trips during October 2014 are used to determine the impacts and evaluate candidate trip time models. Comparisons are made between the models, and conclusions are drawn regarding the utility of the higher-resolution transit data. In previous research, inter-stop models were developed using the average or maximum speed between stops; this does not represent the realistic conditions of stopping at a signal or crosswalk, or of traffic congestion along the link. A new inter-stop trip time model is therefore developed using the 5-second-resolution data to determine the number of signals encountered by the bus along the route, since the variability in inter-stop time is likely due to the delay superimposed by the signals encountered. The newly developed model yielded statistically significant results. This type of information is important to transit agencies looking to improve bus running times and reliability. These results, the benefits of archiving higher-resolution data to understand bus movement between stops, and future research opportunities are also discussed.
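As a hedged sketch of the kind of inter-stop model discussed above (with synthetic data standing in for the TriMet archive, and an assumed 20-second delay per signal built into the data generator), an ordinary least-squares regression recovers the per-signal delay from link length, dwell time and the number of signals encountered:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 1000

# Synthetic inter-stop observations standing in for the 5-second resolution
# data: link length, dwell time, and number of signals encountered.
link_length_m = rng.uniform(200, 600, n)
dwell_s = rng.exponential(10, n)
signals_encountered = rng.integers(0, 4, n)

# Assumed ground truth: ~5 m/s running speed plus ~20 s delay per signal hit.
trip_time_s = (link_length_m / 5.0 + dwell_s
               + 20.0 * signals_encountered + rng.normal(0, 5, n))

X = np.column_stack([link_length_m, dwell_s, signals_encountered])
model = LinearRegression().fit(X, trip_time_s)

print("R^2:", round(model.score(X, trip_time_s), 3))
print(f"estimated delay per signal encountered: {model.coef_[2]:.1f} s")
```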
APA, Harvard, Vancouver, ISO, and other styles
37

Kergus, Pauline. "Data-driven model reference control in the frequency-domain : From model reference selection to controller validation." Thesis, Toulouse, ISAE, 2019. http://www.theses.fr/2019ESAE0031.

Full text
Abstract:
Dans de nombreuses applications, aucun modèle physique du système n'est disponible, il s'agit alors de contrôler le système à partir de mesures entrées-sorties. Deux types d'approches sont envisageables : identifier un modèle du système puis l'utiliser afin de synthétiser un contrôleur, ce sont les méthodes indirectes, ou identifier le contrôleur directement à partir des données du système, ce sont les méthodes directes. Cette thèse se concentre sur les méthodes directes : l'objectif du travail présenté est de mettre en place une nouvelle méthode directe basée sur des données fréquentielles du système à contrôler. Après un tour d’horizon des méthodes indirectes existantes la méthode proposée est introduite. Il s’agit de résoudre un problème de suivi de modèle de référence : le problème d’identification est déporté du système vers le contrôleur. Dans ce cadre, deux techniques d’identification sont considérées dans cette thèse : l’interpolation de Loewner et l’approche des sous-espaces. De plus, les instabilités du système sont estimées en projetant les données fréquentielles disponibles. Cela permet de connaître les limites en performances du système et, par conséquent, de choisir des spécifications atteignables. Enfin, une analyse de la stabilité en boucle fermée permet d’obtenir un contrôleur stabilisant d’ordre réduit. Tout au long de ce travail, les différentes étapes de la méthode sont appliquées progressivement sur des exemples numériques. Pour finir, la méthode proposée est appliquée sur deux systèmes irrationnels, décrits par des équations aux dérivées partielles: un cristalliseur continu et un canal de génération. Ces deux exemples sont représentatifs de la catégorie de systèmes pour lesquels utiliser une méthode de contrôle directe est plus pertinent
In many applications, no physical description of the plant is available and the control law has to be designed on the basis of input-output measurements only. Two control strategies can then be considered: one can either identify a model of the plant and then use any kind of model-based technique to obtain a control law (indirect methods), or use a data-driven strategy that computes the controller directly from the experimental data (direct methods). This work focuses on data-driven techniques: the objective of this thesis is to propose a new data-driven control technique based on frequency-domain data collected from the system to be controlled. After recalling some basics of feedback control, an overview of data-driven control is given. Then the proposed method is introduced. It is a model reference technique: the identification problem is moved from the plant to the controller. In this work, two identification techniques are used for that purpose: the Loewner framework and the subspace approach. In addition, a technique is proposed to estimate the system's instabilities, which allows the performance limitations to be determined and achievable specifications to be selected. Finally, a stability condition already known in data-driven control is used during the reduction of the controller to ensure closed-loop stability. Throughout the thesis, the different steps of the method are progressively applied to two numerical examples. In the end, the proposed technique is applied to two irrational systems described by partial differential equations: a continuous crystallizer and an open channel for hydroelectricity generation. These two examples illustrate the type of applications for which a data-driven control method is indicated.
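A minimal numerical sketch of the model-reference idea described above: given frequency-domain samples of the plant and a chosen reference model, the ideal controller response follows from M = PK/(1 + PK). The plant, reference model and frequency grid below are illustrative, and the identification (Loewner or subspace) and stability steps of the thesis are not reproduced.

```python
import numpy as np

# Frequency grid and "measured" plant response (a toy second-order plant
# standing in for experimental frequency-domain data).
w = np.logspace(-1, 2, 50)
s = 1j * w
P = 2.0 / (s**2 + 0.4 * s + 2.0)

# Chosen reference model M: the desired closed-loop behaviour, which must be
# achievable, i.e. respect the plant's limitations, as discussed in the thesis.
M = 1.0 / (0.5 * s + 1.0)

# Ideal controller frequency response: M = P*K/(1 + P*K)  =>  K = M / (P*(1 - M)).
K_ideal = M / (P * (1.0 - M))

# These samples are what the identification step would then fit with a
# low-order rational controller.
print("ideal controller gain at the lowest/highest frequencies:",
      abs(K_ideal[0]).round(2), abs(K_ideal[-1]).round(2))
```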
APA, Harvard, Vancouver, ISO, and other styles
38

Koseler, Kaan Tamer. "Realization of Model-Driven Engineering for Big Data: A Baseball Analytics Use Case." Miami University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=miami1524832924255132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Tan, Lujiao. "Data-Driven Marketing: Purchase Behavioral Targeting in Travel Industry based on Propensity Model." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14745.

Full text
Abstract:
By means of data-driven marketing and big data technology, this paper presents a case study from the travel industry implemented with a combination of a propensity model and a business model called “2W1H”. The “2W1H” model represents the purchasing behaviours “What to buy”, “When to buy” and “How to buy”. This paper presents the process of building propensity models for behavioral targeting in the travel industry.     By combining the propensity scores from predictive analysis and logistic regression with appropriate marketing and CRM strategies when communicating with travelers, the “2W1H” model can perform personalized targeting for evaluating marketing strategy and performance. By analyzing the “2W1H” business model and the propensity model built for each of its components, through both validation of the model on the training and test data sets and validation against actual marketing activities, it is shown that predictive analytics plays a vital role in implementing behavioral targeting of travelers' purchases in marketing.
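A minimal sketch of a logistic-regression propensity model of the kind described above; the traveller features, the label-generating rule and the targeting rule are all assumptions made for illustration, not the case study's actual data or scoring pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000

# Hypothetical traveller features: days since last booking, past purchases of
# an ancillary product (e.g. seat reservation), and email click-through rate.
X = np.column_stack([rng.exponential(60, n),
                     rng.poisson(1.5, n),
                     rng.uniform(0, 1, n)])

# Assumed purchase behaviour used only to generate illustrative labels.
logit = -2.0 - 0.01 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 2]
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Propensity scores: probability that a traveller buys the product ("what to
# buy"); the top-scoring segment is the one a campaign would target.
scores = model.predict_proba(X_te)[:, 1]
target = np.argsort(scores)[::-1][:5]
print("top propensity scores:", scores[target].round(2))
```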
APA, Harvard, Vancouver, ISO, and other styles
40

Hardcastle, J. J. "Model-driven analysis of high-throughput genomic data in late-stage ovarian cancer." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.603681.

Full text
Abstract:
In this thesis, a number of techniques are developed for the integration of high-throughput genomic and clinical data. These techniques are motivated by, and demonstrated on, a small-scale study of advanced sporadic invasive epithelial ovarian cancer, CTCR-OV01. In the first part of the thesis, clinical data from the CTCR-OV01 study are introduced, and a set of biologically motivated hypotheses on the study, based on the existing literature, is described. A novel approach to the analysis of continuous mRNA expression in terms of hypotheses on discrete clinical sets is developed; this work extends conventional methods by allowing hypotheses that predict both similarities within and differences between clinical sets. These methods are demonstrated on simulated data, after which tests on real data from the CTCR-OV01 study show low false discovery rates in assessing hypotheses on the data. Comparisons with alternative approaches show that the method is of value. An alternative approach to mRNA expression analysis, in which mRNA expression data are integrated with both continuous and discrete clinical data in a mixed-effects model, is then presented, together with methods of producing a continuous measure of response. A number of genes selected by the methods developed are validated by experiment. Finally, a set of novel statistical methods is developed for the analysis of array CGH data: empirical Bayes techniques able to assess a number of hypotheses on array CGH data are established and tested on CTCR-OV01 data. Results from this analysis are encouraging from a biological standpoint and show some correlation with results acquired in the mRNA expression analysis.
APA, Harvard, Vancouver, ISO, and other styles
41

Haase, Daniel [Verfasser]. "Robust Data- and Model-Driven Anatomical Landmark Localization in Biomedical Scenarios / Daniel Haase." München : Verlag Dr. Hut, 2015. http://d-nb.info/1075409039/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Markowitz, Jared (Jared John). "A data-driven neuromuscular model of walking and its application to prosthesis control." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/83822.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 119-123).
In this thesis we present a data-driven neuromuscular model of human walking and its application to prosthesis control. The model is novel in that it leverages tendon elasticity to more accurately predict the metabolic consumption of walking than conventional models. Paired with a reflex-based neural drive the model has been applied in the control of a robotic ankle-foot prosthesis, producing speed adaptive behavior. Current neuromuscular models significantly overestimate the metabolic demands of walking. We believe this is because they do not adequately consider the role of elasticity; specifically the parameters that govern the force-length relations of tendons in these models are typically taken from published values determined from cadaver studies. To investigate this issue we first collected kinematic, kinetic, electromyographic (EMG), and metabolic data from five subjects walking at six different speeds. The kinematic and kinetic data were used to estimate muscle lengths, muscle moment arms, and joint moments while the EMG data were used to estimate muscle activations. For each subject we performed a kinematically clamped optimization, varying the parameters that govern the force-length curve of each tendon while simultaneously seeking to minimize metabolic cost and maximize agreement with the observed joint moments. We found a family of parameter sets that excel at both objectives, providing agreement with both the collected kinetic and metabolic data. This identification allows us to accurately predict the metabolic cost of walking as well as the force and state of individual muscles, lending insight into the roles and control objectives of different muscles throughout the gait cycle. This optimized muscle-tendon morphology was then applied with an optimized linear reflex architecture in the control of a powered ankle-foot prosthesis. Specifically, the model was fed the robot's angle and state and used to command output torque. Clinical trials were conducted that demonstrated speed adaptive behavior; commanded net work was seen to increase with walking speed. This result supports both the efficacy of the modeling approach and its potential utility in controlling life-like prosthetic limbs.
by Jared Markowitz.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
43

Yang, Qingsong. "MODEL-BASED AND DATA DRIVEN FAULT DIAGNOSIS METHODS WITH APPLICATIONS TO PROCESS MONITORING." Case Western Reserve University School of Graduate Studies / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=case1080246972.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hebig, Regina. "Evolution of model-driven engineering settings in practice." Phd thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7076/.

Full text
Abstract:
Nowadays, software systems are becoming more and more complex. To tackle this challenge, diverse techniques, such as design patterns, service oriented architectures (SOA), software development processes, and model-driven engineering (MDE), are used to improve productivity while time to market and product quality stay stable. Several of these techniques are used in parallel to profit from their benefits. While the use of sophisticated software development processes is standard today, MDE is only now being adopted in practice, and research has shown that its application is not always successful. It is not fully understood when the advantages of MDE can be exploited and to what degree MDE can also be disadvantageous for productivity. Further, when combining different techniques that aim to affect the same factor (e.g. productivity), the question arises whether these techniques really complement each other or, on the contrary, cancel out each other's effects. This raises the concrete question of how MDE and other techniques, such as the software development process, are interrelated. Both aspects (advantages and disadvantages for productivity as well as the interrelation with other techniques) need to be understood in order to identify risks relating to the productivity impact of MDE. Before studying MDE's impact on productivity, it is necessary to investigate the range of validity that can be reached for the results. This includes two questions: first, whether MDE's impact on productivity is similar for all approaches to adopting MDE in practice; and second, whether MDE's impact on productivity for a given approach remains stable over time. The answers to both questions are crucial for handling the risks of MDE, but also for the design of future studies on MDE success. This thesis addresses these questions with the goal of supporting the adoption of MDE in the future. To enable a differentiated discussion about MDE, the term "MDE setting" is introduced. An MDE setting refers to the applied technical setting, i.e. the employed manual and automated activities, artifacts, languages, and tools. An MDE setting's possible impact on productivity is studied with a focus on changeability and the interrelation with software development processes. This is done by introducing a taxonomy of changeability concerns that might be affected by an MDE setting. Further, three MDE traits are identified, and it is studied for which manifestations of these traits software development processes are impacted. To enable the assessment and evaluation of an MDE setting's impacts, the Software Manufacture Model language is introduced: a process modeling language that allows reasoning about how relations between (modeling) artifacts (e.g. models or code files) change during the application of manual or automated development activities. On that basis, risk analysis techniques are provided that allow changeability risks to be identified and the manifestations of the MDE traits (and with them an MDE setting's impact on software development processes) to be assessed. To address the range of validity, MDE settings from practice and their evolution histories were captured in the context of this thesis. First, these data are used to show that MDE settings cover the whole spectrum concerning their impact on changeability and their interrelation with software development processes: it is neither rare for MDE settings to be neutral with respect to processes nor rare for them to have an impact on processes.
Similarly, the impact on changeability varies considerably. Second, a taxonomy of the evolution of MDE settings is introduced. In that context, it is discussed to what extent different types of changes to an MDE setting can influence that setting's impact on changeability and its interrelation with processes. The category of structural evolution, which can change these characteristics of an MDE setting, is identified. The captured MDE settings from practice are used to show that structural evolution exists and is common. In addition, examples of structural evolution steps are collected that actually led to a change in the characteristics of the respective MDE settings. Two implications follow: first, the assessed diversity of MDE settings confirms the need for the analysis techniques presented in this thesis; second, evolution is one explanation for the diversity of MDE settings in practice. To summarize, this thesis studies the nature and evolution of MDE settings in practice. As a result, support for the adoption of MDE settings is provided in the form of techniques for identifying risks relating to productivity impacts.
To cope with the growing complexity of software systems, a wide variety of techniques is applied in combination today. Examples include design patterns, service-oriented architectures, software development processes, and model-driven engineering (MDE). The goal is to increase productivity so that development time and quality can remain stable. While highly developed software development processes are already used as a matter of course, companies are only beginning to adopt MDE. Studies show, however, that the hoped-for benefits of MDE do not materialize in every case. It thus appears that there is not yet a sufficient understanding of how MDE can also bring disadvantages for productivity. In addition, when different techniques are combined, the effects achieved may cancel each other out instead of complementing each other. This raises the question of how MDE interacts with other techniques, such as software development processes. Both aspects, the direct impact on productivity and the interplay with other techniques, must be understood in order to identify risks for the productivity impact of MDE. Furthermore, the generalizability of these aspects must be examined: is the productivity impact the same for every application of MDE, and does it remain stable over time? Both questions are decisive if suitable risk treatment is to be enabled or future studies on the success of MDE are to be planned. This dissertation addresses these questions. To this end, the term 'MDE setting' is first introduced to allow a differentiated view of MDE usage. An MDE setting is the technical setup, including manual and automated activities, artifacts, languages, and tools. The possible productivity impacts of MDE settings are examined with a focus on changeability and the interplay with software development processes. For this purpose, a taxonomy of 'changeability concerns' (aspects of changeability that can potentially be affected) is presented. In addition, three 'MDE traits' (characteristics of MDE settings that can take different forms) are identified, and it is examined which manifestations of these traits can influence software development processes. To enable these influences to be captured and assessed, the Software Manufacture Model language is introduced. This process modeling language allows the changes in artifact relations during the application of activities (e.g., code generation) to be described. Based on these models, analysis techniques are then introduced. They make it possible to uncover risks for specific changeability concerns and to capture the manifestation of MDE traits (and thus the influence on software development processes). To study the generalizability of the results, several MDE settings from practice, and in part their evolution histories, were collected in the course of this work. They show that MDE settings span a broad spectrum of impacts on changeability and processes: it is neither rare for an MDE setting to be neutral with respect to processes, nor for an MDE setting to imply restrictions on a process.
The impact on changeability is similarly broad. In addition, it is discussed to what extent different types of evolution can change an MDE setting's impact on changeability and processes. This discussion leads to the identification of 'structural evolution', which can strongly affect the characteristics of an MDE setting mentioned above. Using the captured MDE settings, it is shown that structural evolution is common in practice. Finally, examples are uncovered in which structural evolution steps actually led to a change in the characteristics of the respective MDE setting. On the one hand, the observed diversity reinforces the need for analysis techniques such as those introduced in this dissertation. On the other hand, it now appears that evolution at least partly explains the different manifestations of MDE settings. In summary, this thesis studies what MDE settings and their evolution look like in practice. As a result, techniques for identifying risks for productivity impacts are provided to support the adoption of MDE settings.
APA, Harvard, Vancouver, ISO, and other styles
45

Hosseini, Rahilsadat. "Wastewater's total influent estimation and performance modeling: a data driven approach." Thesis, University of Iowa, 2011. https://ir.uiowa.edu/etd/2716.

Full text
Abstract:
Wastewater treatment plants (WWTPs) involve several complex physical, biological, and chemical processes. These processes often exhibit non-linear behavior that is difficult to describe with classical mathematical models. Safer operation and control of a WWTP can be achieved by developing a modeling tool that predicts plant performance. In the last decade, many studies have applied intelligent methods to model WWTPs, addressing the prediction of WWTP parameters, process control, and the estimation of the characteristics of WWTP output parameters. In many of these studies, neural network models were used to model chemical and physical attributes of the flow. In this thesis, a data-driven approach for analyzing water quality is introduced. Improvements in data collection and information systems allow large volumes of data to be gathered. Although such systems give researchers ample information about various systems, the data must be used in conjunction with data-mining algorithms to build models and recognize patterns in large data sets. Since the mid-1990s, data mining has been used successfully for model extraction and for describing various phenomena of interest.
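As a rough illustration of the kind of data-driven WWTP model described above, the following Python sketch fits a small neural-network regressor to daily influent measurements; the file name, column names, and target variable are hypothetical placeholders, not values from the thesis.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical daily plant records: influent measurements plus an effluent target.
df = pd.read_csv("wwtp_daily.csv")
X = df[["influent_flow", "influent_bod", "influent_tss", "temperature"]]
y = df["effluent_bod"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scale the inputs, then fit a small feed-forward network as the data-driven model.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

print("MAE on held-out days:", mean_absolute_error(y_test, model.predict(X_test)))

Any other regressor could be swapped in; the point is the workflow of learning plant behavior directly from historical data rather than from a mechanistic model.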
APA, Harvard, Vancouver, ISO, and other styles
46

Ramadoss, Balaji. "Ontology Driven Model for an Engineered Agile Healthcare System." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5110.

Full text
Abstract:
Healthcare is in urgent need of an effective way to manage the complexity of its systems and to prepare quickly for immense changes in the economics of healthcare delivery and reimbursement. The Centers for Medicare & Medicaid Services (CMS) releases policies for inpatient and long-term care hospitals that directly affect reimbursement and payment rates. One of these policy changes, a quality-reporting program called Hospital Inpatient Quality Reporting (IQR), will affect approximately 3,400 acute-care and 440 long-term care hospitals. IQR sets guidelines and measures that carry financial incentives and penalties based on the quality of care provided. CMS, the largest healthcare payer, is aggressively promoting high-quality care by linking payment incentives to outcomes. With CMS assessing each hospital's performance by comparing its Quality Achievement and Quality Improvement scores, there is a growing need to understand these quality measures in the context of patient care, data management, and system integration. This focus on patient-centered quality care is difficult for healthcare systems because they lack a systemic view of the patient and of patient care. This research addresses that need by presenting a healthcare-specific framework and methodology for translating data on quality metrics into actionable processes and feedback that produce the desired quality outcomes. The solution is based on a patient-care-level process ontology, rather than on the technology itself, and creates a bridge that applies systems engineering principles to permit observation and control of the system. This is a transformative framework conceived to meet the needs of the rapidly changing healthcare landscape. Without it, healthcare works with outcome data that are six to seven months old, meaning patients may not have been cared for effectively. In this research, a framework and methodology called the Healthcare Ontology Based Systems Engineering Model (HOB-SEM) is developed to allow observability and controllability of compartmental healthcare systems. HOB-SEM applies systems and controls engineering principles to healthcare, using ontology as the method and the data lifecycle as the framework. The ontology view of patient-level system interaction, together with the framework for delivering data management and quality lifecycles, enables the development of an agile, systemic view of healthcare for observability and controllability.
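To make the ontology idea more tangible, the sketch below uses the rdflib Python library to link a quality measure to the care process it observes; every class, property, and instance name here is invented for illustration and is not the actual HOB-SEM vocabulary.

from rdflib import Graph, Namespace, Literal, RDF, RDFS, XSD

EX = Namespace("http://example.org/hob-sem-sketch#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Declare illustrative classes for care processes, quality measures, and outcomes.
for cls in (EX.CareProcess, EX.QualityMeasure, EX.Outcome):
    g.add((cls, RDF.type, RDFS.Class))

# Link an IQR-style measure to the process it observes and the target it scores against.
g.add((EX.SepsisBundle, RDF.type, EX.CareProcess))
g.add((EX.TimelyAntibiotics, RDF.type, EX.QualityMeasure))
g.add((EX.TimelyAntibiotics, EX.observesProcess, EX.SepsisBundle))
g.add((EX.TimelyAntibiotics, EX.targetValue, Literal(1.0, datatype=XSD.double)))

# Querying such a graph is what would let a hospital ask which processes are observable.
for measure, _, process in g.triples((None, EX.observesProcess, None)):
    print(measure, "observes", process)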
APA, Harvard, Vancouver, ISO, and other styles
47

Andersson, Johan, and Amirhossein Gharaie. "It is Time to Become Data-driven, but How : Depicting a Development Process Model." Thesis, Högskolan i Halmstad, Akademin för företagande, innovation och hållbarhet, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-45353.

Full text
Abstract:
Background: The business model (BM) is an essential part of a firm, and it needs to be innovated continuously for the firm to stay or become competitive. The process of business model innovation (BMI) unfolds incrementally by redesigning activities or developing new ones in order to provide value propositions (VP). With the increasing availability of data, there is growing pressure on BMI to orchestrate activities around data as a key resource and to develop data-driven business models (DDBM). Problematization: The DDBM offers valuable possibilities by utilizing data to optimize current businesses and to create new VPs. However, the development process of DDBMs is described as challenging and has been scarcely studied. Purpose: This study aims to explore what a data-driven business model development process looks like. More specifically, we adopted this research question: What are the phases and activities of a DDBM development process, and what characterizes this process? Method: This is a qualitative study in which the empirical data were collected through 9 semi-structured interviews, with respondents divided among three different initiatives. Empirical Findings: This study enriches the existing literature on BMI in general and data-driven business model innovation in particular. Concretely, it contributes to the process perspective of DDBM development. It helps to unpack the complexity of data engagement in business model development and provides a visual process model as an artefact that shows the anatomy of the process. Additionally, the study shows how value logics are manifested through the states of artefacts, activities, and cognitions. Conclusions: This study concludes that the DDBM development process is structured in two phases, comprising low data-related and high data-related activities, with seven sub-phases consisting of different activities. The study also identified four underlying characteristics of the DDBM development process: value co-creation, iterative experimentation, ethical and regulatory risk, and adaptable strategy. Future research: Further work is needed to explain the anatomy and structure of the DDBM development process in different contexts, to uncover whether it captures the various complexities of data and to increase its generalizability. More research is also required to differentiate between business model types and, consequently, to customize the development process for each type. Future research could further explore value co-creation in developing DDBMs; in this direction, it would be interesting to connect the field of open innovation to the field of DDBM and, specifically, to its role in the DDBM development process. Another promising avenue would be to go beyond merely improving the VP to maximize data monetization, and instead to focus on the interplay and role that data have in sustainability.
APA, Harvard, Vancouver, ISO, and other styles
48

Li, Zhongliang. "Data-driven fault diagnosis for PEMFC systems." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4335/document.

Full text
Abstract:
This thesis is devoted to the study of fault diagnosis for PEMFC fuel cell systems. The goal is to improve the reliability and durability of the polymer electrolyte membrane in order to promote the commercialization of fuel cell technology. The approaches explored in this thesis are those of data-driven diagnosis, with pattern recognition techniques being the most widely used; the variables considered are the individual cell voltages. The results established in the thesis can be grouped into three main contributions. The first contribution is a comparative study: several methods are explored and then compared in order to determine a strategy that is accurate and offers an optimal computational cost. The second contribution concerns online diagnosis without complete prior knowledge of the faults. It is an adaptive technique that makes it possible to handle the appearance of new fault types; the technique is based on the SSM-SVM methodology, and the detection and isolation rules have been improved to address the problem of real-time diagnosis. The third contribution is obtained from a methodology based on the partial use of dynamic models. The principle of fault detection and isolation relies on identification techniques and on residual generation directly from operating data. All the strategies proposed in the thesis were tested on experimental data and validated on an embedded system.
Aiming at improving the reliability and durability of Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems and promoting the commercialization of fuel cell technologies, this thesis is dedicated to the study of fault diagnosis for PEMFC systems. Data-driven fault diagnosis is its main focus. As a main branch of data-driven fault diagnosis, methods based on pattern classification techniques are studied first. Taking individual fuel cell voltages as the original diagnosis variables, several representative methodologies are investigated and compared from the perspective of online implementation. To address the shortcomings of conventional classification-based diagnosis methods, a novel diagnosis strategy is proposed: a new classifier named Sphere-Shaped Multi-class Support Vector Machine (SSM-SVM) and modified diagnostic rules are used to realize fault recognition, while an incremental learning method is extended to achieve online adaptation. Apart from the classification-based diagnosis approach, a so-called partial model-based data-driven approach is introduced to handle PEMFC diagnosis in dynamic processes. With the aid of a subspace identification method (SIM), model-based residual generation is designed directly from normal and dynamic operating data; fault detection and isolation are then realized by evaluating the generated residuals. The proposed diagnosis strategies have been verified using experimental data that cover a set of representative faults and different PEMFC stacks. Preliminary online implementation results with an embedded system are also provided.
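As a hedged sketch of the classification-based branch of this work, the Python snippet below trains a standard multi-class SVM on individual cell voltages; scikit-learn's SVC is only a stand-in for the SSM-SVM classifier proposed in the thesis, and the data files and fault labels are hypothetical.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# X: one row per operating sample, one column per cell voltage in the stack.
# y: fault class per sample (e.g., 0 = nominal, 1 = flooding, 2 = membrane drying).
X = np.load("cell_voltages.npy")
y = np.load("fault_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Standardize the voltages, then fit a kernel SVM as the fault classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))

An online version, as in the thesis, would additionally need an incremental update step and a rule for flagging samples that fall outside all known fault classes.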
APA, Harvard, Vancouver, ISO, and other styles
49

Schumm, Phillip Raymond Brooke. "Characterizing epidemics in metapopulation cattle systems through analytic models and estimation methods for data-driven model inputs." Diss., Kansas State University, 2013. http://hdl.handle.net/2097/16897.

Full text
Abstract:
Doctor of Philosophy
Department of Electrical and Computer Engineering
Caterina Maria Scoglio
We have analytically established the existence of two global epidemic invasion thresholds in a directed metapopulation network model of the United States cattle industry. The first threshold describes the outbreak of disease within the core of the livestock system, while the second describes the invasion of the epidemic into a second class of locations where the disease would pose a risk of contaminating meat production. Both thresholds have been verified through extensive numerical simulations. We have further derived the relationship between the pair of thresholds and discovered a distinctive dependence on network topology through the fractional compositions and the in-degree distributions of the transit and sink nodes. We then addressed a major challenge for epidemiologists modeling disease outbreaks in cattle: the critical shortfall in large-scale livestock movement data for the United States. We meet this challenge by developing a method to estimate cattle movement parameters from publicly available data. Across 10 Central States of the US, we formulated a large convex optimization problem to predict the cattle movement parameters that, under minimal assumptions, best fit the US Department of Agriculture's Census database while satisfying constraints defined by scientists and cattle experts. The estimated parameters produce distributions of cattle shipments by head that compare well with shipment distributions also provided by the US Department of Agriculture. The dissertation concludes by briefly combining the analytic models with the parameter estimates: we approximate the critical movement rates defined by the global invasion thresholds and compare them with the average estimated cattle movement rates, finding a significant opportunity for epidemics to spread through US cattle populations.
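The toy Python sketch below conveys the flavor of the convex estimation step: unknown shipment rates are fit to aggregate observations by constrained least squares with cvxpy; the matrices, problem size, and constraint are purely illustrative and far simpler than the 10-state formulation in the dissertation.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_routes = 12                        # unknown shipment rates between node pairs
A = rng.random((8, n_routes))        # maps route rates to observed aggregate counts
b = A @ rng.random(n_routes)         # synthetic stand-in for census observations

x = cp.Variable(n_routes, nonneg=True)        # shipment rates cannot be negative
fit = cp.sum_squares(A @ x - b)               # fidelity to the aggregate data
constraints = [cp.sum(x) <= 1.5 * np.sum(b)]  # stand-in for expert-defined bounds

prob = cp.Problem(cp.Minimize(fit), constraints)
prob.solve()
print("estimated shipment rates:", np.round(x.value, 3))

Because the objective is a convex quadratic and the constraints are linear, this formulation scales to very large instances while still guaranteeing a global optimum.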
APA, Harvard, Vancouver, ISO, and other styles
50

Najar, Christine Ruth. "A model-driven approach to management of integrated metadata-spatial data in the context of spatial data infrastructures /." Zürich : ETH, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
