Dissertations / Theses on the topic 'Data-Driven Processes'

Consult the top 50 dissertations / theses for your research on the topic 'Data-Driven Processes.'


1

Sham, Gregory C. (Gregory Chi-Keung). "Developing a data-driven approach for improving operating room scheduling processes." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/73397.

Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Global Operations Program at MIT, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 52).
In the current healthcare environment, the cost of delivering patient care is an important concern for hospitals. As a result, healthcare organizations are being driven to maximize their existing resources, both in terms of infrastructure and human capital. Using a data-driven approach with analytical techniques from operations management can contribute towards this goal. More specifically, this thesis shows, drawing from a recent project at Beth Israel Deaconess Medical Center (BIDMC), that predictive modeling can be applied to operating room (OR) scheduling to effectively increase capacity. By examining the current usage of the existing block schedule system at BIDMC and developing a linear regression model, OR time that is expected to go unused can instead be identified in advance and freed for use. Sample model results suggest the approach is operationally effective: it captures a large enough portion of OR time across a pooled set of blocks to be useful for advance scheduling. This analytically determined free time represents an improvement in how the current block system is employed, especially relative to the nominal block release time. This thesis argues that such a model can be integrated into a scheduling system with more efficient and flexible processes, ultimately resulting in more effective usage of existing resources.
by Gregory C. Sham.
S.M.
M.B.A.
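
As a toy illustration of the approach described above, the sketch below fits a linear regression to hypothetical historical block data and flags upcoming blocks whose predicted unused time exceeds a release threshold. The features, coefficients, and the 2-hour threshold are invented for illustration, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical block features: [booked_hours, surgeon_utilisation, lead_days]
X = rng.uniform([4.0, 0.5, 1.0], [10.0, 1.0, 30.0], size=(200, 3))
true_w = np.array([-0.3, -2.0, 0.05])              # invented ground truth
y = np.clip(X @ true_w + 4.0 + rng.normal(0.0, 0.5, 200), 0.0, None)

# Ordinary least squares with an intercept column
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Score two upcoming blocks; release those predicted to leave >2 idle hours
upcoming = np.array([[5.0, 0.60, 21.0],
                     [9.5, 0.95, 3.0]])
pred = np.hstack([upcoming, np.ones((2, 1))]) @ w
for feats, hours in zip(upcoming, pred):
    action = "release into pooled OR time" if hours > 2.0 else "keep with owner"
    print(f"block {feats}: predicted {hours:.1f} unused h -> {action}")
```
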
2

Le Tallec, Yann. "Robust, risk-sensitive, and data-driven control of Markov Decision Processes." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/38598.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2007.
Includes bibliographical references (p. 201-211).
Markov Decision Processes (MDPs) model problems of sequential decision-making under uncertainty. They have been studied and applied extensively. Nonetheless, two major barriers still hinder the applicability of MDPs to many more practical decision-making problems: (1) the decision maker often lacks a reliable MDP model, and since the results obtained by dynamic programming are sensitive to the assumed MDP model, their relevance is challenged by model uncertainty; (2) the structural and computational results of dynamic programming (which deals with expected performance) have been extended with only limited success to accommodate risk-sensitive decision makers. In this thesis, we investigate two ways of dealing with uncertain MDPs and we develop a new connection between robust control of uncertain MDPs and risk-sensitive control of dynamical systems. The first approach assumes a model of model uncertainty and formulates the control of uncertain MDPs as a problem of decision-making under (model) uncertainty. We establish that most formulations are at least NP-hard and thus suffer from the "curse of uncertainty." The worst-case control of MDPs with rectangular uncertainty sets is equivalent to a zero-sum game between the controller and nature.
The structural and computational results for such games make this formulation appealing. By adding a penalty for unlikely parameters, we extend the formulation of worst-case control of uncertain MDPs and mitigate its conservativeness. We show a duality between the penalized worst-case control of uncertain MDPs with rectangular uncertainty and the minimization of a Markovian dynamically consistent convex risk measure of the sample cost. This notion of risk has desirable properties for multi-period decision making, including a new Markovian property that we introduce and motivate. This Markovian property is critical in establishing the equivalence between minimizing some risk measure of the sample cost and solving a certain zero-sum Markov game between the decision maker and nature, and to tackling infinite-horizon problems. An alternative approach to dealing with uncertain MDPs, which avoids the curse of uncertainty, is to exploit observational data directly. Specifically, we estimate the expected performance of any given policy (and its gradient with respect to certain policy parameters) from a training set comprising observed trajectories sampled under a known policy. We propose new value (and value gradient) estimators that are unbiased and have low training-set-to-training-set variance. We expect our approach to outperform competing approaches when there are few system observations compared to the underlying MDP size, as indicated by numerical experiments.
by Yann Le Tallec.
Ph.D.
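
The worst-case control described above can be pictured with a few lines of robust value iteration: for each state-action pair, "nature" picks the worst transition kernel from a rectangular uncertainty set, and the controller maximises against that choice. The sketch below is a minimal illustration with an invented two-state MDP and a finite candidate set, not the thesis's formulation.

```python
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])                     # reward R[s, a]

# Rectangular uncertainty: independently for each (s, a), nature may pick
# any transition distribution from this finite, invented candidate set.
U = {(s, a): [np.array([0.8, 0.2]), np.array([0.3, 0.7]), np.array([0.5, 0.5])]
     for s in range(n_states) for a in range(n_actions)}

V = np.zeros(n_states)
for _ in range(500):
    Q = np.empty((n_states, n_actions))
    for s in range(n_states):
        for a in range(n_actions):
            worst = min(p @ V for p in U[(s, a)])   # nature minimises
            Q[s, a] = R[s, a] + gamma * worst
    V_new = Q.max(axis=1)                           # controller maximises
    if np.abs(V_new - V).max() < 1e-10:
        break
    V = V_new

print("robust values:", V.round(3), "robust policy:", Q.argmax(axis=1))
```
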
3

Jiang, Tianyu. "Data-Driven Cyber Vulnerability Maintenance of Network Vulnerabilities with Markov Decision Processes." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494203777781845.

4

Ardakani, Mohammad Hamed. "Data driven methods for updating fault detection and diagnosis system in chemical processes." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/650845.

Abstract:
Modern industrial processes are becoming more complex, and consequently monitoring them has become a challenging task. Fault Detection and Diagnosis (FDD), as a key element of process monitoring, needs to be investigated because of its essential role in decision-making processes. Among available FDD methods, data-driven approaches are currently receiving increasing attention because of their relative simplicity of implementation. Regardless of the FDD type, one of the main traits of a reliable FDD system is its ability to be updated when conditions appear in the process that were not considered during its initial training. These new conditions may emerge either gradually or abruptly, but they are equally important, as in both cases they lead to poor FDD performance. Some methods have been proposed for such updating tasks, though mostly outside the research area of chemical engineering. They can be categorized into those dedicated to managing Concept Drift (CD), which appears gradually, and those that deal with novel classes, which appear abruptly. In addition to lacking clear updating strategies, the available methods reportedly suffer from weak performance and inefficient training times. Accordingly, this thesis is mainly dedicated to data-driven FDD updating in chemical processes. The proposed schemes for handling novel classes of faults are based on unsupervised methods, while for coping with CD both supervised and unsupervised updating frameworks have been investigated. Furthermore, to enhance the functionality of FDD systems, some major methods of data processing have been investigated, including imputation of missing values, feature selection, and feature extension. The suggested algorithms and frameworks for FDD updating have been evaluated on different benchmarks and scenarios. As part of the results, the suggested algorithms for supervised handling of CD surpass the performance of traditional incremental learning with respect to the MGM score (a dimensionless score defined from the weighted F1 score and the training time) by up to 50%. This improvement is achieved by proposed algorithms that detect and forget redundant information and properly adjust the data window for timely updating and retraining of the fault detection system. Moreover, the proposed unsupervised FDD updating framework for dealing with novel faults in static and dynamic process conditions achieves up to 90% in terms of the NPP score (a dimensionless score defined from the number of correctly predicted class labels). This result relies on an innovative framework that can assign samples either to new classes or to available classes by exploiting one-class classification techniques and clustering approaches.
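
As a rough illustration of supervised concept-drift handling of the kind described, the sketch below monitors a fault classifier's accuracy on incoming batches and retrains it on a sliding data window once accuracy degrades. The synthetic drift, the 0.9 alarm threshold, and the window size are all invented choices, not the thesis's MGM-scored algorithms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def batch(n, theta):
    """Two fault classes; class 1's mean rotates by angle theta (the drift)."""
    y = rng.integers(0, 2, n)
    mean = 2.5 * np.array([np.cos(theta), np.sin(theta)])
    X = rng.normal(0.0, 1.0, (n, 2)) + np.outer(y, mean)
    return X, y

X_win, y_win = batch(300, theta=0.0)
clf = LogisticRegression(max_iter=1000).fit(X_win, y_win)

for t in range(1, 11):
    X_t, y_t = batch(100, theta=0.3 * t)          # drift grows batch by batch
    acc = clf.score(X_t, y_t)
    # slide the window: keep only the 300 most recent samples
    X_win = np.vstack([X_win, X_t])[-300:]
    y_win = np.concatenate([y_win, y_t])[-300:]
    if acc < 0.9:                                  # drift alarm -> retrain on the window
        clf = LogisticRegression(max_iter=1000).fit(X_win, y_win)
        print(f"batch {t}: accuracy {acc:.2f} -> retrained on current window")
```
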
5

Cleve, Jochen. "Data-driven theoretical modelling of the turbulent energy cascade." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2004. http://nbn-resolving.de/urn:nbn:de:swb:14-1103125565484-63361.

Abstract:
Modelling the turbulent energy cascade gives valuable insight into the dynamics of a turbulent flow. In this work, random multiplicative cascade processes are studied and compared with dissipation time series obtained from various experiments. The emphasis of this comparison is laid on the two-point correlation function, because the unavoidable surrogacy of the dissipation field, i.e. the substitution of the multi-component expression by a single component of the velocity signal, corrupts the scaling behaviour of other observables such as integral moments. Finite-size expressions for the two-point correlation function are derived, which make it possible to fit data obtained at moderate or low Reynolds numbers and extract accurate values of the scaling exponents. A comprehensive data analysis attempts to determine the free parameters of the cascade generator. The statistics are too limited to claim more than that the cascade generator is close to having a log-normal distribution. The most basic scaling exponent of the dissipation field, the intermittency exponent, can be used to characterise the data. The investigated data fall into two groups. One set of data, obtained from measurements in air, shows an intermittency exponent that increases with the Reynolds number and saturates at a value of 0.2 for high Reynolds numbers. The other set, obtained in a helium jet, is best characterised by a constant intermittency exponent of 0.1. The differences are not fully understood. To investigate this issue further, a new construction is suggested that translates the Kramers-Moyal coefficients of the velocity field into a dissipation field in order to calculate the intermittency exponent from a different perspective. Finally, a dynamical generalisation of the cascade process, introduced recently, is tested. The dynamical model makes predictions for general n-point correlation functions. The analytical expressions for three-point correlation functions are compared with their counterparts obtained from experimental data and show remarkable agreement.
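
A minimal numerical counterpart to the ideas above: generate a toy one-dimensional log-normal multiplicative cascade and estimate the intermittency exponent mu from the two-point correlation <eps(x) eps(x+r)> ~ r^(-mu). The cascade depth and generator width below are arbitrary illustration values, and no finite-size corrections are applied.

```python
import numpy as np

rng = np.random.default_rng(2)
levels, sigma = 14, 0.35                     # cascade depth and generator width

# Discrete multiplicative cascade: repeatedly split each cell in two and
# multiply by i.i.d. log-normal weights with unit mean (log-mean -sigma^2/2).
eps = np.ones(1)
for _ in range(levels):
    w = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=2 * len(eps))
    eps = np.repeat(eps, 2) * w

# Two-point correlation <eps(x) eps(x+r)> and a log-log slope over the
# intermediate scales
rs = 2 ** np.arange(2, 10)
corr = np.array([np.mean(eps[:-r] * eps[r:]) for r in rs])
slope, _ = np.polyfit(np.log(rs), np.log(corr), 1)
print(f"estimated intermittency exponent mu ~ {-slope:.2f}")
```
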
6

Tu, Zhuowen. "Image Parsing by Data-Driven Markov Chain Monte Carlo." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1038347031.

7

Stubbs, Shallon Monique. "Data-driven, mechanistic and hybrid modelling for statistical fault detection and diagnosis in chemical processes." Thesis, University of Newcastle Upon Tyne, 2012. http://hdl.handle.net/10443/1519.

Abstract:
Research into, and applications of, multivariate statistical process monitoring and fault diagnostic techniques for performance monitoring of continuous and batch processes continue to be a very active area. Investigations into new statistical and mathematical methods and their applicability to chemical process modelling and performance monitoring are ongoing. Successive researchers have proposed new techniques and models to address the identified limitations and shortcomings of previously applied linear statistical methods such as principal component analysis and partial least squares. This thesis contributes to this body of research into alternative approaches and their suitability for continuous and batch process applications. In particular, the thesis proposes a monitoring scheme based on a modified canonical variate analysis state-space model and compares the proposed scheme with several existing statistical process monitoring approaches on a common benchmark, the Tennessee Eastman process simulator. A hybrid process monitoring approach combining data-driven and mechanistic models is also investigated. The proposed hybrid scheme gives specific consideration to the implementation and application of the technique for dynamic systems with existing control structures. A non-mechanistic hybrid approach, combining nonlinear and linear data-based statistical models to create a pseudo time-variant model for monitoring large complex plants, is also proposed. The hybrid schemes are shown to provide distinct advantages in terms of improved fault detection and reliability. The hybrid schemes were demonstrated on two separate simulated processes: a CSTR with recycle through a heat exchanger and a CHEMCAD-simulated distillation column. Finally, a batch process monitoring scheme based on a proposed implementation of the interval partial least squares (IPLS) technique is demonstrated on a benchmark simulated fed-batch penicillin production process. The IPLS strategy employs data unfolding methods and a proposed algorithm for segmenting the batch duration into optimal intervals, giving a unique implementation of a Multiway-IPLS model. Application results show that the proposed method gives better model prediction and monitoring performance than the conventional IPLS approach.
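
The thesis's monitoring scheme is built on canonical variate analysis; as a simpler stand-in that shows the same monitoring pattern, the sketch below fits a PCA model on normal operating data and raises alarms when Hotelling's T^2 exceeds an approximate chi-square control limit. The synthetic data, the injected fault, and the 99% limit are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
mix = rng.normal(size=(5, 5))                     # induces correlated variables
normal = rng.normal(size=(500, 5)) @ mix          # normal operating data

pca = PCA(n_components=2).fit(normal)

def t2(X):
    """Hotelling's T^2 in the retained principal-component subspace."""
    scores = pca.transform(X)
    return np.sum(scores**2 / pca.explained_variance_, axis=1)

limit = chi2.ppf(0.99, df=2)                      # approximate 99% control limit

test = rng.normal(size=(50, 5)) @ mix
test[25:] += 3.0                                  # inject a mean-shift fault
alarms = np.where(t2(test) > limit)[0]
print("alarmed samples:", alarms)
```
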
8

Höcker, Filip, and Finn Brand. "‘Data over intuition’ – How big data analytics revolutionises the strategic decision-making processes in enterprises." Thesis, Internationella Handelshögskolan, Jönköping University, IHH, Företagsekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-48560.

Abstract:
Background: Digital technologies are increasingly transforming traditional businesses, and their pervasive impact is leading to a radical restructuring of entire industries. While the significance of big data analytics for generating competitive advantage is recognized, there is still a lack of consensus on how big data analytics influences strategic decision-making in organisations. As big data and big data analytics become increasingly common, understanding the factors influencing decision-making quality becomes of paramount importance for businesses. Purpose: This thesis investigates how big data and big data analytics affect operational strategic decision-making processes in enterprises through the theoretical lens of the strategy-as-practice framework. Method: The study follows an abductive research approach by testing a theory (i.e., strategy-as-practice) through a qualitative research design. A single case study of IKEA was conducted to generate the primary data for this thesis. Sampling was carried out internally at IKEA by first identifying the heads of the different departments within data analytics and from there applying the snowball sampling technique to increase the number of interviewees and ensure the collection of enough data for coding. Findings: The findings show that big data analytics has a decisive influence on practitioners. At IKEA, data analysts have become an integral part of the operational strategic decision-making processes, and discussions are driven by data and rigor rather than by gut feeling and intuition. In terms of practices, it became apparent that big data analytics has led to a more performance-oriented use of strategic tools and enabled IKEA to make strategic decisions in real time, which not only increases agility but also mitigates the risk of wrong decisions.
9

Lee, Michael D. "Incidental text priming without reinstatement of context: the role of data-driven processes in implicit memory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0006/MQ45081.pdf.

10

Jung, Christian [Verfasser], Günter [Akademischer Betreuer] Rudolph, and Thomas [Gutachter] Bartz-Beielstein. "Data-driven optimization of hot rolling processes / Christian Jung ; Gutachter: Thomas Bartz-Beielstein ; Betreuer: Günter Rudolph." Dortmund : Universitätsbibliothek Dortmund, 2019. http://d-nb.info/120260577X/34.

11

Shahzad, Muhammad Khurram. "Improving Business Processes using Process-oriented Data Warehouse." Doctoral thesis, KTH, Data- och systemvetenskap, DSV, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-107691.

Abstract:
The Business Process Management (BPM) lifecycle consists of four phases: design and analysis, configuration, enactment, and evaluation, the last also known as performance analysis and improvement. Performance analysis and improvement of business processes, one of the core phases in the BPM lifecycle, is moving to the top of the agenda for many enterprises. An emerging approach is to use business intelligence techniques that extend the analytical capabilities of business process management systems by implementing a process-oriented data warehouse and mining techniques. However, little work has been done on developing core methods and tools for performance analysis and improvement of business processes. In particular, adequate methods with clearly defined steps or instructions that can guide process managers in analyzing and improving processes using a process warehouse (PW) are not available. In the absence of such methods, guidelines, or clearly defined steps, important steps may be ignored and credible improvement steps cannot be taken. This research addresses these limitations by developing a method for performance analysis and improvement of business processes. The key feature of the developed method is that it employs business orientation in the design and utilization of a PW. The method is composed of three steps: building a goal-structure, integrating the goal-structure with the PW, and analyzing and improving business processes. During the first step, a set of top-level performance goals is identified for the process of interest. Subsequently, the identified goals are decomposed to generate a goal-structure that is aligned with the functional decomposition of the process of interest. The second step describes a technique for integrating the generated goal-structure with the PW. The third step describes a performance estimation model, a decision model, and a step-by-step approach for utilizing the PW to analyze and improve business processes. To facilitate the use of the proposed method, a prototype was developed that offers a graphical user interface for defining the goal-structure, integrating goals with the PW, and goal-based navigation of the PW. To evaluate the proposed method, we first developed an evaluation framework and subsequently used it for the evaluation. The framework consists of three components, each representing a type of evaluation: methodological-structure evaluation, performance-based evaluation, and perception-based evaluation. The results show partial support for the methodological structure, while the performance and perception evaluations show promising results for the proposed method.

12

Ritter, Tobias [Verfasser], Oskar von [Akademischer Betreuer] Stryk, and Stefan [Akademischer Betreuer] Ulbrich. "PDE-Based Dynamic Data-Driven Monitoring of Atmospheric Dispersion Processes / Tobias Ritter ; Oskar von Stryk, Stefan Ulbrich." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2017. http://d-nb.info/1142377393/34.

13

Rall, Deniz Patrick [Verfasser], Matthias [Akademischer Betreuer] Wessling, and Anthony [Akademischer Betreuer] Szymczyk. "Data-Driven development of layer-by-layer nanofiltration membranes and processes / Deniz Patrick Rall ; Matthias Wessling, Anthony Szymczyk." Aachen : Universitätsbibliothek der RWTH Aachen, 2020. http://d-nb.info/1227054610/34.

14

Duran Villalobos, Carlos Alberto. "Run-to-run modelling and control of batch processes." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/runtorun-modelling-and-control-of-batch-processes(1d42c508-b96d-4ee6-96ad-ec649a199913).html.

Abstract:
The University of Manchester. Carlos Alberto Duran Villalobos. Doctor of Philosophy in the Faculty of Engineering and Physical Sciences. December 2015.
This thesis presents an innovative batch-to-batch optimisation technique that was able to improve the productivity of two benchmark fed-batch fermentation simulators: Saccharomyces cerevisiae and penicillin production. In developing the proposed technique, several important challenges needed to be addressed. For example, the technique relied on a linear Multiway Partial Least Squares (MPLS) model to estimate the end-point quality of each batch accurately while adapting from one operating region to another as productivity increased. The proposed optimisation technique utilises a Quadratic Programming (QP) formulation to calculate the Manipulated Variable Trajectory (MVT) from one batch to the next. The main advantage of the proposed technique compared with other published approaches was the increase in yield and the faster convergence to an optimal MVT. Validity constraints were also included in the batch-to-batch optimisation to restrict the QP calculations to the space described by useful predictions of the MPLS model. The results from experiments on the two simulators showed that the validity constraints slowed the rate of convergence of the optimisation technique and in some cases resulted in a slight reduction in final yield; however, they did improve the consistency of the batch optimisation. Another important contribution of this thesis was a series of experiments combining a variety of smoothing techniques used in MPLS modelling with the proposed batch-to-batch optimisation technique. From the results of these experiments, it was clear that the MPLS model prediction accuracy did not significantly improve with these smoothing techniques; however, the batch-to-batch optimisation technique did show improvements when filtering was implemented.
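
To make the batch-to-batch loop concrete, the sketch below pairs a linear end-quality model with a small bound-constrained quadratic objective (a QP, solved here with scipy's bounded optimiser) that updates the MVT between batches; the move-suppression term loosely plays the role of the validity constraints. The plant model, weights, and bounds are invented, and the model "re-identification" step is a crude stand-in for refitting an MPLS model from new batch data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
T = 10                                            # length of the MVT
true_w = rng.normal(0.0, 1.0, T)                  # "true" plant sensitivities

def plant(u):
    """Invented batch plant: yield with mild curvature and measurement noise."""
    return true_w @ u - 0.05 * np.sum((u - 1.0) ** 2) + rng.normal(0.0, 0.01)

u = np.full(T, 0.5)                               # initial trajectory
w_hat = np.zeros(T)
for k in range(8):
    y = plant(u)
    print(f"batch {k}: yield {y:.2f}")
    # crude stand-in for re-identifying the (M)PLS model from new batch data
    w_hat = 0.7 * w_hat + 0.3 * true_w
    # bound-constrained quadratic objective: maximise predicted quality while
    # penalising large moves away from the last trajectory (validity-like term)
    obj = lambda v, u0=u: -(w_hat @ v) + 0.5 * np.sum((v - u0) ** 2)
    u = minimize(obj, u, bounds=[(0.0, 2.0)] * T).x
```
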
15

Kratsch, Wolfgang [Verfasser], and Maximilian [Akademischer Betreuer] Röglinger. "Data-driven Management of Interconnected Business Processes : Contributions to Predictive and Prescriptive Process Mining / Wolfgang Kratsch ; Betreuer: Maximilian Röglinger." Bayreuth : Universität Bayreuth, 2021. http://d-nb.info/122950544X/34.

16

Schliephake, Hanna Josephina, and Charlotte Laila Niemann. "Digital Institutions to Support Data-Driven Circularity Innovation : The Improvement of Textile and Apparel Recycling Processes through Blockchain Technology." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-26415.

Abstract:
Purpose - The purpose of this master's thesis is to explore whether and how blockchain technology can improve textile and apparel recycling processes. It further aims to investigate which institutional and infrastructural preconditions have to be met for it to do so. This research seeks to extend the understanding of the technology's potential and to derive theoretical and managerial implications. Design/Methodology/Approach - The study applies a qualitative, explorative research approach, following a deductive research strategy. A theoretical framework was derived from the results of a literature review. Primary data was collected through semi-structured expert interviews and analysed using thematic analysis. The sample contained experts from different entities of the textile and apparel recycling industry, namely textile waste collectors, textile waste sorters, textile-to-textile recyclers, manufacturers, recycling experts, and digital service providers. Findings - The results show that blockchain technology does hold the potential to improve industry processes through its ability to verify data and assign value. However, the findings suggest that the main challenges of the textile and apparel recycling industry are grounded in its institutional complexity. The lack of sufficient infrastructure, information exchange, and value creation therefore inhibits the industry from using blockchain technology to its full potential. Implications - To overcome this, the individual industry players are advised to collaborate to fulfil the essential institutional and infrastructural requirements. This means creating an inter-organisational network that relies on the exchange of recycling-relevant information, uniform data structures, and unified norms and practices. Originality/Value - Scientific research lacks a coherent understanding of the relation between blockchain technology and textile and apparel recycling. This research bridges that gap by illustrating the industry's challenges and exploring blockchain's potential to address them, while laying out the institutional and infrastructural preconditions for blockchain to contribute to improved textile and apparel recycling.
17

Agamah, Francis Edem. "Large-scale data-driven network analysis of human-plasmodium falciparum interactome: extracting essential targets and processes for malaria drug discovery." Master's thesis, Faculty of Health Sciences, 2020. http://hdl.handle.net/11427/32185.

Abstract:
Background: Plasmodium falciparum malaria is an infectious disease with great impact on public health due to its associated high mortality rates, especially in sub-Saharan Africa. Falciparum drug-resistant strains, notably to chloroquine and sulfadoxine-pyrimethamine in Africa, are traced mainly to Southeast Asia, where the artemisinin resistance rate is increasing. Although careful surveillance to monitor the emergence and spread of artemisinin-resistant parasite strains in Africa is ongoing, research into new drugs, particularly for African populations, is critical, since there is as yet no replacement for artemisinin combination therapies (ACTs). Objective: The overall objective of this study is to identify potential protein targets through host-pathogen protein-protein functional interaction network analysis, to understand the underlying mechanisms of drug failure, and to identify essential targets that can help predict potential drug candidates specific to African populations through a protein-based analysis of both host and Plasmodium falciparum genomic data. Methods: We leveraged malaria-specific genome-wide association study summary statistics from Gambian, Kenyan and Malawian populations, Plasmodium falciparum selective-pressure variants, and functional datasets (protein sequences, interologs, host-pathogen intra-organism and inter-organism protein-protein interactions (PPIs)) from various sources (STRING, Reactome, HPID, Uniprot, IntAct and the literature) to construct overlapping functional networks for both host and pathogen. Developed algorithms and a large-scale data-driven computational framework were used to analyze the datasets and the constructed networks to identify densely connected subnetworks or hubs essential for network stability and integrity. The host-pathogen network was analyzed to elucidate the influence of parasite candidate key proteins within the network and to predict possible resistance pathways arising from host-pathogen candidate key protein interactions. We performed biological and pathway enrichment analysis on the identified critical proteins to elucidate their functions. To leverage disease-target-drug relationships and identify potential repurposable, already approved drug candidates for malaria treatment, pharmaceutical datasets from DrugBank were explored using a semantic similarity approach based on target-associated biological processes. Results: About 600,000 significant SNPs (p-value < 0.05) from the summary statistics were mapped to their associated genes, and we identified 79 human malaria-associated genes. The assembled parasite network comprised 8 clusters containing 799 functional interactions between 155 reviewed proteins, of which 5 clusters contained 43 key proteins (selective variants) and 2 clusters contained 2 candidate key proteins (key proteins characterized by high centrality measures), C6KTB7 and C6KTD2. The human network comprised 32 clusters containing 4,133,136 interactions between 20,329 unique reviewed proteins, of which 7 clusters contained 760 key proteins and 2 clusters contained 6 significant human malaria-associated candidate key proteins or genes: P22301 (IL10), P05362 (ICAM1), P01375 (TNF), P30480 (HLA-B), P16284 (PECAM1), and O00206 (TLR4). The generated host-pathogen network comprised 31,512 functional interactions between 8,023 host and pathogen proteins. We also explored the association of the pfk13 gene within the host-pathogen network. We observed that pfk13 clusters with host kelch-like proteins and other regulatory genes but has no direct association with our identified host candidate key malaria targets. We implemented a semantic-similarity-based approach, complemented by Kappa and Jaccard statistical measures, to identify 115 malaria-similar diseases and 26 potential repurposable drug hits that can be appropriated experimentally for malaria treatment. Conclusion: In this study, we reviewed existing antimalarial drugs and resistance-associated variants contributing to the diminished sensitivity of antimalarials, especially chloroquine, sulfadoxine-pyrimethamine and artemisinin combination therapy, within the African population. We also described various computational techniques implemented in predicting drug targets and leads in drug research. In our data analysis, we showed that possible mechanisms of resistance to artemisinin in Africa may arise from the combinatorial effects of many genes resistant to chloroquine and sulfadoxine-pyrimethamine. We investigated the role of pfk13 within the host-pathogen network. We predicted key targets that have been proposed as essential for malaria drug and vaccine development through structural and functional analysis of host and pathogen functional networks. Based on our analysis, we propose these targets as essential co-targets for combinatorial malaria drug discovery.
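
The hub-extraction step described in the methods can be sketched with standard centrality measures. The toy graph below reuses the human gene symbols named in the abstract, but its edges are fabricated for illustration; only the ranking pattern (degree plus betweenness) mirrors the kind of candidate-key-protein nomination described.

```python
import networkx as nx

# Toy PPI graph over gene symbols named in the abstract; edges are fabricated.
edges = [("IL10", "TNF"), ("TNF", "ICAM1"), ("ICAM1", "PECAM1"),
         ("TNF", "TLR4"), ("TLR4", "HLA-B"), ("IL10", "TLR4"),
         ("PECAM1", "HLA-B"), ("TNF", "PECAM1")]
G = nx.Graph(edges)

deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
# Rank nodes by a combined centrality score, highest first
for n in sorted(G, key=lambda n: deg[n] + btw[n], reverse=True):
    print(f"{n}: degree {deg[n]:.2f}, betweenness {btw[n]:.2f}")
```
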
18

El, Akkaoui Zineb. "A BPMN-based conceptual language for designing ETL processes." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209198.

Abstract:
Business Intelligence (BI) is the set of techniques and technologies that support the decision-making process by providing an aggregated insight on data in the organization. Due to the large amount of potentially useful data held by the events and applications running in the organization, the BI market calls for new technologies able to suitably exploit it for analysis wherever it is available. In particular, the Extract, Transform, and Load (ETL) processes, the fundamental BI technology responsible for integrating and cleansing organization data, must respond to these requirements.

However, the development of ETL processes is still considered to be very complex and time-consuming, to such a point that roughly 80% of the BI project effort is dedicated to ETL development. Among the phases of the ETL development life cycle, ETL modeling is a critical and laborious task: this phase produces the first effective formal representation of the ETL process, i.e. the ETL model, which is completely reused and refined in the subsequent phases of the development.

Typically, the ETL processes are modeled using vendor-specific ETL tools from the very beginning of development. However, these tools are unsuitable for business users since they induce overwhelming fine-grained models.

As an attempt to provide more appropriate tools to business users, vendor-independent ETL modeling languages have been proposed in the literature. Nevertheless, they still remain immature. In order to get a precise view on these languages, we conduct a survey which: i) defines a set of criteria associated with major ETL requirements identified in the literature; ii) compares the surveyed conceptual languages, issued from research work, to the physical languages, issued from prominent ETL tools; and iii) studies the whole methodologies of ETL development associated with these modeling languages.

The analysis of our survey reveals several drawbacks in responding to the ETL requirements. In particular, the conceptual languages have incomplete elements for ETL modeling, with little or no formalization. Several languages are only descriptive, with no ability to be automatically implemented into executable code, nor can they be automatically maintained in response to changes over time.

To address these shortcomings, we present, in this thesis, a novel approach that tackles the whole development life cycle of ETL processes.

First, we propose a new vendor-independent language aiming at modeling ETL processes similarly to typical business processes, the processes responsible for managing the operations in an organization. The rationale behind this proposal is to provide ETL processes with better access to data in the events and applications of the organization, including fresh data, and better design capabilities, such as analysis available to any user. By using the standard representation mechanism denoted BPMN (Business Process Model and Notation) and a classification of ETL elements resulting from a study of the most used commercial and open-source ETL tools, the language enables building agile and full-fledged ETL processes. We name our language BPMN4ETL, to refer to BPMN for ETL processes.

Second, we build a model-driven framework that provides automatic code generation capability and improves maintenance support for our ETL language. We use Model-Driven Development (MDD) technology as it helps in developing software, particularly in automating the transformation from one phase of the software development to another. We present a set of model-to-text transformations able to produce code for different business process engines and ETL engines. Also, we describe the model-to-model transformations that automatically update the ETL models with the aim of supporting the maintenance of the generated code according to data source evolution. A demonstration using a case study is conducted as an initial validation to show that the framework covering modeling, implementation and maintenance could be used in practice.

To illustrate the new concepts introduced in the thesis, mainly the BPMN4ETL language and the implementation and maintenance framework, we use a case study from the fictitious Northwind Traders company, a retail company that imports and exports foods from around the world.
Doctorat en Sciences de l'ingénieur
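To illustrate the model-to-text idea behind the framework (not the actual BPMN4ETL implementation), the sketch below serialises a toy ETL model as plain dicts and generates SQL for one hypothetical target engine; the element types and dialect are simplified stand-ins for BPMN4ETL constructs and its transformation rules.

```python
# A toy "ETL model" as plain data, plus one model-to-text generator.
etl_model = [
    {"op": "extract", "source": "orders_raw"},
    {"op": "filter",  "predicate": "amount > 0"},
    {"op": "load",    "target": "orders_clean"},
]

def to_sql(model):
    """Emit SQL for one hypothetical target engine from the model."""
    src = next(e["source"] for e in model if e["op"] == "extract")
    tgt = next(e["target"] for e in model if e["op"] == "load")
    preds = [e["predicate"] for e in model if e["op"] == "filter"]
    where = f" WHERE {' AND '.join(preds)}" if preds else ""
    return f"INSERT INTO {tgt} SELECT * FROM {src}{where};"

print(to_sql(etl_model))
```

A real generator would target several engines from the same model; the point here is only that the model, not the code, is the maintained artifact.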

19

Alghwiri, Alaa Ali. "Intelligent Public Transportation System Platform in a University Setting." University of Akron / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=akron1543919012077744.

20

Hannila, H. (Hannu). "Towards data-driven decision-making in product portfolio management:from company-level to product-level analysis." Doctoral thesis, Oulun yliopisto, 2019. http://urn.fi/urn:isbn:9789526224428.

Abstract:
Products and services are critical for companies, as they create the foundation for a company's financial success. Twenty per cent of a company's products typically account for some eighty per cent of sales volume. Nevertheless, product portfolio decisions — how to strategically renew the company's product offering — tend to involve emotions, pet products and a who-shouts-the-loudest mentality, while facts, numbers, and quantitative analyses are missing. Profitability is currently measured and reported at the company level, and firms seem unable to measure product-level profitability in a consistent way. Consequently, companies are unable to maintain and renew their product portfolio in a strategically or commercially balanced way. The main objective of this study is to provide a data-driven product portfolio management (PPM) concept, which recognises and visualises, in real time and based on facts, which company products are simultaneously strategic and profitable, and what their share of the product portfolio is. This dissertation is a qualitative study that combines a literature review, company interviews, observations, and company-internal material to take steps towards data-driven decision-making in PPM. This study indicates that company data assets need to be combined and governed company-wide to realise the full potential of the company's strategic asset — the DATA. Data must be governed separately from, and beyond, business IT technology, and a data-driven company culture must be adopted before data and technology. The data-driven PPM concept connects key business processes, business IT systems and several concepts, such as productization, product lifecycle management and PPM. The managerial implications include that a shared understanding of the company's products is needed, and that the commercial and technical product structures must be created accordingly, since they form the backbone of the company's business, gathering all product-related business-critical information for product-level profitability analysis. Also, a classification of products as strategic, supportive or non-strategic is needed, since the strategic nature of a product can change during its lifecycle, e.g. due to technology obsolescence, disruptive innovations by competitors, or any other reason.
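
The core of the concept, recognising which products are simultaneously strategic and profitable and what their share of the portfolio is, reduces to a simple cross-classification once product-level profit is available; the sketch below shows this with invented figures.

```python
# Invented product records: revenue, cost, and a strategic flag.
products = [
    {"name": "A", "revenue": 120.0, "cost": 90.0,  "strategic": True},
    {"name": "B", "revenue": 40.0,  "cost": 55.0,  "strategic": True},
    {"name": "C", "revenue": 300.0, "cost": 210.0, "strategic": False},
    {"name": "D", "revenue": 15.0,  "cost": 12.0,  "strategic": False},
]

for p in products:
    p["profit"] = p["revenue"] - p["cost"]         # product-level profitability

core = [p for p in products if p["strategic"] and p["profit"] > 0]
print("strategic and profitable:", [p["name"] for p in core])
print(f"share of portfolio: {len(core) / len(products):.0%}")
```

The hard part the thesis addresses is upstream of this snippet: building the product structures and data governance that make consistent product-level profit figures available at all.
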
21

Loos, Carolin [Verfasser], Jan [Akademischer Betreuer] Hasenauer, Jan [Gutachter] Hasenauer, Oliver [Gutachter] Junge, and Ruth [Gutachter] Baker. "Data-driven robust and efficient mathematical modeling of biochemical processes / Carolin Loos ; Gutachter: Jan Hasenauer, Oliver Junge, Ruth Baker ; Betreuer: Jan Hasenauer." München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/1188815962/34.

22

Berg, Martin, and Albin Eriksson. "Toward predictive maintenance in surface treatment processes : A DMAIC case study at Seco Tools." Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik, konst och samhälle, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-84923.

Abstract:
Surface treatments are often used in the manufacturing industry to change the surface of a product, including its related properties and functions. The degradation and corrosion occurring in surface treatment processes can lead to critical breakdowns over time. Critical breakdowns may impair the properties of the products and shorten their service life, causing increased lead times or additional costs in the form of rework or scrapping. Preventing critical breakdowns due to machine component failure requires a carefully selected maintenance policy. Predictive maintenance is used to anticipate equipment failures so that maintenance can be scheduled before a component fails. Developing predictive maintenance policies for surface treatment processes is problematic due to the vast number of attributes to consider in modern surface treatment processes. The emergence of smart sensors and big data has led companies to pursue predictive maintenance. One company that strives for predictive maintenance of its surface treatment processes is Seco Tools in Fagersta. The purpose of this master's thesis has been to investigate the occurrence of critical breakdowns and failures in the machine components of the chemical vapor deposition (CVD) and post-treatment wet blasting processes by mapping the interaction between their respective process variables and their impact on critical breakdowns. The work has been conducted as a Six Sigma project utilizing the problem-solving methodology DMAIC. Critical breakdowns were investigated by combining principal component analysis (PCA), computational fluid dynamics (CFD), and statistical process control (SPC) to create an understanding of the failures in both processes. For both processes, two predictive solutions were created: one short-term solution utilizing existing dashboards and one long-term solution utilizing a PCA model and an Orthogonal Partial Least Squares (OPLS) regression model for batch statistical process control (BSPC). The short-term solutions were verified and implemented during the master's thesis at Seco Tools, and recommendations were given for future implementation of the long-term solutions. In this thesis, insights are shared regarding the applicability of OPLS and Partial Least Squares (PLS) regression models for batch monitoring of the CVD process. We also demonstrate that a certain critical breakdown, clogging of the aluminum generator in the CVD process, can be predicted through the use of SPC. For the wet blasting process, a PCA methodology is suggested as effective for visualizing breakdowns.
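
The SPC element of the thesis, predicting the aluminum-generator clogging from a drifting signal, can be illustrated with a plain individuals control chart: estimate limits from an in-control baseline via the moving range, then alarm on exceedances. The synthetic signal, baseline length, and three-sigma limits below are illustrative assumptions, not Seco Tools data.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic sensor signal: stable baseline, then a clog-like upward drift.
signal = np.concatenate([rng.normal(10.0, 0.2, 80),
                         10.0 + 0.05 * np.arange(40)])

baseline = signal[:50]                             # in-control reference period
mr = np.abs(np.diff(baseline))                     # moving ranges
center = baseline.mean()
sd = mr.mean() / 1.128                             # d2 constant for subgroups of 2
ucl, lcl = center + 3 * sd, center - 3 * sd

alarms = np.where((signal > ucl) | (signal < lcl))[0]
first = alarms[0] if alarms.size else None
print(f"UCL={ucl:.2f}, LCL={lcl:.2f}, first alarm at sample {first}")
```
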
23

Schiller, Alexander [Verfasser], Bernd [Akademischer Betreuer] Heinrich, and Mathias [Akademischer Betreuer] Klier. "Concepts and Methods from Artificial Intelligence in Modern Information Systems – Contributions to Data-driven Decision-making and Business Processes / Alexander Schiller ; Bernd Heinrich, Mathias Klier." Regensburg : Universitätsbibliothek Regensburg, 2020. http://d-nb.info/1203875061/34.

24

Hebig, Regina. "Evolution of model-driven engineering settings in practice." PhD thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7076/.

Abstract:
Nowadays, software systems are getting more and more complex. To tackle this challenge, a wide variety of techniques, such as design patterns, service-oriented architectures (SOA), software development processes, and model-driven engineering (MDE), are used to improve productivity while time to market and product quality stay stable. Several of these techniques are used in parallel to profit from their benefits. While the use of sophisticated software development processes is standard today, MDE is only now being adopted in practice, and research has shown that its application is not always successful. It is not fully understood when the advantages of MDE can be realised and to what degree MDE can also be disadvantageous for productivity. Further, when combining different techniques that aim to affect the same factor (e.g. productivity), the question arises whether these techniques really complement each other or, in contrast, cancel out each other's effects. This raises the concrete question of how MDE and other techniques, such as software development processes, are interrelated. Both aspects (advantages and disadvantages for productivity as well as the interrelation with other techniques) need to be understood to identify risks relating to the productivity impact of MDE. Before studying MDE's impact on productivity, it is necessary to investigate the range of validity that can be reached for the results. This raises two questions: first, whether MDE's impact on productivity is similar for all approaches of adopting MDE in practice; and second, whether MDE's impact on productivity for a given approach remains stable over time. The answers to both questions are crucial for handling the risks of MDE, but also for the design of future studies on MDE success. This thesis addresses these questions with the goal of supporting the adoption of MDE in the future. To enable a differentiated discussion about MDE, the term "MDE setting" is introduced, referring to the applied technical setting, i.e. the employed manual and automated activities, artifacts, languages, and tools. An MDE setting's possible impact on productivity is studied with a focus on changeability and the interrelation with software development processes. This is done by introducing a taxonomy of changeability concerns that might be affected by an MDE setting. Further, three MDE traits are identified, and it is studied for which manifestations of these traits software development processes are impacted. To enable the assessment and evaluation of an MDE setting's impacts, the Software Manufacture Model language is introduced. This is a process modeling language that allows reasoning about how relations between (modeling) artifacts (e.g. models or code files) change during the application of manual or automated development activities. On that basis, risk analysis techniques are provided that allow identifying changeability risks and assessing the manifestations of the MDE traits (and with them an MDE setting's impact on software development processes).
To address the range of validity, MDE settings from practice and their evolution histories were captured in the context of this thesis. First, this data is used to show that MDE settings cover the whole spectrum concerning their impact on changeability and their interrelation with software development processes: it is neither seldom that MDE settings are neutral for processes nor seldom that they impact processes. Similarly, the impact on changeability differs significantly. Second, a taxonomy of the evolution of MDE settings is introduced. In that context, it is discussed to what extent different types of changes to an MDE setting can influence that setting's impact on changeability and its interrelation with processes. The category of structural evolution, which can change these characteristics of an MDE setting, is identified. The captured MDE settings from practice are used to show that structural evolution exists and is common, and some examples of structural evolution steps are collected that actually led to a change in the characteristics of the respective MDE settings. Two implications follow: first, the assessed diversity of MDE settings confirms the need for the analysis techniques presented in this thesis; second, evolution is one explanation for the diversity of MDE settings in practice. To summarize, this thesis studies the nature and evolution of MDE settings in practice. As a result, support for the adoption of MDE settings is provided in the form of techniques for the identification of risks relating to productivity impacts.
APA, Harvard, Vancouver, ISO, and other styles
28

Chang, Chia-Jung. "Statistical and engineering methods for model enhancement." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44766.

Full text
Abstract:
Models which describe the performance of a physical process are essential for quality prediction, experimental planning, process control, and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancies between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely from the data generated by the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies comprise two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring, and decision optimization. Among data-driven enhancement approaches, the Gaussian Process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named "Minimal Adjustment", which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model by a linear regression model and then applying simultaneous variable selection over the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. Different from enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications to demonstrate the proposed methodologies. In the first application, which focuses on polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission Electron Microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymers, which quantitatively represents the nanomaterial quality presented through image data. The model parameters are estimated through a Bayesian MCMC technique to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted processes. In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval. To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and applied to various applications. These research activities develop engineering-compliant models for adequate system predictions based on observational data with complex variable relationships and uncertainty, which facilitate process planning, monitoring, and real-time control.
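To make the "Minimal Adjustment" idea concrete, the following is a minimal sketch, assuming a toy physics model and simulated two-batch data: the physical model's residuals are regressed on candidate adjustment terms plus an experimental-bias indicator, and an L1 penalty stands in for the simultaneous variable selection described above. All names (physics_model, the basis terms) are illustrative and not taken from the dissertation.

```python
# Sketch of "minimal adjustment": bring a physics-based model closer to data
# by regressing its residuals on candidate adjustment terms and on
# experimental-bias indicators, letting an L1 penalty keep only what is needed.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def physics_model(x):
    # assumed deterministic engineering model
    return 2.0 * x

n = 60
x = rng.uniform(0, 1, n)
batch = rng.integers(0, 2, n)            # two experimental batches
y = physics_model(x) + 0.3 * x**2 + 0.5 * batch + rng.normal(0, 0.05, n)

residual = y - physics_model(x)

# Candidate adjustment terms (polynomial basis) plus a per-batch bias indicator.
X = np.column_stack([x, x**2, (batch == 1).astype(float)])

fit = Lasso(alpha=0.01).fit(X, residual)
print("selected coefficients:", fit.coef_)   # the x**2 term and the bias should dominate

def adjusted_model(x_new, batch_new):
    # physics model plus the sparse, data-driven correction
    terms = np.column_stack([x_new, x_new**2, (batch_new == 1).astype(float)])
    return physics_model(x_new) + fit.predict(terms)
```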
APA, Harvard, Vancouver, ISO, and other styles
29

Can, Ouyan, and Shi Chang-jie. "MULTI-STREAM DATA-DRIVEN TELEMETRY SYSTEM." International Foundation for Telemetering, 1991. http://hdl.handle.net/10150/613172.

Full text
Abstract:
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The Multi-Stream Data-Driven Telemetry System (MSDDTS) is a new-generation system in China developed by Beijing Research Institute of Telemetry (BRIT) for high bit rate, multi-stream data acquisition, processing, and display. Features of the MSDDTS include: up to 4 data streams; data-driven architecture; multi-processor parallel processing; modular, configurable, expandable, and programmable design; stand-alone capability; and external control by a host computer. This paper addresses three important aspects of the MSDDTS. First, the system architecture is discussed. Second, three basic models of the system configuration are described. Third, future development of the system is outlined.
APA, Harvard, Vancouver, ISO, and other styles
30

Mäkynen, R. (Riku). "Some data-driven methods in process analysis and control." Bachelor's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201808222647.

Full text
Abstract:
Data-driven methods such as artificial neural networks have already been used to solve many different problems, such as medical diagnosis or self-driving cars, and thus the material presented here can be of use in many different fields of science. A few studies related to data-driven methods in the field of process engineering are explored in this thesis. The most important finding related to the neural network predictive controller was its better performance in the control of a heat exchanger compared to several other controller types. The benefits of this approach were both energy savings and faster control. Another finding, related to Evolutionary Neural Networks (EvoNNs), was that they can be used to filter out the noise contained in measurement data.
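As an illustration of the kind of neural-network predictive control compared in the thesis, here is a hedged sketch: an MLP learns one-step-ahead dynamics from excitation data, and the controller picks the candidate input whose predicted next state is closest to the setpoint. The simulated plant and all names are assumptions, not the heat-exchanger study itself.

```python
# Sketch of a one-step neural-network predictive controller: learn next-state
# dynamics from data, then choose the control input whose predicted next
# state is closest to the setpoint.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def plant(temp, u):
    # assumed unknown dynamics, used only to generate training data
    return 0.9 * temp + 2.0 * u + rng.normal(0, 0.05)

# Collect input/output data from random excitation of the plant.
T, U, T_next = [], [], []
temp = 20.0
for _ in range(2000):
    u = rng.uniform(0, 1)
    nxt = plant(temp, u)
    T.append(temp); U.append(u); T_next.append(nxt)
    temp = nxt

X = np.column_stack([T, U])
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, T_next)

def control(temp, setpoint, candidates=np.linspace(0, 1, 101)):
    # evaluate each candidate input against the learned model
    preds = model.predict(np.column_stack([np.full_like(candidates, temp), candidates]))
    return candidates[np.argmin((preds - setpoint) ** 2)]

print("chosen input at T=20 toward setpoint 25:", control(20.0, 25.0))
```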
APA, Harvard, Vancouver, ISO, and other styles
31

DWIVEDI, SAURABH. "DIMENSIONALITY REDUCTION FOR DATA DRIVEN PROCESS MODELING." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1069770129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Zhou, Yifeng. "Data driven process monitoring based on neural networks and classification trees." Texas A&M University, 2004. http://hdl.handle.net/1969.1/2740.

Full text
Abstract:
Process monitoring in the chemical and other process industries has been of great practical importance. Early detection of faults is critical in avoiding product quality deterioration, equipment damage, and personal injury. The goal of this dissertation is to develop process monitoring schemes that can be applied to complex process systems. Neural networks have been a popular tool for modeling and pattern classification in the monitoring of process systems. However, due to the prohibitive computational cost caused by high dimensionality and frequently changing operating conditions in batch processes, their application has been difficult. The first part of this work tackles this problem by employing a polynomial-based data preprocessing step that greatly reduces the dimensionality of the neural network process model. The process measurements and manipulated variables go through a polynomial regression step, and the polynomial coefficients, which are usually of far lower dimensionality than the original data, are used to build a neural network model that produces residuals for fault classification. Case studies show a significant reduction in neural model construction time and sometimes better classification results as well. The second part of this research investigates classification trees as a promising approach to fault detection and classification. It is found that the underlying principles of classification trees often result in complicated trees even for rather simple problems, and construction time can be excessive for high-dimensional problems. Fisher Discriminant Analysis (FDA), which features an optimal linear discrimination between different faults and projects the original data onto perpendicular scores, is used as a dimensionality reduction tool. Classification trees use the scores to separate observations into different fault classes. A procedure identifies the order of FDA scores that results in a minimum tree cost as the optimal order. Comparisons to other popular methods based on multivariate statistical analysis indicate that the new scheme exhibits better performance on a benchmarking problem.
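The polynomial preprocessing step from the first part can be sketched as follows, assuming simulated batch trajectories: each 200-point trajectory is collapsed to three polynomial coefficients, which then feed a small neural classifier. The fault signature and all names are illustrative only, not the dissertation's case studies.

```python
# Sketch of polynomial dimensionality reduction for fault classification:
# fit a low-order polynomial to each batch trajectory and use the handful of
# coefficients, rather than the raw time series, as classifier features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)                 # 200 samples per batch trajectory

def make_batch(faulty):
    drift = 0.8 if faulty else 0.0         # assume a fault shows up as curvature drift
    return 1.0 + 0.5 * t + drift * t**2 + rng.normal(0, 0.05, t.size)

labels = rng.integers(0, 2, 300)
# Each 200-point trajectory collapses to 3 polynomial coefficients.
features = np.array([np.polyfit(t, make_batch(bool(y)), deg=2) for y in labels])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(features[:200], labels[:200])
print("holdout accuracy:", clf.score(features[200:], labels[200:]))
```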
APA, Harvard, Vancouver, ISO, and other styles
33

Bendezú, Mamani Katherine Fiorella, and Valdivia Ronal Esteven Ccanto. "Factores de adecuación de la cultura Data-driven en las organizaciones." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2021. http://hdl.handle.net/10757/657591.

Full text
Abstract:
A data-driven culture basically consists of making decisions based on data, and it is used today more than ever by organizations that seek to improve their processes in order to better serve their customers and consumers. However, many companies base their actions on unfounded assumptions and end up missing great opportunities by failing to take full advantage of the potential in their data. It is therefore important, when implementing a data-driven culture, to consider the integration of data into the business strategy. In turn, organizational culture is a key factor for the development of a data-driven culture in organizations. This work presents an exhaustive review of high-impact articles, with the general objective of contrasting the various positions and assessments of the authors regarding the success factors for the adoption of a data-driven culture in organizations.
Trabajo de Suficiencia Profesional
APA, Harvard, Vancouver, ISO, and other styles
34

Roychowdhury, Sayak. "Data-Driven Policies for Manufacturing Systems and Cyber Vulnerability Maintenance." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1493905616531091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Paulin, Carl. "Detecting anomalies in data streams driven by a jump-diffusion process." Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184230.

Full text
Abstract:
Jump-diffusion processes often model financial time series, as they can simulate the random jumps that such series frequently exhibit. These jumps can be seen as anomalies and are essential for financial analysis and model building, making them vital to detect. The realized variation, realized bipower variation, and realized semi-variation were tested to see whether they can be used to detect jumps in a jump-diffusion process and whether anomaly detection algorithms can use them as features to improve their accuracy. The algorithms tested were Isolation Forest, Robust Random Cut Forest, and the Isolation Forest Algorithm for Streaming Data, where the latter two use streaming data. This was done by generating a Merton jump-diffusion process with a varying jump rate and testing each algorithm with each of the features. The performance of each algorithm was measured using the F1-score to compare differences between features and algorithms. It was found that the algorithms were improved by using the features; Isolation Forest saw improvement from using one or more of the named features. Among the streaming algorithms, Robust Random Cut Forest performed best for every jump rate except the lowest. Using a combination of the features gave the highest F1-score for both streaming algorithms. These results show that one can use these features to extract jumps, as anomaly scores, and to improve the accuracy of the algorithms, both in a batch and in a stream setting.
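A minimal sketch of the batch setting, under assumed parameter values: simulate a Merton-style jump-diffusion, compute rolling realized variation (RV) and realized bipower variation (BV), and feed them to Isolation Forest. RV responds to jumps while BV is jump-robust, so their gap is an informative feature, in line with what the thesis reports.

```python
# Sketch: jump detection on a Merton-style process via realized (bipower)
# variation features and Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
n, dt = 5000, 1 / 5000
jumps = rng.random(n) < 0.002                       # rare Poisson-style jump times
returns = 0.2 * np.sqrt(dt) * rng.normal(size=n) + jumps * rng.normal(0, 0.05, n)

w = 20                                              # rolling window length
abs_r = np.abs(returns)
rv = np.array([np.sum(returns[i - w:i] ** 2) for i in range(w, n)])
bv = (np.pi / 2) * np.array(
    [np.sum(abs_r[i - w + 1:i] * abs_r[i - w:i - 1]) for i in range(w, n)]
)

# RV - BV isolates the jump contribution; all three serve as features.
features = np.column_stack([rv, bv, rv - bv])
scores = IsolationForest(random_state=0).fit(features).decision_function(features)
flagged = np.where(scores < np.quantile(scores, 0.01))[0] + w
print("flagged windows:", flagged[:10])
print("true jump indices:", np.where(jumps)[0][:10])
```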
APA, Harvard, Vancouver, ISO, and other styles
36

Wei, Xiupeng. "Modeling and optimization of wastewater treatment process with a data-driven approach." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/2659.

Full text
Abstract:
The primary objective of this research is to model and optimize the wastewater treatment process in a wastewater treatment plant (WWTP). As the treatment process is complex, its operation poses challenges. Traditional physics-based and mathematical models have limitations in predicting the behavior of the wastewater process and optimizing its operations. Automated control and information technology enable continuous collection of data. The collected data contains process information that makes it possible to predict and optimize the process. Although the data offered by the WWTP is plentiful, it has not been fully used to extract meaningful information to improve the performance of the plant. A data-driven approach is promising for identifying useful patterns and models using algorithms versed in statistics and computational intelligence. Successful data-mining applications have been reported in business, manufacturing, science, and engineering. The focus of this research is to model and optimize the wastewater treatment process and ultimately improve the efficiency of WWTPs. To maintain effluent quality, the influent flow rate and the influent pollutants, including the total suspended solids (TSS) and CBOD, are predicted in the short and long term to provide information for operating the treatment process efficiently. To reduce energy consumption and improve energy efficiency, the biogas production process, the activated sludge process, and the pumping station are modeled and optimized with evolutionary computation algorithms. Modeling and optimization of wastewater treatment processes face three major challenges. The first is related to the data. As wastewater treatment includes physical, chemical, and biological processes, its instruments collect large volumes of data. Many variables in the dataset are strongly coupled, and the data is noisy, uncertain, and incomplete. Therefore, several algorithms should be used to preprocess the data, reduce its dimensionality, and determine important variables. The second challenge lies in the temporal nature of the process. Different data-mining algorithms are used to obtain accurate models. The last challenge is the optimization of the process models. As the models are usually highly nonlinear and dynamic, novel evolutionary computation algorithms are used. This research addresses these three challenges. The major contribution of this research is in modeling and optimizing the wastewater treatment process with a data-driven approach. The process model built is then optimized with evolutionary computation algorithms to find optimal solutions for improving process efficiency and reducing energy consumption.
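The overall workflow (fitting a data-driven process model and then optimizing its inputs with evolutionary computation) can be roughly sketched as below. The quadratic "energy" surface and all variable names are illustrative assumptions, not the plant models from the dissertation.

```python
# Sketch: learn a process model from plant data, then optimize the
# controllable settings with a simple evolutionary loop over the surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Historical data: two controllable settings -> observed energy use (simulated).
X = rng.uniform(0, 1, (500, 2))
energy = (X[:, 0] - 0.3) ** 2 + (X[:, 1] - 0.7) ** 2 + rng.normal(0, 0.01, 500)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(X, energy)

# (mu + lambda)-style evolutionary search over the fitted surrogate model.
pop = rng.uniform(0, 1, (30, 2))
for _ in range(50):
    children = np.clip(pop + rng.normal(0, 0.1, pop.shape), 0, 1)
    both = np.vstack([pop, children])
    pop = both[np.argsort(model.predict(both))[:30]]   # keep lowest predicted energy

print("best settings:", pop[0], "predicted energy:", model.predict(pop[:1])[0])
```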
APA, Harvard, Vancouver, ISO, and other styles
37

Addy, Nicholas G. "Ontology driven geographic information retrieval." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/2526.

Full text
Abstract:
The theory of modern information retrieval processes must be improved to keep pace with the growth and efficiency of the hardware architectures it depends on. The growth in data sources facilitated by hardware improvements must be matched by parallel growth at the user end of the information retrieval paradigm, encompassing both an increasing demand for data services and a widening user base. Contemporary sources refer to such growth as three-dimensional, in reference to the expected parallel growth in three key areas: hardware processing power, demand from current users of information services, and demand from an extended user base consisting of institutions and organizations that are not characteristically defined by their use of geographic information. This extended user base is expected to grow due to the demand to utilise and incorporate geographic information as part of competitive business processes, to fill the need for advertising and spatial marketing demographics. The vision of the semantic web is thus the challenge of managing integration between diverse and increasing data sources and diverse and increasing end users of information. Whilst data standardisation is one means of achieving this vision at the source end of the information flow, it is not a solution in a free market of ideas. Information in its elemental form should be accessible regardless of the domain of its creation. In an environment where users and sources are continually growing in scope and depth, the management of data via precise and relevant information retrieval requires techniques which can integrate information seamlessly between machines and users, regardless of the domain of application or data storage methods. This research is the study of a theory of geographic information structure which can be applied to all aspects of information systems development, governing at a conceptual level the representation of information to meet the requirements of inter-machine operability as well as inter-user operability. It entails a thorough study of the use of ontology, from theoretical definition to modern use in information systems development and retrieval, in the geographic domain. This is a study examining how the words used to describe geographic features can form a geographic ontology, and it evaluates WordNet, an English-language ontology in the form of a lexical database, as a structure for improving geographic information recall in gazetteers. The results of this research conclude that WordNet can be utilised as a methodology for improving search results in geographic information retrieval processes, as a source of additional query terms, but only within a narrow user domain.
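The core mechanism evaluated here, WordNet-based query expansion over a gazetteer, can be sketched as follows. The toy gazetteer is an assumption, while the WordNet calls are the standard NLTK interface.

```python
# Sketch: expand a gazetteer query with WordNet synonyms so that records
# indexed under related feature terms are also recalled.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

# Toy gazetteer mapping place names to feature-type terms (illustrative only).
gazetteer = {
    "Mount Kosciuszko": "mountain",
    "Ayers Rock": "outcrop",
    "Lake Eyre": "lake",
}

def expand(term):
    # the query term plus every lemma of every noun synset it belongs to
    names = {term}
    for syn in wn.synsets(term, pos=wn.NOUN):
        names.update(l.replace("_", " ") for l in syn.lemma_names())
    return {n.lower() for n in names}

query = "mount"
terms = expand(query)
hits = [place for place, feature in gazetteer.items() if feature in terms]
print(sorted(terms), "->", hits)
```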
APA, Harvard, Vancouver, ISO, and other styles
38

Wiigh, Oscar. "Visualizing partitioned data in Audience Response Systems : A design-driven approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280847.

Full text
Abstract:
Meetings and presentations are often monological in nature, creating a barrier to productivity in workplaces around the world. By utilizing modern technologies such as a web-based Audience Response System (ARS), meetings and presentations can be transformed into interactive exercises where the audience's views, opinions, and answers can be expressed. Visualizing these audience responses and relating question-specific partitioned answers to each other, through visualization structures, was the topic of this report. The thesis project was carried out in collaboration with Mentimeter, creator of a web-based ARS and online presentation tool. The Double Diamond design process model was used to investigate and ground the design and development process. To guide the implementation of the prototypes, a focus group was held with four visualization and design professionals knowledgeable about ARSs, to gather feedback on high-fidelity sketches. The final prototypes were evaluated with the extended Technology Acceptance Model (TAM) for information visualization to survey end-users' attitudes and willingness to adopt the visualization structures. Eight end-users tested the final web-based prototypes. The findings of the user tests indicate that both visualization prototypes showed promise for visualizing partitioned data in novel ways for ARSs, with an emphasis on a circle cluster visualization, as it allowed for the desired exploration. The results further imply that there is value to be gained by presenting partitioned data in ways that allow for exploration, and that audiences would likely adopt a full implementation of the visualizations given some added functionality and adjustments. Future research should focus on fully implementing and testing the visualizations in front of a live audience, as well as investigating other contemporary visualization structures and their capabilities for visualizing partitioned ARS data.
APA, Harvard, Vancouver, ISO, and other styles
39

Das, Debasish. "Bayesian Sparse Regression with Application to Data-driven Understanding of Climate." Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/313587.

Full text
Abstract:
Computer and Information Science
Ph.D.
Sparse regressions based on constraining the L1-norm of the coefficients became popular due to their ability to handle high-dimensional data, unlike regular regressions, which suffer from overfitting and model identifiability issues, especially when the sample size is small. They are often the method of choice in many fields of science and engineering for simultaneously selecting covariates and fitting parsimonious linear models that are better generalizable and easily interpretable. However, significant challenges may be posed by the need to accommodate extremes and other domain constraints such as dynamical relations among variables, spatial and temporal constraints, the need to provide uncertainty estimates, and feature correlations, among others. We adopted a hierarchical Bayesian version of the sparse regression framework and exploited its inherent flexibility to accommodate these constraints. We applied sparse regression to the feature selection problem of statistical downscaling of climate variables, with particular focus on their extremes. This is important for many impact studies where climate change information is required at a spatial scale much finer than that provided by global or regional climate models. Characterizing the dependence of extremes on covariates can help in the identification of plausible causal drivers and inform the downscaling of extremes. We propose a general-purpose sparse Bayesian framework for covariate discovery that accommodates the non-Gaussian distribution of extremes within a hierarchical Bayesian sparse regression model. We obtain posteriors over regression coefficients, which indicate the dependence of extremes on the corresponding covariates and provide uncertainty estimates, using a variational Bayes approximation. The method is applied to selecting informative atmospheric covariates at multiple spatial scales, as well as indices of large-scale circulation and global warming, related to the frequency of precipitation extremes over the continental United States. Our results confirm the dependence relations that may be expected from known precipitation physics and generate novel insights which can inform physical understanding. We plan to extend our model to discover covariates for extreme intensity in the future. We further extend our framework to handle the dynamic relationships among climate variables using a nonparametric Bayesian mixture of sparse regression models based on a Dirichlet Process (DP). The extended model can achieve simultaneous clustering and discovery of covariates within each cluster. Moreover, a priori knowledge about associations between pairs of data points is incorporated in the model through must-link constraints on a Markov Random Field (MRF) prior. A scalable and efficient variational Bayes approach is developed to infer posteriors over regression coefficients and cluster variables.
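As a rough sketch of sparse Bayesian covariate discovery, scikit-learn's ARDRegression can serve as an off-the-shelf stand-in for the hierarchical variational model of the thesis (it is not the author's model and omits the non-Gaussian extremes likelihood): with many candidate covariates and few true drivers, the sparse posterior concentrates on the informative ones.

```python
# Sketch: sparse Bayesian linear regression picks out the few informative
# covariates among many and returns coefficient estimates with uncertainty.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(5)
n, p = 200, 30
X = rng.normal(size=(n, p))                       # 30 candidate covariates
beta = np.zeros(p)
beta[[2, 7, 11]] = [1.5, -2.0, 1.0]               # only three true drivers
y = X @ beta + rng.normal(0, 0.5, n)

fit = ARDRegression().fit(X, y)
informative = np.where(np.abs(fit.coef_) > 0.1)[0]
print("recovered covariates:", informative)        # expect indices 2, 7, 11
print("estimated noise precision:", fit.alpha_)
```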
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
40

Andersson, Johan, and Amirhossein Gharaie. "It is Time to Become Data-driven, but How : Depicting a Development Process Model." Thesis, Högskolan i Halmstad, Akademin för företagande, innovation och hållbarhet, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-45353.

Full text
Abstract:
Background: The business model (BM) is an essential part of a firm, and it needs to be innovated continuously to allow firms to stay or become competitive. The process of business model innovation (BMI) unfolds incrementally by re-designing activities or developing new ones in order to provide value propositions (VP). With the increasing availability of data, pressure is growing on firms to orchestrate their BMI activities around data as a key resource and to develop data-driven business models (DDBM). Problematization: The DDBM provides valuable possibilities by utilizing data to optimize current businesses and create new VPs. However, the development process of DDBMs is described as challenging and has scarcely been reviewed. Purpose: This study aims to explore what a data-driven business model development process looks like. More specifically, we adopted this research question: What are the phases and activities of a DDBM development process, and what characterizes this process? Method: This is a qualitative study in which the empirical data was collected through 9 semi-structured interviews, with the respondents divided among three different initiatives. Empirical Findings: This study enriches the existing literature on BMI in general and data-driven business model innovation in particular. Concretely, it contributes to the process perspective of DDBM development. It helps to unpack the complexity of data engagement in business model development and provides a visual process model as an artefact that shows the anatomy of the process. Additionally, this study discusses how value logics manifest through the states of artefacts, activities, and cognitions. Conclusions: This study concludes that the DDBM development process is structured in two phases, low data-related and high data-related activities, comprising seven sub-phases that consist of different activities. This study also identified four underlying characteristics of the DDBM development process: value co-creation, iterative experimentation, ethical and regulatory risk, and adaptable strategy. Future research: Further work is needed to explain the anatomy and structure of the DDBM development process in different contexts, to uncover whether it captures the various complexities of data, and to increase its generalizability. Furthermore, more research is required to differentiate between different business models and consequently customize the development process for each type. Future research could also further explore value co-creation in developing DDBMs. In this direction, it would be interesting to connect the field of open innovation to the field of DDBM and, specifically, its role in the DDBM development process. Another promising avenue for future research would be to go beyond the focus on merely improving the VP to maximize data monetization, and instead focus on the interplay and role that data has in sustainability.
APA, Harvard, Vancouver, ISO, and other styles
41

Decker, Sebastian. "Data-driven business process improvement : An illustrative case study about the impacts and success factors of business process mining." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Företagsekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-43958.

Full text
Abstract:
The current business environment is rapidly and fundamentally changing, with digital technologies as the main driver. Companies face pressure to exploit these technologies to improve their business processes in order to achieve competitive advantage. In light of the increased complexity of business processes and the existence of corporate Big Data stored in information systems, the discipline of process mining has emerged. The purpose of this study is to investigate how process mining can support the optimization of business processes. In this qualitative study, an illustrative case study approach is used, involving eight research participants. Data is primarily collected through semi-structured interviews and analyzed using content analysis. In addition, the illustrative case serves to demonstrate the application of process mining. The research revealed that process mining has important impacts on current business process improvement, not all of which were positive. The derived success factors should help vendors and current and potential users apply process mining safely and successfully.
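As background for readers unfamiliar with process mining, a minimal sketch of one of its core steps, deriving the directly-follows relation from an event log, is shown below. The toy log is a made-up example, while real studies mine cases exported from information systems.

```python
# Sketch: compute the directly-follows relation from an event log, the basic
# building block that process discovery algorithms work from.
from collections import Counter

# Toy event log: one list of activity names per process case (illustrative).
event_log = [
    ["receive order", "check credit", "ship", "invoice"],
    ["receive order", "check credit", "reject"],
    ["receive order", "check credit", "ship", "invoice"],
]

directly_follows = Counter(
    (a, b) for trace in event_log for a, b in zip(trace, trace[1:])
)

for (a, b), count in directly_follows.most_common():
    print(f"{a} -> {b}: {count}")
```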
APA, Harvard, Vancouver, ISO, and other styles
42

STRÖMBERG, HANNA. "Data driven customer insights in the B2B sales process at high technology scaleups." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296246.

Full text
Abstract:
When scaling a company, it is important to implement customer insights to achieve revenue growth. Understanding and defining a suitable B2B sales process has also been shown to play an important part in enhancing sales, and traditional processes include multiple steps performed by sales representatives. One step revolves around the presentation of the offered product or service. For sales representatives to present a product or service successfully, they must acquire or have deep knowledge of the customer, such as their industry trends and general business. This can be achieved by acquiring customer insights that are data-driven. Adopting data-driven customer insights has also been proven to increase sales. Therefore, this research investigates the connection between the B2B sales process and the generation and implementation of data-driven customer insights. In particular, this research explores the steps included in a B2B sales process at a high-technology scaleup and how data-driven customer insights can enhance the presentation step in that process.  The research is carried out through a case study at a company classified as a high-technology scaleup. Interviews were conducted with sales representatives working in the commercial team at the case company. This research finds that the B2B sales process at high-technology scaleups includes six steps: lead generation, first meeting, assessment, contract proposal, negotiation, and closed deal. The second step includes presenting the offered product or service, which this research identified as the most challenging for sales representatives to execute successfully, due to the technical complexity of the product or service. The findings show that data-driven customer insights can be used to simplify this step in the process. For example, data-driven customer insights can help personalize presentation material and enable rapport building. In addition, data-driven customer insights help align expectations between buyers and sellers during the first meeting, thus increasing the likelihood of reaching a closed deal.
APA, Harvard, Vancouver, ISO, and other styles
43

Sitruk, Yohann. "Pilotage de la performance des projets de science citoyenne dans un contexte de transformation du rapport aux données scientifiques : systématisation et perte de production." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEM035/document.

Full text
Abstract:
A growing number of contemporary scientific organizations integrate crowds of participants, assigned to a variety of tasks, into their processes; these collaborations are often designed as citizen science projects. Such crowds are an opportunity in a context of massive data deluge that pushes scientific structures to their limits in terms of resources and capabilities. However, these new forms of cooperation are destabilized by their very nature as soon as the tasks delegated to the crowd require a certain inventiveness - solving problems, formulating scientific hypotheses - and the projects have to be repeated within the organization. Based on two experimental studies built on an original model, this thesis studies the management mechanisms needed to ensure the performance of projects delegated to the crowd. We show that performance is linked to the management of two types of capitalization: cross-capitalization (each participant can reuse the work of other participants) and sequential capitalization (capitalization by the participants and then by the organizers). In addition, this research highlights a new managerial figure to support this capitalization, the "manager of inventive crowds", essential for the success of such projects.
APA, Harvard, Vancouver, ISO, and other styles
44

Morselli, Luca. "Analisi e implementazione di un sistema di Data Visualization in relazione al modello organizzativo aziendale." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
Companies today are overwhelmed by the enormous amount of data they deal with: properly storing this data is a fundamental prerequisite for an accurate representation that can then be analyzed by the various actors within the company. To this end, Business Intelligence systems are used, which provide fundamental support throughout the decision-making process at different levels. The main purpose of this work is therefore to present Data Visualization techniques that support this process as a function of the organizational role of the target user. A series of Data Visualization proposals for each role, and several possible applications, are presented. Particular attention is paid to the potential of dashboards in the analysis of different business areas and to the impact they have on decision-making at the economic, strategic, and operational levels.
APA, Harvard, Vancouver, ISO, and other styles
45

Dyrhage, Max. "Incorporating a tag management system in an agile web development process to become more data-driven." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240395.

Full text
Abstract:
Web analytics is used to track and examine user behavior on websites and web applications. To make data-driven decisions, companies and organizations working with the web look to web analytics to understand their users. A piece of JavaScript code that collects user behavior and information is often referred to as a tag, which can be managed through a tag management system. Tag management systems can provide structure to how a website's users' behavior is measured. This study examines how a tag management system can enable web analytics of user data to be incorporated into an agile web development process at the Swedish company Dailybitsof. Through a literature study, a case study, and interviews with professionals on the subject, a set of recommendations to enable web analytics is presented. This study suggests that a tag management system can enable web analytics to be incorporated into an agile web development process if it is implemented in combination with changes to the agile process.
APA, Harvard, Vancouver, ISO, and other styles
46

Carleson, Hannes, and Marcus Lyth. "Evaluation of Problem Driven Software Process Improvement." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189216.

Full text
Abstract:
Software development is constantly growing in complexity and several new tools have been created with the aim of managing this. However, even with this ever-evolving range of tools and methodology, organizations often struggle with how to implement a new development process, especially when implementing agile methods. The most common reason for this is that teams implement agile tools in an ad-hoc manner, without fully considering the effects this can cause. This leads to teams trying to correct their choice of methodology somewhere during the post-planning phase, which can be devastating for a project, as it adds further complexity by introducing new problems during the transition process. Moreover, while a range of tools exists that aims to manage this process transition, none of them have been thoroughly evaluated, which in turn forms the problem that this thesis is centred around. This thesis explores a method transition scenario and evaluates a Software Process Improvement method oriented around the problems that the improvement process aims to solve. The goal is to establish whether problem-oriented Software Process Improvement is viable, as well as to provide further data for the extensive research being done in this field. We wish to prove that the overall productivity of a software development team can be increased even during a project by carefully managing the transition to new methods using a problem-driven approach. The research method used is of a qualitative and inductive character. Data is collected by performing a case study, via action research, and literature studies. The case study consists of iteratively managing a transition to new methods, at an organization in the middle of a project, using a problem-driven approach to Software Process Improvement. Three iterations of method improvement are applied to the project, and each iteration acts as an evaluation of how well Problem Driven Software Process Improvement works. By using the evaluation model created for this degree project, the researchers have found that problem-driven Software Process Improvement is an effective tool for managing and improving the processes of a development team. Productivity has increased, with tasks of the highest priority being finished first. Transparency has increased, with both the development team and the company having a clearer idea of work in progress and what is planned. Communication has grown, with developers talking more freely about user stories and tasks during planning and stand-up meetings. The researchers acknowledge that the results of the study are of a limited scope and recognize that further evaluation, in the form of more iterations, is needed for a complete evaluation.
APA, Harvard, Vancouver, ISO, and other styles
47

Wendin, Ingrid, and Per Bark. "Data at your service! : A case study of utilizing in-service data to support the B2B sales process at a large information and communications technology company." Thesis, Linköpings universitet, Industriell ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176403.

Full text
Abstract:
The digitalization of our society and the creation of data-intense industries are transforming how industrial sales can be made. Large volumes of data are generated when businesses and people use the digital products and services available in the modern world. Some of this data describes the digital products and services while they are in use, i.e., it is in-service data. Furthermore, over the last decade data has come to be seen as an asset which can improve decision-making, and it has made sales activities increasingly customer-specific. The purpose of this study was to explore how knowledge from in-service data can serve B2B selling. To realize this purpose, the following three research questions were answered by conducting a single case study of a large company in the information and communications technology (ICT) industry. (RQ1) How does a company in a data-intense industry use knowledge from in-service data in the B2B sales process? (RQ2) What opportunities does knowledge from in-service data create in the B2B sales process? (RQ3) What challenges hinder a company from using knowledge from in-service data in the B2B sales process? RQ1: This study concluded that, in the context of a data-intense industry, knowledge from in-service data is actively used by the sales team throughout the steps of the B2B sales process, although to varying degrees. In-service data is used in six categories of sales activities: (1) to understand the customer in terms of their technical and strategic needs, which enables lead generation and cross-selling, (2) to make information from in-service data available through data collection, storage, and analysis, (3) to nurture the relationship between buyer and seller by creating understanding, trust, and satisfactory offers for the customer, (4) to present solutions with convincing arguments, (5) to solve problems and satisfy the customer's needs, and (6) to provide post-sale value-adding services. Moreover, three general resources used in the activities were identified: an audit report which presents the information derived from the data, a plan which presents strategic expansions of the solution, and simulations of the solution. Furthermore, four general actors who perform the activities were identified: the Key Account Manager (KAM), who is responsible for conducting the sales interactions with the customer; the sales team and the presales team, who both support the KAM; and the customer. In addition to the general resources and actors, companies may use step-specific resources and actors. RQ2: Four categories of opportunities were identified: knowledge from in-service data (1) assists KAMs in discovering customer needs, (2) guides the KAM in creating better customer-specific solutions, (3) helps the KAM move the sale faster through the sales process, and (4) assists the company in becoming a true partner who provides strategic services, rather than acting as a supplier. RQ3: Finally, four categories of challenges were identified: (1) organizational, (2) technological, (3) cultural, and (4) legal and security. Of these, obtaining access to the data was identified as the greatest challenge to using in-service data. The opportunities and the challenge of accessing data are deemed to be general for companies in data-intense industries, while the other challenges depend on the structure, size, and culture of the individual company. 
The findings of this study contribute to a general understanding of how companies in data-intense industries may use knowledge from in-service data, what opportunities this data creates for their B2B sales process, and which challenges they face when they pursue activities that use this knowledge. To conclude, in-service data serves B2B selling especially as a source of customer knowledge: salespeople use it to understand the customer’s technical and strategic needs and draw on this knowledge in various customer-oriented sales activities. In-service data creates several opportunities in B2B sales, but several challenges, especially the question of data access, must be overcome to seize them.
APA, Harvard, Vancouver, ISO, and other styles
48

Falkenstrand, Johanna, and Camilla Lemos. "Fostering Proactiveness in Data-Driven Matrix Organizations : A Study of Alfa Laval's Distribution Center in Tumba." Thesis, KTH, Industriell produktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254341.

Full text
Abstract:
Globalization has increased the complexity of the business world, as it adds new dimensions to companies’ operations, such as global suppliers and customers, and competition from global actors. To handle this complexity, companies are pressured to become more data-driven so that they can measure and align their operations and create possibilities for efficiency and competitiveness [Skjott-Larsen et al., 2007; Long, 2018]. To enable the change towards becoming more data-driven, companies need to rethink the structure of their organization. Matrix structures have gained popularity, since they allow companies to focus on more than one dimension by creating functional teams focused on specific tasks [Sy et al., 2005]. However, it is not uncommon that the functional groups become functional silos, with an inward focus on the own group’s performance, leading to decreased understanding of other groups and poor communication between groups. A lack of understanding of other groups can lead to a reactive, rather than proactive, way of handling problems [Motiwalla and Pearson, 2009]. The purpose of this project is to create a process that can be used to facilitate proactive work in a data-driven matrix organization struggling with a reactive way of handling problems. The process can be used as a way to decide between possible solutions in decision-making processes, while making sure that every affected department is involved at an early stage of the decision-making process. Alfa Laval’s distribution center (DC) in Tumba faces the challenges of functional silos and reactive work. The organization is data-driven, so many decisions are based on data. However, the best decision according to the data is not always feasible, which has led to decisions that affect other departments negatively. Based on data from, and observations made at, the DC, a process was created. The process was iterated and improved through application to real-life problems and points of improvement identified at DC Tumba. While it is based on the operations at Alfa Laval, it can be applied to any organization facing similar challenges. The final version of the process proved to deliver good solutions to problems by involving stakeholders early on, making it possible for them to influence how the solutions should be adjusted in order to avoid the changes affecting their daily work negatively. The most important conclusion is that important stakeholder departments should be involved early in decision-making processes. That way, their valuable competence and knowledge can be utilized when identifying possible solutions, and any negative effects of a solution on other departments can be discovered before implementation. In addition, taking the time to thoroughly analyze the root cause and effects of a problem can increase the understanding of the whole chain.
APA, Harvard, Vancouver, ISO, and other styles
49

Calfa, Bruno Abreu. "Data Analytics Methods for Enterprise-wide Optimization Under Uncertainty." Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/575.

Full text
Abstract:
This dissertation primarily proposes data-driven methods to handle uncertainty in problems related to Enterprise-wide Optimization (EWO). Data-driven methods are characterized by the direct use of data (historical and/or forecast) in the construction of models for the uncertain parameters that naturally arise from real-world applications. Such uncertainty models are then incorporated into the optimization model describing the operations of an enterprise. Before addressing uncertainty in EWO problems, Chapter 2 deals with the integration of deterministic planning and scheduling operations of a network of batch plants. The main contributions of this chapter include the modeling of sequence-dependent changeovers across time periods for a unit-specific general precedence scheduling formulation, the hybrid decomposition scheme using Bilevel and Temporal Lagrangean Decomposition approaches, and the solution of subproblems in parallel. Chapters 3 to 6 propose different data analytics techniques to account for stochasticity in EWO problems. Chapter 3 deals with scenario generation via statistical property matching in the context of stochastic programming. A distribution matching problem is proposed that addresses the under-specification shortcoming of the originally proposed moment matching method. Chapter 4 deals with data-driven individual and joint chance constraints with right-hand side uncertainty. The distributions are estimated with kernel smoothing and are considered to be in a confidence set, which is also considered to contain the true, unknown distributions. The chapter proposes the calculation of the size of the confidence set based on the standard errors estimated from the smoothing process. Chapter 5 proposes the use of quantile regression to model production variability in the context of Sales & Operations Planning. The approach relies on available historical data of actual vs. planned production rates, from which the deviation from plan is defined and considered a random variable. Chapter 6 addresses the combined optimal procurement contract selection and pricing problems. Different price-response models, linear and nonlinear, are considered in the latter problem. Results show that setting selling prices in the presence of uncertainty leads to the use of different purchasing contracts.
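To make the quantile-regression idea described for Chapter 5 concrete, the sketch below fits conditional quantiles of the deviation between actual and planned production rates. It is a minimal illustration under stated assumptions, not the dissertation's actual model: the synthetic data, the column names (planned, deviation), and the linear specification are all assumptions introduced here for demonstration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical history of planned vs. actual production rates.
rng = np.random.default_rng(0)
planned = rng.uniform(80.0, 120.0, size=500)
actual = planned * rng.normal(0.95, 0.05, size=500)
df = pd.DataFrame({"planned": planned, "deviation": actual - planned})

# Treat the deviation from plan as a random variable and fit its
# conditional 10th, 50th, and 90th percentiles as functions of the plan.
for q in (0.1, 0.5, 0.9):
    res = smf.quantreg("deviation ~ planned", df).fit(q=q)
    print(f"q={q}: deviation = {res.params['Intercept']:.2f} "
          f"+ {res.params['planned']:.3f} * planned")

A planner could, for example, read the fitted 10th-percentile line as a pessimistic production estimate when building a Sales & Operations plan, which is one plausible way such a variability model feeds back into planning.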
APA, Harvard, Vancouver, ISO, and other styles
50

DeWitt, David. "Teacher-Based Teams Talk of Change in Instructional Practices." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/4615.

Full text
Abstract:
Mandates have been issued for educators to collaborate and improve student achievement, requiring a change in instructional practices through teacher talk. Teachers have struggled to make the transition from team planning to observed changes in instructional practices with evidence of improvement. The purpose of this qualitative study was to examine how teachers collaborated while following the Ohio Improvement Process in order to make data-driven changes to instructional practices within the continuous improvement cycle. The conceptual framework was constructed from the teachers' dialogic stances towards talk of instruction, along with the intellectual and emotional attitudes teachers have about making changes. The guiding research question examined the ways teachers have been influenced by each other to make changes in instructional practices. The case study design observed a sample of 10 teachers from two teacher-based teams, five of whom were interviewed. Observational data were examined for dialogic stance toward talk of instructional practices, while interview data were analyzed for evidence of cognitive restructuring. Statements were categorized as motivations and influences. The analysis revealed that the teachers are changing their thinking through motivations and influences from collaboration. The literature supports the finding that teachers could benefit from a gradual implementation process leading to the continuous improvement cycle. By developing a policy recommendation paper with a focus on teacher learning, positive social change may include preparing and empowering teachers for the changes that occur through collaboration.
APA, Harvard, Vancouver, ISO, and other styles
