Dissertations / Theses on the topic 'Data-driven experiments'


Consult the top 23 dissertations / theses for your research on the topic 'Data-driven experiments.'


1

Cedeno, Vanessa Ines. "Pipelines for Computational Social Science Experiments and Model Building." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91445.

Full text
Abstract:
There has been significant growth in online social science experiments to understand behavior at scale, with finer-grained data collection. Considerable work is required to perform data analytics for custom experiments. In this dissertation, we design and build composable and extensible automated software pipelines for evaluating social phenomena through iterative experiments and modeling. To reason about experiments and models, we design a formal data model. This combined approach of experiments and models has been used in some studies without automation, or purely conceptually. We are motivated by a particular social behavior, namely collective identity (CI). Group identity, or CI, is an individual's cognitive, moral, and emotional connection with a broader community, category, practice, or institution. Extensive experimental research shows that CI influences human decision-making. Because of this, there is interest in modeling situations that promote the creation of CI in order to learn more from the process and to predict human behavior in real-life situations. One of our goals in this dissertation is to understand whether a cooperative anagram game can produce CI within a group. Given all of the experimental work on anagram games, it is surprising that very little work has been done in modeling these games. Also, abduction is an inference approach that uses data and observations to identify plausible (and preferably, best) explanations for phenomena. Abduction has broad application in robotics, genetics, automated systems, and image understanding, but it has largely not been applied to human behavior. We use these pipelines to understand intra-group cooperation and its effect on fostering CI. We devise and execute an iterative abductive analysis process that is driven by the social sciences.
In a group anagrams web-based networked game setting, we formalize an abductive loop, implement it computationally, and exercise it; we build and evaluate three agent-based models (ABMs) through a set of composable and extensible pipelines; we also analyze experimental data and develop mechanistic and data-driven models of human reasoning to predict detailed game player action. The agreement between model predictions and experimental data indicate that our models can explain behavior and provide novel experimental insights into CI.
Doctor of Philosophy
To understand individual and collective behavior, there has been significant interest in using online systems to carry out social science experiments. Considerable work is required to analyze the data and to uncover interesting insights. In this dissertation, we design and build automated software pipelines for evaluating social phenomena through iterative experiments and modeling. To reason about experiments and models, we design a formal data model. This combined approach of experiments and models has been used in some studies without automation, or purely conceptually. We are motivated by a particular social behavior, namely collective identity (CI). Group identity, or CI, is an individual's cognitive, moral, and emotional connection with a broader community, category, practice, or institution. Extensive experimental research shows that CI influences human decision-making, so there is interest in modeling situations that promote the creation of CI to learn more from the process and to predict human behavior in real-life situations. One of our goals in this dissertation is to understand whether a cooperative anagram game can produce CI within a group. Given all of the experimental work on anagram games, it is surprising that very little work has been done in modeling these games. In addition, to identify the best explanations for phenomena, we use abduction, an inference approach that uses data and observations. Abduction has broad application in robotics, genetics, automated systems, and image understanding, but it has largely not been applied to human behavior. In a group anagrams web-based networked game setting, we do the following. We use these pipelines to understand intra-group cooperation and its effect on fostering CI. We devise and execute an iterative abductive analysis process that is driven by the social sciences. We build and evaluate three agent-based models (ABMs).
We analyze experimental data and develop models of human reasoning to predict detailed game player action. We claim our models can explain behavior and provide novel experimental insights into CI, because there is agreement between the model predictions and the experimental data.
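
The iterative abductive analysis the abstract describes can be sketched generically. The following is a schematic only; all function names (`run_experiment`, `fit`, `discrepancy`, `refine`) are placeholder assumptions, not the dissertation's actual pipeline API:

```python
def abductive_loop(run_experiment, models, fit, discrepancy, refine,
                   tol=0.05, max_rounds=10):
    """Iterative abduction: run an experiment, fit candidate models,
    keep the best explanation, and refine the model set until its
    predictions agree with the data (or the round budget is spent)."""
    data = run_experiment()
    best = None
    for _ in range(max_rounds):
        fitted = [fit(m, data) for m in models]
        best = min(fitted, key=lambda m: discrepancy(m, data))
        if discrepancy(best, data) <= tol:
            break                      # data are adequately explained
        models = refine(best, models)  # propose new candidate explanations
        data = run_experiment()        # ...and collect fresh observations
    return best
```

The loop terminates either when the best fitted model explains the observations within tolerance or when the experiment budget runs out, mirroring the experiment-model iteration described above.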
APA, Harvard, Vancouver, ISO, and other styles
2

Merikle, Elizabeth Paige 1965. "Facilitation of performance on a picture fragment completion test: Data-driven potentiation in perception." Thesis, The University of Arizona, 1991. http://hdl.handle.net/10150/277941.

Abstract:
A technique commonly used to study the structure of memory entails preceding a task by a brief masked presentation of a potentially relevant stimulus. In two experiments, I examined the type of facilitation obtained on a picture fragment completion task by prior presentation of either the name of the completed object, a complete picture of the object, or the fragment itself. In Experiment 1a, significantly more ambiguous picture fragments (i.e., fragments supporting a number of interpretations) were identified after exposure to pictures than to picture names or picture fragments. Experiment 1b verified that the information in the masked primes was not available to conscious awareness. These results suggest that under limited encoding conditions, only the bottom-up activation provided by prior presentation of the fragments aids shape recognition under degraded conditions. Implications for the structures and processes involved in shape recognition are discussed.
3

Oskarsdottir, Eyglo Myrra. "Towards a Data-Driven Pricing Decision With the Help of A/B Testing." Thesis, KTH, Nationalekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-199224.

Abstract:
An A/B test is implemented on a SaaS firm's product page to examine the difference in conversion rates from website visitors who are randomly assigned to two different product-landing pages that show different prices. To count as a successful conversion, visitors who view a product-landing page have to click on a "Free Trial" button. Half of the group is assigned the treatment page, which states higher prices, and the other half is assigned the control page, which states today's current price. The only variant that differs between the two pages is the stated price of the product; all other factors are kept constant. The controlled experiment is executed to get a sense of customers' price sensitivity; hence this thesis contributes to microeconomic research on the private sector, more specifically to the ICT industry, by using a novel approach with the help of A/B testing on prices. The results showed no statistically significant difference between the two variations, which translates to failing to reject the null hypothesis: the demand for a particular Software-as-a-Service product will hold unchanged after the proposed price increase. At first this could seem a surprising result, but when looking into the industry in which the firm participates and its early-mover advantages, this result could have been anticipated.
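
The hypothesis test behind such an A/B comparison can be sketched with a standard two-proportion z-test. The visitor and conversion counts below are hypothetical, not the thesis's data:

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# hypothetical counts: 48/1000 conversions on control, 42/1000 on treatment
z, p = two_proportion_z(48, 1000, 42, 1000)
```

With these made-up counts the p-value is well above 0.05, i.e., the same "no detectable difference" outcome the thesis reports.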
4

Emerton, Guy. "Data-driven methods for exploratory analysis in chemometrics and scientific experimentation." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86366.

Abstract:
Thesis (MSc)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: Background: New methods to facilitate exploratory analysis of scientific data are in high demand. There is an abundance of available data, used only for confirmatory analysis, from which new hypotheses can be drawn. To this end, two new exploratory techniques are developed: one for chemometrics and another for visualisation of fundamental scientific experiments. The former transforms large-scale multiple raw HPLC/UV-vis data into a conserved set of putative features, something not often attempted outside of Mass Spectrometry. The latter method ('StatNet') applies network techniques to the results of designed experiments to gain new perspective on variable relations. Results: The resultant data format from un-targeted chemometric processing was amenable to both chemical and statistical analysis. It proved to have integrity when machine-learning techniques were applied to infer attributes of the experimental set-up. The visualisation techniques were equally successful in generating hypotheses, and were easily extendible to three different types of experimental results. Conclusion: The overall aim was to create useful tools for hypothesis generation in a variety of data. This has been largely reached through a combination of novel and existing techniques. It is hoped that the methods presented here are further applied and developed.
5

Kalibjian, J. R. "A Packet Based, Data Driven Telemetry System for Autonomous Experimental Sub-Orbital Spacecraft." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/608857.

Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
A data driven telemetry system is described that responds to the rapid nature in which experimental satellite telemetry content is changed during the development process. It also meets the needs of a diverse experiment in which the many phases of a mission may contain radically different types of telemetry data. The system emphasizes mechanisms for achieving high redundancy of critical data. A practical example of such an implementation, Brilliant Pebbles Flight Experiment Three (FE-3), is cited.
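
A packet format with per-packet integrity checks and duplicated critical packets can be sketched loosely in the spirit of the system described. The field layout here (16-bit packet ID, 16-bit length, CRC-32 trailer) is an illustrative assumption, not the FE-3 format:

```python
import struct
import zlib

def build_packet(apid, payload, critical=False):
    """Pack a minimal telemetry packet: 16-bit packet ID, 16-bit length,
    payload bytes, 32-bit CRC; critical packets are emitted twice for redundancy."""
    header = struct.pack(">HH", apid, len(payload))
    body = header + payload
    frame = body + struct.pack(">I", zlib.crc32(body))
    return [frame, frame] if critical else [frame]

def parse_packet(frame):
    """Unpack a frame, verifying the CRC before trusting the payload."""
    apid, length = struct.unpack(">HH", frame[:4])
    payload = frame[4:4 + length]
    (crc,) = struct.unpack(">I", frame[4 + length:8 + length])
    if crc != zlib.crc32(frame[:4 + length]):
        raise ValueError("corrupt packet")
    return apid, payload
```

Because the packet is self-describing (ID plus length), the downlink content can change during development without changing the decoder, which is the "data driven" property the abstract emphasizes.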
6

Zhao, He Sokhansanj Bahrad. "Systematic data-driven modeling of cellular systems for experimental design and hypothesis evaluation /." Philadelphia, Pa. : Drexel University, 2009. http://hdl.handle.net/1860/3133.

7

Guerrero, Ludueña Richard E. "Data Driven Approach to Enhancing Efficiency and Value in Healthcare." Doctoral thesis, Universitat de Barcelona, 2017. http://hdl.handle.net/10803/456670.

Abstract:
Healthcare is changing, and the era of data-driven healthcare organisations is increasingly popular. Data-driven approaches (e.g., Machine Learning, Metaheuristics, Modelling and Simulation, Data Analytics, and Data Visualisation) can be used to increase efficiency and value in health services. Despite extensive research and technological development, the evidence of the impact of these methodologies in the healthcare sector is limited. In this Thesis we argue that an approach without borders in terms of academic societies and field of study could help to tackle this lack of impact and enhance efficiency and value in healthcare. This Thesis is based on solving practical problems in healthcare, with the research drawing upon both theoretical and empirical analysis. The research is organised in four stages. In the first, a variety of techniques from Modelling and Simulation were studied and used to analyse current performance and to model improved and more efficient future states of healthcare systems. The focus was the analysis of capacity, demand, activity, and queues both at hospital and population levels. In the second part, a Genetic Algorithm was used to solve a Routing Home Healthcare problem. In the third part, Social Network Analysis was used to visualise and analyse email networks. In the final part, a new healthcare system performance metric is proposed and implemented using a case study. New frameworks to implement these methodologies in the context of real-world problems are presented throughout the Thesis. In collaboration with the University of Southampton, the Wessex Academic Health Science Network (AHSN), and NHS England, several projects were developed and implemented for healthcare improvement in the UK. The work aims to increase early detection of cancer and thereby reduce premature mortality.
The research was conducted working closely with NHS organisations across the Wessex region in England to produce bespoke endoscopy service modelling, as well as population-level models. At a regional level, a Colorectal Cancer Screening Programme model was developed. An analysis of endoscopy activity, capacity, and demand across the region was conducted. Future demand for endoscopy services in five years' time was estimated, and we found that the system has enough capacity to accommodate the expected future activity. A new healthcare system performance metric is presented as a tool to improve healthcare services. A Genetic Algorithm metaheuristic was applied in a variant of the Home Health Care Problem (HHCP), focusing on the route planning of clinical homecare. Working with the Hospital del Mar Medical Research Institute of Barcelona and the Agency of Health Quality and Assessment of Catalonia, a study was developed to estimate future utilisation scenarios of knee arthroplasty (KA) revision in the Spanish National Health System in the short term (2015) and long term (2030) and their impact on primary KA utilisation. One of the findings was that the variation in the number of revisions depended on both the primary utilisation rate and the survival function applied. Future activity and the resources needed were estimated. A Social Network Analysis (SNA) project was developed in collaboration with the Wessex AHSN to analyse and extract insight from organisational email, using SNA and Data Mining. A new healthcare system performance metric, based on the Overall Equipment Effectiveness (OEE) measure, is proposed and evaluated using real data from an Endoscopy Unit of a UK-based hospital. To summarise, this work identifies four key techniques to use in the investigation of health data: Machine Learning Algorithms, Metaheuristics, Discrete Event Simulation, and Data Analytics & Visualisation.
Following a review of the different subjects and associated issues, those four techniques were evaluated and used with an applied-focus to solve healthcare problems. Key learning points from all different studies, as well as challenges and opportunities for the application of data-driven methodologies are discussed in the final chapter of the Thesis.
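
The route-planning step can be illustrated with a minimal Genetic Algorithm for visit ordering. This is a generic sketch, not the thesis's implementation, and the toy distance matrix is invented:

```python
import random

def route_length(route, dist):
    """Total length of a route visiting every stop once."""
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest in b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [g for g in b if g not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(route, rate=0.1):
    """Swap two stops with probability `rate`."""
    route = route[:]
    if random.random() < rate:
        i, j = random.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]
    return route

def ga_route(dist, pop_size=30, generations=100):
    """Evolve visit orders, keeping the better half each generation."""
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, dist))
        elite = pop[:pop_size // 2]
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=lambda r: route_length(r, dist))

# toy instance: five homes along a street, distance = difference in position
random.seed(0)
positions = [0, 3, 1, 4, 2]
dist = [[abs(a - b) for b in positions] for a in positions]
best = ga_route(dist)
```

A real HHCP adds time windows, carer skills, and working-hour constraints, but the evolve-select loop stays the same.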
8

Deshpande, Shubhangi. "Data Driven Surrogate Based Optimization in the Problem Solving Environment WBCSim." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/35901.

Abstract:
Large scale, multidisciplinary, engineering designs are always difficult due to the complexity and dimensionality of these problems. Direct coupling between the analysis codes and the optimization routines can be prohibitively time consuming. One way of tackling this problem is by constructing computationally cheap(er) approximations of the expensive simulations, that mimic the behavior of the simulation model as closely as possible. This thesis presents a data driven, surrogate based optimization algorithm that uses a trust region based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiment (DOE) arrays. The algorithm is implemented using techniques from the two packages SURFPACK and SHEPPACK that provide a collection of approximation algorithms to build the surrogates, and three different DOE techniques: full factorial (FF), Latin hypercube sampling (LHS), and central composite design (CCD) are used to train the surrogates. The biggest concern in using the proposed methodology is the generation of the required database. This thesis proposes a data driven approach where an expensive simulation run is required if and only if a nearby data point does not exist in the cumulatively growing database. Over time the database matures and is enriched as more and more optimizations are performed. Results show that the response surface approximations constructed using design of experiments can be effectively managed by a SAO framework based on a trust region strategy. An interesting result is the significant reduction in the number of simulations for the subsequent runs of the optimization algorithm with a cumulatively growing simulation database.
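
The DOE-plus-surrogate idea can be sketched as follows. Latin hypercube sampling is one of the DOE techniques named above, while the inverse-distance-weighted interpolant below is a deliberately simple stand-in for the SURFPACK/SHEPPACK surrogates, used only for illustration:

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube design on the unit cube: one sample per stratum per dimension."""
    rng = rng or random.Random(0)
    cols = []
    for _ in range(n_dims):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        cols.append(strata)
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

def idw_surrogate(points, values, power=2):
    """Inverse-distance-weighted interpolant: a cheap stand-in surrogate."""
    def predict(x):
        num = den = 0.0
        for p, v in zip(points, values):
            d2 = sum((a - b) ** 2 for a, b in zip(x, p))
            if d2 == 0.0:
                return v  # exact interpolation at a design point
            w = 1.0 / d2 ** (power / 2)
            num += w * v
            den += w
        return num / den
    return predict

# fit the surrogate to an "expensive" simulation on a 40-point LHS design
expensive = lambda x: (x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2
X = latin_hypercube(40, 2)
model = idw_surrogate(X, [expensive(x) for x in X])
```

Once trained, `model` can be queried thousands of times inside an optimizer at negligible cost; the database of (point, value) pairs can grow cumulatively exactly as the thesis proposes.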
Master of Science
9

Jha, Rajesh. "Combined Computational-Experimental Design of High-Temperature, High-Intensity Permanent Magnetic Alloys with Minimal Addition of Rare-Earth Elements." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2621.

Abstract:
AlNiCo magnets are known for high-temperature stability and superior corrosion resistance and have been widely used for various applications. The reported magnetic energy density ((BH)max) for these magnets is around 10 MGOe. Theoretical calculations show that a (BH)max of 20 MGOe is achievable, which will be helpful in closing the gap between AlNiCo and Rare-Earth Element (REE) based magnets. An extended family of AlNiCo alloys was studied in this dissertation that consists of eight elements, and hence it is important to determine the composition-property relationship between each of the alloying elements and their influence on the bulk properties. In the present research, we proposed a novel approach to efficiently use a set of computational tools based on several concepts of artificial intelligence to address the complex problem of design and optimization of high-temperature REE-free magnetic alloys. A multi-dimensional random number generation algorithm was used to generate the initial set of chemical concentrations. These alloys were then examined for phase equilibria and associated magnetic properties as a screening tool to form the initial set of alloys. These alloys were manufactured and tested for desired properties. These properties were fitted with a set of multi-dimensional response surfaces, and the most accurate meta-models were chosen for prediction. These properties were simultaneously extremized by utilizing a set of multi-objective optimization algorithms. This provided a set of concentrations of each of the alloying elements for optimized properties. A few of the best predicted Pareto-optimal alloy compositions were then manufactured and tested to evaluate the predicted properties. These alloys were then added to the existing data set and used to improve the accuracy of the meta-models. The multi-objective optimizer then used the new meta-models to find a new set of improved Pareto-optimized chemical concentrations.
This design cycle was repeated twelve times in this work. Several of these Pareto-optimized alloys outperformed most of the candidate alloys on most of the objectives. Unsupervised learning methods such as Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were used to discover various patterns within the dataset. This demonstrates the efficacy of the combined meta-modeling and experimental approach in the design optimization of magnetic alloys.
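
The Pareto-optimal screening step can be illustrated with a minimal non-dominated filter. The alloy objective values below (energy density, operating temperature) are invented for illustration:

```python
def pareto_front(points):
    """Return the non-dominated points, maximizing every objective."""
    def dominates(a, b):
        # a dominates b if it is at least as good everywhere and better somewhere
        return (all(x >= y for x, y in zip(a, b)) and
                any(x > y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical candidates: ((BH)max in MGOe, max operating temperature in C)
alloys = [(10.2, 540), (9.8, 600), (11.0, 500), (9.5, 520)]
front = pareto_front(alloys)
```

Only the candidates that cannot be improved on one objective without losing on another survive; in the design cycle described above, such survivors are the ones manufactured and fed back into the meta-models.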
10

Marín, de Mas Igor Bartolomé. "Development and application of novel model-driven and data-driven approaches to study metabolism in the framework of systems medicine." Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/296313.

Abstract:
The general aim of this thesis is to develop and apply new computational tools to overcome existing limitations in the analysis of metabolism. This thesis focuses on developing new computational strategies to overcome the following identified limitations: i) Existing metabolic flux analysis tools do not account for the existence of metabolic channeling. Here we developed a new computational tool based on non-stationary 13C-FBA to evaluate different models reflecting different topologies of intracellular metabolism, using channeling in hepatocytes as a proof of concept. ii) Metabolic drug-target discovery based on genome-scale metabolic models (GSMMs) does not consider the different cell subpopulations existing within a tumor. Here we developed a method that integrates transcriptomic data into a comparative genome-scale metabolic network reconstruction analysis in the context of intra-tumoral heterogeneity. We determined subpopulation-specific drug targets. Additionally, we determined a metabolic gene signature associated with tumor progression in prostate cancer that was correlated with other types of cancer. iii) Current mechanistic and probabilistic computational approaches are not suitable to study the complexity of the crosstalk between metabolic and gene regulatory networks. Here we developed a novel computational method combining probabilistic and mechanistic approaches to integrate multi-level omic data into a discrete model-based analysis. This method allowed us to analyze the mechanisms underlying the crosstalk between metabolism and gene regulation, using as a proof of concept the study of abnormal adaptation to training in COPD patients.
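
The discrete model-based analysis of metabolism-gene-regulation crosstalk can be illustrated with a toy synchronous Boolean network. The three-node loop below (a metabolite repressing the transcription factor that activates its own enzyme) is an invented example, not the thesis's model:

```python
def boolean_step(state, rules):
    """One synchronous update of a Boolean regulatory network."""
    return {node: rule(state) for node, rule in rules.items()}

# toy crosstalk: the metabolite represses the TF that activates its producing enzyme
rules = {
    "TF":         lambda s: not s["metabolite"],  # repressed by the metabolite
    "enzyme":     lambda s: s["TF"],              # activated by the TF
    "metabolite": lambda s: s["enzyme"],          # produced by the enzyme
}

state = {"TF": True, "enzyme": False, "metabolite": False}
trajectory = [state]
for _ in range(6):
    state = boolean_step(state, rules)
    trajectory.append(state)
```

Because the loop contains a single repression, the network settles into a sustained oscillation (a period-6 limit cycle) rather than a fixed point, the kind of qualitative behavior a discrete model-based analysis is built to expose.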
This doctoral thesis focuses on the development of computational tools for studying the molecular mechanisms that occur within the cell. More specifically, it studies cellular metabolism from different points of view, using and developing computational methods based on diverse methodologies. In the first chapter, a method based on isotopically non-stationary metabolic flux analysis using kinetic models is developed to study the phenomenon of metabolic channeling in hepatocytes. This phenomenon modifies metabolic topology, altering the phenotype. Our method allowed us to discriminate between several models with distinct topologies, predicting the existence of metabolic channeling in glycolysis. In the second chapter, a method was developed to analyze tumor metabolism while accounting for population heterogeneity. Specifically, we studied two subpopulations derived from a prostate cancer cell line, using a genome-scale model of the entire human cellular metabolism. The analysis revealed notable differences at the level of specific metabolic pathways, conferring on each subpopulation different sensitivities to different drugs. Along these lines, it was shown that while PC-3M cells were sensitive to etomoxir and insensitive to calcitriol, PC-3S cells showed the opposite sensitivity. In the third and final chapter of the thesis, we developed a new computational method that integrates probabilistic and mechanistic approaches to combine different types of data in a discrete model-based analysis. As a proof of concept, we studied the abnormal adaptation to training in COPD patients. The analysis revealed important differences in energy metabolism compared with the control group.
11

Nainer, Carlo. "In-Orbit Data Driven Parameter Estimation for Attitude Control of Satellites." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0058.

Abstract:
The system identification problem for satellite attitude control is investigated in this thesis. Several parameter estimation algorithms are developed and adapted to the different types of sensors (gyroscope or star tracker). These algorithms make it possible to estimate, from the telemetry data, the satellite inertia matrix as well as the actuator alignments. For these estimation algorithms, an instrumental variable approach is considered. Filters are designed in order to significantly improve the accuracy and precision of the estimates, even in the presence of sensor noise and disturbance torques. The performance of the proposed algorithms is analyzed and validated via Monte Carlo simulations using data from a high-fidelity simulator from CNES. The second main contribution concerns the optimization of maneuvers to improve the information content in the data, while respecting the physical constraints of the satellite. The effectiveness of the generated trajectories is evaluated both via Monte Carlo simulations and through real experiments in a zero-gravity environment.
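
The parameter estimation idea can be sketched for a single axis, where the inertia follows from a least-squares fit of J * domega/dt = torque to telemetry. This noise-free sketch is illustrative only; with real sensor noise on omega, plain least squares becomes biased, which is exactly what motivates the instrumental variable approach the thesis uses:

```python
import random

def estimate_inertia(torques, omega, dt):
    """Least-squares fit of J in J * domega/dt = torque from telemetry samples."""
    domega = [(omega[k + 1] - omega[k]) / dt for k in range(len(omega) - 1)]
    num = sum(t * a for t, a in zip(torques, domega))
    den = sum(a * a for a in domega)
    return num / den

# simulate a single-axis rigid satellite with true inertia 12 kg*m^2
rng = random.Random(1)
J_true, dt = 12.0, 0.1
torques = [rng.uniform(-0.5, 0.5) for _ in range(500)]
omega = [0.0]
for t in torques:
    omega.append(omega[-1] + (t / J_true) * dt)

J_hat = estimate_inertia(torques, omega, dt)
```

On this clean simulated telemetry the estimate recovers the true inertia; the full-attitude case replaces the scalar J with the 3x3 inertia matrix plus actuator alignment parameters.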
12

Ganatra, Nirmal Kirtikumar. "Validation of computer-generated results with experimental data obtained for torsional vibration of synchronous motor-driven turbomachinery." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/499.

Abstract:
Torsional vibration is an oscillatory angular twisting motion in the rotating members of a system. It can be deemed quite dangerous in that it cannot be detected as easily as other forms of vibration, and hence, subsequent failures that it leads to are often abrupt and may cause direct breakage of the shafts of the drive train. The need for sufficient analysis during the design stage of a rotating machine is, thus, well justified in order to avoid expensive modifications during later stages of the manufacturing process. In 1998, a project was initiated by the Turbomachinery Research Consortium (TRC) at Texas A&M University, College Station, TX, to develop a suite of computer codes to model torsional vibration of large drive trains. The author had the privilege of developing some modules in Visual Basic for Applications (VBA-Excel) for this suite of torsional vibration analysis codes, now collectively called XLTRC-Torsion. This treatise parleys the theory behind torsional vibration analysis using both the Transfer Matrix approach and the Finite Element approach, and in particular, validates the results generated by XLTRC-Torsion based on those approaches using experimental data available from tests on a 66,000 HP Air Compressor.
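
For the simplest lumped torsional model, a two-inertia drive train, the elastic natural frequency has a closed form that either the Transfer Matrix or the Finite Element approach reproduces. The inertias and shaft stiffness below are invented for illustration, not values from the 66,000 HP compressor test:

```python
from math import pi, sqrt

def two_disk_torsional_freq(J1, J2, k):
    """Natural frequency (Hz) of the elastic mode of a two-inertia,
    one-shaft torsional model: omega^2 = k * (J1 + J2) / (J1 * J2)."""
    return sqrt(k * (J1 + J2) / (J1 * J2)) / (2 * pi)

# hypothetical motor (J1, kg*m^2) driving a compressor (J2) through a shaft
# of torsional stiffness k (N*m/rad)
f = two_disk_torsional_freq(J1=50.0, J2=200.0, k=1.0e6)
```

Multi-station drive trains generalize this to an eigenvalue problem on the assembled inertia and stiffness matrices, which is what a code like XLTRC-Torsion solves.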
13

Jiang, Zhuoying. "Smart Photocatalytic Building Materials for Autogenous Improvement of Indoor Environment: Experimental, Physics-Based, and Data-Driven Modeling Approaches." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1626277456472492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Jenson, Justin Michael. "Design of selective peptide inhibitors of anti-apoptotic Bfl-1 using experimental screening, structure-based design, and data-driven modeling." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120631.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biology, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references.
Protein-protein interactions are central to all biological processes. Designer reagents that selectively bind to proteins and inhibit their interactions can be used to probe protein interaction networks, discover druggable targets, and generate potential therapeutic leads. Current technology makes it possible to engineer proteins and peptides with desirable interaction profiles using carefully selected sets of experiments that are customized for each design objective. There is great interest in improving the protein design pipeline to create protein binders more efficiently and against a wider array of targets. In this thesis, I describe the design and development of selective peptide inhibitors of anti-apoptotic Bcl-2 family proteins, with an emphasis on targeting Bfl-1. Anti-apoptotic Bcl-2 family proteins bind to short, pro-apoptotic BH3 motifs to support cellular survival. Overexpression of Bfl-1 has been shown to promote cancer cell survival and the development of chemoresistance. Prior work suggests that selective inhibition of Bfl-1 can induce cell death in Bfl-1 overexpressing cancer cells without compromising healthy cells that also rely on anti-apoptotic Bcl-2 proteins for survival. Thus, Bfl-1-selective BH3 mimetic peptides are potentially valuable for diagnosing Bfl-1 dependence and can serve as leads for therapeutic development. In this thesis, I describe three distinct approaches to designing potent and selective Bfl-1 inhibitors. First, I describe the design and screening of libraries of variants of BH3 peptides. I show that peptides from this screen bind in a previously unobserved BH3 binding mode and have large margins of specificity for Bfl-1 when tested in vitro and in cultured cells. Second, I describe a computational model of the specificity landscape of three anti-apoptotic Bcl-2 proteins including Bfl-1. This model was derived from high-throughput affinity measurements of thousands of peptides from BH3 libraries.
I show that this model is useful for designing peptides with desirable interaction profiles within a family of related proteins. Third, I describe the use of a scoring potential built on amino acid frequencies from well-defined structural motifs compiled from the Protein Data Bank to design novel BH3 peptides targeting Bfl-1.
by Justin Michael Jenson.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
15

Darwish, Rabab. "The role of decision-driven data collection on Northwest Ohio Local Education Agencies' intervention for first-time-in-college students' post-secondary outcomes: A quasi-experimental evaluation of the PK-16 Pathways of Promise (P³) Project." Bowling Green State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1616543639316973.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Zeman, Martin. "Measurement of the Standard Model W⁺W⁻ production cross-section using the ATLAS experiment on the LHC." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112263/document.

Full text
Abstract:
Measurements of di-boson production cross-sections are an important part of the physics programme at the CERN Large Hadron Collider. These physics analyses provide the opportunity to probe the electroweak sector of the Standard Model at the TeV scale and could also indicate the existence of new particles or probe physics beyond the Standard Model. The excellent performance of the LHC through 2011 and 2012 allowed for very competitive measurements. This thesis provides a comprehensive overview of the experimental considerations and methods used in the measurement of the W⁺W⁻ production cross-section in proton-proton collisions at √s = 7 TeV and 8 TeV. The treatise covers the material in great detail, starting with an introduction to the theoretical framework of the Standard Model, and follows with an extensive discussion of the methods implemented in recording and reconstructing physics events in an experiment of this magnitude. The associated online and offline software tools are included in the discussion. The relevant experiments are covered, including a very detailed section on the ATLAS detector. The final chapter contains a detailed description of the analysis of W-pair production in the leptonic decay channels using the datasets recorded by the ATLAS experiment during 2011 and 2012 (Run I). The analyses use 4.60 fb⁻¹ recorded at √s = 7 TeV and 20.28 fb⁻¹ recorded at 8 TeV. The experimentally measured cross-section for the production of W-boson pairs at the ATLAS experiment is consistently enhanced compared to the predictions of the Standard Model at centre-of-mass energies of 7 TeV and 8 TeV. The thesis concludes with the presentation of differential cross-section measurement results.
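For orientation, the generic counting-experiment relation behind any cross-section measurement is σ = (N_obs − N_bkg) / (ε · ∫L dt). The numbers in this sketch are invented and bear no relation to the ATLAS W⁺W⁻ result; only the formula's shape is the point.

```python
def cross_section(n_obs, n_bkg, efficiency, int_lumi_fb):
    """Measured cross-section in femtobarns:
    sigma = (observed - expected background) / (efficiency * integrated luminosity)."""
    return (n_obs - n_bkg) / (efficiency * int_lumi_fb)

# Invented example: 1500 selected events, 400 expected background,
# 5% overall signal efficiency x acceptance, 4.6 fb^-1 of data
sigma_fb = cross_section(n_obs=1500, n_bkg=400, efficiency=0.05, int_lumi_fb=4.6)
print(round(sigma_fb, 1))   # 4782.6 fb
```

The real analysis additionally propagates statistical and systematic uncertainties on every term, which dominates the experimental effort.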
APA, Harvard, Vancouver, ISO, and other styles
17

Svoboda, Josef. "Monitoring a simulace chování experimentálních terčů pro ADS, vývinu tepla a úniku neutronů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-446741.

Full text
Abstract:
Accelerator-driven subcritical systems (ADS), capable of transmuting long-lived radionuclides, could solve the problem of spent nuclear fuel from current nuclear reactors, as well as the potential shortage of today's fuel, U-235, since they can extract energy from U-238 or even from the abundant thorium isotope Th-232. Within basic ADS research, this dissertation deals with spallation reactions and heat production in various experimental targets. The experimental measurements were carried out at the Joint Institute for Nuclear Research in Dubna, Russian Federation. During the doctoral studies, 13 experiments were performed over the years 2015-2019. Various targets were irradiated with 660 MeV protons at the Phasotron accelerator: first the spallation target QUINTA, composed of 512 kg of natural uranium, then experimental targets made of lead and carbon, and a target assembled from lead bricks. A special experiment was also carried out to investigate in detail two proton-irradiated uranium cylinders of the kind that make up the QUINTA spallation target. The research focused primarily on monitoring the heat released by slowing protons, by spallation reactions, and by fission induced by spallation neutrons; pions and photons also contributed to the heat release. Temperature was measured with precise, specially calibrated thermocouples, and temperature differences were monitored both on the surface and inside the targets. Further research focused on monitoring neutron leakage from the target by comparing two detectors: the first contained a small amount of fissile material with a temperature sensor, while the second was made of a non-fissile material (W or Ta) with similar material properties and identical dimensions. Neutron leakage (i.e., the neutron flux outside the experimental target) was detected through the energy released by fission. This thesis covers the precise measurement of temperature changes with thermocouples, using National Instruments electronics and LabVIEW software for data acquisition.
Python 3.7 (with several libraries) was used for data handling, analysis, and visualization. Particle transport was simulated with MCNPX 2.7.0, and finally the heat-transfer simulation and determination of the surface temperature of the simulated model were performed in ANSYS Fluent (ANSYS Transient Thermal for simpler calculations).
APA, Harvard, Vancouver, ISO, and other styles
18

Dahlfors, Marcus. "Studies of Accelerator-Driven Systems for Transmutation of Nuclear Waste." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Universitetsbiblioteket [distributör], 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Gstalter, Étienne. "Réduction d’ordre de modèle de crash automobile pour l’optimisation masse / prestations." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2576.

Full text
Abstract:
This thesis is part of a broader research effort on applications of reduced-order modelling in the Renault engineering direction. The research topic grew out of the IRT SystemX ROM project (Reduced Order Models and Multidisciplinary Optimization) and the Renault CIFRE theses [Vuong] and [Charrier], which help understand the context. The main industrial application of the research theme is the tuning of a vehicle body structure under crash loading; work on acoustics, combustion, and aerodynamics is also ongoing. This thesis is both a contribution to the generic ReCUR method and an application of it to car-body structural optimization for crash loadings. Engineering teams at Renault make extensive use of optimization for crash tuning, with a numerical optimization tool based on designs of experiments. That approach requires many crash simulations, because each simulation is treated as a black box, using only its inputs and outputs. The ReCUR method takes the opposite view, exploiting as much information as possible from each crash simulation in order to sharply reduce the number of simulations required to build a model.
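The design-of-experiments baseline that the abstract contrasts with ReCUR can be sketched in a few lines: a two-level full-factorial design enumerates every low/high combination of the factors, which is why the number of required simulations grows exponentially. The factor names below are invented for illustration.

```python
from itertools import product

def full_factorial(factors):
    """All low/high (-1/+1) combinations of the given factor names,
    one dict per experimental run."""
    levels = [(-1, +1)] * len(factors)
    return [dict(zip(factors, combo)) for combo in product(*levels)]

# Three invented body-structure design factors -> 2**3 = 8 crash runs
runs = full_factorial(["thickness", "material_grade", "weld_count"])
print(len(runs))   # 8
print(runs[0])     # {'thickness': -1, 'material_grade': -1, 'weld_count': -1}
```

With a dozen factors this already means thousands of crash simulations, which motivates methods that extract more information per run.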
APA, Harvard, Vancouver, ISO, and other styles
20

Valente, Paulo Francisco Constantino. "Data-driven quality by design for complex generic drug products." Master's thesis, 2019. http://hdl.handle.net/10316/88014.

Full text
Abstract:
Integrated Master's dissertation in Biomedical Engineering presented to the Faculty of Sciences and Technology
The pharmaceutical industry is one of the most innovative and heavily regulated activities, and delivering therapeutic components with the desired quality is one of the biggest concerns in pharma R&D. To ensure that the final quality of drug products is a focus at all stages of development, regulatory agencies have encouraged companies to adopt quality-by-design principles, which promote deeper process knowledge, reduce the required resources, and ultimately make healthcare more affordable for everyone. This work analyzes experimental data from the production process of a complex generic drug currently being developed by Bluepharma - Indústria Farmacêutica. These products are known to demand additional effort, as their development involves harder tasks than conventional drugs. Moreover, in this problem six different responses, both discrete and continuous, are expected to be optimized simultaneously. A new method to identify active effects in screening experiments is proposed, combining stepwise regression with the enforcement of effect heredity, generalized linear models, and validation by the corrected Akaike information criterion. This approach is simpler than same-purpose methods suggested in the literature and was found to perform much better than the standard techniques usually applied in the pharmaceutical industry. In addition, other procedures are used to retrieve important information from the available data, such as studying the best levels of each factor for each response, optimizing multiple responses simultaneously, analyzing non-controlled factors, and building predictive models. The combination of all these methods provides a better understanding of the development process and supplies new auxiliary techniques, helping to achieve the target product characteristics much more efficiently.
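The corrected-AIC validation step mentioned above can be illustrated with a minimal sketch. The single-factor toy data and the Gaussian least-squares form of AICc are assumptions of this illustration, not Bluepharma's data or the thesis's exact procedure: a candidate effect is kept only when the model containing it has a lower AICc than the model without it.

```python
import math

def aicc(rss, n, k):
    """Corrected AIC for a Gaussian least-squares fit:
    AIC = n*ln(RSS/n) + 2k, plus the small-sample correction 2k(k+1)/(n-k-1)."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def fit_line(x, y):
    """Ordinary least squares for y = b0 + b1*x; returns (b0, b1, rss)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
         sum((xi - xbar) ** 2 for xi in x)
    b0 = ybar - b1 * xbar
    rss = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
    return b0, b1, rss

# Invented screening response with a genuine linear effect of one factor
x = [-1, -1, -1, 0, 0, 0, 1, 1, 1]
y = [1.9, 2.1, 2.0, 3.0, 3.1, 2.9, 4.0, 4.1, 3.9]

_, _, rss_full = fit_line(x, y)                           # intercept + effect (k = 2)
rss_null = sum((yi - sum(y) / len(y)) ** 2 for yi in y)   # intercept-only (k = 1)

aicc_full = aicc(rss_full, len(y), k=2)
aicc_null = aicc(rss_null, len(y), k=1)
print(aicc_full < aicc_null)   # True: the effect is kept
```

A stepwise search simply repeats this comparison over candidate effects, with the heredity rule restricting which interactions may enter.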
APA, Harvard, Vancouver, ISO, and other styles
21

Hülsmann, Mailin. "Intelligent doctor patient matching: how José Mello saude experiments towards data-driven and patient-centric decision making." Master's thesis, 2018. http://hdl.handle.net/10362/52482.

Full text
Abstract:
While data-driven decision-making is generally accepted as a fundamental capability of a competitive firm, many firms are facing difficulties in developing this capability. This case demonstrates how a private healthcare organization, José de Mello Saúde, engages in collaboration with a global university-led program for such capability building, in a pilot project of intelligent doctor-patient matching. The case walks the reader through the entire data science pipeline, from project scoping to data curation, modelling, prototype testing, until implementation. It enables discussions on how to overcome managerial challenges and build the needed capabilities to successfully integrate advanced analytics into the organization’s operations.
APA, Harvard, Vancouver, ISO, and other styles
22

Fallin, Patrick Timothy. "Design of an engineering experiment and data driven design in secondary education." 2013. http://hdl.handle.net/2152/22717.

Full text
Abstract:
Pre-tests and post-tests were used to assess the effectiveness of a high school engineering unit on experimental design and data-driven design. The engineering data acquisition unit examined in this report used project-based learning to teach the design of an engineering experiment and data-driven design as part of the engineering design process. The project consists of designing a building that can safely withstand an earthquake. Students construct, test, and collect data on baseline buildings, with and without load, using a shaker table and data acquisition. Students then design experiments to evaluate design modifications that will meet the customer's needs. Overall, although the number of participants was limited, the survey instruments indicated that understanding of experimental design improved among the high school students participating in the unit. Based on this pilot implementation of the survey instruments, some of the survey questions were clarified.
APA, Harvard, Vancouver, ISO, and other styles
23

Ganatra, Nirmal Kirtikumar. "Validation of computer-generated results with experimental data obtained for torsional vibration of synchronous motor-driven turbomachinery." 2003. http://hdl.handle.net/1969/499.

Full text
APA, Harvard, Vancouver, ISO, and other styles
