Dissertations / Theses on the topic 'Models and simulations of design'




Consult the top 50 dissertations / theses for your research on the topic 'Models and simulations of design.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Pohl, Thomas. "Design of adaptable simulation models." Thesis, Sheffield Hallam University, 2006. http://shura.shu.ac.uk/20240/.

Full text
Abstract:
In today's world, with ever-increasing competition, modelling and simulation prove to be very helpful tools. Many methodologies exist to help build a simulation model from scratch. In terms of adaptability, most current attempts focus on either the operational side, i.e. the automated integration of data into a model, or the creation of new software. However, very few attempts are being made to improve the adaptability of shelved models built in existing simulation software. As a result, there is a certain reluctance, in some areas, to use simulation to its full potential. Given these facts, anything that makes the reuse of simulation models easier can help improve the use and spread of simulation as a valuable tool for maintaining a company's competitiveness. In order to find such a solution, the following issues are examined in this thesis: the changes to a simulation model that constitute the biggest problem, ways to minimise those changes, and possibilities for simplifying the implementation of those changes. These factors are evaluated in stages: first, current practices for building adaptable simulation models are investigated via a literature review; then the most difficult changes to implement in a simulation model, and the most widespread types of simulation software, are identified by means of interviews and questionnaire surveys. Next, parameters describing the adaptability of a simulation model are defined. In a further step, two of the most widely used simulation packages are benchmarked against a variety of tasks reflecting the changes most frequently made to models. The benchmarking study also serves to define and test certain elements regarding their suitability for adaptable models. Based on all these steps, model building guidelines for the creation of adaptable simulation models are developed and then validated by means of interviews and a framed field experiment.
The interviews and questionnaire reveal that deleting is the easiest task and modifying the most complicated, while handling devices are the most difficult elements to modify. The results also show that simulators (e.g. Arena) are the most widespread type of simulation software. The benchmarking showed that Arena is overall more adaptable than Simul8, confirming the findings from the user survey. It also showed that sequencing is very helpful for modifying models, while the use of sub-models decreases adaptability. Finally, the validation shows that the model building guidelines substantially increase the adaptability of models.
APA, Harvard, Vancouver, ISO, and other styles
2

Ochs, David S. "Design of detailed models for use in fast aeroelastic simulations of permanent-magnet direct-drive wind turbines." Thesis, Kansas State University, 2012. http://hdl.handle.net/2097/15042.

Full text
Abstract:
Master of Science
Department of Electrical and Computer Engineering
Ruth Douglas Miller
This thesis presents the design of two models for permanent-magnet direct-drive wind turbines. The models are of a 10 kW and a 5 MW wind turbine, which are representative of residential-scale and commercial-scale turbines, respectively. The models include aerodynamic and mechanical simulations through the FAST software, as well as concurrent electrical simulations through the SimPowerSystems toolbox for MATLAB/Simulink. The aim is to provide wind turbine designers and researchers with a comprehensive simulation tool that they can use to design and test many different aspects of a wind turbine. The particular novelty of these models is their high level of detail in electromechanical simulations. For each model, a generator speed controller was designed in a reference frame attached to the generator's rotor, and was executed with a 3-phase active rectifier using space-vector pulse-width modulation. Also for each model, active and reactive power controllers were designed in a reference frame synchronous with the grid, and were executed with a 3-phase inverter using space-vector pulse-width modulation. Additionally, a blade pitch controller was designed for the 5 MW model. Validation of the models was carried out in the MATLAB/Simulink environment with satisfactory results.
3

Craig, David Latch. "Perceptual simulation and analogical reasoning in design." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/23940.

Full text
4

Li, Zhiyong. "Data-Driven Adaptive Reynolds-Averaged Navier-Stokes k-ω Models for Turbulent Flow-Field Simulations." UKnowledge, 2017. http://uknowledge.uky.edu/me_etds/93.

Full text
Abstract:
Data-driven adaptive algorithms are explored as a means of increasing the accuracy of Reynolds-averaged turbulence models. This dissertation presents two new data-driven adaptive computational models for simulating turbulent flow where partial-but-incomplete measurement data are available. These models automatically adjust (i.e., adapt) the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and a set of prescribed measurement data. The first approach is the data-driven adaptive RANS k-ω (D-DARK) model. It is validated with three canonical flow geometries: pipe flow, the backward-facing step, and flow around an airfoil. For all three test cases, the D-DARK model improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses standard values of the closure coefficients. The second approach is the Retrospective Cost Adaptation (RCA) k-ω model. The key enabling technology is retrospective cost adaptation, which was developed for real-time adaptive control but is used in this work for data-driven model adaptation. The algorithm conducts an optimization that seeks to minimize a surrogate performance measure and, by extension, the real flow-field error. The advantage of the RCA approach over the D-DARK approach is that it is capable of adapting to unsteady measurements. The RCA-RANS k-ω model is verified with a statistically steady test case (pipe flow) as well as two unsteady test cases: vortex shedding from a surface-mounted cube and flow around a square cylinder. The RCA-RANS k-ω model effectively adapts to both steady and unsteady averaged measurement data.
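The general idea behind such data-driven adaptation, adjusting a closure coefficient until the simulated field matches sparse measurements, can be illustrated with a deliberately tiny stand-in problem. Everything below (the power-law "flow model", the synthetic data, the learning rate) is invented for illustration; it is not the D-DARK or RCA algorithm itself:

```python
# Toy sketch of closure-coefficient adaptation: a single coefficient c
# of a stand-in model u(y) = y**c is tuned by gradient descent until it
# matches sparse synthetic "measurements". All names here are invented.

def model(y, c):
    return y ** c  # stand-in for a RANS solve with closure coefficient c

def mismatch(c, data):
    # Squared error between the simulated profile and measurement data.
    return sum((model(y, c) - u) ** 2 for y, u in data)

def adapt_coefficient(c0, data, lr=0.05, steps=500, eps=1e-6):
    c = c0
    for _ in range(steps):
        # Central finite-difference gradient of the mismatch.
        grad = (mismatch(c + eps, data) - mismatch(c - eps, data)) / (2 * eps)
        c -= lr * grad
    return c

# Synthetic measurements generated with c_true = 1/7 (the classic
# one-seventh power-law exponent), then recovered from a wrong guess.
c_true = 1.0 / 7.0
data = [(y / 10.0, model(y / 10.0, c_true)) for y in range(1, 10)]
c_fit = adapt_coefficient(c0=0.5, data=data)
print(c_fit)
```

The real algorithms operate on full flow fields and many coupled coefficients, but the loop structure (simulate, compare with data, update coefficients) is the same.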
5

Kini, Satish D. "An approach to integrating numerical and response surface models for robust design of production systems." Columbus, Ohio : Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1080276457.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xviii, 220 p.; also includes graphics (some col.). Includes abstract and vita. Advisor: R. Shivpuri, Dept. of Industrial, Welding and Systems Engineering. Includes bibliographical references.
6

Han, Sangmok. "A design tool for reusing integration knowledge in simulation models." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/85771.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2006.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 88-89).
In the academic field of computer-aided product development, the role of the design tool is to support engineering designers in developing and integrating simulation models. Although used to save time and costs in the product development process, the simulation model introduces additional costs for its development and integration, which often become considerable because many complex simulation models need to be integrated. Moreover, the result of integration and the effort expended during the integration process are often not reused for other product development projects. In this paper, we attempt to develop a design tool that can capture integration knowledge and make that knowledge reusable for other design tasks. More specifically, we are interested in two kinds of integration knowledge: the first captured in the form of a graph structure associating simulation models, called the integration structure, and the second generalized from script code into rule-based patterns, called the integration code pattern. An integration mechanism and a pattern generalization algorithm have been developed and incorporated into a design tool utilizing a new integration model called the catalog model, a model that enables us to reuse the integration structure and code patterns of one model to quickly build another. Application scenarios have demonstrated the effectiveness of the design tool: the same integration task could be performed in less time, and repetitive and error-prone elements in the task were substantially reduced as a result of reusing integration knowledge in the simulation models.
by Sangmok Han.
S.M.
7

Yasar, Orten Pinar. "Numerical Analysis, Design And Two Port Equivalent Circuit Models For Split Ring Resonator Arrays." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611620/index.pdf.

Full text
Abstract:
Split ring resonator (SRR) is a metamaterial structure which displays negative permeability values over a relatively small bandwidth around its magnetic resonance frequency. Unit SRR cells and arrays have been used in various novel applications including the design of miniaturized microwave devices and antennas. When SRR arrays are combined with arrays of conducting wires, left-handed materials can be constructed with the unusual property of having negative-valued effective refractive indices. In this thesis, unit cells and arrays of single-ring multiple-split type SRR structures are numerically analyzed by using Ansoft's HFSS software, which is based on the finite element method (FEM). Some of these structures are constructed over low-loss dielectric substrates and their complex scattering parameters are measured to verify the numerical simulation results. The major purpose of this study has been to establish equivalent circuit models to estimate the behavior of SRR structures in a simple and computationally efficient manner. For this purpose, individual single-ring SRR cells with multiple splits are modeled by appropriate two-port RLC resonant circuits, paying special attention to conductor and dielectric loss effects. Results obtained from these models are compared with the results of HFSS simulations which use either PEC/PMC (perfect electric conductor/perfect magnetic conductor) type or perfectly matched layer (PML) type boundary conditions. Interactions between the elements of SRR arrays, such as mutual inductance and capacitance effects as well as additional dielectric losses, are also modeled by proper two-port equivalent circuits to describe the overall array behavior and to compute the associated transmission spectrum by simple MATLAB codes. Results of numerical HFSS simulations, equivalent circuit model computations and measurements are shown to be in good agreement.
8

Muthukrishnan, Gayathri. "Utilizing Hierarchical Clusters in the Design of Effective and Efficient Parallel Simulations of 2-D and 3-D Ising Spin Models." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/9944.

Full text
Abstract:
In this work, we design parallel Monte Carlo algorithms for the Ising spin model on a hierarchical cluster. A hierarchical cluster can be considered as a cluster of homogeneous nodes which are partitioned into multiple supernodes such that communication across homogeneous clusters is represented by a supernode topological network. We consider different data layouts and provide equations for choosing the best data layout under such a network paradigm. We show that the data layouts designed for a homogeneous cluster will not yield results as good as layouts designed for a hierarchical cluster. We derive theoretical results on the performance of the algorithms on a modified version of the LogP model that represents such tiered networking, and present simulation results to analyze the utility of the theoretical design and analysis. Furthermore, we consider the 3-D Ising model and design parallel algorithms for sweep spin selection on both homogeneous and hierarchical clusters. We also discuss the simulation of hierarchical clusters on a homogeneous set of machines, and the efficient implementation of the parallel Ising model on such clusters.
Master of Science
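The spin updates that such parallel algorithms distribute across nodes reduce, on a single machine, to the standard Metropolis update for the 2-D Ising model. A minimal serial sketch (plain Python, ignoring the data-layout and communication concerns that are the thesis's actual subject) is:

```python
import math
import random

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over an n x n 2-D Ising lattice
    (coupling J = 1, periodic boundaries): n*n random single-spin
    flip attempts, each accepted with probability min(1, exp(-beta*dE))."""
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        # Sum of the four nearest neighbours (periodic boundaries).
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        d_e = 2.0 * spins[i][j] * nb  # energy change if this spin flips
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            spins[i][j] = -spins[i][j]

def magnetisation(spins):
    n = len(spins)
    return abs(sum(sum(row) for row in spins)) / (n * n)

rng = random.Random(42)
n = 16
spins = [[1] * n for _ in range(n)]
for _ in range(200):
    metropolis_sweep(spins, beta=1.0, rng=rng)
# beta = 1.0 is well below the critical point (T_c ~ 2.269 J/k_B),
# so the lattice stays strongly magnetised.
print(magnetisation(spins))
```

Parallel versions partition the lattice (e.g. in strips or blocks) across nodes and exchange boundary spins each sweep; the choice of partitioning is exactly the data-layout question the thesis studies for hierarchical clusters.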
9

Zhang, Bo. "Design, modelling and simulation of a novel micro-electro-mechanical gyroscope with optical readouts." Thesis, Cape Peninsula University of Technology, 2007. http://hdl.handle.net/20.500.11838/1101.

Full text
Abstract:
Thesis (MTech (Electrical Engineering))--Cape Peninsula University of Technology, 2007
Micro-Electro-Mechanical Systems (MEMS) are among the fastest-developing technologies at present. MEMS processes leverage mainstream IC technologies to achieve on-chip sensor interfaces and signal processing circuitry, multi-vendor accessibility, short design cycles, more on-chip functions and low cost. MEMS fabrication is based on thin-film surface microstructures, bulk micromachining, and LIGA processes. This thesis centres on developing optical micromachined inertial sensors based on MEMS fabrication technology which incorporates bulk Si into microstructures. Micromachined inertial sensors, consisting of accelerometers and gyroscopes, are one of the most important types of silicon-based sensors. Microaccelerometers alone have the second largest sales volume after pressure sensors, and it is believed that gyroscopes will soon be mass produced at volumes similar to those of traditional gyroscopes. A traditional gyroscope is a device for measuring or maintaining orientation, based on the principle of conservation of angular momentum. The essence of the gyroscope is a spinning wheel on an axle. The device, once spinning, tends to resist changes to its orientation due to the angular momentum of the wheel. In physics this phenomenon is also known as gyroscopic inertia or rigidity in space. Applications are limited by the large volume of such devices. MEMS gyroscopes, which use MEMS fabrication technology to minimise the size of gyroscope systems, are of great importance in commercial, medical, automotive and military fields. They can be used in cars for ABS systems, for anti-roll devices and for navigation in areas with tall buildings where the GPS system might fail. They can also be used for the navigation of robots in tunnels or piping, for guiding capsules containing medicines or diagnostic equipment in the human body, or as 3-D computer mice.
MEMS gyroscope chips are, however, limited in measurement precision by the imprecision of their electrical readout systems. The market needs highly accurate, high-G-sustainable inertial measurement units (IMUs). Optical sensing approaches have been around for a while and have been popular because of their performance, small volume and simplicity; however, the production cost of optical devices has not been satisfactory for consumers. MEMS fabrication technology now makes possible low-cost micro-optical devices such as light sources, waveguides, very thin optical fibers, micro photodetectors, and various demodulation measurement methods. Optical sensors may be defined as a means through which a measurand interacts with light guided in an optical fiber (an intrinsic sensor) or guided to (and returned from) an interaction region (an extrinsic sensor) by an optical fiber to produce an optical signal related to the parameter of interest. During its over 30 years of history, fiber optic sensor technology has been successfully applied by laboratories and industries worldwide in the detection of a large number of mechanical, thermal, electromagnetic, radiation, chemical, motion, flow and fluid-turbulence, and biomedical parameters. Fiber optic sensors provide advantages over conventional electronic sensors: survivability in harsh environments, immunity to Electro-Magnetic Interference (EMI), light weight, small size, compatibility with optical fiber communication systems, high sensitivity for many measurands, and good potential for multiplexing. In general, the transducers used in these fiber optic sensor systems are either intensity-modulators or phase-modulators. Optical interferometers, such as Mach-Zehnder, Michelson, Sagnac and Fabry-Perot interferometers, have become widely accepted as phase modulators in optical sensors for the ultimate sensitivity to a range of weak signals.
According to the light source being used, interferometric sensors can be classified as either coherence interferometric sensors, when the interferometer is interrogated by a coherent light source such as a laser or monochromatic light, or low-coherence interferometric sensors, when a broadband source such as a light emitting diode (LED) or a superluminescent diode (SLD) is used. This thesis proposes a novel micro-electro-mechanical gyroscope system with an optical interferometer readout, fabricated by MEMS technology, as an original contribution to the design of and research on micro-opto-electro-mechanical gyroscope systems (MOEMS) intended to provide better performance than current MEMS gyroscopes. Fiber optical interferometric sensors have been proven more sensitive and precise than their electrical counterparts for measuring micro distances. The MOEMS gyroscope design is based on existing successful MEMS vibratory gyroscopes and micro fiber-optical interferometric distance sensors, which avoids the large size, heavy weight and complex fabrication processes of fiber optical gyroscopes using the Sagnac effect. The research starts from the fiber optical gyroscope based on the Sagnac effect and existing MEMS gyroscopes, then moves to the novel design of the MOEMS gyroscope system, discussing its operation principles and structures. In this thesis, the operation principles, mathematical models and performance simulations of the MOEMS gyroscope are introduced, and suitable MEMS fabrication processes are discussed and presented. The first prototype will be sent to the manufacturer for fabrication and further real-time performance testing. There is scope for many inventions, further research and optimisation around this novel MOEMS gyroscope chip.
In future work, the research will focus on integrating three-axis gyroscopes in one microstructure using optical sensor multiplexing principles, on new optical devices such as more powerful light sources and photosensitive materials, and on new demodulation processes, which can improve the performance and the interface for co-operation with other inertial sensors and navigation systems.
10

Wiedemann, Michael. "Robust parameter design for agent-based simulation models with application in a cultural geography model." Thesis, Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Jun/10Jun%5FWiedemann.pdf.

Full text
Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, June 2010.
Thesis Advisor(s): Johnson, Rachel T.; Second Reader: Baez, Francisco R. "June 2010." Description based on title screen as viewed on July 15, 2010. Author(s) subject terms: Cultural Geography, Agent-Based Model (ABM), Irregular Warfare (IW), Theory of Planned Behavior (TpB), Bayesian Belief Nets (BBN), Counterinsurgency Operations (COIN), Stability Operations, Discrete Event Simulation (DES), Design of Experiments (DOX), Robust Parameter Design (RPD). Includes bibliographical references (p. 69-70). Also available in print.
11

Martin, Christopher John. "A new tool for the validation of dynamic simulation models." Thesis, n.p, 1995. http://ethos.bl.uk/.

Full text
12

Shariat, Yazdi Hamed [Verfasser]. "Statistical analysis and simulation of design models evolution / Hamed Shariat Yazdi." Siegen : Universitätsbibliothek der Universität Siegen, 2015. http://d-nb.info/1076911692/34.

Full text
13

Jiang, Lin. "Novel catalysts by computational enzyme design /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/9248.

Full text
14

Quirante, Natalia. "Rigorous Design of Chemical Processes: Surrogate Models and Sustainable Integration." Doctoral thesis, Universidad de Alicante, 2017. http://hdl.handle.net/10045/74373.

Full text
Abstract:
The development of efficient chemical processes, from both an economic and an environmental point of view, is one of the main objectives of Chemical Engineering. To achieve this, advanced tools for the design, simulation, optimisation and synthesis of chemical processes have been employed in recent years, enabling more efficient processes with the lowest possible environmental impact. One of the most important aspects of designing more efficient processes is reducing energy consumption. The industrial sector accounts for approximately 22.2 % of global energy consumption, and within this sector the chemical industry accounts for around 27 %. The chemical industry's global energy consumption therefore constitutes approximately 6 % of all the energy consumed in the world. Moreover, since most of the energy consumed is generated mainly from fossil fuels, any improvement to chemical processes that reduces energy consumption implies a reduction in environmental impact. The work collected in this doctoral thesis was carried out within the COnCEPT research group, part of the Instituto Universitario de Ingeniería de los Procesos Químicos at the Universidad de Alicante, between 2014 and 2017. The main objective of this doctoral thesis is the development of tools and models for the simulation and optimisation of chemical processes in order to improve their energy efficiency, thereby reducing their environmental impact. More specifically, this doctoral thesis comprises two main studies, which are its specific objectives: - Study and evaluation of surrogate models for improving simulator-based optimisation of chemical processes.
- Development of new models for simultaneous chemical process optimisation and energy integration, for heat exchanger networks.
15

Higgs, Jessica Marie. "Ion Trajectory Simulations and Design Optimization of Toroidal Ion Trap Mass Spectrometers." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6652.

Full text
Abstract:
Ion traps can easily be miniaturized to become portable mass spectrometers. Trapped ions can be ejected by adjusting voltage settings of the radiofrequency (RF) signal applied to the electrodes. Ion trap designs include the quadrupole ion trap (QIT), cylindrical ion trap (CIT), linear ion trap (LIT), rectilinear ion trap (RIT), toroidal ion trap, and cylindrical toroidal ion trap. Although toroidal ion traps are being used more widely in miniaturized mass spectrometers, there is a lack of fundamental understanding of how the toroidal electric field affects ion motion and, therefore, the ion trap's performance as a mass analyzer. Simulation programs can be used to discover how traps with toroidal geometry can be optimized. Potential mapping, field calculations, and simulations of ion motion were used to compare three types of toroidal ion traps: a symmetric and an asymmetric trap made using hyperbolic electrodes, and a simplified trap made using cylindrical electrodes. Toroidal harmonics, which represent solutions to the Laplace equation in a toroidal coordinate system, may be useful for understanding toroidal ion traps. Ion trapping and ion motion simulations were performed in a time-varying electric potential representing the symmetric, second-order toroidal harmonic of the second kind, the solution most analogous to the conventional, Cartesian quadrupole. This potential distribution, which we call the toroidal quadrupole, demonstrated non-ideal features in its stability diagram similar to those of conventional ion traps with higher-order field contributions. To eliminate or reduce these non-ideal features, other solutions to the Laplace equation can be added to the toroidal quadrupole, namely the toroidal dipole, toroidal hexapole, toroidal octopole, and toroidal decapole.
The addition of a toroidal hexapole component to the toroidal quadrupole provides improvement in ion trapping, and is expected to play an important role in optimizing the performance of all types of toroidal ion trap mass spectrometers. The cylindrical toroidal ion trap has been miniaturized for a portable mass spectrometer. The first miniaturized version (r0 and z0 reduced by 1/3) used the same central electrode and alignment sleeve as the original design, but it had too high a capacitance for the desired RF frequency. The second miniaturized version (R, r0, and z0 reduced by 1/3) was designed with much less capacitance, but several issues, including electrode alignment and sample pressure control, caused the mass spectra to have poor resolution. The third miniaturized design used a different alignment method, and its efficiency still needs to be improved.
16

Qian, Zhiguang. "Computer experiments [electronic resource]: design, modeling and integration." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11480.

Full text
Abstract:
The use of computer modeling is increasing rapidly in almost every scientific, engineering and business arena. This dissertation investigates some challenging issues in the design, modeling and analysis of computer experiments, and consists of four major parts. In the first part, a new approach is developed to combine data from approximate and detailed simulations to build a surrogate model based on stochastic models. In the second part, we propose Bayesian hierarchical Gaussian process models to integrate data from different types of experiments. The third part concerns the development of latent variable models for computer experiments with multivariate response, with application to data center temperature modeling. The last chapter is devoted to the development of nested space-filling designs for multiple experiments with different levels of accuracy.
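The first part's idea, correcting a cheap approximate simulation with a small number of detailed runs, can be sketched with a toy additive-discrepancy surrogate. The functions and sample points below are invented for illustration; the dissertation's actual approach uses richer stochastic models:

```python
import math

# Toy multi-fidelity surrogate: keep the cheap simulator as the trend
# and fit an additive discrepancy term from a few expensive runs.

def cheap(x):
    return math.sin(2 * x)                  # approximate simulation (biased)

def detailed(x):
    return math.sin(2 * x) + 0.3 * x - 0.1  # detailed simulation (truth here)

def fit_line(xs, ys):
    # Least-squares straight line a + b*x through the discrepancy samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x

# Four expensive runs suffice to learn the (here linear) discrepancy.
hf_x = [0.0, 0.7, 1.4, 2.0]
delta = fit_line(hf_x, [detailed(x) - cheap(x) for x in hf_x])

def surrogate(x):
    return cheap(x) + delta(x)              # corrected cheap model

print(abs(surrogate(1.1) - detailed(1.1)))
```

In practice the discrepancy is fitted with a Gaussian process rather than a straight line, which also yields uncertainty estimates, but the decomposition "cheap trend plus learned correction" is the same.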
17

Brito, A. E. S. C. "Configuring Simulation Models Using CAD Techniques: A New Approach to Warehouse Design." Thesis, Cranfield University, 1992. http://dspace.lib.cranfield.ac.uk/handle/1826/4640.

Full text
Abstract:
The research reported in this thesis relates to the development and use of software tools for supporting warehouse design and management. Computer-aided design and simulation techniques are used to develop a software system that forms the basis of a Decision Support System for warehouse design. The current state of simulation software is reviewed, and the suitability of current simulation software for warehouse modelling is investigated. Special attention is given to visual interactive simulation, graphics, animation and user interfaces. The warehouse design process is described and common difficulties are highlighted. A Decision Support System (DSS) framework is proposed to give support during all the warehouse design phases. The use of simulation in warehouse design is identified as essential for evaluating different warehouse configurations. Several simulation models are used to show that warehouse systems' special characteristics require a new way of defining the simulation model, and new modelling elements to represent the complex logic of a warehouse system. AWARD (Advanced WARehouse Design) is a data-driven generic model developed to build warehouse simulation models. It uses Computer Aided Design (CAD) techniques for drawing the warehouse layout and configuring the simulation model. The user needs no programming skills, and a user-friendly interface makes the system easy to use. High-resolution colour graphics and a scale drawing of the warehouse make the dynamic display of the model a good representation of the real system. Several examples illustrate the use of the AWARD system. The experience with and advantages of the AWARD approach are discussed, and the extension of this approach to other areas is explored.
18

Rampersad, Kevin. "The rapid design of simulation models using cladistics and template based modelling." Thesis, Cranfield University, 2012. http://dspace.lib.cranfield.ac.uk/handle/1826/7756.

Full text
Abstract:
The drive towards a more globally competitive market has led to an increase in demand for goods and services on a global scale. As a result of this increase in productivity, production systems are being designed and redesigned at an increasing rate and are becoming more innovative over time. The challenge this research attempts to address is how to improve the ability of the UK-based manufacturing industry to make more effective decisions during manufacturing system design/redesign by adopting simulation techniques as both strategic and operational decision-making tools. The aim of this research is to develop a classification scheme based on cladistics and evolutionary analysis and to use this classification in the development of a template-based modelling library. The research focused on identifying the existing manufacturing layout types, the various layout configurations that are being used, and template-based model generation. Some of the major developments of the research conducted were the construction of the manufacturing layout and component-based cladograms and the RapidSim generator. A literature review on manufacturing system layouts revealed the types of system layouts most commonly used in the manufacturing sector, as well as the component configurations and characteristics found within each production system. This research makes a major contribution by providing a cladistic classification of manufacturing system layouts, an external interface for model building and development, and a set of recommendations which, when adopted, may help increase the use of template-based simulation modelling. Based on the data analysis carried out, the findings suggest that there is room for the development and implementation of a template-based modelling approach to the development of simulation-based models.
The most important result obtained from validation of the model was that the time taken to build and run the model decreased by around 65% compared with the conventional model-building process.
APA, Harvard, Vancouver, ISO, and other styles
19

Svensson, Marcus, and Daniel Haraldsson. "Integrating Design Optimization in the Development Process using Simulation Driven Design." Thesis, Linköpings universitet, Maskinkonstruktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157374.

Full text
Abstract:
This master thesis was executed at Scania CV AB in Södertälje, Sweden. Scania is a manufacturer of heavy transport solutions, an industry which is changing rapidly in order to meet stricter regulations and ensure a sustainable future. Continuous product improvements and new technologies are required to increase performance and to meet market requirements. Implementing design optimization in the design process enables design exploration, which is beneficial when high-performance products are developed. The purpose was to show the potential of design optimization supported by simulation-driven design as a tool in the development process: to examine an alternative way of working for design engineers, elaborating more competitive products in terms of economic and performance aspects, and to minimize time and iterations between divisions by developing better initial concept proposals. The alternative working method was developed iteratively in parallel with a case study. The case study, a suction strainer, was used for method improvement and validation, as well as a decision basis for the included sub-steps. The working method for implementing design optimization and simulation-driven design ended up as a procedure consisting of three main phases: concept generation, detail design and verification. In the concept generation phase, topology optimization was used, which turned out to be a beneficial method for finding optimized solutions with few inputs. The detail design phase consisted of a parameterized CAD model of the concept, which was then shape optimized. The shape optimization enabled design exploration of the concept, which generated valuable findings for the product development. Lastly, the optimized design was verified with more thorough methods, in this case verification by FE-experts. 
The working method was tested and verified on the case study component, which resulted in valuable knowledge for future designs of similar components. The optimized component showed a performance increase, with weight decreased by 54% compared with a reference product.
APA, Harvard, Vancouver, ISO, and other styles
20

Riley, Matthew E. "Quantification of Model-Form, Predictive, and Parametric Uncertainties in Simulation-Based Design." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1314895435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

San, Jose Angel. "Analysis, design, implementation and evaluation of graphical design tool to develop discrete event simulation models using event graphs and SIMKIT." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA397405.

Full text
Abstract:
Thesis (M.S. in Operations Research) Naval Postgraduate School, Sept. 2001.
Thesis advisor(s): Buss, Arnold; Miller, Nita. "September 2001." Includes bibliographical references (p. 109-110). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
22

Klostermeier, Christian. "Investigation into the capability of large eddy simulation for turbomachinery design." Thesis, University of Cambridge, 2008. https://www.repository.cam.ac.uk/handle/1810/252106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Zhang, Mingyang 1981. "Macromodeling and simulation of linear components characterized by measured parameters." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=112589.

Full text
Abstract:
Recently, microelectronics designs have reached extremely high operating frequencies as well as very small die and package sizes. This has made signal integrity an important bottleneck in the design process, and resulted in the inclusion of signal integrity simulation in the computer aided design flow. However, such simulations are often difficult because in many cases it is impossible to derive analytical models for certain passive elements, and the only available data are frequency-domain measurements or full-wave simulations. Furthermore, at such high frequencies these components are distributed in nature and require a large number of poles to be properly characterized. Simple lumped equivalent circuits are therefore difficult to obtain, and more systematic approaches are required. In this thesis we study Vector Fitting techniques for obtaining such equivalent models and propose a more streamlined approach for preserving passivity while maintaining accuracy.
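The pole-residue macromodels discussed in this abstract can be illustrated with the linear residue-identification step used in vector-fitting-style algorithms: once a trial pole set is fixed, the residues and constant term follow from a least-squares problem. The sketch below is only an illustration of that step under assumed poles and frequency samples, not the thesis's implementation:

```python
import numpy as np

def fit_residues(s, H, poles):
    """Least-squares fit of H(s) ~ sum_k r_k/(s - p_k) + d for a fixed
    pole set: the linear residue-identification step of a
    vector-fitting-style rational approximation."""
    # One column per pole term, plus a constant column for d.
    A = np.column_stack([1.0 / (s - p) for p in poles] + [np.ones_like(s)])
    coeffs, *_ = np.linalg.lstsq(A, H, rcond=None)
    return coeffs[:-1], coeffs[-1]

# Synthetic two-pole frequency response sampled on the imaginary axis:
s = 1j * np.logspace(-1, 2, 50)
H = 2.0 / (s + 1.0) + 5.0 / (s + 10.0) + 0.5
residues, d = fit_residues(s, H, [-1.0, -10.0])
```

With data generated from known poles, the fit recovers the residues (here about 2 and 5) and the constant term; full vector fitting additionally relocates the poles iteratively.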
APA, Harvard, Vancouver, ISO, and other styles
24

Mumpower, Gregory D. "Improving product and process design integration through representation and simulation of manufacturing processes." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/17039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

López, Benítez Miguel. "Spectrum usage models for the analysis, design and simulation of cognitive radio networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/33282.

Full text
Abstract:
The owned spectrum allocation policy, in use since the early days of modern radio communications, has been proven to effectively control interference among radio communication systems. However, the overwhelming proliferation of new operators, innovative services and wireless technologies during the last years has resulted, under this static regulatory regime, in the depletion of spectrum bands with commercially attractive radio propagation characteristics. An important number of spectrum measurements, however, have shown that spectrum is mostly underutilized, thus indicating that the virtual spectrum scarcity problem actually results from static and inflexible spectrum management policies rather than the physical scarcity of radio resources. This situation has motivated the emergence of Dynamic Spectrum Access (DSA) methods based on the Cognitive Radio (CR) paradigm, which has gained popularity as a promising solution to conciliate the existing conflicts between spectrum demand growth and spectrum underutilization. The basic underlying idea of DSA/CR is to allow unlicensed (secondary) users to access in an opportunistic and non-interfering manner some licensed bands temporarily unoccupied by the licensed (primary) users. Due to the opportunistic nature of this principle, the behavior and performance of a DSA/CR network depends on the spectrum occupancy patterns of the primary system. A realistic and accurate modeling of such patterns becomes therefore essential and extremely useful in the domain of DSA/CR research. The potential applicability of spectrum usage models ranges from analytical studies to the design and dimensioning of secondary networks as well as the development of innovative simulation tools and more efficient DSA/CR techniques. Spectrum occupancy modeling in the context of DSA/CR constitutes a rather unexplored research area. 
This dissertation addresses the problem of modeling spectrum usage in the context of DSA/CR by contributing a comprehensive and holistic set of realistic models capable of accurately capturing and reproducing the statistical properties of spectrum usage in real radio communication systems in the time, frequency and space dimensions. The first part of this dissertation addresses the development of a unified methodological framework for spectrum measurements in the context of DSA/CR and presents the results of an extensive spectrum measurement campaign performed over a wide variety of locations and scenarios in the metropolitan area of Barcelona, Spain, to identify potential bands of interest for future DSA/CR deployments. To the best of the author's knowledge, this is the first study of these characteristics performed under the scope of the Spanish spectrum regulation and one of the earliest studies in Europe. The second part deals with various specific aspects related to the processing of measurements to extract spectrum occupancy patterns, which is largely similar to the problem of spectrum sensing in DSA/CR. The performance of energy detection, the most widely employed spectrum sensing technique in DSA/CR, is first assessed empirically. The outcome of this study motivates the development of a more accurate theoretical-empirical performance model as well as an improved energy detection scheme capable of outperforming the conventional method while preserving a similar level of complexity, computational cost and application. The findings of these studies are finally applied in the third part of the dissertation to the development of innovative spectrum usage models for the time (in discrete- and continuous-time versions), frequency and space domains. 
The proposed models can be combined and integrated into a unified modelling approach in which the time, frequency and space dimensions of spectrum usage can be reproduced simultaneously, thus providing a complete and holistic characterization of spectrum usage in real systems for the analysis, design and simulation of future DSA/CR networks.
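The energy detection scheme whose performance the dissertation assesses can be sketched in a few lines: average the received signal power over N samples and compare it against a threshold calibrated on noise-only data for a target false-alarm probability. The snippet below is a simplified illustration; the sample counts, powers and the Monte Carlo calibration are assumptions for the example, not the thesis's improved scheme:

```python
import numpy as np

def calibrate_threshold(noise_power, n_samples, target_pfa, trials, rng):
    """Set the decision threshold empirically so that noise-only energy
    statistics exceed it with probability ~ target_pfa."""
    noise = rng.normal(0.0, np.sqrt(noise_power), size=(trials, n_samples))
    stats = np.mean(noise**2, axis=1)        # energy statistic per trial
    return np.quantile(stats, 1.0 - target_pfa)

def energy_detect(x, threshold):
    """Declare the primary channel busy if average power exceeds the threshold."""
    return np.mean(x**2) > threshold

rng = np.random.default_rng(0)
thr = calibrate_threshold(noise_power=1.0, n_samples=1000,
                          target_pfa=0.01, trials=5000, rng=rng)
```

With 1000 samples and 0 dB SNR, a signal roughly doubles the average power, so detection is essentially certain while noise-only windows trip the detector only about 1% of the time.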
APA, Harvard, Vancouver, ISO, and other styles
26

Palmer, Kurt D. "Data collection plans and meta models for chemical process flowsheet simulators." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/24511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kaczorowski, Przemyslaw Robert. "Thermal-based multi-objective optimal design of liquid cooled power electronic modules." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/16448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Swinerd, C. "On the design of hybrid simulation models, focussing on the agent-based system dynamics combination." Thesis, Cranfield University, 2014. http://dspace.lib.cranfield.ac.uk/handle/1826/8645.

Full text
Abstract:
There is a growing body of literature reporting the application of hybrid simulations to inform decision making. However, guidance for the design of such models, where the output depends upon more than one modelling paradigm, is limited. The benefits of realising this guidance include facilitating efficiencies in the general modelling process and reduction in project risk (both across measures of time, cost and quality). Focussing on the least well researched modelling combination of agent-based simulation with system dynamics, a combination potentially suited to modelling complex adaptive systems, the research contribution presented here looks to address this shortfall. Within a modelling process, conceptual modelling is linked to model specification via the design transition. Using standards for systems engineering to formally define this transition, a critical review of the published literature reveals that it is frequently documented. However, coverage is inconsistent and consequently it is difficult to draw general conclusions and establish best practice. Therefore, methods for extracting this information, whilst covering a diverse range of application domains, are investigated. A general framework is proposed to consistently represent the content of conceptual models; characterising the key elements of the content and interfaces between them. Integrating this content in an architectural design, design classes are then defined. Building on this analysis, a decision process is introduced that can be used to determine the utility of these design classes. This research is benchmarked against reported design studies considering system dynamics and discrete-event simulation and demonstrated in a case study where each design archetype is implemented. Finally, the potential for future research to extend this guidance to other modelling combinations is discussed.
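A minimal illustration of the agent-based/system-dynamics combination studied above is a loop in which a system-dynamics stock evolves continuously while discrete agents read it and feed their decisions back into its flows. All names and parameter values below are hypothetical, chosen only to show the coupling pattern between the two paradigms:

```python
import random

def hybrid_run(n_agents=200, steps=100, dt=0.1, seed=1):
    """Couple an SD stock (word-of-mouth level W) with an agent population:
    adopters drive the inflow of W; W raises each agent's adoption probability."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    W = 0.0                                   # system-dynamics stock
    history = []
    for _ in range(steps):
        n_adopters = sum(adopted)
        # SD part: stock-flow update (inflow from adopters, first-order decay).
        W += dt * (0.05 * n_adopters - 0.2 * W)
        # AB part: each non-adopter decides stochastically, influenced by W.
        p = min(1.0, 0.01 + 0.02 * W)
        for i in range(n_agents):
            if not adopted[i] and rng.random() < p:
                adopted[i] = True
        history.append(sum(adopted))
    return history
```

The design question the thesis addresses is exactly where this interface sits: which quantities cross from the SD model to the agents and back, and at what time resolution.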
APA, Harvard, Vancouver, ISO, and other styles
29

Roure, Océane. "NUMERICAL SIMULATIONS OF FRICTION-INDUCED NOISE OF AUTOMOTIVE WIPER SYSTEMS." Thesis, KTH, MWL Marcus Wallenberg Laboratoriet, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180252.

Full text
Abstract:
Automotive parts may be the cause of very annoying friction-induced noise and the source of many customer complaints. Indeed, when a wiper operates on a windshield, vibratory phenomena may appear due to flutter instabilities and may generate squeal noise. As squeal noise generated by wiper systems is a random and complex phenomenon, there are only a few studies dealing with wiper noise. The complexity of this phenomenon is due to the kinematics of the movement and to the various environmental parameters which influence the appearance of the noise. This master thesis is a research and development project and presents a numerical simulation methodology aimed at reducing and eliminating squeal noise of wiper systems. In the first part, the finite element model representing a wiper system and the numerical simulation methodology are presented in detail. In the second part, stability analyses are carried out in nominal studies and in designs of experiments. Parametric studies are also performed to understand the behavior and the influence of each considered input parameter. Two wiper blades, with the same geometry but different materials, are considered for the different studies and examined to figure out when squeal noise appears.
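The flutter-type instability behind squeal is commonly screened with a complex eigenvalue analysis: friction makes the stiffness matrix asymmetric, and above a critical friction coefficient two modes coalesce into a complex conjugate pair, one of which grows in time. A textbook two-degree-of-freedom sketch follows; the matrices and the resulting critical value μ = 0.5 belong to this toy model, not to the thesis's finite element model:

```python
import numpy as np

def is_flutter(mu):
    """Mode-coupling check for x'' + K(mu) x = 0 with a unit mass matrix.
    Friction mu makes K asymmetric; complex eigenvalues of K correspond to
    an oscillation with exponentially growing amplitude (squeal-type
    instability)."""
    K = np.array([[2.0,   mu],
                  [-mu,  3.0]])
    eigvals = np.linalg.eigvals(K)
    return bool(np.any(np.abs(eigvals.imag) > 1e-9))
```

For this K the eigenvalues are (5 ± sqrt(1 - 4μ²))/2, so the modes coalesce and flutter onset occurs at μ = 0.5.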
APA, Harvard, Vancouver, ISO, and other styles
30

Ünver, Hakkı Özgür. "A comparative study of Lotka-Volterra and system dynamics models for simulation of technology industry dynamics." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44705.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, 2008.
Includes bibliographical references (leaves 78-80).
Scholars have developed a range of qualitative and quantitative models for generalizing the dynamics of technological innovation and identifying patterns of competition between rivals. This thesis compares two predominant approaches to the quantified modeling of technological innovation and competition. The multi-mode framework, based on the Lotka-Volterra equations borrowed from biological ecology, provides a rich setting for assessing the interaction between two or more technologies. A more recent approach uses System Dynamics to model the dynamics of innovative industries. A System Dynamics approach enables the development of very comprehensive models, which can cover multiple dimensions of innovation, and provides broad insights into the innovative and competitive landscape of an industry. As well as comparing these theories in detail, a case study is performed on both of them. The phenomenal competition between two technologies in the consumer photography market, the recent battle between digital and film camera technology, is used as a test case and simulated with both models. Real market data are used as inputs to the simulations. Outputs are compared and interpreted against the realities of current market conditions and the predictions of industry analysts. Conclusions are drawn on the strengths and weaknesses of both approaches, and directions are given for future research on model extensions incorporating other forms of innovation, such as collaborative interaction in SME networks.
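The Lotka-Volterra competition dynamics underlying the multi-mode framework can be sketched with a forward-Euler integration of the two-technology equations. The parameter values below are illustrative only and are not calibrated to the camera-market data used in the thesis:

```python
def lotka_volterra(x0, y0, r1, r2, K1, K2, a12, a21, dt=0.01, steps=10000):
    """Two competing technologies x (incumbent) and y (entrant):
    dx/dt = r1*x*(1 - (x + a12*y)/K1)
    dy/dt = r2*y*(1 - (y + a21*x)/K2)"""
    x, y = x0, y0
    for _ in range(steps):
        dx = r1 * x * (1.0 - (x + a12 * y) / K1)
        dy = r2 * y * (1.0 - (y + a21 * x) / K2)
        x, y = x + dt * dx, y + dt * dy
    return x, y

# An entrant with strong cross-impact on the incumbent (a12 > 1) displaces it:
x_end, y_end = lotka_volterra(x0=50, y0=1, r1=0.5, r2=0.8,
                              K1=100, K2=100, a12=1.5, a21=0.5)
```

With these coefficients the only stable equilibrium is (0, K2): the incumbent is driven out, which is the qualitative pattern of the film-versus-digital case.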
by Hakkı Özgür Ünver.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
31

Satish, Prabhu Nachiketh, and Ranjan Tunga Sarapady. "Evaluation of parametric CAD models from a manufacturing perspective to aid simulation driven design." Thesis, Linköpings universitet, Maskinkonstruktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167724.

Full text
Abstract:
Scania is known as one of the world's leading suppliers of transport solutions for heavy trucks and buses. Scania's goal is to develop combustion engines that achieve low pollutant emissions as well as a lower carbon footprint with higher efficiency. To achieve this, Scania has invested resources in simulation-driven design of parametric CAD models, which drives design innovation rather than following the design and enables the creation of flexible and robust models in the design process. This master thesis was conducted in collaboration with Scania's exhaust after-treatment systems department, focusing on developing a methodology to automatically evaluate the cost and manufacturability of a parametric model, intended for an agile working environment with fast iterations within Scania. From the data collected through a literature study, former thesis work, and interviews with designers and cost engineers at Scania, a method is proposed that can be implemented during the design process. The method involves four phases: a design phase, an analysis phase, a validation phase and an improvement phase. The proposed method is evaluated to check its feasibility for assessing parametric CAD parts for manufacturability and cost. It is applied to two different parts of a silencer in a case study, mainly to evaluate the results from the improvement phase. The focus of this thesis is to realise the proposed method through simulation software such as sheet-metal stamping/forming simulation and a cost evaluation tool, whereby the simulation-driven design process is achieved. This is done by coupling the parametric CAD models with the above simulation software in a common MDO framework through DOE or optimisation study runs. The resulting designs are considered improved in terms of manufacturability and cost.
APA, Harvard, Vancouver, ISO, and other styles
32

Lee, Chun-Sho. "A process simulation model for the manufacture of composite laminates from fiber-reinforced, polyimide matrix prepreg materials." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40298.

Full text
Abstract:
A numerical simulation model has been developed which describes the manufacture of composite laminates from fiber-reinforced polyimide (PMR-15) matrix prepreg materials. The simulation model is developed in two parts. The first part is the volatile formation model, which simulates the production of volatiles and their transport through the composite microstructure during the imidization reaction. The volatile formation model can be used to predict the vapor pressure profile and volatile mass flux. The second part of the simulation model, the consolidation model, can be used to determine the degree of crosslinking, resin melt viscosity, temperature, and the resin pressure inside the composite during the consolidation process. The model is also used to predict the total resin flow, thickness change, and total process time. The simulation model was solved by a finite element analysis. Experiments were performed to obtain data for verification of the model. Composite laminates were fabricated from ICI Fiberite HMF2474/66C carbon fabric, PMR-15 prepreg, and cured with different cure cycles. The results predicted by the model correlate well with experimental data for the weight loss, thickness, and fiber volume fraction changes of the composite. An optimum processing cycle for the fabrication of PMR-15 polyimide composites was developed by combining the model-generated optimal imidization and consolidation cure cycles. The optimal cure cycle was used to manufacture PMR-15 composite laminates, and the mechanical and physical properties of the laminates were measured. Results showed that fabrication of PMR-15 composite laminates with the optimal cure cycle results in improved mechanical properties and significantly reduced processing time compared with the manufacturer's suggested cure cycle.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
33

Pavurala, Naresh. "Oral Drug Delivery -- Molecular Design and Transport Modeling." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/53505.

Full text
Abstract:
One of the major challenges faced by the pharmaceutical industry is to accelerate the product innovation process and reduce the time-to-market for new drug developments. This involves billions of dollars of investment due to the large amount of experimentation and validation processes involved. A computational modeling approach, which can explore the design space rapidly, reduce uncertainty, and enable better, faster and safer decisions, fits into the overall goal and complements the product development process. Our research focuses on the early preclinical stage of the drug development process involving lead selection, optimization and candidate identification steps. Our work helps in screening the most favorable candidates based on their biopharmaceutical and pharmacokinetic properties. This helps surface development failures early in the drug discovery and candidate selection processes and reduces the rate of late-stage failures, which are more expensive. In our research, we successfully integrated two well-known models, namely the drug release model (dissolution model) and a drug transport model (compartmental absorption and transit (CAT) model), to predict the release, distribution, absorption and elimination of an oral drug through the gastrointestinal (GI) tract of the human body. In the CAT model, the GI tract is envisioned as a series of compartments, where each compartment is assumed to be a continuous stirred tank reactor (CSTR). We coupled the drug release model, in the form of partial differential equations (PDEs), with the CAT model, in the form of ordinary differential equations (ODEs). The developed model can also be used to design the drug tablet for target pharmacokinetic characteristics. The advantage of the suggested approach is that it includes the mechanism of drug release and also the properties of the polymer carrier in the model. The model is flexible and can be adapted based on the requirements of clients. 
Through this model, we were also able to avoid depending on commercially available software, which is very expensive. In the drug discovery and development process, the tablet formulation (oral drug delivery) is an important step. The tablet consists of active pharmaceutical ingredient (API), excipients and polymer. A controlled release of drug from this tablet usually involves swelling of the polymer, forming a gel layer, and diffusion of drug through the gel layer into the body. The polymer is mainly responsible for controlling the release rate (of the drug from the tablet), which leads to the desired therapeutic effect on the body. In our research, we also developed a molecular design strategy for generating molecular structures of polymer candidates with desired properties. Structure-property relationships and group contributions are used to estimate the polymer properties based on the polymer molecular structure, along with a computer-aided technique to generate molecular structures of polymers having desired properties. In greater detail, we utilized group contribution models to estimate several desired polymer properties such as glass transition temperature (Tg), density (ρ) and linear expansion coefficient (α). We subsequently solved an optimization model, which generated molecular structures of polymers with desired property values. Some examples of new polymer repeat units are - [CONHCH₂ - CH₂NHCO]n -, - [CHOH - COO]n -. These repeat units could potentially lead to novel polymers with interesting characteristics, which a polymer chemist could further investigate. We recognize the need to develop group contribution models for other polymer properties, such as the porosity of the polymer and the diffusion coefficients of water and drug in the polymer, which are not currently available in the literature. The geometric characteristics and the make-up of the drug tablet have a large impact on the drug release profile in the GI tract. 
We are exploring the concept of tablet customization, namely designing the dosage form of the tablet based on a desired release profile. We proposed tablet configurations which could lead to desired release profiles such as constant or zero-order release, Gaussian release and pulsatile release. We expect our work to aid in the product innovation process.
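The CSTR-in-series view of the CAT model described in this abstract reduces, for an already dissolved drug, to a chain of linear ODEs: each GI compartment loses mass by transit (rate k_t) and absorption (rate k_a). The sketch below uses hypothetical rate constants and a forward-Euler step purely for illustration; the thesis couples this transport stage to a PDE release model, which is omitted here:

```python
def cat_model(dose=100.0, ka=0.03, kt=0.035, dt=0.1, steps=6000, n=7):
    """Compartmental absorption and transit: n GI compartments in series.
    dM_i/dt = kt*M_{i-1} - kt*M_i - ka*M_i, with the dose placed in the
    first compartment and transit out of the last counted as excreted."""
    M = [0.0] * n
    M[0] = dose
    absorbed = 0.0
    excreted = 0.0
    for _ in range(steps):
        inflow = 0.0
        for i in range(n):
            out_transit = kt * M[i]
            out_absorb = ka * M[i]
            M[i] += dt * (inflow - out_transit - out_absorb)
            absorbed += dt * out_absorb
            inflow = out_transit            # feeds the next compartment
        excreted += dt * inflow             # leaves the last compartment
    return absorbed, excreted, M
```

Because every outflow is booked somewhere, the scheme conserves mass exactly: dose = absorbed + excreted + remaining gut contents, which is a convenient built-in check.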
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
34

Moreno, Oswaldo. "Design of the step-feed activated sludge process." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=64054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Mueller, Ralph. "Specification and Automatic Generation of Simulation Models with Applications in Semiconductor Manufacturing." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16147.

Full text
Abstract:
The creation of large-scale simulation models is a difficult and time-consuming task. Yet simulation is one of the techniques most frequently used by practitioners in Operations Research and Industrial Engineering, as it is less limited by modeling assumptions than many analytical methods. The effective generation of simulation models is an important challenge. Due to the rapid increase in computing power, it is possible to simulate significantly larger systems than in the past. However, the verification and validation of these large-scale simulations is typically a very challenging task. This thesis introduces a simulation framework that can generate a large variety of manufacturing simulation models. These models have to be described with a simulation data specification. This specification is then used to generate a simulation model which is described as a Petri net. This approach reduces the effort of model verification. The proposed Petri net data structure has extensions for time and token priorities. Since it builds on existing theory for classical Petri nets, it is possible to make certain assertions about the behavior of the generated simulation model. The elements of the proposed framework and the simulation execution mechanism are described in detail. Measures of complexity for simulation models that are built with the framework are also developed. The applicability of the framework to real-world systems is demonstrated by means of a semiconductor manufacturing system simulation model.
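The priority-extended Petri net used above as the target formalism can be sketched as a small data structure: a transition is enabled when all its input places hold tokens, and among enabled transitions the highest-priority one fires first. A minimal sketch (timing is omitted, and the place and transition names are illustrative, not taken from the thesis):

```python
class PetriNet:
    """Petri net with transition priorities (higher number fires first)."""

    def __init__(self, marking):
        self.marking = dict(marking)        # place -> token count
        self.transitions = []               # (name, inputs, outputs, priority)

    def add_transition(self, name, inputs, outputs, priority=0):
        self.transitions.append((name, inputs, outputs, priority))

    def enabled(self):
        return [t for t in self.transitions
                if all(self.marking.get(p, 0) >= 1 for p in t[1])]

    def fire(self):
        """Fire the highest-priority enabled transition; return its name."""
        candidates = self.enabled()
        if not candidates:
            return None
        name, inputs, outputs, _ = max(candidates, key=lambda t: t[3])
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return name
```

A toy manufacturing fragment: with tokens in "queue" and "machine_idle", a high-priority "start" transition wins over a low-priority "discard", which is exactly the kind of behavioral assertion the Petri net formulation makes checkable.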
APA, Harvard, Vancouver, ISO, and other styles
36

Radulovic, Luka. "Influence of advanced load simulation models on fatigue design of jackets for offshore wind turbines." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6369/.

Full text
Abstract:
Constant developments in the field of offshore wind energy have increased the range of water depths at which wind farms are planned to be installed. Therefore, in addition to monopile support structures suitable in shallow waters (up to 30 m), different types of support structures, able to withstand severe sea conditions at greater water depths, have been developed. For water depths above 30 m, the jacket is one of the preferred support types. The jacket is a lightweight support structure which, in combination with the complex nature of environmental loads, is prone to highly dynamic behavior. As a consequence, high stresses with great variability in time can be observed in all structural members. The highest concentration of stresses occurs in the joints, due to their nature (structural discontinuities) and due to the existence of notches along the welds present in the joints. This makes them the weakest elements of the jacket in terms of fatigue. In the numerical modeling of jackets for offshore wind turbines, a reduction of local stresses at the chord-brace joints, and consequently an optimization of the model, can be achieved by implementing joint flexibility in the chord-brace joints. Therefore, in this work, the influence of joint flexibility on the fatigue damage in chord-brace joints of a numerical jacket model, subjected to advanced load simulations, is studied.
APA, Harvard, Vancouver, ISO, and other styles
37

Tonga, Melek Mehlika. "Uncertainty Evaluation Through Ranking Of Simulation Models For Bozova Oil Field." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613243/index.pdf.

Full text
Abstract:
Producing since 1995, Bozova Field is a mature oil field to be re-evaluated. When evaluating an oil field, the common approach followed in a reservoir simulation study is: generating a geological model that is expected to represent the reservoir; building simulation models by using the most representative dynamic data; and doing sensitivity analysis around a best case in order to get a history-matched simulation model. Each step deals with a great variety of uncertainty, and changing one parameter at a time does not cover the entire uncertainty space. Knowing not only the impact of the uncertainty related to each individual parameter but also their combined effects can help better understanding of the reservoir and better reservoir management. In this study, uncertainties associated only with fluid properties, rock physics functions and water-oil contact (WOC) depth are examined thoroughly. Since sensitivity analysis around a best case covers only a part of the uncertainty, a full factorial experimental design technique is used. Without pursuing the goal of a history-matched case, simulation runs are conducted for all possible combinations of: 19 sets of capillary pressure/relative permeability (Pc/krel) curves taken from special core analysis (SCAL) data; 2 sets of pressure, volume, temperature (PVT) analysis data; and 3 sets of WOC depths. As a result, historical production and pressure profiles from 114 (2 x 3 x 19) cases are presented for screening the impact of uncertainty related to the aforementioned parameters in the history matching of Bozova field. The reservoir simulation models that give the best match with the history data are determined by the calculation of an objective function, and they are ranked according to their goodness of fit. It is found that the uncertainty of the Pc/krel curves has the highest impact on the history-match values; the uncertainty of WOC depth comes next, and the least effect arises from the uncertainty of the PVT data. This study constitutes a solid basis for further studies on the selection of the best-matched models for history-matching purposes.
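The full factorial design described above enumerates every combination of the three uncertain inputs; ranking then sorts the runs by an objective function measuring misfit to history. A minimal sketch with hypothetical factor labels and a placeholder objective (the real objective compares simulated and observed production and pressure profiles):

```python
from itertools import product

# Hypothetical labels for the three uncertain factors.
pc_krel_sets = [f"SCAL_{i}" for i in range(1, 20)]   # 19 Pc/krel curve sets
pvt_sets = ["PVT_A", "PVT_B"]                        # 2 PVT analyses
woc_depths = [1840, 1855, 1870]                      # 3 WOC depths (illustrative)

# Full factorial: every combination is simulated (19 * 2 * 3 = 114 runs).
cases = list(product(pc_krel_sets, pvt_sets, woc_depths))

def objective(case):
    """Placeholder misfit; in practice a weighted sum of squared errors
    between simulated and historical rates/pressures for this case."""
    return hash(case) % 1000 / 1000.0

ranked = sorted(cases, key=objective)    # best (lowest misfit) first
```

One-at-a-time sensitivity analysis would cover only a thin slice of this grid; the factorial enumeration is what exposes the combined effects the study is after.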
APA, Harvard, Vancouver, ISO, and other styles
38

Hu, Huafen. "Risk-conscious design of off-grid solar energy houses." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31814.

Full text
Abstract:
Thesis (Ph.D)--Architecture, Georgia Institute of Technology, 2010.
Committee Chair: Godfried Augenbroe; Committee Member: Ellis Johnson; Committee Member: Pieter De Wilde; Committee Member: Ruchi Choudhary; Committee Member: Russell Gentry. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
39

Ley-Chavez, Adriana. "Quantitative Models to Design and Evaluate Risk-Specific Screening Strategies for Cervical Cancer Prevention." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1324545286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ba, Shan. "Multi-layer designs and composite gaussian process models with engineering applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44751.

Full text
Abstract:
This thesis consists of three chapters, covering topics in both the design and modeling aspects of computer experiments as well as their engineering applications. The first chapter systematically develops a new class of space-filling designs for computer experiments by splitting two-level factorial designs into multiple layers. The new design is easy to generate, and our numerical study shows that it can have better space-filling properties than the optimal Latin hypercube design. The second chapter proposes a novel modeling approach for approximating computationally expensive functions that are not second-order stationary. The new model is a composite of two Gaussian processes, where the first one captures the smooth global trend and the second one models local details. The new predictor also incorporates a flexible variance model, which makes it more capable of approximating surfaces with varying volatility. The third chapter is devoted to a two-stage sequential strategy which integrates analytical models with finite element simulations for a micromachining process.
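A minimal numpy sketch of the composite-Gaussian-process idea from the second chapter (a long-length-scale GP for the smooth global trend plus a short-length-scale GP for local details); the kernel choices, length scales, noise levels and toy data are all assumptions for illustration, not the thesis's model:

```python
import numpy as np

def rbf(x1, x2, length_scale):
    # Squared-exponential kernel matrix between two 1-D point sets.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x, y, x_new, length_scale, noise):
    # Posterior mean of a zero-mean GP regression.
    K = rbf(x, x, length_scale) + noise * np.eye(len(x))
    return rbf(x_new, x, length_scale) @ np.linalg.solve(K, y)

# Toy response: a smooth trend plus localized high-frequency detail.
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + 0.2 * np.sin(20 * np.pi * x)

# First GP (long length scale) captures the smooth global trend.
trend = gp_predict(x, y, x, length_scale=0.3, noise=1e-2)

# Second GP (short length scale) models the local details in the residual.
detail = gp_predict(x, y - trend, x, length_scale=0.03, noise=1e-6)

composite = trend + detail
print(float(np.max(np.abs(composite - y))))  # small reconstruction error
```

The design choice mirrors the abstract: the first process is deliberately over-smoothed, and the second absorbs whatever structure the first missed.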
APA, Harvard, Vancouver, ISO, and other styles
41

Machchhar, Raj Jiten. "Automated Model Generation and Pre-Processing to Aid Simulation-Driven Design : An implementation of Design Automation in the Product Development process." Thesis, Linköpings universitet, Maskinkonstruktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168738.

Full text
Abstract:
The regulations on emissions from combustion engine vehicles are getting tougher with increasing awareness of sustainability, requiring exhaust after-treatment systems to evolve constantly with changes in the legislation. To establish a leading position in the competitive market, companies must adapt to these changes within a reasonable timeframe. With Scania’s extensive focus on simulation-driven design, the product development process at Scania is highly iterative. A considerable amount of time is spent on generating a specific model for a simulation from the existing Computer-aided Design (CAD) model and pre-processing it. Thus, the purpose of this thesis is to investigate how design and simulation teams can collectively work to automatically generate a discretized model from the existing CAD model, thereby reducing repetitive work. As an outcome of this project, a method is developed comprising two automation modules. The first module, proposed to be used by a design engineer, automatically generates a simulation-specific model from the existing CAD model. The second module, proposed to be used by a simulation engineer, automatically discretizes the model. Based on two case study assemblies, it is shown that the proposed method is robust and has the potential to reduce product development time considerably.
APA, Harvard, Vancouver, ISO, and other styles
42

Taylor, Richard E. "A design tool for use in simulation and training of sinus surgery." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/7294.

Full text
Abstract:
The traditional approaches to training surgeons are becoming increasingly difficult to apply to modern surgical procedures. The development of Minimally Invasive Surgery (MIS) techniques demands new and complex psychomotor skills, and means that the apprentice-based system described by 'see one, do one, teach one' can no longer be expected to fully prepare surgeons for operations on real patients, placing patient safety at risk. The use of cadavers and animals in surgical training raises issues of ethics, cost and anatomical similarity to live humans. Endoscopic sinus surgery involves further risk to the patient due to the proximity of vital structures such as the brain, eyes, optic nerve and internal carotid artery. In recent years, simulation has been used to overcome these problems, exposing surgeons to complex procedures in a safe environment, much as it is used in aviation. However, the cases simulated in this manner may not be customised by training staff to present desired pathology. This thesis describes the design and development of a new tool for the creation of customised cases for the training of sinus surgery. Users who are inexperienced and non-skilled in the use of three-dimensional (3D) Computer Aided Design (CAD) modelling software may use the tool to apply pathology to the virtual sinus model, which was constructed from real CT data. Swelling is applied in five directions (four horizontal, one vertical) to the cavity lining of the frontal and sphenoid sinuses. Tumours are individually customised and positioned in the frontal, sphenoid and ethmoid sinuses. The customised CAD model may then be manufactured using Three-Dimensional Printing (3DP) to produce the complex anatomy of the sinuses in a full colour physical part for the realistic simulation of surgical procedures. An investigation into the colouring of the physical model is also described, involving the study of endoscopic videos to ascertain realistic shades.
The program was evaluated by a group of medical professionals from a range of fields, and their feedback was taken into account in subsequent redevelopment of the program, and to suggest further work.
APA, Harvard, Vancouver, ISO, and other styles
43

Stilkerich, Stephan C. "Probabilistic image models and their massively parallel architectures : a seamless simulation- and VLSI design framework approach /." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=984791892.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Harth, Tobias [Verfasser]. "Identification of Material Parameters for Inelastic Constitutive Models : Stochastic Simulation and Design of Experiments / Tobias Harth." Aachen : Shaker, 2003. http://d-nb.info/1179036204/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Wu, Sichao. "Computational Framework for Uncertainty Quantification, Sensitivity Analysis and Experimental Design of Network-based Computer Simulation Models." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78764.

Full text
Abstract:
When capturing a real-world, networked system using a simulation model, features are usually omitted or represented by probability distributions. Verification and validation (V and V) of such models is an inherent and fundamental challenge. Central to V and V, but also to model analysis and prediction, are uncertainty quantification (UQ), sensitivity analysis (SA) and design of experiments (DOE). In addition, network-based computer simulation models, as compared with models based on ordinary and partial differential equations (ODE and PDE), typically involve a significantly larger volume of more complex data. Efficient use of such models is challenging since it requires a broad set of skills ranging from domain expertise to in-depth knowledge including modeling, programming, algorithmics, high-performance computing, statistical analysis, and optimization. On top of this, the need to support reproducible experiments necessitates complete data tracking and management. Finally, the lack of standardization of simulation model configuration formats presents an extra challenge when developing technology intended to work across models. While there are tools and frameworks that address parts of the challenges above, to the best of our knowledge, none of them accomplishes all this in a model-independent and scientifically reproducible manner. In this dissertation, we present a computational framework called GENEUS that addresses these challenges. Specifically, it incorporates (i) a standardized model configuration format, (ii) a data flow management system with digital library functions helping to ensure scientific reproducibility, and (iii) a model-independent, expandable plugin-type library for efficiently conducting UQ/SA/DOE for network-based simulation models.
This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models with a broad range of analyses such as UQ and parameter studies for various scenarios. Graph dynamical systems provide a theoretical framework for network-based simulation models and have been studied theoretically in this dissertation. This includes a broad range of stability and sensitivity analyses offering insights into how GDSs respond to perturbations of their key components. This stability-focused, structure-to-function theory was a motivator for the design and implementation of GENEUS. GENEUS, rooted in the framework of GDS, provides modelers, experimentalists, and research groups access to a variety of UQ/SA/DOE methods with robust and tested implementations without requiring them to necessarily have the detailed expertise in statistics, data management and computing. Even for research teams having all the skills, GENEUS can significantly increase research productivity.
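Graph dynamical systems of the kind discussed above are easy to illustrate; the sketch below runs a synchronous threshold (majority-style) update on a small assumed graph, purely as a generic example and not any system from the dissertation:

```python
# A tiny synchronous graph dynamical system: each vertex takes state 1 on the
# next step iff at least half of its neighbors are currently in state 1.
# The 5-cycle graph and threshold rule here are illustrative assumptions.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5
neighbors = {v: [] for v in range(n)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

def step(state):
    # Synchronous update: every vertex evaluates the rule on the old state.
    return tuple(
        1 if sum(state[u] for u in neighbors[v]) * 2 >= len(neighbors[v]) else 0
        for v in range(n)
    )

state = (1, 0, 0, 0, 0)          # a single perturbed vertex
trajectory = [state]
for _ in range(5):
    state = step(state)
    trajectory.append(state)
print(trajectory[-1])            # the perturbation spreads to a fixed point
```

Stability questions of the sort studied in the dissertation then become questions about how such trajectories change when the rule, the graph, or the initial state is perturbed.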
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
46

Montero, Juan Esteban S. M. Massachusetts Institute of Technology. "Implication of the Jensen's inequality for system dynamic simulations : application to a plug & play integrated refinery supply chain model." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/100363.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2014.
Page 113 blank. Cataloged from PDF version of thesis.
Includes bibliographical references (pages 60-65).
This investigation studies how critical the effect of accounting for uncertainty in a dynamic model is, as a consequence of Jensen's inequality. This is done using the supply chain of a refinery as an example, which illustrates that the difference between probable and expected results can be significant, arguing that the distributions and probabilities can be dramatically different from the expected, planned value. Moreover, this research discusses that, from the perspective of the dynamics of the system, the mode of behavior can vary considerably as well, leading managers to dissimilar situations and contexts that will inevitably produce different decisions or strategies. Supply chain management is a critical aspect of any business. The energy industry is a particularly relevant example of a global supply chain, in which managing complexity is a crucial challenge that matters for the overall performance of the business. The complexity of managing the supply chain of an energy company arises from the physical size, the diversity of operations and products, and the dynamics of the system, among many other causes. On top of the intrinsic complexity of the business itself, the manager of a supply chain must also consider the complexity of the models and methodologies used to make decisions about it. These models and methodologies are diverse, and they serve different purposes under certain assumptions. This study also discusses the complexity faced by supply chain managers, presenting a compilation of bibliographic research about different considerations and approaches. Managers often employ models and analytics to simplify the complexity and produce intuition by different means in order to form their decisions and strategies. The analysis of the effects of uncertainty on the results and behavior of a dynamic simulation model is done by stochastically simulating an already-developed 'plug & play' dynamic model of a refinery.
This approach permits the exploration of different configurations, considering different definitions of uncertainty, analyzing and comparing their particular results.
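Jensen's inequality, the thesis's core observation, says that for a convex function f, E[f(X)] ≥ f(E[X]): running a nonlinear model at the expected input misstates the expected outcome. A minimal Monte Carlo illustration with an assumed convex "cost" response (not the refinery model itself):

```python
import random

random.seed(42)

def cost(x):
    # An assumed convex response, e.g. a congestion cost growing quadratically.
    return x ** 2 + 3.0 * x

# Uncertain input: uniform demand between 0 and 10 (expected value 5).
samples = [random.uniform(0.0, 10.0) for _ in range(100_000)]

mean_of_cost = sum(cost(x) for x in samples) / len(samples)  # E[f(X)]
cost_of_mean = cost(sum(samples) / len(samples))             # f(E[X])

print(mean_of_cost > cost_of_mean)  # True: Jensen's inequality for convex f
```

Here E[f(X)] is about 48.3 while f(E[X]) is about 40, so planning against the expected input alone understates the expected cost.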
by Juan Esteban Montero.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
47

Razavi, Majarashin Asghar. "MARKOV STATE MODELS AND THEIR APPLICATIONS IN PROTEIN FOLDING SIMULATION, SMALL MOLECULE DESIGN, AND MEMBRANE PROTEIN MODELING." Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/362098.

Full text
Abstract:
Chemistry
Ph.D.
This dissertation is focused on the application of Markov state models to protein folding and the design of small drug-like molecules, as well as the application of computational tools to the study of biological processes. The central focus of protein folding is to understand how proteins obtain their unique three-dimensional structure from their amino acid sequences. The function of a protein critically depends on its three-dimensional structure; hence, any internal (such as mutations) or external (such as high temperature) perturbation that obstructs the three-dimensional structure of a protein will also interfere with its function. Many diseases are associated with the inability of a protein to form its unique structure. For example, sickle cell anemia is caused by a single mutation that changes glutamic acid to valine. Molecular dynamics (MD) simulations can be utilized to study protein folding and the effects of perturbations on the protein energy landscape; however, due to their inherent atomic resolution, MD simulations usually provide an enormous amount of data even for small proteins. A thorough analysis and extraction of desired information from MD-provided data can be extremely challenging and is well beyond human comprehension. Markov state models (MSMs) have proved to be apt for the analysis of large-scale random processes and equilibrium conditions; hence, they can be applied to protein folding studies. MSMs can be used to obtain long-timescale information from short-timescale simulations. In other words, the combination of many short simulations and MSMs is a powerful technique to study the folding mechanism of many proteins, even the ones with folding times over a millisecond. This dissertation is centered on the use of MSMs and MD simulation in understanding protein folding and biological processes and is constructed as follows. The first chapter provides a brief introduction to MD simulation and the different techniques that can be used to facilitate simulations.
Protein folding and its challenges are also discussed in chapter one. Finally, chapter one ends with a description of MSMs and the technical aspects of building them for protein folding studies. Chapter two is focused on using MD simulations and MSMs to design small protein-like molecules that prevent biofilm propagation by disrupting its lifecycle. The biofilm lifecycle and the strategy for its interruption are described first. Then, the designed molecules and their conformational sampling by MD simulations are explained. Next, the application of MSMs in obtaining and comparing the equilibrium populations of all designs is discussed. At the end of chapter two, the molecular descriptions of the best designs are explained. Chapter three is focused on the effects of mutations on the energy landscape of a sixteen-residue protein from the C-terminal hairpin of protein G, GB1. Three mutations, tz4, tz5, and tz6, are discussed, and their folding rates and folding mechanisms are compared with wild-type GB1 using MSMs built from a significantly large MD simulation data set (aggregating to over 9 milliseconds). Finally, chapter four is focused on the application of MD simulations to understanding the selectivity of Na,K-ATPase, a biologically critical protein that transports sodium ions out of and potassium ions into almost all eukaryotic cells against their concentration gradients. Multiple MD approaches, including metadynamics and free energy perturbation methods, are used to describe the origins of selectivity for Na,K-ATPase.
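At its simplest, a Markov state model is just a row-stochastic transition matrix estimated from a discretized trajectory. The three-state toy trajectory below is an assumption for illustration, not simulation data from the dissertation:

```python
import numpy as np

# Assumed discretized trajectory: each entry is the conformational state
# (0, 1, or 2) observed at one saved MD frame.
traj = [0, 0, 1, 1, 1, 2, 2, 1, 1, 0, 0, 1, 2, 2, 2, 1, 1, 1, 0, 0]
n_states = 3

# Count transitions at a lag time of one frame, then row-normalize.
counts = np.zeros((n_states, n_states))
for a, b in zip(traj[:-1], traj[1:]):
    counts[a, b] += 1
T = counts / counts.sum(axis=1, keepdims=True)

# The stationary (equilibrium) distribution is the left eigenvector of T
# associated with eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

print(np.allclose(T.sum(axis=1), 1.0))  # each row is a probability distribution
print(np.allclose(pi @ T, pi))          # pi is stationary under T
```

Building MSMs from many such short trajectories is what lets equilibrium populations and slow timescales be estimated without any single simulation reaching the folding time.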
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
48

Yuce, Bilgiday. "Fault Attacks on Embedded Software: New Directions in Modeling, Design, and Mitigation." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/81824.

Full text
Abstract:
This research investigates an important class of hardware attacks against embedded software, which uses fault injection as a hacking tool. Fault attacks use well-chosen, targeted fault injection combined with clever system response analysis to break the security of a system. In the case of a fault attack on embedded software, faults are injected into the underlying processor hardware and their effects are observed in the executed software's output. This introduces an additional difficulty in the mitigation of fault attack risk. Designing efficient countermeasures requires first understanding the software, instruction-set, and hardware-level components of fault attacks, and then systematically addressing the vulnerabilities at each level. This research first proposes an instruction fault sensitivity model to capture the effects of fault injection on embedded software. Based on the instruction fault sensitivity model, a novel fault attack method called MAFIA (Micro-architecture Aware Fault Injection Attack) is also introduced. MAFIA exploits the vulnerabilities in multiple abstraction layers. This enables an adversary to determine the best points to attack during the execution as well as pinpoint the desired fault effects. It has been shown that MAFIA breaks the existing countermeasures with significantly fewer fault injections than traditional fault attacks. Another contribution of the research is a fault attack simulator, MESS (Micro-architectural Embedded System Simulator). MESS enables a user to model the hardware, instruction-set, and software-level components of fault attacks in a simulation environment. Thus, software designers can use MESS to evaluate their programs against several real-world fault attack scenarios. The final contribution of this research is the fault-attack-resistant FAME (Fault-attack Aware Microprocessor Extensions) processor, which is suited for embedded, constrained systems. FAME combines fault detection in hardware and fault response in software.
This allows low-cost, performance-efficient, flexible, and backward-compatible integration of hardware and software techniques to mitigate fault attack risk. FAME has been designed as an architectural concept as well as implemented as a chip prototype. In addition to protection mechanisms, the chip prototype also includes fault injection and analysis features to ease fault attack research. The findings of this research indicate that considering multiple abstraction layers together is essential for efficient fault attacks, countermeasures, and evaluation techniques.
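The core idea of simulated fault injection can be sketched generically: run a small computation, optionally flip one bit of an intermediate register at a chosen step, and compare the faulty output with the golden (fault-free) one. This is an assumption-laden toy, not the MESS simulator's interface:

```python
# Toy fault-injection campaign: a 4-step byte computation with an optional
# single-bit fault injected after a chosen step (all names are illustrative).
def program(secret, fault_step=None, fault_bit=0):
    acc = 0
    for step in range(4):
        acc = (acc * 31 + secret) & 0xFF   # some computation on a byte register
        if step == fault_step:
            acc ^= 1 << fault_bit          # injected single-bit fault
    return acc

golden = program(secret=0x5A)              # fault-free reference output

# Sweep every (step, bit) injection point, as a campaign would.
faulty = [program(0x5A, fault_step=s, fault_bit=b)
          for s in range(4) for b in range(8)]

# Comparing faulty outputs against the golden one reveals which injection
# points produce observable effects.
print(golden, sum(1 for f in faulty if f != golden))
```

Because the per-step update here is a bijection on the byte, every injected fault propagates to a distinct final output; in a real instruction-set-level model, faults can also be masked, which is exactly what a sensitivity analysis measures.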
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
49

Badal, Soler Andreu. "Development of advanced geometric models and acceleration techniques for Monte Carlo simulation in Medical Physics." Doctoral thesis, Universitat Politècnica de Catalunya, 2008. http://hdl.handle.net/10803/6615.

Full text
Abstract:
Monte Carlo simulation of radiation transport is currently applied in a large variety of areas. However, the geometric models implemented in most general-purpose codes impose limitations on the shape of the objects that can be defined. These models are not well suited to represent the free-form (i.e., arbitrary) shapes found in anatomic structures or complex medical devices. As a result, some clinical applications that require the use of highly detailed phantoms can not be properly addressed.
This thesis is devoted to the development of advanced geometric models and acceleration techniques that facilitate the use of state-of-the-art Monte Carlo simulation in medical physics applications involving detailed anatomical phantoms. To this end, two new codes, based on the PENELOPE package, have been developed. The first code, penEasy, implements a modular, general-purpose main program and provides various source models and tallies that can be readily used to simulate a wide spectrum of problems. Its associated geometry routines, penVox, extend the standard PENELOPE geometry, based on quadric surfaces, to allow the definition of voxelised phantoms. This kind of phantom can be generated using the information provided by a computed tomography and, therefore, penVox is convenient for simulating problems that require the use of the anatomy of real patients (e.g., radiotherapy treatment planning). The second code, penMesh, utilises closed triangle meshes to define the boundary of each simulated object. This approach, which is frequently used in computer graphics and computer-aided design, makes it possible to represent arbitrary surfaces and is suitable for simulations requiring high anatomical detail (e.g., medical imaging).
A set of software tools for the parallelisation of Monte Carlo simulations, clonEasy, has also been developed. These tools can reduce the simulation time by a factor that is roughly proportional to the number of processors available and, therefore, facilitate the study of complex settings that may require unaffordable execution times in a sequential simulation.
The computer codes presented in this thesis have been tested in realistic medical physics applications and compared with other Monte Carlo codes and experimental data. Therefore, these codes are ready to be publicly distributed as free and open software and applied to real-life problems.
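Triangle-mesh particle transport of the kind penMesh performs rests, at its core, on ray-triangle intersection tests along each particle track. Below is a standard Möller-Trumbore intersection sketch as a generic illustration, not code from penMesh:

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Distance along the ray to the triangle, or None if there is no hit
    (Moller-Trumbore algorithm)."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

# A particle travelling along +z hits the unit triangle in the plane z = 2.
v0 = np.array([0.0, 0.0, 2.0])
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 2.0])
t = ray_triangle(np.array([0.2, 0.2, 0.0]), np.array([0.0, 0.0, 1.0]), v0, v1, v2)
print(t)  # 2.0
```

Acceleration structures such as the octree mentioned in the abstract exist precisely to avoid testing every mesh triangle with a routine like this one.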
APA, Harvard, Vancouver, ISO, and other styles
50

Heap, Ryan C. "Real-Time Visualization of Finite Element Models Using Surrogate Modeling Methods." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/6536.

Full text
Abstract:
Finite element analysis (FEA) software is used to obtain linear and non-linear solutions to one-, two-, and three-dimensional (3-D) geometric problems that will see a particular load and constraint case when put into service. Parametric FEA models are commonly used in iterative design processes in order to obtain an optimum model given a set of loads, constraints, objectives, and design parameters to vary. In some instances it is desirable for a designer to obtain some intuition about how changes in design parameters can affect the FEA solution of interest before simply sending the model through the optimization loop. This could be accomplished by running the FEA on the parametric model for a set of part family members, but this can be very time-consuming and only gives snapshots of the model's real behavior. The purpose of this thesis is to investigate a method of visualizing the FEA solution of the parametric model as design parameters are changed in real time by approximating the FEA solution using surrogate modeling methods. The tools this research utilizes are parametric FEA modeling, surrogate modeling methods, and visualization methods. A parametric FEA model can be developed that includes mesh morphing algorithms that allow the mesh to change parametrically along with the model geometry. This allows the surrogate models assigned to each individual node to use the nodal solutions of multiple finite element analyses as regression points to approximate the FEA solution. The surrogate models can then be mapped to their respective geometric locations in real time. Solution contours display the results of the FEA calculations and are updated in real time as the parameters of the design model change.
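The per-node surrogate idea in this abstract can be sketched with ordinary polynomial regression: fit each node's FEA response over a few sampled design-parameter values, then evaluate the cheap fit in real time. The "FEA" below is a stand-in analytic function, an assumption for illustration only:

```python
import numpy as np

# Stand-in for an expensive FEA run: nodal displacements as a function of a
# single design parameter p (e.g. a thickness). In practice each call would
# be one full finite element analysis.
def fea_nodal_solution(p, n_nodes=5):
    nodes = np.arange(n_nodes)
    return (nodes + 1) * p ** 2 - 0.5 * p  # assumed smooth response per node

# Regression points: a handful of FEA runs across the design range.
p_samples = np.linspace(0.5, 2.0, 6)
solutions = np.array([fea_nodal_solution(p) for p in p_samples])  # (runs, nodes)

# One quadratic surrogate per node, fit to the sampled nodal solutions.
surrogates = [np.polyfit(p_samples, solutions[:, j], deg=2) for j in range(5)]

# "Real-time" evaluation at a new parameter value: cheap polynomial evals
# that can drive a contour plot as the user drags a slider.
p_new = 1.3
prediction = np.array([np.polyval(c, p_new) for c in surrogates])
truth = fea_nodal_solution(p_new)
print(np.allclose(prediction, truth))  # quadratic fit recovers the response
```

With mesh morphing keeping node identities stable across parameter values, each such per-node fit stays attached to its geometric location, which is what makes the real-time contour update possible.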
APA, Harvard, Vancouver, ISO, and other styles
