Dissertations / Theses on the topic 'Domestic engineering Simulation methods'

To see the other types of publications on this topic, follow the link: Domestic engineering Simulation methods.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Domestic engineering Simulation methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Yu, Huan. "New Statistical Methods for Simulation Output Analysis." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4931.

Full text
Abstract:
This thesis makes three contributions to the ranking-and-selection problem in discrete-event simulation. Ranking and selection arises whenever one wants to choose the single best design, or several best designs, from a pool of alternatives. Discrete-event simulations fall into two types: terminating and steady-state. If a steady-state simulation cannot be started in steady state, its output exhibits an initial transient before settling, and this initial trend must be removed before the data are used to estimate the steady-state mean. The first contribution addresses this elimination of initialization bias: we present a novel deletion rule, motivated by offline change-detection methods, that monitors the cumulative absolute bias relative to the estimated steady-state mean. Experiments comparing our procedure with existing methods show that it is never worse and in some cases much better. Once the initialization bias is removed, a ranking-and-selection procedure can be applied to the steady-state output. There are two main approaches to ranking and selection: subset selection and indifference-zone selection. By employing a directed graph, some single-best procedures can be extended to select multiple best systems, and our method targets this multi-best problem. In Chapter 3, a procedure for ranking and selection in terminating simulation is extended in a fully sequential manner, comparing the sample means of all systems still in contention at each stage. A pre-selection technique identifies superior systems at the same time that inferior systems are eliminated, accelerating the search for the desired number of best systems.
Experiments demonstrate that the pre-selection technique saves observations significantly compared with the same procedure without it, and that the procedure as a whole requires significantly fewer observations than existing methods. We also explore the effect of common random numbers; using them in the simulation saves further observations. The third contribution extends the Chapter 3 procedure to steady-state simulation, employing the asymptotic variance. We justify the procedure from an asymptotic point of view and demonstrate through extensive experiments that it works in most cases when the sample size is finite.
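The deletion-rule idea in the first contribution (scan candidate truncation points and keep the tail that looks most stationary) can be sketched as follows. This is a generic MSER-style heuristic for illustration only, not the thesis's exact change-detection statistic, and the output series is synthetic:

```python
import numpy as np

def truncation_point(y, max_frac=0.5):
    """Scan candidate deletion points d and keep the tail whose sample mean
    has the smallest estimated variance (the MSER statistic). The retained
    tail y[d:] is then used to estimate the steady-state mean."""
    n = len(y)
    best_d, best_score = 0, np.inf
    for d in range(int(n * max_frac)):
        tail = y[d:]
        score = np.var(tail) / len(tail)   # small once the transient is gone
        if score < best_score:
            best_d, best_score = d, score
    return best_d

rng = np.random.default_rng(0)
n = 2000
transient = 5.0 * np.exp(-np.arange(n) / 100.0)   # decaying initialization bias
y = 10.0 + transient + rng.normal(0.0, 1.0, n)    # true steady-state mean is 10
d = truncation_point(y)
print(d, round(y[d:].mean(), 2))
```

With the transient deleted, the tail mean lands close to the true steady-state value of 10, whereas the full-series mean is biased upward.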
APA, Harvard, Vancouver, ISO, and other styles
2

Fiore, Andrew M. (Andrew Michael). "Fast simulation methods for soft matter hydrodynamics." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122848.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references.
This thesis describes the systematic development of methods to perform large-scale dynamic simulations of hydrodynamically interacting colloidal particles undergoing Brownian motion. Approximations to the hydrodynamic interactions between particles are built from the periodic fundamental solution for flow at zero Reynolds number and are methodically improved by introducing the multipole expansion and constraints on particle dynamics. Ewald sum splitting, which decomposes the sum of slowly decaying interactions into two rapidly decaying sums evaluated independently in real space and Fourier space, is used to accelerate the calculation and serves as the basis for a new technique for sampling the Brownian displacements that is orders of magnitude faster than prior approaches. The simulation method is first developed using the ubiquitous Rotne-Prager approximation for the hydrodynamic interactions.
Extension of the Rotne-Prager approximation is achieved via the multipole expansion, which introduces the notion of induced force moments whose value is determined from the solution of constraint problems (for example, rigid particles cannot deform in flow), and methods for handling these multipole-based constraints are illustrated. The multipole expansion converges slowly when particles are nearly touching, a problem which is functionally solved for dynamic simulations by including divergent lubrication interactions, in the style of Stokesian Dynamics. The lubrication interactions effectively introduce an additional constraint on the relative motion of closely separated particle pairs. This constraint is combined with the multipole constraints by developing a general method to handle nearly arbitrary dynamic constraints using saddle point matrices. Finally, the methods developed herein are applied to study sedimentation in suspensions of attractive colloidal particles.
The simulation results are used to develop a predictive model for the hindered/promoted settling function that describes the mean sedimentation rate as a function of particle concentration and attraction strength.
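For reference, the Rotne-Prager approximation mentioned above has a simple closed form for an isolated pair of spheres. A free-space (non-periodic) sketch, with viscosity η and radius a as illustrative parameters; the thesis's periodic, Ewald-split version reduces to this form for a single isolated pair:

```python
import numpy as np

def rotne_prager(r_vec, a=1.0, eta=1.0):
    """Free-space Rotne-Prager mobility tensor coupling two spheres of
    radius a separated by r_vec (valid for separations r >= 2a)."""
    r = np.linalg.norm(r_vec)
    rhat = np.outer(r_vec, r_vec) / r**2          # projector along the line of centers
    pref = 1.0 / (8.0 * np.pi * eta * r)
    return pref * ((1.0 + 2.0 * a**2 / (3.0 * r**2)) * np.eye(3)
                   + (1.0 - 2.0 * a**2 / r**2) * rhat)

M = rotne_prager(np.array([4.0, 0.0, 0.0]))
print(M[0, 0], M[1, 1])   # coupling along vs. transverse to the line of centers
```

The tensor is symmetric, and motion along the line of centers couples more strongly than transverse motion, as expected physically.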
"The research in this thesis was supported by the MIT Energy Initiative Shell Seed Fund and NSF Career Award CBET-1554398"
by Andrew M. Fiore.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
3

Geller, Benjamin M. "Methods for advancing automobile research with energy-use simulation." Thesis, Colorado State University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3635614.

Full text
Abstract:

Personal transportation has a large and growing global impact on people, society, and the environment. Computational energy-use simulation is becoming a key tool for automotive research and development in designing efficient, sustainable, and consumer-acceptable personal transportation systems. Historically, research in personal transportation system design has not been held to the same standards as other scientific fields, in that classical experimental design concepts have not been followed in practice. Instead, transportation researchers have built their analyses around available automotive simulation tools, but conventional automotive simulation tools are not well equipped to answer system-level questions regarding transportation system design, environmental impacts, and policy analysis.

The work in this dissertation aims to provide a means of applying more relevant simulation and analysis tools to these system-level research questions. First, I describe the objectives and requirements of vehicle energy-use simulation and design research, and the tools that have been used to execute this research. Next, this dissertation develops a toolset for constructing system-level design studies with structured investigations and defensible hypothesis testing. The roles of experimental design, optimization, concept of operations, decision support, and uncertainty are defined for automotive energy simulation and system design studies.

The results of this work are a suite of computational design and analysis tools that can serve to hold automotive research to the same standard as other scientific fields while providing the tools necessary to complete defensible and objective design studies.

APA, Harvard, Vancouver, ISO, and other styles
4

Lloyd, Jennifer A. "Numerical methods for Monte Carlo device simulation." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/12766.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1992.
Includes bibliographical references (leaves 51-53).
by Jennifer Anne Lloyd.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
5

Adnan, Abid Muhammad. "Various methods of water marsh utilization for domestic sewage waste water treatment." Thesis, Högskolan i Borås, Institutionen Ingenjörshögskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20727.

Full text
Abstract:
Different methods are used to remove unwanted material from domestic sewage, such as mini sewage treatment plants, infiltration, and filter beds. Compared with these methods, introducing a marsh is more beneficial, as its treatment efficiency is much better; marshes also play an important role in biodiversity. Domestic sewage contains organic material, viruses, bacteria and pathogens, and nitrate and phosphate, all of which affect the external environment. Removal of nitrate and phosphate is necessary because, left in the water, they can cause a rapid increase in algal growth. Algae are short-lived, and the bacteria that decompose dead algae consume a great deal of oxygen, deoxygenating the marsh so that most animals in the water source die from lack of oxygen. The marsh method outperforms the alternatives for removing organic material and nutrients. Subsurface flow is needed in the wetland for the best results: a subsurface-flow wetland improves the process and minimizes odor and insects, both of which directly harm the external environment. In a subsurface-flow wetland, Phragmites australis and similar plants are used; bacteria grow on their roots and break down the nutrients. Wastewater treatment marshes are best suited to smaller towns, villages, and single-family homes. They work best under relatively warm conditions, but many are used in temperate climates as well. Chlorine is used to remove microorganisms, as it is the most effective means of doing so, and a de-chlorination step is also necessary, since chlorinated effluent would harm aquatic life.
APA, Harvard, Vancouver, ISO, and other styles
6

Naghiyev, Eldar. "Device-free localisation in the context of domestic energy saving control methods." Thesis, University of Nottingham, 2014. http://eprints.nottingham.ac.uk/14314/.

Full text
Abstract:
A reduction in greenhouse gas emissions by the energy sector is required to decelerate global warming. With the domestic sector being the biggest energy consumer, great saving potential lies in the operation of dwellings. This thesis proposes to improve domestic energy efficiency by combining energy-saving control measures made by occupants with those made by automation systems, called Combined Occupant and Automation Control (COAC), and highlights that knowledge of the occupant's position is necessary to integrate the two conservation methods effectively. Three unobtrusive domestic occupant detection technologies were identified and compared for this purpose. Device-free Localisation (DfL), an emerging technology found to be the best suited for a COAC system, was then investigated further by means of a series of technical experiments. A questionnaire investigating user perception of DfL and of COAC systems was conducted. Furthermore, case studies were undertaken in which three dwellings with real occupants received prototypes of a COAC system, consisting of automated washing appliances and a smart pricing scheme; as part of these case studies, semi-structured interviews were conducted. User preferences with regard to the COAC system's interface and operation were established, and behavioural changes induced by occupant control methods were observed. The studies also found that financial gain was the main incentive to save energy. The automation system's support in conserving energy was shown to be distinctly appreciated, and although security and privacy concerns were prevalent, DfL's support was also permitted. Furthermore, guidance was developed for DfL setup and operation, especially with regard to using an automation system's infrastructure for this purpose.
In conclusion, this research suggests that the novel concept of integrating DfL and COAC meets the technical and practical requirements for general adoption, and hence provides another tool in the race against global warming.
APA, Harvard, Vancouver, ISO, and other styles
7

Pirgul, Khalid, and Jonathan Svensson. "Verification of Powertrain Simulation Models Using Machine Learning Methods." Thesis, Linköpings universitet, Fordonssystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166290.

Full text
Abstract:
This thesis provides insight into the verification of a quasi-static simulation model based on estimation of fuel consumption using machine learning methods. Traditional verification against real test data is not always possible, so a methodology consisting of verification analysis based on estimation methods was developed, together with a process for improving a quasi-static simulation model. The modelling work mainly consists of designing and implementing a gear-selection strategy, together with the gearbox itself, for a dual-clutch transmission dedicated to hybrid applications. The purpose of the simulation model is to replicate the fuel-consumption behaviour observed in vehicle data from performed tests. To verify the simulation results, a so-called ranking model is developed: it estimates a fuel-consumption reference for each time step of the WLTC homologation drive cycle using multiple linear regression. The results of the simulation model are verified, and a scoring system indicates the performance of the simulation model based on the correlation between estimated and simulated fuel-consumption data. The results show that multiple linear regression can be an appropriate approach for verifying simulation models. The normalised cross-correlation power is also examined and turns out to be a useful measure of correlation between signals that include a lag. The developed ranking model is a fast first step in evaluating a new vehicle configuration concept.
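The ranking-model idea (fit a per-time-step fuel-consumption reference by multiple linear regression, then score the simulation by normalised cross-correlation) can be sketched as follows. The features and data here are synthetic placeholders, where the thesis uses WLTC vehicle data:

```python
import numpy as np

# Hypothetical per-time-step features (e.g. speed, acceleration, gear ratio)
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
true_beta = np.array([2.0, -1.0, 0.5])
fuel = X @ true_beta + 0.1 * rng.normal(size=500)   # "measured" fuel rate

# Fit the reference model by ordinary least squares (with intercept)
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, fuel, rcond=None)
estimate = A @ coef

def ncc(a, b):
    """Normalised cross-correlation; its peak (and the lag at which it
    occurs) scores agreement between estimated and simulated signals."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full")

score = ncc(estimate, fuel).max()
print(round(score, 3))
```

A peak near 1 at zero lag indicates close agreement; a peak at a nonzero lag would reveal a time shift between the signals.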
APA, Harvard, Vancouver, ISO, and other styles
8

Watson, Harry Alexander James. "Robust simulation and optimization methods for natural gas liquefaction processes." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115702.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 313-324).
Natural gas is one of the world's leading sources of fuel in terms of both global production and consumption. The abundance of reserves that may be developed at relatively low cost, paired with escalating societal and regulatory pressures to harness low-carbon fuels, situates natural gas in a position of growing importance in the global energy landscape. However, the nonuniform distribution of readily developable natural gas sources around the world necessitates an international gas market that can serve regions without reasonable access to reserves. International transmission of natural gas via pipeline is generally cost-prohibitive beyond around two thousand miles, so suppliers instead turn to the production of liquefied natural gas (LNG) to yield a tradable commodity. While the production of LNG is by no means a new technology, it has not occupied a dominant role in the gas trade to date. However, significant growth in LNG exports has been observed within the last few years, and this trend is expected to continue as major new liquefaction operations have become, and continue to become, operational worldwide. Liquefaction of natural gas is an energy-intensive process requiring specialized cryogenic equipment, and is therefore expensive in terms of both operating and capital costs. However, optimization of liquefaction processes is greatly complicated by the inherently complex thermodynamic behavior of process streams that simultaneously change phase and exchange heat at closely matched cryogenic temperatures. The optimal conditions determined for a given process will also generally not transfer between LNG plants, as both the specifics of the design (e.g. heat exchanger size and configuration) and the operation (e.g. source gas composition) may vary significantly between sites.
Rigorous evaluation of process concepts for new production facilities is also challenging to perform, as economic objectives must be optimized in the presence of constraints involving equipment size and safety precautions even in the initial design phase. The absence of reliable and versatile software to perform such tasks was the impetus for this thesis project. To address these challenging problems, the aim of this thesis was to develop new models, methods and algorithms for robust liquefaction process simulation and optimization, and to synthesize these advances into reliable and versatile software. Recent advances in the sensitivity analysis of nondifferentiable functions provided an advantageous foundation for the development of physically-informed yet compact process models that could be embedded in established simulation and optimization algorithms with strong convergence properties. Within this framework, a nonsmooth model for the core unit operation in all industrially-relevant liquefaction processes, the multi-stream heat exchanger, was first formulated. The initial multistream heat exchanger model was then augmented to detect and handle internal phase transitions, and an extension of a classic vapor-liquid equilibrium model was proposed to account for the potential existence of solutions in single-phase regimes, all through the use of additional nonsmooth equations. While these initial advances enabled the simulation of liquefaction processes under the conditions of simple, idealized thermodynamic models, it became apparent that these methods would be unable to handle calculations involving nonideal thermophysical property models reliably. To this end, robust nonsmooth extensions of the celebrated inside-out algorithms were developed. 
These algorithms allow for challenging phase equilibrium calculations to be performed successfully even in the absence of knowledge about the phase regime of the solution, as is the case when model parameters are chosen by a simulation or optimization algorithm. However, this still was not enough to equip realistic liquefaction process models with a completely reliable thermodynamics package, and so new nonsmooth algorithms were designed for the reasonable extrapolation of density from an equation of state under conditions where a given phase does not exist. This procedure greatly enhanced the ability of the nonsmooth inside-out algorithms to converge to physical solutions for mixtures at very high temperature and pressure. These models and submodels were then integrated into a flowsheeting framework to perform realistic simulations of natural gas liquefaction processes robustly, efficiently and with extremely high accuracy. A reliable optimization strategy using an interior-point method and the nonsmooth process models was then developed for complex problem formulations that rigorously minimize thermodynamic irreversibilities. This approach significantly outperforms other strategies proposed in the literature or implemented in commercial software in terms of the ease of initialization, convergence rate and quality of solutions found. The performance observed and results obtained suggest that modeling and optimizing such processes using nondifferentiable models and appropriate sensitivity analysis techniques is a promising new approach to these challenging problems. Indeed, while liquefaction processes motivated this thesis, the majority of the methods described herein are applicable in general to processes with complex thermodynamic or heat transfer considerations embedded. It is conceivable that these models and algorithms could therefore inform a new, robust generation of process simulation and optimization software.
by Harry Alexander James Watson.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
9

Baumgartner, Claus Ernst 1961. "Simulation methods for multiconductor transmission lines in electronic applications." Diss., The University of Arizona, 1992. http://hdl.handle.net/10150/284323.

Full text
Abstract:
Accurate and efficient simulation of lossy, multi-conductor transmission lines that are terminated by nonlinear circuits is necessary to design high-performance electronic circuits and packages. In this work, theoretical and practical considerations of lossy line simulation are presented. Using delay differential equations, the class of systems with "bidirectional delay" is introduced. These systems can be partitioned such that the resulting subsystems are only linked via delayed variables. It is stated in the "decoupling theorem" that the subsystems can be solved independently for a time interval, which is not longer than the shortest time delay. Circuits that contain transmission lines are shown to form systems with bidirectional delay and, consequently, can be decoupled. Using concepts derived from waveform relaxation, the decoupling is exploited to reduce the computational effort required for transmission line simulation. Moreover, an efficient method for the approximation of lossy line characteristics by rational transfer functions is presented. The method employs nonlinear minimization techniques and yields function coefficients suitable for time-domain modeling. Furthermore, the exponential wave propagation function is represented in the time domain, and discrete-time convolution is employed to calculate the transmission line response. Also described is a filtering method which considerably improves the stability of the simulation, while the deviation in the simulation results is smaller than the local truncation error. In addition, implementation of the lossy line simulator "UAFLICS" is outlined, and practical applications demonstrate the significance of coupling and loss effects.
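The discrete-time convolution step described above can be sketched for a single lossy line, using a toy propagation impulse response in place of a fitted rational transfer function. The delay, attenuation, and dispersion values below are purely illustrative:

```python
import numpy as np

dt = 1e-12                    # time step (s)
n = 400
t = np.arange(n) * dt
delay, atten = 50e-12, 0.8    # illustrative propagation delay and attenuation

# Toy propagation impulse response: an attenuated, slightly dispersed delayed pulse
tau = 5e-12
h = np.where(t >= delay, np.exp(-(t - delay) / tau), 0.0)
h *= atten / (h.sum() * dt)   # normalise so a unit step settles at `atten`

v_in = (t >= 10e-12).astype(float)       # unit step launched into the line
v_out = np.convolve(v_in, h)[:n] * dt    # discrete-time convolution

print(v_out[-1])              # approaches the attenuated step level
```

The output stays at zero until the launched edge has traversed the line delay, then rises toward the attenuated level, which is the qualitative behaviour the convolution approach reproduces for real fitted line characteristics.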
APA, Harvard, Vancouver, ISO, and other styles
10

Wei, Shuai. "Protein-Surface Interactions with Coarse-Grain Simulation Methods." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3943.

Full text
Abstract:
The interaction of proteins with surfaces is a major process involved in protein microarrays. Understanding protein-surface interactions is key to improving the performance of protein microarrays, but current understanding of the behavior of proteins on surfaces is lacking. Prevailing theories, which suggest that proteins should be stabilized when tethered to surfaces, do not explain the experimentally observed fact that proteins are often denatured on surfaces. This document outlines several studies done to develop a model capable of predicting the stabilization and destabilization of proteins tethered to surfaces. As the starting point of the research, it is shown that the stability of five mainly-alpha, orthogonal-bundle proteins tethered to surfaces can be correlated with the shape of the loop region where the tether is placed and with the rotational freedom of the part of the protein near the surface. To test whether the stability-prediction pattern derived for mainly-alpha, orthogonal-bundle proteins generalizes, the same analysis is performed for proteins from other structural motifs. Beyond these small two-state proteins, a further analysis of surface-induced changes in folding mechanism is carried out with the multi-state lysozyme 7LZM. The results show that tethering the protein to a surface changes the melting temperature of part of the protein, which leads to avoidance of the meta-stable state. Moreover, by tethering the lysozyme at a particular site, the protein can both keep a stable structure and maintain a good orientation, leaving active sites available to other proteins in bulk solution. All the work described above is done with a purely repulsive surface model, widely used as a rough representation of the solid surfaces in protein microarrays.
For a next-level understanding of protein-surface interactions, a novel coarse-grain surface model was developed, parameterized, and validated according to experimental results from different groups. A case study of interaction between lysozyme protein 7LZM and three types of surfaces with the novel model has been performed. The results showed that protein stabilities and structures are dependent on the types of surfaces and their different hydrophobicities. This result is consistent with previously published experimental work.
APA, Harvard, Vancouver, ISO, and other styles
11

Featherkile, B. Nadine 1937. "STRUCTURING AN ENGINEERING AND AN ECOLOGICAL SYSTEM BY Q-ANALYSIS (POLYHEDRAL DYNAMICS, DROSOPHILA, SONORAN DESERT)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/276346.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Péraud, Jean-Philippe M. (Jean-Philippe Michel). "Low variance methods for Monte Carlo simulation of phonon transport." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/69799.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 95-97).
Computational studies in kinetic transport are of great use in micro and nanotechnologies. In this work, we focus on Monte Carlo methods for phonon transport, intended for studies in microscale heat transfer. After reviewing the theory of phonons, we use scientific literature to write a Monte Carlo code solving the Boltzmann Transport Equation for phonons. As a first improvement to the particle method presented, we choose to use the Boltzmann Equation in terms of energy as a more convenient and accurate formulation to develop such a code. Then, we use the concept of control variates in order to introduce the notion of deviational particles. Noticing that a thermalized system at equilibrium is inherently a solution of the Boltzmann Transport Equation, we take advantage of this deterministic piece of information: we only simulate the deviation from a nearby equilibrium, which removes a great part of the statistical uncertainty. Doing so, the standard deviation of the result that we obtain is proportional to the deviation from equilibrium. In other words, we are able to simulate signals of arbitrarily low amplitude with no additional computational cost. After exploring two other variants based on the idea of control variates, we validate our code on a few theoretical results derived from the Boltzmann equation. Finally, we present a few applications of the methods.
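The deviational idea (simulate only the deviation from a known equilibrium, so that statistical error scales with the signal amplitude rather than the equilibrium background) can be illustrated with a one-dimensional toy integral. The integrand and amplitude below are illustrative, not a phonon model:

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 1e-4                        # amplitude of the deviation from equilibrium

def f_eq(x):
    return np.exp(-x)             # "equilibrium" part, known analytically

def g(x):
    return eps * np.sin(x)        # small deviational signal of interest

x = rng.uniform(0, np.pi, 100_000)
w = np.pi                         # domain length for uniform-sampling MC

# Standard MC: simulate the full signal; equilibrium noise dominates the error
standard = w * np.mean(f_eq(x) + g(x))

# Deviational MC (a control variate): the equilibrium part is exact, so only
# the deviation is sampled and the statistical error scales with eps
exact_eq = 1.0 - np.exp(-np.pi)
deviational = exact_eq + w * np.mean(g(x))

exact = exact_eq + eps * 2.0      # since the integral of eps*sin over [0, pi] is 2*eps
print(abs(standard - exact), abs(deviational - exact))
```

With the same samples, the deviational estimate is orders of magnitude more accurate, and its relative error does not grow as eps shrinks, which is exactly the property exploited for low-amplitude signals.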
by Jean-Philippe M. Péraud.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
13

Tan, Nicola. "Inventory management for perishable goods using simulation methods." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90752.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2014. In conjunction with the Leaders for Global Operations Program at MIT.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2014. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 66-67).
Amazon.com is the world's largest online retailer and continues to grow by expanding into new markets and new product lines that have not traditionally been sold online. These categories create new challenges for inventory and operations management. One example is perishable goods, which pose a unique inventory challenge because products may expire at unknown times while in stock, making them unavailable for customers to purchase. This thesis discusses a method for managing perishable-goods inventory by characterizing the key variables as empirical probability distributions and developing a computational model to determine the key inventory attribute: the reorder point. The model captures both demand and loss due to shrinkage as a function of the age of the product in inventory, and it yields a 25% improvement in simulated inventory levels with more accurate results than current methods. This improvement is shown to come from accounting for the known variability in lead time, as well as the survival rate of the product.
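The reorder-point computation described above can be sketched as a Monte Carlo search. The lead-time, demand, and shrinkage distributions below are invented placeholders, where the thesis fits empirical distributions from real data:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_stockouts(reorder_point, n_trials=2000):
    """Estimate the stockout probability during a replenishment cycle for a
    perishable item, drawing lead time, daily demand, and daily shrinkage
    from assumed (placeholder) distributions."""
    stockouts = 0
    for _ in range(n_trials):
        inventory = float(reorder_point)
        lead_time = rng.integers(2, 6)      # days until replenishment arrives
        for _ in range(lead_time):
            inventory *= 0.97               # 3% daily shrinkage/expiry
            inventory -= rng.poisson(8)     # stochastic daily demand
            if inventory <= 0:
                stockouts += 1
                break
    return stockouts / n_trials

# Smallest candidate reorder point with stockout probability below 5%
rop = next(r for r in range(10, 200, 5) if simulate_stockouts(r) < 0.05)
print(rop)
```

Sweeping candidate reorder points against a service-level target like this is what lets the model absorb lead-time variability and age-dependent shrinkage directly, rather than through closed-form safety-stock formulas.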
by Nicola Tan.
M.B.A.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
14

Liu, Weiyun. "INVESTIGATION OF FILTERING METHODS FOR LARGE-EDDY SIMULATION." UKnowledge, 2014. http://uknowledge.uky.edu/me_etds/46.

Full text
Abstract:
This thesis focuses on the phenomenon of aliasing and its mitigation with two explicit filters, the Shuman and Padé filters. The Shuman filter is applied to the velocity components of the Navier-Stokes equations. A derivation of this filter is presented as an approximation of a 1-D "pure math" mollifier and is extended to 2D and 3D. Analysis of the truncation error and wavenumber response is conducted over a range of grid spacings, Reynolds numbers, and values of the filter parameter β. Plots of the relationship between the optimal filter parameter β and grid spacing, L2-norm error, and Reynolds number are presented to suggest ways of predicting β. To guarantee that the optimal β is obtained under various stationary flow conditions, power spectral densities of the velocity components are constructed to unequivocally identify steady, periodic, and quasi-periodic behaviours for Reynolds numbers between 100 and 2000. Parameters in the Padé filter need not be changed. The two filters are applied to velocities for perturbed sine waves and a lid-driven cavity, and the comparison is based on execution time, error, and experimental results.
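The 1-D Shuman filter takes the commonly quoted three-point form shown below. A quick sketch of its effect on high-wavenumber (aliased) content, with β treated as the tunable parameter the thesis optimizes:

```python
import numpy as np

def shuman_filter(u, beta):
    """One pass of the 1-D Shuman filter: a three-point weighted average
    u_i <- (u_{i-1} + beta*u_i + u_{i+1}) / (beta + 2), endpoints untouched."""
    v = u.copy()
    v[1:-1] = (u[:-2] + beta * u[1:-1] + u[2:]) / (beta + 2.0)
    return v

x = np.linspace(0, 2 * np.pi, 128)
u = np.sin(x) + 0.3 * np.sin(40 * x)     # resolved wave + high-wavenumber noise
u_f = shuman_filter(u, beta=4.0)

# The filter damps the high-wavenumber content far more than the resolved signal
err_before = np.abs(u - np.sin(x)).max()
err_after = np.abs(u_f - np.sin(x)).max()
print(err_before, err_after)
```

The filter's wavenumber response, (β + 2cos(kΔx))/(β + 2), is near 1 for well-resolved modes and well below 1 near the grid cutoff, which is why tuning β against grid spacing and Reynolds number matters.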
APA, Harvard, Vancouver, ISO, and other styles
15

Preece, Adam. "An investigation into methods to aid the simulation of turbulent separation control." Thesis, University of Warwick, 2008. http://wrap.warwick.ac.uk/94093/.

Full text
Abstract:
The reduction of drag on commercial aircraft is an active field of study, especially with environmental pressures to reduce the carbon emissions associated with climate change. To this end, the AEROMEMS-II project was commissioned by the EU to investigate methods for reducing drag by using MEMS devices to control separation. One way to investigate flow control devices is to use Computational Fluid Dynamics (CFD) to simulate the flow interactions produced in flow control applications and assess their effect. Simulating such flows can be computationally expensive, so a number of methods have been investigated here to assess their usefulness in flow control simulation. The first is the Immersed Boundary Method (IBM), which allows complex geometries to be simulated using simple Cartesian-grid CFD codes; IBMs are found to reduce computational requirements whilst maintaining flow resolution and accuracy. Next is the use of turbulence modelling with wall functions to reduce the need for fine grids near solid surfaces; this method is found to work well and can allow the grid spacing near the wall to be 100 times coarser than with no wall functions applied. Finally, Detached Eddy Simulation (DES) has been considered as a method for allowing unsteady flow control structures to be simulated without being damped by conventional turbulence modelling. Each of these methods is presented, implemented, and validated against known flow cases to assess its abilities fully. All three methods have then been applied together to a known experimental turbulent flow-control set-up at the University of Lille (fellow partners in the AEROMEMS-II project) in order to assess the feasibility of using all of these methods together to simulate flow control. The three methods are seen to work well together, although not always with the same effect.
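The wall-function idea (replace a fine near-wall grid by imposing the logarithmic law of the wall at the first grid point) can be sketched as follows, using the standard constants κ ≈ 0.41 and B ≈ 5.2 rather than the thesis's specific implementation:

```python
import math

def friction_velocity(u_p, y_p, nu=1.5e-5, kappa=0.41, B=5.2):
    """Solve the log law u/u_tau = (1/kappa) * ln(y * u_tau / nu) + B for the
    friction velocity u_tau by fixed-point iteration, given the velocity u_p
    at the first grid point, a distance y_p from the wall."""
    u_tau = 0.05 * u_p                     # initial guess
    for _ in range(100):
        u_tau = u_p / (math.log(y_p * u_tau / nu) / kappa + B)
    return u_tau

u_tau = friction_velocity(u_p=10.0, y_p=0.005)
print(u_tau, u_tau * 0.005 / 1.5e-5)       # u_tau and the y+ of the first cell
```

The wall shear stress (and hence the boundary condition) follows from u_tau, so the first cell can sit far out in the log layer (y+ of order 100) instead of at y+ ≈ 1, which is the source of the grid-coarsening benefit mentioned above.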
APA, Harvard, Vancouver, ISO, and other styles
16

Esgandari, Mohammad. "Simulation methods for vehicle disc brake noise, vibration & harshness." Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/5762/.

Full text
Abstract:
After decades of investigating brake noise using advanced tools and methods, brake squeal remains a major problem for the automotive industry. The Finite Element Analysis (FEA) method has long been used as a means of reliable simulation of brake noise, mainly using Complex Eigenvalue Analysis (CEA) to predict the occurrence of instabilities resulting in brake noise. However, it has been shown that CEA often over-predicts instabilities. A major improvement to CEA proposed in this study is tuning the model with an accurate level of damping. Different sources of damping are investigated and the system components are tuned using the Rayleigh damping method. Also, an effective representative model for the brake insulator is proposed. The FEA model of the brake system, tuned with these damping characteristics, highlights the actual unstable frequencies by eliminating the over-predictions. This study also investigates the effectiveness of a hybrid implicit-explicit FEA method which combines frequency-domain and time-domain solution schemes. The time/frequency-domain co-simulation produces time-domain results more efficiently. Frictional forces are known to be a major contributing factor in brake noise generation. A new brake pad design is proposed, addressing the frictional forces at the disc-pad contact interface. This concept is based on the hypothesis that varying the friction coefficient over the radius of the brake pad is effective in reducing susceptibility to brake squeal.
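The Rayleigh damping mentioned above takes the damping matrix as C = alpha*M + beta*K, so the two coefficients can be fitted to hit target modal damping ratios at two chosen frequencies. A minimal sketch; the frequencies and damping ratios are hypothetical placeholders, not values from the thesis:

```python
import math

def rayleigh_coefficients(w1, w2, zeta1, zeta2):
    """Fit C = alpha*M + beta*K so that the modal damping ratio
    zeta(w) = alpha/(2*w) + beta*w/2 matches the targets at the two
    angular frequencies w1 and w2 (closed-form 2x2 solve)."""
    beta = 2.0 * (zeta2 * w2 - zeta1 * w1) / (w2 ** 2 - w1 ** 2)
    alpha = 2.0 * zeta1 * w1 - beta * w1 ** 2
    return alpha, beta

# Hypothetical targets: 2% damping at 100 Hz and 1% at 2 kHz
w1, w2 = 2.0 * math.pi * 100.0, 2.0 * math.pi * 2000.0
alpha, beta = rayleigh_coefficients(w1, w2, 0.02, 0.01)
```

Between the two anchor frequencies the effective damping of this model dips below the targets, which illustrates why, as the abstract notes, an accurate damping level must be tuned to avoid over- or under-predicting instabilities.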
APA, Harvard, Vancouver, ISO, and other styles
17

Zuev, Konstantin. "Advanced stochastic simulation methods for solving high-dimensional reliability problems /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CIVL%202009%20ZUEV.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Mead, Alex Robert. "Hardware-in-the-Loop Modeling and Simulation Methods for Daylight Systems in Buildings." Thesis, University of California, Berkeley, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10283149.

Full text
Abstract:

This dissertation introduces hardware-in-the-loop modeling and simulation techniques to the daylighting community, with specific application to complex fenestration systems. To the author's knowledge, no prior application of this class of techniques, which optimally combines mathematical modeling and physical-model experimentation, appears in the daylighting literature.

Daylighting systems in buildings have a large impact on both the energy usage of a building and the occupant experience within a space. As such, renewed interest has been placed on designing and constructing buildings with an emphasis on daylighting in recent times as part of the "green movement."

Within daylighting systems, a specific subclass of building envelope is receiving much attention: complex fenestration systems (CFSs). CFSs are unique compared to regular fenestration systems (e.g. glazing) in that they allow for non-specular transmission of daylight into a space. This non-specular nature can be leveraged by designers to "optimize" the times of the day and the days of the year that daylight enters a space. Examples of CFSs include Venetian blinds, woven fabric shades, and prismatic window coatings. In order to leverage the non-specular transmission properties of CFSs, however, engineering analysis techniques capable of faithfully representing the physics of these systems are needed.

Traditionally, the analysis techniques available to the daylighting community fall broadly into three classes: simplified techniques, mathematical modeling and simulation, and physical modeling and experimentation. Simplified techniques use "rule-of-thumb" heuristics to provide insights for simple daylighting systems. Mathematical modeling and simulation uses complex numerical models to provide more detailed insight into system performance. Finally, physical models can be instrumented and excited using artificial and natural light sources to provide performance insight into a daylighting system. Broadly speaking, however, each class of techniques has advantages and disadvantages with respect to the cost of execution (e.g. money, time, expertise) and the fidelity of the insight it provides into the performance of the daylighting system. This varying tradeoff of cost and insight determines which techniques are employed for which projects.

Daylighting systems with CFS components, however, defy high-fidelity analysis with these traditional technique classes. Simplified techniques are clearly not applicable. Mathematical models must be very complex to capture non-specular transmission accurately, which greatly limits their applicability. This leaves physical modeling, the most costly class, as the preferred method for CFSs. While mathematical modeling and simulation methods do exist, they are in general costly and still only approximations of the underlying CFS behavior; in practice, measurement is currently the only practical way to capture the behavior of CFSs. Traditional measurements of CFS transmission and reflection properties are conducted using an instrument called a goniophotometer and produce a measurement in the form of a Bidirectional Scatter Distribution Function (BSDF) on the Klems basis. This measurement must be executed for each possible state of the CFS; hence only a subset of the possible behaviors can be captured for CFSs with continuously varying configurations. In the current era of rapid prototyping (e.g. 3D printing) and automated control of buildings, including daylighting systems, a new analysis technique is needed that can faithfully represent the CFSs now being designed and constructed at an increasing rate.

Hardware-in-the-loop modeling and simulation is a perfect fit to the current need of analyzing daylighting systems with CFSs. In the proposed hardware-in-the-loop modeling and simulation approach of this dissertation, physical-models of real CFSs are excited using either natural or artificial light. The exiting luminance distribution from these CFSs is measured and used as inputs to a Radiance mathematical-model of the interior of the space, which is proposed to be lit by the CFS containing daylighting system. Hence, the components of the total daylighting and building system which are not mathematically-modeled well, the CFS, are physically excited and measured, while the components which are modeled properly, namely the interior building space, are mathematically-modeled. In order to excite and measure CFSs behavior, a novel parallel goniophotometer, referred to as the CUBE 2.0, is developed in this dissertation. The CUBE 2.0 measures the input illuminance distribution and the output luminance distribution with respect to a CFS under test. Further, the process is fully automated allowing for deployable experiments on proposed building sites, as well as in laboratory based experiments.

In this dissertation, three CFSs, two commercially available and one novel (Twitchell's Textilene 80 Black, Twitchell's Shade View Ebony, and Translucent Concrete Panels (TCP)), are simulated on the CUBE 2.0 system for daylong deployments at one-minute time steps. These CFSs are assumed to be placed in the glazing space of the Reference Office Radiance model, for which horizontal illuminance on a work plane at 0.8 m height is calculated at each time step. While Shade View Ebony and TCP are unmeasured CFSs with respect to BSDF, Textilene 80 Black has been previously measured. As such, a validation of the CUBE 2.0 against the goniophotometer-measured BSDF is presented, with measurement errors in horizontal illuminance between +3% and -10%. These error levels are considered valid within experimental daylighting investigations. Non-validated results are also presented in full for both Shade View Ebony and TCP.

Concluding remarks and future directions for HWiL simulation close the dissertation.

APA, Harvard, Vancouver, ISO, and other styles
19

Szady, Michael Joseph. "Finite element methods for the time dependent simulation of viscoelastic fluid flows." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10914.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Zisman, Simon. "Simulation of contaminant transport in groundwater systems using Eulerian-Lagrangian localized adjoint methods." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/14233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Elouafiq, Ismail. "Implementation and Simulation Study of Methods for the Evolution of Interdependent Networks." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-337673.

Full text
Abstract:
The purpose of this work is to study different methods for the evolution of interdependent networks. Different systems, such as social networks, protein interactions or transportation systems, can be represented using multilayer networks, which provide a simple and unified means of expression. In this work, we start by selecting a representation of multilayer networks and outlining the pre-existing methods to simulate their evolution. We then propose a model of multilayer network formation that considers a target measure for the network to be generated and focuses on the case of finite multiplex networks. Thus, before defining the model, we propose different measures and properties that should enable us to differentiate between a generated network and its real-life counterpart. We simulate network formation following this method and show in which cases we get closer to our target measures.
APA, Harvard, Vancouver, ISO, and other styles
22

Zechman, Emily Michelle. "Improving Predictability of Simulation Models using Evolutionary Computation-Based Methods for Model Error Correction." NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-08082005-105133/.

Full text
Abstract:
Simulation models are important tools for managing water resources systems. An optimization method coupled with a simulation model can be used to identify effective decisions to efficiently manage a system. The value of a model in decision-making is degraded when that model is not able to accurately predict system response for new management decisions. Typically, calibration is used to improve the predictability of models to match more closely the system observations. Calibration is limited as it can only correct parameter error in a model. Models may also contain structural errors that arise from mis-specification of model equations. This research develops and presents a new model error correction procedure (MECP) to improve the predictive capabilities of a simulation model. MECP is able to simultaneously correct parameter error and structural error through the identification of suitable parameter values and a function to correct misspecifications in model equations. An evolutionary computation (EC)-based implementation of MECP builds upon and extends existing evolutionary algorithms to simultaneously conduct numeric and symbolic searches for the parameter values and the function, respectively. Non-uniqueness is an inherent issue in such system identification problems. One approach for addressing non-uniqueness is through the generation of a set of alternative solutions. EC-based techniques to generate alternative solutions for numeric and symbolic search problems are not readily available. New EC-based methods to generate alternatives for numeric and symbolic search problems are developed and investigated in this research. The alternatives generation procedures are then coupled with the model error correction procedure to improve the predictive capability of simulation models and to address the non-uniqueness issue. The methods developed in this research are tested and demonstrated for an array of illustrative applications.
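The numeric-search component of an EC-based procedure like the one described can be pictured with a minimal elitist evolutionary search for a single model parameter. This is a generic sketch only, not the dissertation's actual MECP algorithm (which additionally runs a symbolic search for the correction function); all names and values are hypothetical:

```python
import random

def evolve_parameter(loss, bounds, pop_size=20, gens=60, seed=1):
    """Minimal elitist (mu+lambda) evolutionary search for one parameter.

    Generic illustration only; the dissertation's MECP couples numeric and
    symbolic searches and is considerably more involved.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    sigma = 0.05 * (hi - lo)  # fixed mutation step, a common simple default
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        # Mutate every survivor, clamp to bounds, keep the best overall
        children = [min(hi, max(lo, p + rng.gauss(0.0, sigma))) for p in pop]
        pop = sorted(pop + children, key=loss)[:pop_size]
    return pop[0]

# Toy calibration: recover slope k in y = k*x from noise-free observations
obs = [(x, 2.5 * x) for x in range(1, 6)]
loss = lambda k: sum((k * x - y) ** 2 for x, y in obs)
k_best = evolve_parameter(loss, (0.0, 10.0))
```

Generating the alternative solutions discussed in the abstract would additionally require penalizing candidates that sit too close to previously found optima, which this sketch omits.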
APA, Harvard, Vancouver, ISO, and other styles
23

Skowronn, Dietmar Reinhard. "Simulation of Switched Linear Networks." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4644.

Full text
Abstract:
This thesis deals with the time-domain analysis of switched linear networks and investigates inherent problems which must be considered when analyzing this class of networks. Computer simulation requires the use of numerical methods, and we focus on the transmission-line modelling (TLM) technique and the numerical inverse Laplace transform. A general approach based on the one-graph modified nodal description is given which allows the circuit equations of a TLM-modelled circuit to be formulated by inspection. The numerical equivalence of TLM and the trapezoidal rule has been found, and a proof is given. A variable-step-size simulator has been developed based on the 4th-order numerical inverse Laplace transform. The properties of this method are reviewed and its limitations are discussed. Simulation results are given to illustrate the capabilities of the simulator.
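The TLM/trapezoidal-rule equivalence noted in the abstract means a TLM stub model of a linear network advances in time exactly as the implicit trapezoidal update would. As an illustration only, with hypothetical component values not taken from the thesis, the trapezoidal update for a discharging RC branch has a closed form and tracks the exact exponential closely:

```python
import math

def trapezoidal_rc_decay(v0, r, c, h, steps):
    """March v' = -v/(R*C) with the trapezoidal rule; for this linear ODE the
    implicit step reduces to v_{n+1} = v_n * (1 - a) / (1 + a), a = h/(2*R*C)."""
    a = h / (2.0 * r * c)
    v = v0
    for _ in range(steps):
        v = v * (1.0 - a) / (1.0 + a)
    return v

# Hypothetical branch: 1 kOhm, 1 uF (tau = 1 ms), 10 us steps over 1 ms
v_end = trapezoidal_rc_decay(5.0, 1.0e3, 1.0e-6, 1.0e-5, 100)
v_exact = 5.0 * math.exp(-1.0e-3 / (1.0e3 * 1.0e-6))
```

The update is A-stable (the amplification factor has magnitude below one for any positive step), which is one reason the trapezoidal rule, and hence TLM, suits stiff switched networks.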
APA, Harvard, Vancouver, ISO, and other styles
24

Glover, Peter Benedict Myers. "Computer simulation and analysis methods in the development of the hydraulic ram pump." Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/66359/.

Full text
Abstract:
The purpose of this study was primarily to promote the wider deployment of the hydraulic ram pump, and secondarily to provide the technical input into a programme aimed at using hydraulic ram pump technologies for third-world development. Hitherto, hydraulic ram pump technologies have been restricted by poor understanding of operational parameters, poor performance prediction, and poor design of pumps and installations. In pursuit of greater understanding, the work utilised a computer simulation developed by the author as part of a previous research programme. This simulation was then greatly enhanced to provide improved accuracy and functionality. The enhanced simulation was used to provide significant insight into the operation of a hydraulic ram pump and subsequently to identify design improvements for the pump. The simulation was used to investigate operational restrictions on the hydraulic ram, and was ultimately used to develop a model of hydraulic ram pump operation. The model of operation developed through the simulation was computerised and used to predict the performance of hydraulic ram pump installations. This computerised model was then used to produce the most comprehensive design charts yet created for the hydraulic ram pump, and was also used in the investigation of operational limits for the device. The study represents the development of the first detailed simulation of the hydraulic ram pump and the most significant insight to date into the detail of its operation. The result of the study is the provision of an accurate method of pump calibration, an accurate method of pump performance prediction, and the first comprehensive design charts to be produced for the hydraulic ram pump.
APA, Harvard, Vancouver, ISO, and other styles
25

Surleraux, Anthony. "Numerical simulation and optimization of micro-EDM using geometrical methods and machine learning." Thesis, Cardiff University, 2015. http://orca.cf.ac.uk/80776/.

Full text
Abstract:
As the need for smaller, more compact and integrated products has evolved, it is no surprise that manufacturing technologies have evolved significantly in order to make miniaturisation to smaller scales possible. More specifically, non-conventional machining technologies, relative newcomers in the field of machining, have proven well suited to the task at hand. Among these technologies is micro-EDM (EDM being short for Electrical Discharge Machining), which has been the subject of numerous developments. A number of variants of micro-EDM exist, among which are wire micro-EDM, die-sinking micro-EDM, micro-EDM milling and micro-EDM drilling. While die-sinking macro-EDM is quite common, its micro counterpart is not, due to problematic tool wear. In order to optimise the die-sinking micro-EDM process in terms of time and cost and make its use more attractive and viable, the present work aims at optimising the initial tool shape so that it compensates for future wear. The first step was to design a simulation tool able to predict the location and magnitude of wear during the simulation process. An iterative geometrical method was developed, first using NURBS as support geometries and then voxels embedded in an octree data structure in order to improve speed and accuracy.
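The voxel representation mentioned at the end of the abstract can be pictured with a flat, hash-based sparse voxel grid tracking removed material; the thesis embeds voxels in an octree for better speed and accuracy, so this is a deliberately simplified sketch with hypothetical class and method names:

```python
class SparseVoxelGrid:
    """Flat sparse voxel grid for tracking removed material.

    Simplified illustration only; the thesis uses voxels embedded in an
    octree, which scales better for large, mostly-empty domains.
    """

    def __init__(self, voxel_size):
        self.h = voxel_size
        self.removed = set()  # keys of voxels already eroded away

    def _key(self, x, y, z):
        # Floor-divide coordinates into integer voxel indices
        return (int(x // self.h), int(y // self.h), int(z // self.h))

    def remove_material(self, x, y, z):
        self.removed.add(self._key(x, y, z))

    def is_removed(self, x, y, z):
        return self._key(x, y, z) in self.removed

    def removed_volume(self):
        return len(self.removed) * self.h ** 3

# Hypothetical usage: mark two discharge-erosion events on a 0.5-unit grid
grid = SparseVoxelGrid(0.5)
grid.remove_material(0.1, 0.1, 0.1)
grid.remove_material(0.6, 0.0, 0.0)
```

A hash set gives constant-time lookup per voxel; the octree of the thesis trades that simplicity for hierarchical queries over large regions.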
APA, Harvard, Vancouver, ISO, and other styles
26

Carlsson, Magnus. "Methods for Early Model Validation : Applied on Simulation Models of Aircraft Vehicle Systems." Licentiate thesis, Linköpings universitet, Maskinkonstruktion, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-91277.

Full text
Abstract:
Simulation models of physical systems, with or without control software, are widely used in the aeronautic industry in applications ranging from system development to verification and end-user training. With the main drivers of reducing the cost of physical testing and, in general, enhancing the ability to take early model-based design decisions, there is an ongoing trend of further increasing the portion of modeling and simulation. The work presented in this thesis is focused on development of methodology for model validation, which is a key enabler for successfully reducing the amount of physical testing without compromising safety. Reducing the amount of physical testing is especially interesting in the aeronautic industry, where each physical test commonly represents a significant cost. Besides the cost aspect, it may also be difficult or hazardous to carry out physical testing. Specific to the aeronautic industry are also the relatively long development cycles, implying long periods of uncertainty during product development. In both industry and academia a common viewpoint is that verification, validation, and uncertainty quantification of simulation models are critical activities for a successful deployment of model-based systems engineering. However, quantification of simulation result uncertainty commonly requires a large amount of certain information, and for industrial applications available methods often seem too detailed or tedious to even try. In total, this constitutes more than sufficient reason to invest in research on methodology for model validation, with special focus on simplified methods for use in early development phases when system measurement data are scarce. Results from the work include a method supporting early model validation. When sufficient system-level measurement data for validation purposes are unavailable, this method provides a means to use knowledge of component-level uncertainty for assessment of model top-level uncertainty. Also, the common situation of lacking data for characterization of parameter uncertainties is to some degree mitigated. A novel concept has been developed for integrating uncertainty information obtained from component-level validation directly into components, enabling assessment of model-level uncertainty. In this way, the level of abstraction is raised from uncertainty of component input parameters to uncertainty of component output characteristics. The method is integrated in a Modelica component library for modeling and simulation of aircraft vehicle systems, and is evaluated in both deterministic and probabilistic frameworks using an industrial application example. Results also include an industrially applicable process for model development, validation, and export, and the concepts of virtual testing and virtual certification are discussed.
APA, Harvard, Vancouver, ISO, and other styles
27

J, Labossière-Hickman Travis. "Modeling and simulation of The Transient Reactor Test Facility using modern neutron transport methods." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123360.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: S.M., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 111-113).
The Transient Reactor Test Facility (TREAT) has regained the interest of the nuclear engineering community in recent years. While TREAT's design makes it uniquely suited to transient fuel testing, it also makes the reactor very challenging to model and simulate. In this thesis, we build a Monte Carlo model of TREAT's Minimum Critical Mass core to examine the effects of fuel impurities, calculate a reference solution, and analyze a number of multigroup cross section generation approaches. Several method of characteristics (MOC) simulations employing these cross sections are then converged in space and angle, corrected for homogenization, and compared to the Monte Carlo reference solution. The thesis concludes with recommendations for future analysis of TREAT using MOC.
by Travis J. Labossière-Hickman.
S.M.
S.M. Massachusetts Institute of Technology, Department of Nuclear Science and Engineering
APA, Harvard, Vancouver, ISO, and other styles
28

Cheung, Sai Hung. "Novel simulation methods for calculating the reliability of structural dynamical systems subjected to stochastic loads /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?CIVL%202003%20CHEUNGS.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 113-116). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
29

Durning, John Patrick. "Modeling of acoustic phenomena in computer generated forces." Honors in the Major Thesis, University of Central Florida, 2002. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/271.

Full text
Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf.edu/Systems/DigitalInitiatives/DigitalCollections/InternetDistributionConsentAgreementForm.pdf You may also contact the project coordinator, Kerri Bottorff, at kerri.bottorff@ucf.edu for more information.
Bachelors
Engineering
Science
APA, Harvard, Vancouver, ISO, and other styles
30

Boles, John Arthur. "Hybrid Large-Eddy Simulation/Reynolds-Averaged Navier-Stokes Methods and Predictions for Various High-Speed Flows." NCSU, 2009. http://www.lib.ncsu.edu/theses/available/etd-08122009-170842/.

Full text
Abstract:
Hybrid Large-Eddy Simulation/Reynolds-Averaged Navier-Stokes (LES/RANS) simulations of several high-speed flows are presented in this work. The solver blends a Menter BSL two-equation model for the RANS part of the closure with a Smagorinsky sub-grid model for the LES component. The solver uses a flow-dependent blending function based on wall distance and a modeled form of the Taylor micro-scale to transition from RANS to LES. Turbulent fluctuations are initiated and sustained in the inflow region using a recycling/rescaling technique. A new multi-wall recycling/rescaling technique is described and tested. A spanwise-shifting method is introduced that is intended to alleviate unphysical streamwise streaks of high- and low-momentum fluid that appear in the time-averaged solution due to the recycling procedure. Simulations of sonic injection of air, helium and ethylene into a Mach 2 cross-flow of air are performed. Also, simulations of Mach 5 flow in a subscale inlet/isolator configuration with and without back-pressuring are performed. Finally, a Mach 3.9 flow through a square duct is used as an initial test case for the new multi-wall recycling/rescaling method as well as a multi-wall shifting procedure. A discussion of the methods, implementation and results of these simulations is included.
APA, Harvard, Vancouver, ISO, and other styles
31

Compere, Marc Damon. "Simulation of engineering systems described by high-index DAE and discontinuous ODE using single step methods." Thesis, Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3025206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Senneberg, Sofia. "Methods for validating a flight mechanical simulation model for dynamic maneuvering." Thesis, KTH, Flygdynamik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299412.

Full text
Abstract:
Flight mechanical simulators play an important role in the design steps during development of a new aircraft. Being able to simulate and evaluate flight mechanical characteristics during development makes it possible to minimize development time and cost while keeping flight safety high during early flights. The aim of the project presented in this thesis is to develop a method for validating a flight mechanical simulator against flight test data from dynamic maneuvering. An important part of this thesis concerns how deviations in the result data can be found and analyzed, for example deviations between aircraft individuals or store configurations. The work presented here results in a good model for comparing large amounts of data in which it is easy to trace back where a deviation occurs.
APA, Harvard, Vancouver, ISO, and other styles
33

Güldogus, Melih. "Proof of Concept of Closed Loop Re-Simulation (CLR) Methods in Verification of Autonomous Vehicles." Thesis, KTH, Reglerteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-223978.

Full text
Abstract:
This degree project, conducted at Volvo Cars, investigates whether closed-loop re-simulation (CLR) methods can provide a safety proof for autonomous driving (AD) functions based on previously collected driving data. The elements under study for this closed-loop approach are the model-in-the-loop based Simulation Platform Active Safety (SPAS) environment and the Active Safety (AS) software. The prerequisites for securing the closed-loop re-simulation environment are performing open-loop simulations with the AS software under test and preparing a validated vehicle model constituting the sensors and actuators. Validating the vehicle model against a set of physical data ensures high confidence in the CAE environment. This results in high correlation between physical and simulated data for the closed-loop tests performed for testing the Active Safety algorithms. This thesis work focuses on preparing the vehicle model in SPAS, with emphasis on the performance of the auto-brake functionality in CLR. The vehicle model in SPAS was prepared by tuning the brake model, focusing on the EuNCAP cases against which the CLR environment was subsequently tested. In the procedure of securing CLR methods, it was crucial to design the scenarios in the virtual test environment to be as close as possible to field test conditions, to allow a reliable comparison with reality. Therefore, the verification of the CLR environment was carried out by subjecting the CAE environment to EuNCAP braking scenarios with dry surfaces, host vehicle velocities up to 80 km/h, and target vehicle deceleration levels of 2 m/s² and 6 m/s². As a result of all these virtual tests, it was empirically verified that the CLR environment can be used to predict the braking behaviour of the vehicle in certain traffic scenarios for the verification of autonomous driving functions.
APA, Harvard, Vancouver, ISO, and other styles
34

Josefsson, Andreas. "Identification and Simulation Methods for Nonlinear Mechanical Systems Subjected to Stochastic Excitation." Doctoral thesis, Karlskrona : Blekinge Institute of Technology, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00507.

Full text
Abstract:
With an ongoing desire to improve product performance, in combination with the continuously growing complexity of engineering structures, there is a need for well-tested and reliable engineering tools that can aid the decision making and facilitate an efficient and effective product development. The technical assessment of the dynamic characteristics of mechanical systems often relies on linear analysis techniques which are well developed and generally accepted. However, sometimes the errors due to linearization are too large to be acceptable, making it necessary to take nonlinear effects into account. Many existing analysis techniques for nonlinear mechanical systems build on the assumption that the input excitation of the system is periodic and deterministic. This often results in highly inefficient analysis procedures when nonlinear mechanical systems are studied in a non-deterministic environment where the excitation of the system is stochastic. The aim of this thesis is to develop and validate new efficient analysis methods for the theoretical and experimental study of nonlinear mechanical systems under stochastic excitation, with emphasis on two specific problem areas: forced response simulation and system identification from measurement data. A fundamental concept in the presented methodology is to model the nonlinearities as external forces acting on an underlying linear system, thereby making it possible to use much of the linear theory for simulation and identification. The developed simulation methods utilize a digital filter to achieve a stable and condensed representation of the linear subparts of the system, which is then solved recursively at each time step together with the counteracting nonlinear forces.
The result is computationally efficient simulation routines, which are particularly suitable for performance predictions when the input excitation consists of long segments of discrete data representing a realization of the stochastic excitation of the system. Similarly, the presented identification methods take advantage of linear Multiple-Input-Multiple-Output theories for random data by using the measured responses to create artificial inputs which can separate the linear system from the nonlinear parameters. The developed methods have been tested with extensive numerical simulations and with experimental test rigs, with promising results. Furthermore, an industrial case study of a wave energy converter with nonlinear characteristics has been carried out, and an analysis procedure capable of evaluating the performance of the system in non-deterministic ocean waves is presented.
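The recursive simulation scheme summarized above, where the linear subsystem is advanced step by step and the nonlinearity enters as a counteracting external force, can be sketched for a single-degree-of-freedom oscillator. The parameters, the cubic stiffness, and the explicit central-difference recursion below are illustrative assumptions, not the digital-filter formulation used in the thesis.

```python
import math
import random

# Hypothetical SDOF system: m*x'' + c*x' + k*x + k3*x**3 = f(t).
# The cubic term is treated as an external force acting on the underlying
# linear system, which is advanced with an explicit central-difference
# recursion at each time step (a stand-in for the thesis's digital filter).
m, c, k, k3 = 1.0, 0.4, 100.0, 5.0e4
dt = 1.0e-3
n_steps = 20000

random.seed(1)
force = [random.gauss(0.0, 1.0) for _ in range(n_steps)]  # stochastic excitation

a = m / dt**2 + c / (2 * dt)          # coefficient of x[n+1]
b = 2 * m / dt**2 - k                 # coefficient of x[n]
d = c / (2 * dt) - m / dt**2          # coefficient of x[n-1]

x_prev, x_curr = 0.0, 0.0
response = []
for n in range(n_steps):
    f_nl = -k3 * x_curr**3            # counteracting nonlinear force
    x_next = (force[n] + f_nl + b * x_curr + d * x_prev) / a
    response.append(x_next)
    x_prev, x_curr = x_curr, x_next

rms = math.sqrt(sum(v * v for v in response) / len(response))
```

Because the linear recursion coefficients are fixed, each step costs only a few multiply-adds, which is what makes long records of discrete stochastic input data tractable.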
APA, Harvard, Vancouver, ISO, and other styles
35

Adams, Ryan (s200866s@student.rmit.edu.au). "Evaluation of computerised methods of design optimisation and its application to engineering practice." RMIT University. Aerospace, Mechanical and Manufacturing Engineering, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20070130.122013.

Full text
Abstract:
The ongoing drive for lighter and more efficient structural components by the commercial engineering industry has resulted in the rapid adoption of the finite element (FE) method for design analysis. Satisfied with the success of finite elements in reducing prototyping costs and overall production times, the industry has begun to look at other areas where the finite element method can save time and, in particular, improve designs. First, the mathematical methods of optimisation, on which the methods of structural design improvement are based, are presented. This includes the methods of topology, influence functions, basis vectors, geometric splines and direct sensitivity. Each method is demonstrated with the solution of a sample structural improvement problem for various objectives (frequency, stress and weight reduction, for example). The practical application of the individual methods has been tested by solving three structural engineering problems sourced from the automotive engineering industry: the redesign of two different front suspension control arms, and the cost-reduction of an automatic brake tubing system. All three problems were solved successfully, resulting in improved designs. Each method has been evaluated with respect to its practical application, the popularity of the method and also any problems encountered using it. The solutions presented in each section were all produced using the FE design improvement software ReSHAPE from Advea Engineering Pty. Ltd.
APA, Harvard, Vancouver, ISO, and other styles
36

Bernshteyn, Mikhail. "Simulation optimization methods that combine multiple comparisons and genetic algorithms with applications in design for computer and supersaturated experiments /." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu1486397841221374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Louw, Nicolaas Hendrik. "Real time full circuit driving simulation system." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50077.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: The requirements regarding the quality of engines and vehicles have increased constantly, requiring more and more sophisticated engine testing. At the same time, there is a strong demand to reduce lead time and cost of development. For many years steady state engine testing was the norm, using standard principles of power absorption. Since the mid 1980s increasing importance has been attached to the optimisation of transient engine characteristics and the simulation of dynamic real world driving situations on engine test stands. This has led to the use of bi-directional DC or AC regenerative dynamometers, a practice now known as dynamic engine testing. Interfacing a computer with vehicle simulation software to an engine on a dynamic test stand and using "hardware in the loop" techniques enables the simulation of real world driving situations in a test facility. In dynamic engine testing a distinction can be made between simulation testing and transient testing. In simulation testing the set point values are predetermined, whereas in transient testing a model generates set point values in real time: speeds and loads are calculated in real time on the basis of real time measurements. The model can be in the form of a human or a driver simulation. This project involved the application of dynamic engine testing to simulating a racing application. It is termed the Real Time Full Circuit Driving Simulation System due to the simulation of a race car circling a race track, controlled by a driver model and running the engine on a dynamic test bench in real time using "hardware in the loop" techniques. By measuring the simulated lap times for a certain engine configuration on the test bench in real time, it is possible to select the optimal engine set-up for every circuit.
The real time nature of the simulation subjects the engine on the test bench to similar load and speed conditions as experienced by its racing counterpart in the race car, yielding relevant results. The racing simulation was achieved by finding a suitable dynamic vehicle model and a three dimensional race track model, developing a control strategy, programming the software and testing the complete system on a dynamic test stand. In order to verify the simulation results it was necessary to conduct actual track testing on a representative vehicle. A professional racing driver completed three flying laps of the Killarney racing circuit in a vehicle fitted with various sensors including three axis orientation and acceleration sensors, a GPS and an engine control unit emulator for capturing engine data. This included lap time, vehicle accelerations, engine speed and manifold pressure, an indicator of driver input. The results obtained from the real time circuit simulation were compared to actual track data and showed good correlation. By changing the physical engine configuration in the hardware and gear ratios in the software, the comparative capabilities of the system were evaluated. Again satisfactory results were obtained, with the system clearly showing which configuration was best suited for a certain race track. This satisfies the modern trend of minimizing costs and development time and proved the value of the system as a suitable engineering tool for racing engine and drive train optimisation. The Real Time Full Circuit Driving Simulation System opened the door to further development in other areas of simulation. One such area is the driveability of a vehicle. By expanding the model it would be possible to evaluate previously subjective characteristics of a vehicle in a more objective manner.
AFRIKAANS ABSTRACT: The requirements to improve the quality of engines and vehicles are becoming ever higher, so more sophisticated engine tests are required. At the same time, it is a great challenge to keep development time and cost as low as possible. Steady state engine tests, which work on the principle of power absorption, were the norm for many years. From the mid-1980s, the optimisation of dynamic engine characteristics and the simulation of real-world driving situations on engine test benches became increasingly important. The result was the use of bi-directional AC or DC dynamometers, known today as dynamic engine testing. By coupling a computer with simulation software to an engine on a dynamic test bench, it becomes possible to simulate any real-world driving situation of a vehicle in the engine test facility. Dynamic engine tests can be divided into simulation tests and transient tests. In the latter, a "driver model" generates the set-point values in real time based on real-time measurements, while in simulation tests the set-point values are determined in advance. The "driver" can be a person or a computer simulation. The project involves the application of dynamic engine testing for race track simulation and is known as a Real Time Full Circuit Driving Simulation System, owing to the simulation of a race car around a race track under the control of a driver model. This takes place while the engine runs in real time on a dynamic engine test bench coupled to the simulation. By analysing the real-time simulated lap times, it becomes possible to optimise the engine configuration for a given race track. This was achieved by choosing a suitable dynamic vehicle model and a three-dimensional race track model, developing a control model, programming the software and integrating the dynamic engine test system. The simulation results obtained were substantiated by actual race track tests.
A professional racing driver completed three laps of the Killarney race track in a representative vehicle equipped with various sensors, including three-axis acceleration and orientation sensors, a GPS and an engine control unit emulator for acquiring and storing engine data. The sensors collected data including lap time, vehicle accelerations, engine speed and intake manifold pressure. The correlation between the simulated values and the actually measured data was of high quality. By changing the physical engine configuration in the hardware and the gear ratios in the software, the comparative capabilities of the race track simulation were evaluated. The results were again satisfactory and the simulation was able to identify the best engine configuration for the race track. This satisfies the modern trend of keeping cost and development time as low as possible, and thereby the value of the system in the engineering world was proven. A Real Time Full Circuit Driving Simulation System creates the opportunity for further development in several areas of simulation. One such field is the drivability of a vehicle. By developing the model further, it becomes possible to analyse previously subjective characteristics of a vehicle more scientifically.
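The configuration-comparison idea, ranking engine set-ups by their simulated lap time, can be illustrated with a toy point-mass model. The track layout, vehicle parameters and the instantaneous-braking simplification below are invented for the sketch and are far cruder than the validated vehicle, driver and track models used in the thesis.

```python
# Toy point-mass lap-time model: straights with a power/drag-limited
# acceleration phase, each followed by (instantaneous) braking down to the
# next corner's speed cap. All numbers are hypothetical.
mass = 900.0                 # kg
cda_rho = 1.6                # lumped aero drag coefficient, N/(m/s)^2
track = [(400.0, 30.0), (250.0, 22.0), (600.0, 35.0), (300.0, 25.0)]
# (straight length in m, speed cap entering the next corner in m/s)

def lap_time(power_w, dt=0.05):
    t = 0.0
    v = track[-1][1]                     # start at the last corner's speed
    for length, v_corner in track:
        s = 0.0
        while s < length:
            drag = cda_rho * v * v
            accel = (power_w / max(v, 5.0) - drag) / mass
            v += accel * dt
            s += v * dt
            t += dt
        v = min(v, v_corner)             # crude instantaneous braking
    return t

times = {p: lap_time(p) for p in (150e3, 180e3)}   # two engine configurations
```

A real comparison would, as in the thesis, also change gear ratios in software and run the physical engine in the loop; the point here is only that lap time gives a single comparable figure of merit per configuration.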
APA, Harvard, Vancouver, ISO, and other styles
38

Beck, Joseph A. "Stochastic Mistuning Simulation of Integrally Bladed Rotors using Nominal and Non-Nominal Component Mode Synthesis Methods." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1278600105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Lundgren, Jan. "Behavioral Level Simulation Methods for Early Noise Coupling Quantification in Mixed-Signal Systems." Licentiate thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-3434.

Full text
Abstract:
In this thesis, noise coupling simulation is introduced into the behavioral level. Methods and models for simulating on-chip noise coupling at a behavioral level in a design flow are presented and verified for accuracy and validity. Today, designs of electronic systems are becoming denser and more and more mixed-signal systems such as System-on-Chip (SoC) are being devised. This raises problems when the electronics components start to interfere with each other. Often, digital components disturb analog components, introducing noise into the system causing degradation of the performance or even introducing errors into the functionality of the system. Today, these effects can only be simulated at a very late stage in the design process, causing large design iterations and increased costs if the designers are required to return and make alterations, which may have occurred at a very early stage in the process. This is why the focus of this work is centered on extracting noise coupling simulation models that can be used at a very early design stage such as the behavioral level and then follow the design through the various design stages. To realize this, SystemC is selected as a platform and implementation example for the behavioral level models. SystemC supports design refinement, which means that when designs are being refined and are crossing the design levels, the noise coupling models can also be refined to suit the current design. This new way of thinking in primarily mixed-signal designs is called Behavioral level Noise Coupling (BeNoC) simulation and shows great promise in enabling a reduction in the costs of design iterations due to component cross-talk and simplifies the work for mixed-signal system designers.
Electronics Design Division
APA, Harvard, Vancouver, ISO, and other styles
40

Peura, Johan, and Jessica Torssell. "Evaluation of simulation methods and optimal installation conditions for bifacial PV modules : A case study on Swedish PV installations." Thesis, Linköpings universitet, Energisystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150517.

Full text
Abstract:
During recent years the popularity of solar power has increased tremendously. With the increased interest in solar power comes the development of more efficient and different types of technology to harvest the sun's rays. Monofacial panels have been on the market for a long time and have rather well-developed simulation models. The bifacial technology, on the other hand, has been researched for years but only recently found its way to the market. Simulation models for bifacial panels are continuously being developed, and they are a key aspect in increasing knowledge about the bifacial technology. Most of the research that has been conducted until today is mainly about the bifacial gain, not about the bifacial simulation models. The purpose of this thesis was to evaluate and validate simulation models of bifacial solar panels in PVsyst through comparisons with measured data from six different bifacial installations in Sweden. The installations had different system configurations and varied in tilt, azimuth, pitch, elevation, number of rows and albedo. Furthermore, the installation configuration parameters were analyzed to see how they affect the bifacial system and what an optimal configuration would be for a bifacial installation in Sweden. The results show that the main difficulty for an accurate simulation model is to determine the proper input data. The irradiance and albedo proved to be the most difficult parameters to determine. The irradiance was accurate at a yearly level, but already in the monthly distribution the error takes effect. One of the reasons for the errors is the difficulty of determining the diffuse fraction of the irradiance, especially during cloudy days.
The albedo was found to have a linear effect on the yield, which means it is possible that the inaccuracy of the model is solely dependent on albedo. For tilted installations without optimizers, the yearly error of the simulation ranged between -5.2% and +3.9%, where the lower limit is suspected to be caused by a wrong albedo value. For a tilted installation with optimizers the error was +9.1%. This could have two causes: the optimizers are even more dependent on the irradiance, or the software exaggerates the benefits of optimizers. The simulations of vertical installations had an error between -5.4% and -3% and are more accurate than the tilted simulations. The effect of different parameters on the specific yield was studied using a simplified simulation model and a stepwise change of each parameter. Four of the six studied parameters showed no characteristic interaction with each other, and the optimal conditions were to maximize the pitch, elevation and albedo and minimize the number of rows. The remaining two parameters, tilt and azimuth, showed a dependence on the other parameters: the optimal azimuth was only affected by tilt, while the optimal tilt was affected by all the other parameters. This led to the conclusion that tilt is the most suitable parameter for optimization of installations because of its dependence on ambient conditions. The optimal tilt was found for the studied cases, and five of the six cases would have an increased specific yield if the tilt were optimized. Note that for four of those five the increase would be less than 0.5%, while for the fifth it would be 14.2%.
APA, Harvard, Vancouver, ISO, and other styles
41

Amini, Mohammadhossein. "A study of multiple attributes decision making methods facing uncertain attributes." Thesis, Kansas State University, 2015. http://hdl.handle.net/2097/20542.

Full text
Abstract:
Master of Science
Department of Industrial & Manufacturing Systems Engineering
Shing I. Chang
Many decision-making methods have been developed to help decision makers (DMs) make efficient decisions. One class of decision-making methods involves selecting the best choice among alternatives based on a set of criteria. Multiple Attribute Decision-Making (MADM) methods make it possible to determine the optimal alternative based on multiple attributes. This research aims to overcome two concerns in current MADM methods: uncertainty of attributes and sensitivity of ranking results. Based on the availability of information for the attributes, a DM may be certain or uncertain in his or her judgment on alternatives. Researchers have introduced the use of linguistic terms or uncertain intervals to tackle the uncertainty problem. This study provides an integrated approach to model uncertainty in one of the most popular MADM methods: TOPSIS (Technique for Order Preference by Similarity to Ideal Solution). Current MADM methods also provide a final ranking of alternatives under consideration, and the final solution is based on a calculated number assigned to each alternative. Results have shown that the final values of alternatives may be close to each other under uncertain attributes, yet current methods rank alternatives strictly according to the final scores. This exhibits a sensitivity issue in the formation of the ranking list. The proposed method solves this problem by simulating random numbers within the uncertain intervals in the decision matrix. The outcome is a ranking distribution for the alternatives. The proposed method is based on TOPSIS, which defines the best and the worst solution for each attribute and defines the best alternative as the one closest to the best and farthest from the worst solution. Random number distributions were studied under the proposed simulation approach. Results showed that a triangular random number distribution provides better ranking results than a uniform distribution.
A case study of building design selection considering resiliency and sustainability attributes is presented to demonstrate the use of the proposed method. The study demonstrated that the proposed method can provide better decision options for designers thanks to the ability to consider uncertain attributes. In addition, using the proposed method, a DM can observe the final ranking distribution resulting from uncertain attribute values.
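The proposed simulation approach can be sketched as follows: each Monte Carlo draw samples a crisp decision matrix from the uncertainty intervals (using a triangular distribution, which the study found preferable to uniform), ranks the alternatives with standard TOPSIS, and the draws accumulate into a ranking distribution. The intervals and weights below are invented for illustration.

```python
import random

# 3 alternatives x 2 benefit criteria, each attribute an uncertainty interval.
intervals = [
    [(0.60, 0.80), (0.50, 0.70)],
    [(0.55, 0.85), (0.55, 0.65)],
    [(0.40, 0.50), (0.80, 0.90)],
]
weights = [0.5, 0.5]
n_draws = 2000
random.seed(7)

n_alt, n_crit = len(intervals), len(weights)
rank_counts = [[0] * n_alt for _ in range(n_alt)]   # rank_counts[alt][rank]

for _ in range(n_draws):
    # sample a crisp decision matrix from the intervals
    matrix = [[random.triangular(lo, hi) for lo, hi in row] for row in intervals]
    # vector-normalize each column, then apply the criterion weights
    norms = [sum(matrix[i][j] ** 2 for i in range(n_alt)) ** 0.5 for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)] for i in range(n_alt)]
    ideal = [max(col) for col in zip(*v)]   # best solution (benefit criteria)
    worst = [min(col) for col in zip(*v)]   # worst solution
    scores = []
    for row in v:
        d_pos = sum((x - y) ** 2 for x, y in zip(row, ideal)) ** 0.5
        d_neg = sum((x - y) ** 2 for x, y in zip(row, worst)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient
    order = sorted(range(n_alt), key=lambda i: -scores[i])
    for rank, alt in enumerate(order):
        rank_counts[alt][rank] += 1
```

Instead of a single ranked list, `rank_counts[alt][rank] / n_draws` estimates the probability of each alternative ending up at each rank, exposing how sensitive the ranking is to the attribute uncertainty.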
APA, Harvard, Vancouver, ISO, and other styles
42

Butler, William M. "The Impact of Simulation-Based Learning in Aircraft Design on Aerospace Student Preparedness for Engineering Practice: A Mixed Methods Approach." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/27601.

Full text
Abstract:
It has been said that engineers create that which never was. The university experience is a key component in preparing engineers who support the creation of products and systems that improve the world we live in. The way in which engineers have been trained in universities has changed throughout history in America, moving from an apprentice-like approach to the still-used engineer scientist. Some in industry and academia feel that this model of engineer preparation needs to change in order to better address the complexities of engineering in the 21st century, and help fill a perceived gap between academic preparation and 21st century industrial necessity. A new model for student preparation centering on engineering design, called the Live Simulation Based Learning (LSBL) approach, is proposed based upon the theories of situated learning, game-based learning, epistemic frames, and accidental competencies. This dissertation discusses the results of a study of the application of LSBL in a two-term capstone design class in aerospace engineering aircraft design at Virginia Tech. It includes LSBL's impact on student professional and technical skills in relation to aerospace engineering design practice. Results indicate that the participants found the LSBL experience to be more engaging than the traditional lecture approach, and that it does help students respond and think more like practicing aerospace engineering professionals and thus begin to address the "gap" between academia and industry.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
43

Bailey, William. "Using model-based methods to support vehicle analysis planning." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50377.

Full text
Abstract:
Vehicle system analysis models are becoming crucial to automotive designers wishing to better understand vehicle-level attributes and how they vary under different operating conditions. Such models require substantial planning and collaboration between multidisciplinary engineering teams. To improve the process used to create a vehicle system analysis model, the broader question of how to plan and develop any model should be addressed. Model-Based Systems Engineering (MBSE) is one approach that can be used to make such complex engineering tasks more efficient. MBSE can improve these tasks in several ways. It allows for more formal communication among stakeholders, avoids the ambiguity commonly found in document-based approaches to systems engineering, and allows stakeholders to all contribute to a single, integrated system model. Commonly, the Systems Modeling Language (SysML) is used to integrate existing analysis models with a system-level SysML model. This thesis, on the other hand, focuses on using MBSE to support the planning and development of the analysis models themselves. This thesis proposes an MBSE approach to improve the development of system models for Integrated Vehicle Analysis (IVA). There are several contributions of this approach. A formal process is proposed that can be used to plan and develop system analysis models. A comprehensive SysML model is used to capture both a descriptive model of a Vehicle Reference Architecture (VRA), as well as the requirements, specifications, and documentation needed to plan and develop vehicle system analysis models. The development of both the process and SysML model was performed alongside Ford engineers to investigate how their current practices can be improved. For the process and SysML model to be implemented effectively, a set of software tools is used to create a more intuitive user interface for the stakeholders involved. 
First, functionality is added to views and viewpoints in SysML so that they may be used to formally capture the concerns of different stakeholders as exportable XML files. Using these stakeholder-specific XML files, a custom template engine can be used to generate unique spreadsheets for each stakeholder. In this way, the concerns and responsibilities of each stakeholder can be defined within the context of a formally defined process. The capability of these two tools is illustrated through the use of examples which mimic current practices at Ford and can demonstrate the utility of such an approach.
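The export step described above, stakeholder concerns captured as XML and turned into stakeholder-specific spreadsheets, can be sketched as below. The XML schema, element names and file naming are invented for the illustration and do not reflect the actual Ford or SysML tooling; CSV stands in for the spreadsheet format.

```python
import csv
import xml.etree.ElementTree as ET

# A small, hypothetical "viewpoint" export: the concerns of one stakeholder.
xml_text = """
<viewpoint stakeholder="ChassisAnalyst">
  <concern id="C1" parameter="mass" unit="kg" requirement="&lt;= 1500"/>
  <concern id="C2" parameter="rollStiffness" unit="Nm/deg" requirement="&gt;= 900"/>
</viewpoint>
"""

root = ET.fromstring(xml_text)
stakeholder = root.get("stakeholder")

# Flatten the concerns into spreadsheet rows for this stakeholder.
rows = [["id", "parameter", "unit", "requirement"]]
for concern in root.findall("concern"):
    rows.append([concern.get("id"), concern.get("parameter"),
                 concern.get("unit"), concern.get("requirement")])

# One generated spreadsheet per stakeholder-specific XML file.
with open(f"{stakeholder}_concerns.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

A template engine, as described in the thesis, would generalize this loop: the same stakeholder-specific XML drives a per-stakeholder layout rather than a fixed column list.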
APA, Harvard, Vancouver, ISO, and other styles
44

Anguiano, Sanjurjo David. "Investigation of Hybrid Simulation Methods for Evaluation of EMF Exposure in Close Proximity of 5G Millimeter-Wave Base Stations." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284324.

Full text
Abstract:
With the emergence of Fifth Generation (5G) mobile networks, the employment of higher frequencies in the millimeter-wave (mmWave) range and the realization of a great number of beams in 5G radio base stations (RBS) make the electromagnetic (EM) simulation of RBS products very costly in terms of hardware and time requirements. In order to compute the electromagnetic field (EMF) exposure in close proximity of the RBS, more efficient simulation methods are required. The move to mmWave frequencies enables the use of so-called high frequency methods for EM simulation with RBS antennas. In this thesis, conventional full-wave simulation solvers and different implementations of the hybridization of high frequency methods with conventional methods are used with different commercial EM simulation tools, and their performance is evaluated for the purpose of EMF exposure assessment in close proximity of 5G mmWave RBS. Among all the investigated methods, the hybrid scheme combining the Finite Integration Technique (FIT) and Shooting and Bouncing Rays (SBR) methods, e.g., that implemented in CST Studio Suite 2020, outperforms the others in terms of hardware requirements and time costs, although the accuracy is compromised at the side of and behind the mmWave RBS. The Multilevel Fast Multipole Method (MLFMM), e.g., that implemented in Altair FEKO 2019, though not a hybrid method, also has good performance but requires very large Random Access Memory (RAM), and it cannot handle very fine details of the RBS. The Finite Difference Time Domain (FDTD) method implemented in EMPIRE XPU can also handle the investigated problems efficiently, but for extremely large problems its requirements on RAM may become the bottleneck. In the thesis, many other hybrid implementations are also investigated, but it is found that, for various reasons, they are not suitable for the EMF exposure assessment in close proximity of the mmWave RBS, with evaluation on a planar area of 0.42 m × 1 m at 28 GHz.
For the fifth generation (5G) mobile networks, the use of millimeter waves and the large number of beams that a radio base station (RBS) can handle will mean a sharply increased need for hardware and longer computation times when calculating the electromagnetic field exposure close to the equipment. More efficient simulation methods are therefore needed. Since the systems operate at millimeter-wave frequencies, high frequency methods can be used in the simulation of an RBS. In this thesis, conventional methods as well as various hybrid methods are evaluated for computing the EMF exposure from millimeter waves in the vicinity of an RBS. The evaluated hybrid methods are implemented in different software packages and mix the use of high frequency methods and conventional methods. Of all the evaluated methods, the hybrid method implemented with the Finite Integration Technique (FIT) and the Shooting and Bouncing Rays (SBR) method in CST works best in terms of the hardware needed for the computations and the time required; however, the accuracy of the computations at the side of and behind the RBS is less good. The Multilevel Fast Multipole Method (MLFMM) solver in FEKO does not use a hybrid method but performs well, though it requires a lot of RAM and cannot take small details of the RBS into account. The Finite Difference Time Domain (FDTD) method in EMPIRE can also be used, but its RAM requirements become a bottleneck for large simulations. Further hybrid methods are examined in the thesis, with the conclusion that they are not usable (for various reasons) for computing the EMF exposure from an RBS operating at 28 GHz over a surface of 0.42 x 1 m.
APA, Harvard, Vancouver, ISO, and other styles
45

Vadambacheri, Manian Karthik. "Novel Methods to Improve the Energy Efficiency of Multi-core Synchronization Primitives." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1511858440610247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Martinez, Luis Iñaki. "Investigation of CFD conjugate heat transfer simulation methods for engine components at SCANIA CV AB." Thesis, Linköpings universitet, Mekanisk värmeteori och strömningslära, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138758.

Full text
Abstract:
The main objective of this Master Thesis project is the development of a new methodology to perform Computational Fluid Dynamics (CFD) conjugate heat transfer simulations for internal combustion engines at the Fluid and Combustion Simulations Department (NMGD) at Scania CV AB, Södertälje, Sweden. This new method makes it possible to overcome the drawbacks identified in the former methodology, providing the ability to use the more advanced polyhedral mesh type to generate good quality grids in complex geometries like water cooling jackets, and integrating all the different components of the engine cylinder in a single multi-material mesh. With the method developed, these advantages can be exploited while optimizing the simulation process, and improved accuracy is obtained in the temperature field of the engine components surrounding the water cooling jacket when compared to experimental data from Scania CV AB test rigs. The present work exposes the limitations encountered with the former methodology and presents a theoretical background to explain the physics involved, describing the computational tools and procedures to solve these complex fluid and thermal problems in a practical and cost-effective way by the use of CFD. A mesh sensitivity analysis performed during this study reveals that a mesh with low y+ values, close to 1 in the water cooling jacket, is needed to obtain an accurate temperature distribution along the cylinder head, as well as to accurately identify boiling regions in the coolant domain. Another advantage of the proposed methodology is that it provides new capabilities, such as the implementation of thermal contact resistance in periodical contact regions of the engine components, improving the accuracy of the results in terms of temperature profiles of parts like valves, seats and guides.
The results of the project are satisfactory: they provide a reliable new methodology for multi-material thermal simulations that improves the efficiency of the work performed in the NMGD department, makes better use of the available engineering and computational resources, and simplifies every stage of a multi-material project, from geometry preparation and meshing to post-processing.
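A y+ target such as the one discussed in this abstract translates directly into a first prism-layer cell height when building the mesh. As a minimal sketch (not taken from the thesis), the classic flat-plate skin-friction correlation can be used to estimate the wall shear stress and hence the cell height for y+ ≈ 1; the coolant properties and reference length below are hypothetical placeholders:

```python
import math

def first_cell_height(u_inf, rho, mu, length, y_plus=1.0):
    """Estimate the first prism-layer cell height (m) for a target y+.
    Uses the flat-plate correlation Cf = 0.026 * Re^(-1/7), one common choice."""
    re = rho * u_inf * length / mu          # Reynolds number on the reference length
    cf = 0.026 * re ** (-1.0 / 7.0)         # skin-friction coefficient
    tau_w = 0.5 * cf * rho * u_inf ** 2     # wall shear stress
    u_tau = math.sqrt(tau_w / rho)          # friction velocity
    return y_plus * mu / (rho * u_tau)      # y = y+ * nu / u_tau

# Illustrative water-coolant values (not from the thesis):
h = first_cell_height(u_inf=2.0, rho=1000.0, mu=1.0e-3, length=0.1, y_plus=1.0)
print(f"first cell height ~ {h * 1e6:.1f} micron")
```

For water-jacket-like conditions this lands in the range of a few to a few tens of microns, which is why a low-y+ mesh is substantially more expensive than a wall-function mesh.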
APA, Harvard, Vancouver, ISO, and other styles
47

Rodriguez, Simonetta Andrea 1952. "Human/environmental relations analysis & simulation using human-centered systems methods for design and evaluation of complex habitable environments." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/84809.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, February 2002.
Includes bibliographical references (p. 69-75).
by Simonetta Andrea Rodriguez.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
48

El, Hayek Mustapha Mechanical & Manufacturing Engineering Faculty of Engineering UNSW. "Optimizing life-cycle maintenance cost of complex machinery using advanced statistical techniques and simulation." Awarded by: University of New South Wales. School of Mechanical and Manufacturing Engineering, 2006. http://handle.unsw.edu.au/1959.4/24955.

Full text
Abstract:
Maintenance is constantly challenged to increase productivity by maximizing up-time and reliability while at the same time reducing expenditure and investment. In the last few years it has become evident, through the development of maintenance concepts, that maintenance is more than just a non-productive support function; it is a profit-generating function. In past decades, hundreds of models addressing maintenance strategy have been presented. The vast majority of those models rely purely on mathematical modeling to describe the maintenance function. Because of the complex nature of the maintenance function and its complex interaction with other functions, it is almost impossible to model maintenance mathematically without resorting to infeasible simplifications and assumptions that sacrifice accuracy and validity. Analysis presented as part of this thesis shows that stochastic simulation offers a viable alternative and a powerful technique for tackling maintenance problems. Stochastic simulation is a method of modeling a system or process (on a computer) based on random events generated by the software, so that system performance can be evaluated without experimenting on, or interfering with, the actual system. The methodology developed in this thesis addresses most of the shortcomings found in the literature, specifically by allowing the modeling of most of the complexities of an advanced maintenance system such as the one employed in the airline industry. The technique also allows sensitivity analysis to be carried out, leading to an understanding of how critical variables may affect the maintenance and asset-management decision-making process. In many heavy industries (e.g. airline maintenance), where high utilization is essential to the success of the organization, subsystems are often of a rotable nature, i.e. they rotate among different systems throughout their life-cycle.
This causes a system to be composed of a number of subsystems of different ages, and therefore different reliability characteristics, making it difficult for analysts to estimate its reliability behavior and potentially resulting in a less-than-optimal maintenance plan. Traditional reliability models are based on detailed statistical analysis of individual component failures. For complex machinery, especially machinery involving many rotable parts, such analyses are difficult and time consuming. In this work, a model is proposed that combines the well-established Weibull method with discrete simulation to estimate the reliability of complex machinery with rotable subsystems or modules. Each module is characterized by an empirically derived failure distribution. The simulation model consists of a number of stages, including operational up-time, maintenance down-time and a user interface allowing decisions on maintenance and replacement strategies as well as inventory levels and logistics. This enables the optimization of a maintenance plan by comparing different maintenance and removal policies using the Cost per Unit Time (CPUT) measure as the decision variable. Five removal strategies were tested: on-failure replacement, block replacement, time-based replacement, condition-based replacement, and a combination of time-based and condition-based strategies. Initial analyses performed on aircraft gas-turbine data yielded an optimal combination of modules out of a pool of multiple spares, resulting in an increase in machine up-time of 16%. In addition, it was shown that condition-based replacement is a cost-effective strategy; however, the combination of the time-based and condition-based strategies can produce slightly better results. Furthermore, a sensitivity analysis was performed to optimize the decision variables (module soft-time) and to provide insight into the level of accuracy with which they must be estimated.
As part of the overall reliability and life-cycle cost program, it is imperative to focus not only on reducing levels of unplanned (i.e. breakdown) maintenance through preventive and predictive maintenance tasks, but also on optimizing the management of spare-parts inventory, sometimes called float hardware. It is well known that the unavailability of a spare part may result in loss of revenue associated with increased system downtime; on the other hand, increasing the number of spares leads to an increase in capital investment and holding cost. The results obtained from the simulation model were used in a discounted NPV (Net Present Value) analysis to determine the optimal number of spare engines. The benefits of this methodology are that it can provide reliability trends and forecasts in a short time frame based on available data. In addition, it takes into account the rotable nature of many components by tracking the life and service history of individual parts and allowing the user to simulate different combinations of rotables, operating scenarios and replacement strategies. It is also capable of optimizing stock and spares levels as well as other related key parameters such as the average waiting time, unavailability cost, and the number of maintenance events that incur extended durations because of the unavailability of spare parts. Importantly, as more data becomes available or as greater accuracy is demanded, the model or database can be updated or expanded, thereby approaching the results obtainable by pure statistical reliability analysis.
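The core of the comparison this abstract describes, a Weibull failure model driving a discrete simulation that is scored by Cost per Unit Time, can be sketched in a few lines. The toy below is a single-module version with illustrative shape, scale and cost parameters (none taken from the thesis), comparing run-to-failure against a time-based removal policy:

```python
import random

def cput(policy_interval, shape=2.5, scale=1000.0,
         c_planned=10.0, c_failure=50.0, horizon=100000.0, seed=1):
    """Monte-Carlo Cost per Unit Time for one rotable module.
    policy_interval is the time-based removal age; None means run-to-failure.
    All parameters are illustrative placeholders."""
    rng = random.Random(seed)
    t, cost = 0.0, 0.0
    while t < horizon:
        life = rng.weibullvariate(scale, shape)    # sampled time to failure
        if policy_interval is not None and life > policy_interval:
            t += policy_interval                   # planned removal before failure
            cost += c_planned
        else:
            t += life                              # unplanned on-failure removal
            cost += c_failure
    return cost / t

for interval in (None, 500.0, 800.0):
    print(interval, round(cput(interval), 4))
```

With a wear-out shape parameter above 1 and a planned removal much cheaper than a failure, the time-based policy beats run-to-failure; sweeping `policy_interval` is the one-dimensional analogue of the soft-time optimization mentioned above.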
APA, Harvard, Vancouver, ISO, and other styles
49

Li, Pengfei. "Stochastic Methods for Dilemma Zone Protection at Signalized Intersections." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28805.

Full text
Abstract:
The dilemma zone (DZ), also called the decision zone in other literature, is an area in which drivers face indecision between stopping and crossing at the yellow onset. The DZ issue is a major cause of crashes at high-speed signalized intersections; consequently, preventing approaching vehicles from being caught in the DZ is a widespread concern. In this dissertation, the author addresses several DZ-associated issues: a new stochastic safety measure, namely the dilemma hazard, which indicates vehicles' changing unsafe levels as they approach an intersection; the optimal advance-detector configurations for multi-detector green extension systems; a new dilemma zone protection algorithm based on the Markov process; and the simulation-based optimization of traffic signal systems using the retrospective approximation concept. The findings include the following: the dilemma hazard reaches its maximum when a vehicle moves within the dilemma zone and can be calculated according to the caught vehicle's time to the intersection; the new (optimized) GES design can significantly improve safety but only slightly improve efficiency; and the Markov process can be used for dilemma zone protection, with the Markov-process-based protection system outperforming the prevailing system, the detection-control system (D-CS), and performing even better when the data collection has higher fidelity. The retrospective approximation technique can identify sufficient, but not excessive, simulation effort for modeling the true system, and the new optimization algorithm converges quickly while accommodating the requirements of the RA technique.
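For readers unfamiliar with the geometry involved, the classic kinematic (deterministic) dilemma-zone boundaries can be computed directly; this is the textbook definition, not the thesis's stochastic dilemma-hazard measure, and the yellow duration, reaction time and deceleration below are illustrative:

```python
def dilemma_zone(v, yellow=4.0, t_reaction=1.0, a_decel=3.0):
    """Classic kinematic dilemma-zone boundaries at yellow onset (metres).
    x_stop: minimum distance needed to stop comfortably (reaction + braking).
    x_go:   maximum distance from which the vehicle can clear during yellow
            (constant speed, intersection width neglected for simplicity).
    A vehicle between the two can neither stop nor clear safely."""
    x_stop = v * t_reaction + v * v / (2.0 * a_decel)
    x_go = v * yellow
    return x_stop, x_go

v = 25.0  # m/s, a high-speed approach of about 90 km/h (illustrative)
x_stop, x_go = dilemma_zone(v)
print(f"stop boundary {x_stop:.1f} m, go boundary {x_go:.1f} m")
if x_stop > x_go:
    print(f"dilemma zone spans {x_go:.1f} m to {x_stop:.1f} m from the stop line")
```

When `x_stop > x_go` a dilemma zone exists; the stochastic treatment in the dissertation replaces these sharp boundaries with a hazard that varies with the vehicle's time to the intersection.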
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
50

Flynn, Julie. "Simulation of millisecond catalytic partial oxidation of methane in a monolithic reactor for the production of hydrogen using finite element methods." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99409.

Full text
Abstract:
Hydrogen could be the key to meeting our future energy needs and to confronting climate change by reducing greenhouse-gas emissions. Syngas (H2 and CO) is produced industrially by steam reforming of methane. A potential alternative is the catalytic partial oxidation of methane: the process is fast, exothermic and auto-thermal. A dual sequential-bed catalyst is used, in which a combustion catalyst is followed by a reforming catalyst so that the catalytic partial oxidation is carried out in two steps.
Numerical simulations using finite element methods coupled with global kinetics are performed to gain a better understanding of the transient process and of the solid and gas temperature profiles in the catalyst. The results include temporal and spatial reactant conversion, product selectivity, and temperature profiles in the catalyst. Where possible, simulation results are compared with experimental data.
The model shows high yields of hydrogen from methane and air, fitting the experimental results in most cases and reproducing the transient results qualitatively. The influence of the kinetics was investigated; the kinetics are the principal limitation of the model and lead to a poor quantitative description.
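The coupling the abstract describes, species consumption releasing heat that feeds back into an Arrhenius rate, can be illustrated with a deliberately simplified 1-D adiabatic plug-flow stand-in. This is not the thesis's finite-element model or its global kinetics; the single first-order step and all parameter values below are hypothetical:

```python
import math

def plug_flow(tau=0.01, n=2000, T0=800.0, c0=1.5,
              A=3.0e8, Ea=1.0e5, dH=-8.0e5, rho_cp=530.0):
    """March a 1-D adiabatic plug-flow reactor over a millisecond-scale
    residence time tau (s) with one global first-order rate k = A*exp(-Ea/RT).
    The species update is exact for frozen T, which keeps the explicit
    march stable through ignition. All parameters are illustrative."""
    R = 8.314
    dt = tau / n
    c, T = c0, T0
    for _ in range(n):
        k = A * math.exp(-Ea / (R * T))      # rate constant, 1/s
        c_new = c * math.exp(-k * dt)        # species balance over one step
        T += (c - c_new) * (-dH) / rho_cp    # adiabatic energy balance
        c = c_new
    return 1.0 - c / c0, T

conv, T_out = plug_flow()
print(f"conversion {conv:.3f}, outlet temperature {T_out:.0f} K")
```

By construction the outlet temperature is bounded by the adiabatic limit T0 + c0*(-dH)/rho_cp, and the sharp ignition along the reactor length is the qualitative behaviour a transient finite-element model resolves in much greater detail.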
APA, Harvard, Vancouver, ISO, and other styles