Dissertations / Theses on the topic 'Simulations'

Consult the top 50 dissertations / theses for your research on the topic 'Simulations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Dave, Jagrut Durdant. "Parallel Discrete Event Simulation Techniques for Scientific Simulations." Thesis, Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6942.

Full text
Abstract:
Exponential growth in computer technology over the past decades, both in individual CPUs and in parallel architectures, has triggered rapid progress in large-scale simulations. However, despite these achievements, it has become clear that many conventional state-of-the-art techniques are ill-equipped to tackle problems that inherently involve multiple scales in configuration space. The difficulty is that conventional ("time driven" or "time stepped") techniques update all parts of simulation space (fields, particles) synchronously, i.e. at time intervals assumed to be the same throughout the global computation domain, or at best varying on a sub-domain basis (as in adaptive mesh refinement algorithms). Using a serial electrostatic model, it was recently shown that discrete event techniques can lead to more than two orders of magnitude speedup compared to the time-stepped approach. In this research, the focus is on the extension of this technique to parallel architectures, using parallel discrete event simulation. Previous research in parallel discrete event simulations of scientific phenomena has been limited. This thesis outlines a technique for converting a time-stepped simulation in the scientific domain into an equivalent parallel discrete event model. As a candidate simulation, an electromagnetic hybrid plasma simulation is considered. The experiments and analysis show the performance trade-offs of varying the following factors: the simulation model's characteristics (e.g. lookahead), the application's load balancing, and the accuracy of simulation results. The experiments are performed on a high-performance cluster, using a conservative synchronization mechanism. Initial performance results are encouraging, demonstrating very good parallel speedup for large-scale model configurations containing tens of thousands of cells. Overheads for inter-processor communication remain a challenge for smaller computations.
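To make the contrast concrete, here is a small Python sketch (not taken from the thesis; the cell counts and update intervals are made up) that counts how many updates a time-stepped loop needs versus a discrete-event loop for a toy multi-scale system:

# Time-stepped vs. discrete-event update counts for cells whose activity
# spans three orders of magnitude in characteristic update interval.
import heapq
import random

random.seed(1)
T_END = 100.0
intervals = [random.choice([0.1, 1.0, 10.0, 100.0]) for _ in range(200)]

# Time-stepped: every cell is touched at every global step, which must resolve
# the fastest cell in the system.
dt = min(intervals)
time_stepped_updates = int(T_END / dt) * len(intervals)

# Discrete-event: each cell is touched only when its own event comes due.
event_updates = 0
heap = [(iv, i) for i, iv in enumerate(intervals)]
heapq.heapify(heap)
while heap:
    t, i = heapq.heappop(heap)
    if t > T_END:
        break
    event_updates += 1
    heapq.heappush(heap, (t + intervals[i], i))

print("time-stepped updates:", time_stepped_updates)
print("discrete-event updates:", event_updates)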
APA, Harvard, Vancouver, ISO, and other styles
2

Svensson, Henrik. "Simulations." Doctoral thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-98050.

Full text
Abstract:
This thesis is concerned with explanations of embodied cognition as internal simulation. The hypothesis is that several cognitive processes can be explained in terms of predictive chains of simulated perceptions and actions. In other words, perceptions and actions are reactivated internally by the nervous system to be used in cognitive phenomena such as mental imagery. This thesis contributes by advancing the theoretical foundations of simulations and the empirical grounds on which they are based, including a review of the empirical evidence for the existence of simulated perceptions and actions in cognition, a clarification of the representational function of simulations in cognition, as well as identifying implicit, bodily and environmental anticipation as key mechanisms underlying such simulations. The thesis also develops the "inception of simulation" hypothesis, which suggests that dreaming has a function in the development of simulations by forming associations between experienced, non-experienced but realistic, and even unrealistic perceptions during early childhood. The thesis further investigates some aspects of simulations and the "inception of simulation" hypothesis by using simulated robot models based on echo state networks. These experiments suggest that it is possible for a simple robot to develop internal simulations by associating simulated perceptions and actions, and that dream-like experiences can be beneficial for the development of such simulations.
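As a rough illustration of the kind of building block mentioned above, the following Python sketch trains a generic echo state network to predict the next value of a signal, standing in for a "simulated perception"; it is illustrative only and not the robot models used in the thesis:

# Minimal echo state network: fixed random reservoir plus a ridge-regression
# readout learns one-step-ahead prediction of a noisy sine wave.
import numpy as np

rng = np.random.default_rng(0)
n_res, spectral_radius, ridge = 200, 0.9, 1e-6

t = np.arange(3000)
u = np.sin(0.05 * t) + 0.01 * rng.standard_normal(t.size)

# Fixed random weights; rescale the reservoir matrix to the target spectral radius.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

# Drive the reservoir and collect its states.
x = np.zeros(n_res)
states = np.zeros((t.size - 1, n_res))
for k in range(t.size - 1):
    x = np.tanh(W_in[:, 0] * u[k] + W @ x)
    states[k] = x
targets = u[1:]

# Ridge-regression readout, discarding an initial washout period.
washout = 100
S, y = states[washout:], targets[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)

pred = states @ W_out
print("prediction RMSE:", np.sqrt(np.mean((pred[washout:] - targets[washout:]) ** 2)))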
APA, Harvard, Vancouver, ISO, and other styles
3

Pawlik, Amadeusz, and Henry Andersson. "Visualising Interval-Based Simulations." Thesis, Högskolan i Halmstad, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-28592.

Full text
Abstract:
Acumen is a language and tool for modeling and simulating cyber-physical systems. It allows the user to conduct simulations using a technique called rigorous simulation that produces results with explicit error bounds, expressed as intervals. This feature can be useful when designing and testing systems where the reliability of results or taking uncertainty into account is important. Unfortunately, analyzing these simulation results can be difficult, as Acumen supports only two ways of presenting them: raw data tables and 2D plots. These views of the data make certain kinds of analysis cumbersome, such as understanding correlations between variables. This is especially true when the model in question is large. This project proposes a new way of visualising rigorous simulation results in Acumen. The goal of this project is to create a method for visualising interval-valued results in 3D, and to implement it in Acumen. To achieve that, every span of values is represented as a series of overlapping objects. This family of objects, which constitutes an under-approximation of the true simulation result, is then wrapped inside a semi-translucent box that is a conservative over-approximation of the simulation result. The resulting implementation combines mathematical correctness (rigour) with clear mediation of the intervals in question. It enables users to explore the results of their rigorous simulations as conveniently as with the existing, non-rigorous simulation methods, using the 3D visualisation to simplify the study of real-life problems. To our knowledge, no existing software features visualisation of interval-based simulation results, nor is there any convention for doing this. Some ways in which the proposed solution could be improved are suggested at the end of this report.
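The two layers described above can be sketched as data structures; the following toy Python example (not the Acumen implementation, all numbers invented) builds per-step interval boxes that overlap in time plus one conservative enclosing box:

# Per-step interval boxes plus one box that encloses the whole simulated span.
from dataclasses import dataclass

@dataclass
class Box:
    t_lo: float
    t_hi: float
    x_lo: float
    x_hi: float

# Interval-valued simulation output: (time interval, value interval) per step.
steps = [((0.0, 0.6), (0.0, 0.2)),
         ((0.5, 1.1), (0.1, 0.5)),
         ((1.0, 1.6), (0.4, 0.9))]

# One box per step; consecutive boxes overlap in time.
inner = [Box(t[0], t[1], x[0], x[1]) for t, x in steps]

# A single box that conservatively encloses every step.
outer = Box(min(b.t_lo for b in inner), max(b.t_hi for b in inner),
            min(b.x_lo for b in inner), max(b.x_hi for b in inner))

for b in inner:
    print("step box:", b)
print("enclosing box:", outer)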
APA, Harvard, Vancouver, ISO, and other styles
4

Huang, Ya-Lin. "Ad hoc distributed simulation: a method for embedded online simulations." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49060.

Full text
Abstract:
The continual growth of computing power in small devices has motivated the development of novel approaches to optimizing operational systems efficiently and effectively. These optimization problems are often so complex that solving them analytically may be difficult, if not prohibitive. One method for solving such problems is to use online simulation. However, challenges in using online simulation include the issues of responsiveness (e.g., because of communication delays), scalability, and failure resistance. To tackle these issues, this study proposes embedding online simulations into a network of sensors that monitors the system under investigation. This thesis explores an approach termed “ad hoc distributed simulation,” which is based on embedding online simulations into a sensor network and adding communication and synchronization among simulators to model operational systems. This approach offers several potential advantages over existing approaches: (1) it can provide rapid response to system dynamics as well as efficiency, since data exchange is local to the sensor network, (2) it can achieve better scalability to incorporate more sensors, and (3) it can provide better robustness to failures because portions of the system are still under local control. This research addresses several statistical issues in this ad hoc approach: (1) rapid and effective estimation of the input processes at model boundaries, (2) estimation of system-wide performance measures from individual simulator outputs, and (3) correction mechanisms responding to unexpected events or inaccuracies within the model. This thesis examines ad hoc distributed simulation analytically and experimentally, mainly focusing on the accuracy of predicting the performance of open queueing networks. First, the analytical part formalizes the ad hoc approach and evaluates its accuracy at modeling a certain class of open queueing networks with regard to steady-state system performance measures. This work concerning steady-state metrics is extended to a broader class of networks by an empirical study, which presents evidence to show that the ad hoc approach can generate predictions comparable to those from sequential simulations. Furthermore, a “buffered-area” mechanism is proposed to substantially reduce prediction bias with a moderate increase in execution time. In addition to those steady-state studies, another empirical study targets the prediction accuracy of the ad hoc approach for open queueing networks with short-term system-state transients. This study demonstrates that, with slight modification to the prior design of the ad hoc queueing simulation method used in those steady-state studies, system dynamics can be modeled well. The results, again, support the conclusion that the ad hoc approach is competitive with the sequential simulation method in terms of prediction accuracy.
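The steady-state checks mentioned above compare simulated predictions with known queueing results; a minimal, generic example of such a check (not the thesis code) is an M/M/1 queue simulated in Python and compared with the analytic mean number in system, rho/(1-rho):

# Event-driven M/M/1 queue; the time-average number in system is compared
# against the analytic steady-state value.
import random

random.seed(2)
lam, mu, horizon = 0.7, 1.0, 200_000.0   # arrival rate, service rate, sim length

t, n = 0.0, 0                  # current time, number in system
area = 0.0                     # integral of n(t) dt
next_arrival = random.expovariate(lam)
next_departure = float("inf")

while t < horizon:
    t_next = min(next_arrival, next_departure, horizon)
    area += n * (t_next - t)
    t = t_next
    if t == horizon:
        break
    if next_arrival <= next_departure:
        n += 1
        if n == 1:
            next_departure = t + random.expovariate(mu)
        next_arrival = t + random.expovariate(lam)
    else:
        n -= 1
        next_departure = t + random.expovariate(mu) if n > 0 else float("inf")

rho = lam / mu
print("simulated mean number in system:", area / horizon)
print("analytic rho/(1-rho):           ", rho / (1 - rho))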
APA, Harvard, Vancouver, ISO, and other styles
5

Andersson, Håkan. "Parallel Simulation : Parallel computing for high performance LTE radio network simulations." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12390.

Full text
Abstract:
Radio access technologies for cellular mobile networks are continuously being evolved to meet future demands for higher data rates and lower end-to-end delays. In the research and development of LTE, radio network simulations play an essential role. The evolution of parallel processing hardware makes it desirable to exploit the potential gains of parallelizing LTE radio network simulations using multithreading techniques, in contrast to distributing experiments over processors as independent simulation job processes. There is a hypothesis that the parallel speedup gain diminishes when running many parallel simulation jobs concurrently on the same machine due to the increased memory requirements. A multithreaded prototype of the Ericsson LTE simulator has been constructed, encapsulating scheduling, execution and synchronization of asynchronous physical layer computations. In order to provide implementation transparency, an algorithm has been proposed to sort and synchronize log events, enabling a sequential logging model on top of non-deterministic execution. In order to evaluate and compare multithreading techniques to parallel simulation job distribution, a large number of experiments have been carried out for four very diverse simulation scenarios. The evaluation of the results from these experiments involved analysis of average measured execution times and comparison with ideal estimates derived from Amdahl's law in order to analyze overhead. It has been shown that the proposed multithreaded task-oriented framework provides a convenient way to execute LTE physical layer models asynchronously on multi-core processors, still providing deterministic results that are equivalent to the results of a sequential simulator. However, it has been indicated that distributing parallel independent jobs over processors is currently more efficient than multithreading techniques, even though the achieved speedup is far from ideal. This conclusion is based on the observation that the overhead caused by increased memory requirements, memory access and system bus congestion is currently smaller than the thread management and synchronization overhead of the proposed multithreaded Java prototype.
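The ideal estimates referred to above come from Amdahl's law, speedup(n) = 1 / ((1 - p) + p/n); a quick Python illustration with an assumed parallelisable fraction (not the thesis measurements) is:

# Ideal speedup on n cores when a fraction p of the work parallelises.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for cores in (1, 2, 4, 8, 16):
    print(f"{cores:2d} cores: ideal speedup {amdahl_speedup(0.9, cores):.2f}")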
APA, Harvard, Vancouver, ISO, and other styles
6

Singh, Harpreet. "Computer simulations of realistic microstructures: implications for simulation-based materials design." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/22564.

Full text
Abstract:
Thesis (Ph. D.)--Materials Science and Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Dr. Arun Gokhale; Committee Member: Dr. Hamid Garmestani; Committee Member: Dr. Karl Jacob; Committee Member: Dr. Meilin Liu; Committee Member: Dr. Steve Johnson.
APA, Harvard, Vancouver, ISO, and other styles
7

Strauss, Martin. "Dynamic market simulations." Zürich : ETH, Eidgenössische Technische Hochschule Zürich, Dept. für Informatik, 2001. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Martin, Bruno. "Simulations d'automates cellulaires." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2005. http://tel.archives-ouvertes.fr/tel-00212057.

Full text
Abstract:
This thesis consists of two main parts. In the first, we simulate the operation of cellular automata on various models of parallel computation such as PRAMs, XPRAMs and spatial machines, thereby obtaining several proofs of the universality of these models, and we draw some consequences of these results from the standpoint of computability and complexity. In the second part, we consider cellular automata defined on finite Cayley graphs. We recall Róka's simulation, which mimics the operation of a hexagonal torus of automata with a two-dimensional torus of automata. We then describe several ways of embedding a two-dimensional torus of automata into a ring of automata. From these results we deduce the simulation of finite-dimensional tori by a ring of automata, as well as that of a hexagonal torus of automata by a ring of automata.
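To give a flavour of the embedding idea described in the abstract, the following toy Python sketch (not from the thesis) runs a parity rule on a two-dimensional torus and reproduces it on a ring of automata, each of which carries one row of the torus as its state:

# A 2D torus of cells versus a ring of automata holding one torus row each.
import numpy as np

rng = np.random.default_rng(3)
H, W, STEPS = 16, 16, 10
torus = rng.integers(0, 2, size=(H, W))

def step_torus(grid):
    """Parity (XOR of the 4 von Neumann neighbours) on the full 2D torus."""
    return (np.roll(grid, 1, 0) ^ np.roll(grid, -1, 0)
            ^ np.roll(grid, 1, 1) ^ np.roll(grid, -1, 1))

def step_ring(rows):
    """Same rule on a ring of automata, each holding one row of the torus."""
    n = len(rows)
    new_rows = []
    for i, row in enumerate(rows):
        up, down = rows[(i - 1) % n], rows[(i + 1) % n]
        left, right = np.roll(row, 1), np.roll(row, -1)
        new_rows.append(up ^ down ^ left ^ right)
    return new_rows

grid = torus.copy()
rows = [torus[i].copy() for i in range(H)]
for _ in range(STEPS):
    grid = step_torus(grid)
    rows = step_ring(rows)

print("ring simulation matches torus:", np.array_equal(grid, np.array(rows)))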
APA, Harvard, Vancouver, ISO, and other styles
9

Rautio, R. P. (Riku-Petteri). "Cosmological Zoom simulations." Bachelor's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201610272951.

Full text
Abstract:
Cosmological “zoom-in” simulations are a modern class of numerical simulations with many interesting uses. In zooms, an area of interest, such as a galaxy, is resolved to a high degree, while its surroundings are left at a coarser resolution to relieve the computational burden. Here I review some recent studies using zoom simulations and discuss the benefits and potential of zoom simulations compared to regular numerical simulations as well as semi-analytic models. I focus on the advantages of the increased resolution in zooms, such as the ability to resolve giant molecular clouds and to better model high-redshift galaxies.
APA, Harvard, Vancouver, ISO, and other styles
10

Yaiche, Francis. "Les simulations globales." Paris 3, 1993. http://www.theses.fr/1994PA030012.

Full text
Abstract:
On parle beaucoup aujourd'hui de simulations globales en didactique des langues. Cette thèse présente six canevas d'invention « généraliste » et quatre utilisables en langue de spécialité (canevas élaborés au BELC depuis 1973). La simulation globale consiste à faire « débarquer » sur un lieu-thème (une île, un immeuble, un village, un cirque, un hôtel, une entreprise, etc.) l'imaginaire d'un groupe d'élèves qui prendront une identité fictive, lieu-thème sur lequel l'enseignant fédérera toutes les activités d'expression écrite et orale. Cette façon de faire entrer dans la classe le réel est aussi une manière de faire parler les élèves de la vie, de l'amour et de la mort, et de lever les verrous et les inhibitions qui bloquent les processus d'apprentissage. En fait, les simulations globales obligent à reconsidérer certains aspects de la relation enseignant-enseigné-savoir et à réfléchir aux questions posées par la nouvelle donne pédagogique : comment sauver sa classe de l'ennui ? Peut-on apprendre en jouant ? Quels sont les rôles d'un enseignant ? Et d'un élève ? Comment corriger et évaluer les productions d'un jeu ? etc. Une simulation globale est un lieu édifiant où se construisent l'apprentissage d'une langue et d'une culture, la connaissance de soi et de l'autre.
Today people talk a lot about global simulations in the area of language teaching. This thesis introduces six "general" canvases and four used in teaching the language to specific groups (developed at BELC since 1973). A global simulation is a type of educational game which consists of launching a group of pupils into an imaginary identity and into a place which serves as a theme (such as an island, a block of flats, a village, a circus, a hotel, etc.), where the teacher uses oral and written exercises. This way of allowing the real world to come into the classroom is also a way of allowing the pupils to talk about life, love and death, and to remove the blockages and inhibitions which hinder the learning process. Indeed, global simulations force people to reconsider certain aspects of the teacher-learner-learning relationship and to think about the questions raised by the new educational situation: how can a class be saved from boredom? Can pupils play and learn at the same time? What roles does the teacher play, and the pupil? How can a teacher correct and assess the results of a game? A global simulation is therefore a place where the learning of a language and a culture, and the knowledge of oneself and of others, can be built.
APA, Harvard, Vancouver, ISO, and other styles
11

Gutiérrez, Daniel. "Green Fuel Simulations." Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79244.

Full text
Abstract:
Many industries have entered a new global phase which takes the environment into account. The gas turbine industry is no exception, where the utilization of green fuels is the way forward to spare the environment from carbon dioxide and NOx emissions. Hydrogen has been identified as a fuel which can fulfil the global requirements set by governments worldwide. Combustion instabilities are difficult to avoid during gas turbine operation, especially when using a highly reactive and diffusive fuel such as hydrogen. These thermoacoustic instabilities can damage mechanical components and have economic consequences in terms of maintenance and repair. Understanding these thermoacoustic instabilities in gas turbine burners is therefore of great interest. COMSOL Multiphysics offers a robust acoustics module compared to other available acoustic simulation programs. In this thesis, an acoustic finite element model was built representing an atmospheric combustion rig (ACR), used to test the burners' performance and NOx emissions. Complementary computational fluid dynamics (CFD) simulations were performed for 100 % hydrogen as fuel by using the Reynolds-averaged Navier-Stokes (RANS) lag EB k-epsilon turbulence model. The necessary data were successfully imported into the acoustic finite element model. Different techniques for building the mesh were used in COMSOL Multiphysics and NX. Similar results were obtained, proving that both mesh tools work well in acoustic simulations. Two different ways of solving the eigenvalue problem in acoustics were implemented, the classic Helmholtz equation and the linearized Navier-Stokes equations, both in the frequency domain. The Helmholtz equation proved to be efficient and detected multiple modes in the frequency range of interest. Critical modes which lived in the burner and the combustion chamber were identified. Defining hard and soft wall boundary conditions at the inlets and outlet of the atmospheric combustion rig gave similar eigenfrequencies when comparing the two boundary conditions. The soft wall boundary condition was defined with a characteristic impedance, leaving high uncertainty as to whether the results were trustworthy or not. A boundary condition study revealed that the boundary condition at the outlet was valid for modes living in the burner and combustion chamber. Solving the eigenvalue problem with the linearized Navier-Stokes equations proved to be computationally demanding compared to the Helmholtz equation. Similar mode shapes were found at higher frequencies, but pressure perturbations were observed in the region where the turbulence was dominant. A prestudy for a stability analysis was established, where the ACR and the flame were represented by a generic model. Implementing a Flame Transfer Function (FTF), more specifically a linear n-tau model, showed that the time delay tau is the most sensitive to a parametric change and hence needs to be chosen cautiously.
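For reference, the linear n-tau flame transfer function mentioned at the end can be written F(omega) = n * exp(-i * omega * tau); the short Python sketch below (assumed values, not the thesis data) shows that the gain is flat while the phase scales directly with the time delay tau, which is why tau is the sensitive parameter:

# Evaluate the n-tau flame transfer function for two assumed time delays.
import numpy as np

n_gain = 1.0                                # interaction index
freqs = np.linspace(10.0, 500.0, 5)         # Hz
for tau in (1.0e-3, 2.0e-3):                # time delays in seconds
    omega = 2.0 * np.pi * freqs
    ftf = n_gain * np.exp(-1j * omega * tau)
    print(f"tau = {tau*1e3:.1f} ms, phase (rad): "
          + ", ".join(f"{f:.0f} Hz: {p:.2f}" for f, p in zip(freqs, np.angle(ftf))))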
APA, Harvard, Vancouver, ISO, and other styles
12

Tarmyshov, Konstantin B. "Molecular dynamics simulations." Phd thesis, [S.l.] : [s.n.], 2007. https://tuprints.ulb.tu-darmstadt.de/787/1/000_pdfsam_PhD_thesis_-_All_-_LinuxPS2PDF.ps.pdf.

Full text
Abstract:
Molecular simulations can provide a detailed picture of a desired chemical, physical, or biological process. The field has developed over the last 50 years and is now used to solve a large variety of problems in many different areas. In particular, quantum calculations are very helpful for studying small systems at high resolution, where the electronic structure of compounds is accounted for. Molecular dynamics simulations, in turn, are employed to study the evolution of a molecular ensemble in time and space. Chapter 1 gives a short overview of the techniques used today in the molecular simulations field, their limitations, and their development. Chapter 2 concentrates on the description of the methods used in this work to perform molecular dynamics simulations of cucurbit[6]uril in aqueous and salt solutions as well as of the metal-isopropanol interface. This is followed by Chapter 3, which outlines the main areas of our lives where these systems can be used. The development of instruments is as important as the scientific part of molecular simulations, such as methods and algorithms. The parallelization procedure of the atomistic molecular dynamics program YASP for shared-memory computer architectures is described in Chapter 4. Parallelization was restricted to the most CPU-time-consuming parts: neighbour-list construction, calculation of non-bonded, angle and dihedral forces, and constraints. Most of the sequential FORTRAN code was kept; parallel constructs were inserted as compiler directives using the OpenMP standard. Only in the case of the neighbour list did the data structure have to be changed. The parallel code achieves a useful speed-up over the sequential version for systems of several thousand atoms and above. On an IBM Regatta p690+, the throughput increases with the number of processors up to a maximum of 12-16 processors, depending on the characteristics of the simulated systems. On dual-processor Xeon systems, the speed-up is about 1.7. Certainly, these results will be of interest to other scientific groups in academia and industry that would like to improve their own simulation codes. In order to develop a molecular receptor, or choose from already existing ones, that fits certain needs, one must have a good knowledge of non-covalent host-guest interactions. One also wants to have control over the capture/release process via the environment of the receptor (pH, salt concentration, etc.). Chapter 5 is devoted to molecular dynamics simulations performed to study the microscopic structure and dynamics of cations bound to cucurbit[6]uril (CB[6]) in water and in aqueous solutions of sodium, potassium, and calcium chloride. The molarities are 0.183 M for the salts and 0.0184 M for CB[6]. The cations bind only to CB[6] carbonyl oxygens; they are never found inside the CB[6] cavity. Complexes with Na+ and K+ mostly involve one cation, whereas with Ca2+ single- and double-cation complexes are formed in similar proportions. The binding dynamics strongly depends on the type of cation. A smaller size or higher charge increases the residence time of a cation at a given carbonyl oxygen. The diffusion dynamics also corresponds to the binding strength of the cations: the stronger the binding, the slower the diffusion and reorientation dynamics. When bound to CB[6], sodium and potassium cations jump mainly between nearest or second-nearest neighbours. Calcium shows no hopping dynamics; it is coordinated predominantly by one CB[6] oxygen.
A few water molecules (zero to four) can occupy the CB[6] cavity, which is delimited by the CB[6] oxygen faces. Their residence time is hardly influenced by sodium and potassium ions. In the case of calcium, the residence time of the inner water increases notably. A simple structural model of the cations acting as “lids” over the CB[6] portal cannot, however, be confirmed. The slowing of the water exchange by the ions is a consequence of the generally slower dynamics in their presence and of their stable solvation shells. A study of the binding behaviour of simple hydrophobic (Lennard-Jones) particles by CB[6] showed that these particles do not bind. A simple test showed that the size of the hydrophobic particles is important for stable encapsulation. Another challenging field of research is that of metal-organic interfaces. Transition metals in particular are more difficult, as they form chemical bonds, though sometimes very weak ones, with a large number of organic compounds. In Chapter 6, a molecular dynamics model and its parameterization procedure are devised and used to study the adsorption of isopropanol on the platinum(111) (Pt(111)) surface in the unsaturated and oversaturated coverage regimes. Static and dynamic properties of the interface between Pt(111) and liquid isopropanol are also investigated. The magnitude of the adsorption energy at unsaturated levels increases at higher coverages. At oversaturated coverage (multilayer adsorption) the adsorption energy reduces, which coincides with the findings of Panja et al. in their temperature-programmed desorption experiment (ref. 25). The density analysis showed a strong packing of molecules at the interface, followed by a depletion layer and then by an oscillating density profile up to 3 nm. The distribution of individual atom types showed that the first adsorbed layer forms a hydrophobic methyl “brush”. This “brush” then determines the distributions further from the surface. In the second layer, methyl and methine groups are closer to the surface and are followed by the hydroxyl groups; the third layer has exactly the inverted distribution. The alternating pattern extends up to about 2 nm from the surface. The orientational structure of the molecules as a function of distance from the surface is determined by the atom distribution and, surprisingly, does not depend on the electrostatic or chemical interactions of isopropanol with the metal surface. However, the possible formation of hydrogen bonds in the first layer is notably influenced by these interactions. The surface-adsorbate interactions influence the mobility of isopropanol molecules only in the first layer. Mobility in the higher layers is independent of these interactions. Finally, Chapter 7 summarizes the main conclusions of the studies presented in this thesis and outlines perspectives for future research.
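Since Chapter 4 singles out neighbour-list construction as one of the most CPU-time-consuming parts, here is a generic cell-list neighbour search sketched in Python for illustration (YASP itself is a FORTRAN/OpenMP code; the box size, cutoff and particle count are assumptions):

# Cell-list neighbour search: particles are binned into cells of the cutoff
# size, so neighbour candidates only need to be checked in adjacent cells.
import itertools
import numpy as np

rng = np.random.default_rng(4)
box, cutoff, n = 5.0, 1.0, 300
pos = rng.uniform(0.0, box, size=(n, 3))

ncell = int(box // cutoff)
cell_size = box / ncell

# Bin particle indices by integer cell coordinates.
cells = {}
for i, p in enumerate(pos):
    key = tuple((p // cell_size).astype(int) % ncell)
    cells.setdefault(key, []).append(i)

# Collect pairs closer than the cutoff (minimum-image periodic boundaries).
pairs = set()
for key, members in cells.items():
    for offset in itertools.product((-1, 0, 1), repeat=3):
        nkey = tuple((k + o) % ncell for k, o in zip(key, offset))
        for i in members:
            for j in cells.get(nkey, []):
                if i < j:
                    d = pos[i] - pos[j]
                    d -= box * np.round(d / box)      # minimum image
                    if np.dot(d, d) < cutoff ** 2:
                        pairs.add((i, j))

print("neighbour pairs within cutoff:", len(pairs))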
APA, Harvard, Vancouver, ISO, and other styles
13

Fong, Heung Wah. "Editing explosion simulations." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?COMP%202004%20FONG.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 69-71). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
14

Jain, Sunny. "Hypersonic nonequilibrium flow simulations over a blunt body using BGK simulations." College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Lyu, Yeonhwan. "Simulations and Second / Foreign Language Learning: Improving communication skills through simulations." See Full Text at OhioLINK ETD Center (Requires Adobe Acrobat Reader for viewing), 2006. http://www.ohiolink.edu/etd/view.cgi?toledo1147363791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Buchta, Christian, and Sara Dolnicar. "Learning by simulation. Computer simulations for strategic marketing decision support in tourism." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2003. http://epub.wu.ac.at/1718/1/document.pdf.

Full text
Abstract:
This paper describes the use of corporate decision and strategy simulations as a decision-support instrument under varying market conditions in the tourism industry. It goes on to illustrate this use of simulations with an experiment which investigates how successful different market segmentation approaches are in destination management. The experiment assumes a competitive environment and various cycle-length conditions with regard to budget and strategic planning. Computer simulations prove to be a useful management tool, allowing customized experiments which provide insight into the functioning of the market and therefore represent an interesting tool for managerial decision support. The main drawback is the initial setup of a customized computer simulation, which is time-consuming and involves defining parameters with great care in order to represent the actual market environment and to avoid excessive complexity in testing cause-effect-relationships. (author's abstract)
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
APA, Harvard, Vancouver, ISO, and other styles
17

Leveugle, Benoît. "Simulation DNS de l’interaction flamme-paroi dans les moteurs à allumage commandé." Thesis, Rouen, INSA, 2012. http://www.theses.fr/2012ISAM0021/document.

Full text
Abstract:
Dans le cadre du projet INTERMARC (INTERaction dans les Moteurs à Allumage Commandé), la tâche du CORIA a consisté à produire une base de données à l'échelle RANS (provenant de données DNS) afin de tester, valider et modifier le modèle d'interaction développé par IFPen. Ce modèle vise l'ajout d'une composante d'interaction, phénomène non pris en compte par les lois de paroi actuelles. Ce projet repose sur l'interaction forte entre les différents protagonistes présents. Le CORIA et le CETHIL ont travaillé ensemble à la réalisation d'une base de données pour tester les modèles initiaux proposés par IFPen, puis en fonction des résultats obtenus, à itérer avec IFPen pour modifier et améliorer les modèles. Ces tests ont inclus des simulations 2D laminaires, 2D turbulentes, et 3D turbulentes.
Under the INTERMARC project (flame-wall interaction in spark ignition engines), CORIA's task was to produce a database at RANS scale (from DNS data) to test, validate and modify the interaction model developed by IFPEN. This model aims to add the interaction phenomenon, which is not captured by the current wall laws. The project is based on the strong interaction between the different actors. CORIA and CETHIL worked together on the creation of the database, where experimental data were also used to validate the results of the DNS code. CORIA then used this database to test the original model proposed by IFPEN and then, according to the results obtained, iterated with IFPEN to modify and improve the models. These tests included laminar 2D simulations, turbulent 2D simulations and turbulent 3D simulations.
APA, Harvard, Vancouver, ISO, and other styles
18

Celik, Nurcin. "INTEGRATED DECISION MAKING FOR PLANNING AND CONTROL OF DISTRIBUTED MANUFACTURING ENTERPRISES USING DYNAMIC-DATA-DRIVEN ADAPTIVE MULTI-SCALE SIMULATIONS (DDDAMS)." Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/195427.

Full text
Abstract:
Discrete-event simulation has become one of the most widely used analysis tools for large-scale, complex and dynamic systems such as supply chains, as it can take randomness into account and handle very detailed models. However, major challenges are faced in simulating such systems, especially when they are used to support short-term decisions (e.g., the operational decisions or maintenance and scheduling decisions considered in this research). First, a detailed simulation requires significant amounts of computation time. Second, given the enormous amount of dynamically changing data that exists in the system, information needs to be updated wisely in the model in order to prevent unnecessary usage of computing and networking resources. Third, there is a lack of methods allowing dynamic data updates during the simulation execution. Overall, in a simulation-based planning and control framework, timely monitoring, analysis, and control are important so as not to disrupt a dynamically changing system. To meet this temporal requirement and address the above-mentioned challenges, a Dynamic-Data-Driven Adaptive Multi-Scale Simulation (DDDAMS) paradigm is proposed to adaptively adjust the fidelity of a simulation model against available computational resources by incorporating dynamic data into the executing model, which then steers the measurement process for selective data update. To the best of our knowledge, the proposed DDDAMS methodology is one of the first efforts to present a coherent integrated decision making framework for timely planning and control of distributed manufacturing enterprises. To this end, a comprehensive system architecture and methodologies are first proposed, where the components include 1) the real-time DDDAM-Simulation, 2) grid computing modules, 3) a Web Service communication server, 4) a database, 5) various sensors, and 6) the real system. Four algorithms are then developed and embedded into a real-time simulator to enable its DDDAMS capabilities, such as abnormality detection, fidelity selection, fidelity assignment, and prediction and task generation. As part of the developed algorithms, improvements are made to the resampling techniques for sequential Bayesian inferencing, and their performance is benchmarked in terms of resampling quality and computational efficiency. Grid computing and Web Services are used for computational resources management and inter-operable communications among distributed software components, respectively. A prototype of the proposed DDDAM-Simulation was successfully implemented for preventive maintenance scheduling and part routing scheduling in a semiconductor manufacturing supply chain, where the results look quite promising.
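One of the standard resampling schemes for the sequential Bayesian inferencing mentioned above is systematic resampling; a generic textbook version in Python (not the algorithm benchmarked in the thesis) looks like this:

# Systematic resampling: one stratified uniform draw selects particle indices
# with probability proportional to their weights.
import numpy as np

def systematic_resample(weights: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return indices of resampled particles, proportional to the weights."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights / weights.sum())
    cumulative[-1] = 1.0          # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(5)
w = rng.uniform(size=1000)
idx = systematic_resample(w, rng)
print("unique particles kept:", len(np.unique(idx)))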
APA, Harvard, Vancouver, ISO, and other styles
19

Stucki, Pascal. "Obstacles in pedestrian simulations." Zürich : ETH, Eidgenössische Technische Hochschule Zürich, Department of Computer Science, 2003. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Barakat, Firas Risnes. "Simulations of imitative learning." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10108.

Full text
Abstract:

This Master's thesis presents simulations within the field of imitative learning. The thesis starts with a review of the work done in my depth study, looking at imitative learning in general. Further, forward and inverse models are studied, and a case study of a Wolpert et al. article is carried out. An architecture using the recurrent neural network with parametric bias (RNNPB) and a PID controller by Tani et al. is presented, and later simulated using MATLAB and the breve simulation environment. It is tested whether the RNNPB is suitable for imitative learning. The first experiment was quite successful, and interesting results were discovered. The second experiment was less successful. Generally, it was confirmed that the RNNPB is able to reproduce actions, interact with the environment, and indicate situations using the parametric bias (PB). It was also observed that the PB values tend to reflect common characteristics in similar training patterns. A comparison between the forward and inverse model and the RNNPB model was made. The former appears to be more modular and a predictor of the consequences of actions, while the latter predicts sequences and is able to represent the situation it is in. The work done to connect MATLAB and breve is also presented.

APA, Harvard, Vancouver, ISO, and other styles
21

Feldmeier, Achim, Wolf-Rainer Hamann, D. Rätzel, and Lidia M. Oskinova. "Hydrodynamic simulations of clumps." Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2008/1797/.

Full text
Abstract:
Clumps in hot star winds can originate from shock compression due to the line driven instability. One-dimensional hydrodynamic simulations reveal a radial wind structure consisting of highly compressed shells separated by voids, and colliding with fast clouds. Two-dimensional simulations are still largely missing, despite first attempts. Clumpiness dramatically affects the radiative transfer and thus all wind diagnostics in the UV, optical, and in X-rays. The microturbulence approximation applied hitherto is currently superseded by a more sophisticated radiative transfer in stochastic media. Besides clumps, i.e. jumps in the density stratification, so-called kinks in the velocity law, i.e. jumps in dv/dr, play an eminent role in hot star winds. Kinks are a new type of radiative-acoustic shock, and propagate at super-Abbottic speed.
APA, Harvard, Vancouver, ISO, and other styles
22

Sigurjonsson, Kjartan Örn. "Dual gradient drilling simulations." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for petroleumsteknologi og anvendt geofysikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18362.

Full text
Abstract:
The system studied in this thesis is called the Low Riser Return system; it uses a partly filled marine drilling riser with a variable mud level which is used to control the bottom-hole pressure. Initially, the main components of the Low Riser Return system are listed and explained. Then the performance characteristics of the system are explored. Level movements in the riser during level increase and decrease at constant mud pump rates are explained, along with the effect of the mud pump rate on the maximum level increase and decrease rates. A simple simulator is then presented that calculates the bottom-hole pressure when pump rates are changed. The simulator includes a function that enables it to simulate lost circulation scenarios. The simulator is used to simulate some preferred scenarios. First, a pressure increase and decrease at constant mud pump rates are simulated. Then it is shown how a faster pressure decrease can be achieved by temporarily lowering the mud pump rate. Next, simulations are shown where changes in mud level are used to compensate for changes in equivalent circulation density as mud pump rates are changed. Finally, simulations are run that demonstrate how the mud level can be reduced to cure lost circulation scenarios. Results and lessons learned are then discussed.
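The basic relation behind controlling bottom-hole pressure with a variable mud level is hydrostatic: p = rho * g * h, with h the height of the mud column. A back-of-the-envelope Python sketch with assumed numbers (not the thesis simulator) shows how lowering the level in the riser lowers the bottom-hole pressure:

# Static bottom-hole pressure for a mud column shortened by an air gap in the riser.
G = 9.81                 # m/s^2

def bottom_hole_pressure(mud_density, well_depth, riser_air_gap):
    """Static pressure (Pa) from a mud column of height well_depth - riser_air_gap."""
    return mud_density * G * (well_depth - riser_air_gap)

rho = 1500.0             # kg/m^3
depth = 3000.0           # m, total vertical depth
for gap in (0.0, 100.0, 300.0):     # m of empty riser above the mud level
    p = bottom_hole_pressure(rho, depth, gap)
    print(f"air gap {gap:5.0f} m -> bottom-hole pressure {p/1e5:7.1f} bar")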
APA, Harvard, Vancouver, ISO, and other styles
23

Banon, Navarro Alejandro. "Gyrokinetic large Eddy simulations." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209592.

Full text
Abstract:
Le transport anormal de l'énergie observé en régime turbulent joue un rôle majeur dans les propriétés de stabilité des plasmas de fusion par confinement magnétique, dans des machines comme ITER. En effet, la turbulence plasma est intimement corrélée au temps de confinement de l'énergie, un point clé des recherches en fusion thermonucléaire.

Du point de vue théorique, la turbulence plasma est décrite par les équations gyrocinétiques, un ensemble d'équations aux dérivées partielles non linéaires couplées. Par suite des très différentes échelles spatiales mises en jeu dans des conditions expérimentales réelles, une simulation numérique directe et complète (DNS) de la turbulence gyrocinétique est totalement hors de portée des plus puissants calculateurs actuels, de sorte que démontrer la faisabilité d'une alternative permettant de réduire l'effort numérique est primordiale. En particulier, les simulations de grandes échelles (« Large-Eddy Simulations », LES) constituent un candidat pertinent pour permettre une telle réduction. Les techniques LES ont initialement été développées pour les simulations de fluides turbulents à haut nombre de Reynolds. Dans ces simulations, les plus grandes échelles sont explicitement simulées numériquement, alors que l'influence des plus petites est prise en compte via un modèle implémenté dans le code.

Cette thèse présente les premiers développements de techniques LES dans le cadre des équations gyrocinétiques (GyroLES). La modélisation des plus petites échelles est basée sur des bilans d'énergie libre. En effet, l'énergie libre joue un rôle important dans la théorie gyrocinétique car elle en est un invariant non linéaire bien connu. Il est démontré que sa dynamique partage de nombreuses propriétés avec le transfert d'énergie dans la turbulence fluide. En particulier, il est montré l'existence d'une cascade d'énergie libre, fortement locale et dirigée des grandes échelles vers les petites, dans le plan perpendiculaire à celui du champ magnétique ambiant.

La technique GyroLES est aujourd'hui implantée dans le code GENE et a été testée avec succès pour les instabilités de gradient de température ionique (ITG), connues pour jouer un rôle crucial dans la micro-turbulence gyrocinétique. A l'aide des GyroLES, le spectre du flux de chaleur obtenu dans des simulations à très hautes résolutions est correctement reproduit, et ce avec un gain d'un facteur 20 en termes de coût numérique. Pour ces raisons, les simulations gyrocinétiques GyroLES sont potentiellement un excellent candidat pour réduire l'effort numérique des codes gyrocinétiques actuels.

Anomalous transport due to plasma micro-turbulence is known to play an important role in confinement properties of magnetically confined fusion plasma devices such as ITER. Indeed, plasma turbulence is strongly connected to the energy confinement time, a key issue in thermonuclear fusion research. Plasma turbulence is described by the gyrokinetic equations, a set of nonlinear partial differential equations. Due to the various scales characterizing the turbulent fluctuations in realistic experimental conditions, Direct Numerical Simulations (DNS) of gyrokinetic turbulence remain close to the computational limit of current supercomputers, so that any alternative is welcome to decrease the numerical effort. In particular, Large-Eddy Simulations (LES) are a good candidate for such a decrease. LES techniques have been devised for simulating turbulent fluids at high Reynolds number. In these simulations, the large scales are computed explicitly while the influence of the smallest scales is modeled.

In this thesis, we present for the first time the development of the LES for gyrokinetics (GyroLES). The modeling of the smallest scales is based on free energy diagnostics. Indeed, free energy plays an important role in gyrokinetic theory, since it is known to be a nonlinear invariant. It is shown that its dynamics share many properties with the energy transfer in fluid turbulence. In particular, one finds a (strongly) local, forward (from large to small scales) cascade of free energy in the plane perpendicular to the background magnetic field.

The GyroLES technique is implemented in the gyrokinetic code Gene and successfully tested for the ion temperature gradient instability (ITG), since ITG is suspected to play a crucial role in gyrokinetic micro-turbulence. Employing GyroLES, the heat flux spectra obtained from highly resolved direct numerical simulations are recovered. It is shown that GyroLES runs give a factor of 20 gain in computational time. For this reason, gyrokinetic large eddy simulations can be considered a serious candidate to reduce the numerical cost of gyrokinetic simulations.
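The LES idea described above, resolving large scales explicitly while modelling the small ones, can be illustrated with a one-line scale separation; the Python sketch below (generic, unrelated to the GENE implementation) splits a 1D field into resolved and unresolved parts with a spectral cutoff:

# Split a field into resolved large scales and the small scales a closure would model.
import numpy as np

rng = np.random.default_rng(6)
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(3 * x) + 0.2 * rng.standard_normal(N)   # large-scale wave plus fine noise

k_cut = 16                              # keep only wavenumbers |k| <= k_cut
u_hat = np.fft.fft(u)
k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers
u_large = np.real(np.fft.ifft(np.where(np.abs(k) <= k_cut, u_hat, 0.0)))
u_small = u - u_large                   # the part an LES closure would model

print("energy fraction in resolved scales:", np.sum(u_large**2) / np.sum(u**2))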
Doctorat en Sciences

APA, Harvard, Vancouver, ISO, and other styles
24

Sliozberg, Yelena R. Abrams Cameron F. "Molecular simulations of chaperonins." Philadelphia, Pa.: Drexel University, 2007. http://hdl.handle.net/1860/1871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Banfield, Robert E. "Learning on complex simulations." [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Wong, Jeffrey. "Simulations of Surfactant Spreading." Scholarship @ Claremont, 2011. http://scholarship.claremont.edu/hmc_theses/1.

Full text
Abstract:
Thin liquid films driven by surface tension gradients are studied in diverse applications, including the spreading of a droplet and fluid flow in the lung. The nonlinear partial differential equations that govern thin films are difficult to solve analytically, and must be approached through numerical simulations. We describe the development of a numerical solver designed to solve a variety of thin film problems in two dimensions. Validation of the solver includes grid refinement studies and comparison to previous results for thin film problems. In addition, we apply the solver to a model of surfactant spreading and make comparisons with theoretical and experimental results.
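The grid refinement studies mentioned above typically report an observed order of accuracy, p = log(e_coarse / e_fine) / log(r) for a refinement ratio r; a minimal Python sketch with hypothetical error values (not the thesis data) is:

# Estimate the convergence order from errors on two grids.
import math

def observed_order(err_coarse: float, err_fine: float, ratio: float = 2.0) -> float:
    """Observed order of accuracy from errors on a coarse grid and a refined grid."""
    return math.log(err_coarse / err_fine) / math.log(ratio)

# Hypothetical errors against a reference solution on grids of spacing h and h/2.
print("observed order:", observed_order(4.0e-3, 1.0e-3))   # ~2 for a 2nd-order scheme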
APA, Harvard, Vancouver, ISO, and other styles
27

Erikson, Lars. "Automatic Well Control Simulations." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for petroleumsteknologi og anvendt geofysikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18381.

Full text
Abstract:
Every year kick incidents occur, perhaps best remembered from the Macondo blowout in April 2010, which resulted in devastating oil spills throughout the Gulf of Mexico. Well control is one of the most important factors in any drilling operation, preventing disastrous blowouts in which people and the environment are affected. The development of new technologies has increased significantly, lowering the risk of blowouts, mostly because of the reliability of blowout preventers. Better hardware systems have been developed, and better materials have increased performance during critical parts of an operation. There are several reasons why kicks are encountered: not keeping the hole full, lost circulation, swabbing, underbalanced pressures, trapped fluids/pressures and mechanical failures. Before an actual kick, warning signs might occur, and knowing how to interpret positive indicators of a kick is very important. Pit gain, an increase in return flow rate and abnormalities in drillpipe pressure are all signs that formation fluid has entered the well. When experiencing a kick, procedures to reduce the danger and the non-productive time have to be started. First, the well has to be shut in by either the hard shut-in method or the soft shut-in method. Then the influx has to be circulated out of the well by the use of either the Driller's Method or the Wait and Weight Method. To better understand and visualize the behavior of formation fluid entering the well, the simulation program Drillbench Kick has been used. The soft shut-in has been compared against the hard shut-in, and the Driller's Method has been run against the Wait & Weight Method. The simulations have been performed with both oil-based mud and water-based mud.
APA, Harvard, Vancouver, ISO, and other styles
28

Raicevic, Milan. "Simulations of cosmic reionization." Thesis, Durham University, 2010. http://etheses.dur.ac.uk/323/.

Full text
Abstract:
In this thesis we investigate numerically how ionizing photons emitted by stars in galaxies cause the reionization of the Universe, the transition during which most of the gas in the Universe went from a mostly neutral state to the highly ionised state it is in today. To this end, we discuss and improve two techniques for the transport of ionising radiation across cosmological volumes, analyse the sources of ionising photons at high redshifts predicted by a semi-analytical galaxy formation model (GALFORM), and combine these to make a consistent model of how reionization proceeds. Our improvements to the hybrid characteristics (HC) radiative transport scheme are significant, making the code faster and more accurate, as demonstrated by our contribution to a code comparison paper (Iliev et al., 2009). Our improvements to the SimpleX radiative transport scheme allow for accurate and significantly better numerically converged calculations of the speeds of ionization fronts of cosmological HII regions. This is accomplished by a much more thorough analysis of how to properly model the density field on the unstructured grid used in SimpleX. The dependence of the ionizing emissivity of GALFORM galaxies on various parameters of the model is examined. We show that massive stars, formed in abundance during merger-triggered starbursts because of the top-heavy stellar initial mass function assumed in the Baugh et al. (2005) model, are the dominant source of ionizing photons. We show that the luminosity functions predicted by this model are in good agreement with the most recent Hubble Space Telescope results at z ≳ 8. The model also demonstrates that most photons are produced in faint galaxies which are not yet seen in the current data. We then combine the sources predicted by GALFORM with the SimpleX RT scheme to model inhomogeneous reionization, including the effects of source suppression. We investigate how the morphology of reionization depends on the model for the sources, which may be crucial for future observations of this cosmic epoch.
APA, Harvard, Vancouver, ISO, and other styles
29

Archer, T. D. "Computer simulations of calcite." Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.596141.

Full text
Abstract:
In this dissertation I have created and applied a parametric model for bulk carbonate materials. The new empirical model for carbonates is stable for a wide range of carbonate structures and reproduces experimental results with reasonable accuracy. To study the surface of calcite, the ab initio code SIESTA has been used. A new implementation has been introduced into the SIESTA code to allow the calculation of effective charges using the modern theory of polarisation. Using these charges, the long-range electrostatic effects, which are removed by the zero-electric-field boundary conditions, have been introduced into the phonon methodology, reproducing the LO-TO splitting within the calculated phonon modes near the Γ-point. Furthermore, the effective charges have been used in the calculation of the infrared intensity for each phonon mode. The SIESTA implementation of DFT relies upon the evaluation of the electron density on a real-space grid. Such discretization of the real-space integrals introduces an oscillatory error in the energy and forces, with the periodicity of the real-space grid. A method for reducing this error has been introduced. The SIESTA code with the new methodology has been used to study bulk calcite, the {211} calcite surface, and the interaction of water with the {211} surface. The structure and phonon frequencies for the bulk match well with experimental values. The {211} surface has been calculated, showing the response of the crystal in both the distortion of the ion positions and the electronic configuration. Surface relaxations and phonon frequencies show no symmetry-breaking reconstruction of the calcite {211} surface. Calculation of the interaction of water molecules with the {211} surface predicts the optimum position for water on the surface.
APA, Harvard, Vancouver, ISO, and other styles
30

Sak, Halis. "Efficient Simulations in Finance." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2008. http://epub.wu.ac.at/1068/1/document.pdf.

Full text
Abstract:
Measuring the risk of a credit portfolio is a challenge for financial institutions because of the regulations brought by the Basel Committee. In recent years many models and state-of-the-art methods, which utilize Monte Carlo simulation, were proposed to solve this problem. In most of the models, factors are used to account for the correlations between obligors. We concentrate on the normal copula model, which assumes multivariate normality of the factors. Computation of value at risk (VaR) and expected shortfall (ES) for realistic credit portfolio models is subtle, since (i) there is dependency throughout the portfolio; (ii) an efficient method is required to compute tail loss probabilities and conditional expectations at multiple points simultaneously. This is why Monte Carlo simulation must be improved by variance reduction techniques such as importance sampling (IS). Thus a new method is developed for simulating tail loss probabilities and conditional expectations for a standard credit risk portfolio. The new method is an integration of IS with inner replications using a geometric shortcut for dependent obligors in a normal copula framework. Numerical results show that the new method is better than naive simulation for computing tail loss probabilities and conditional expectations at a single x and VaR value. Finally, it is shown that, compared to the standard t statistic, the skewness-correction method of Peter Hall is a simple and more accurate alternative for constructing confidence intervals. (author's abstract)
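For orientation, the baseline that the importance-sampling method improves on is naive Monte Carlo in the one-factor normal copula model; a plain Python sketch with illustrative parameters (not the thesis code) is:

# Naive Monte Carlo for portfolio losses under a one-factor normal copula:
# each obligor defaults when its latent variable falls below a barrier.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)
n_sims, n_obligors = 10_000, 500
pd_default, rho, exposure = 0.01, 0.2, 1.0
threshold = 25.0
barrier = NormalDist().inv_cdf(pd_default)            # default barrier for the latent variable

z = rng.standard_normal(n_sims)                       # common (systematic) factor
eps = rng.standard_normal((n_sims, n_obligors))       # idiosyncratic factors
latent = np.sqrt(rho) * z[:, None] + np.sqrt(1.0 - rho) * eps
losses = exposure * (latent < barrier).sum(axis=1)

print("tail loss probability P(L > %.0f): %.4f" % (threshold, np.mean(losses > threshold)))
print("99%% VaR: %.0f" % np.quantile(losses, 0.99))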
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
31

Smith, Gregory K. "Simulations of chemical catalysis." Thesis, The University of New Mexico, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3612623.

Full text
Abstract:

This dissertation contains simulations of chemical catalysis in both biological and heterogeneous contexts. A mixture of classical, quantum, and hybrid techniques is applied to explore the energy profiles and compare possible chemical mechanisms within the context of human and bacterial enzymes, as well as in surface reactions on a metal catalyst. A brief summary of each project follows.

Project 1 - Bacterial Enzyme SpvC

The newly discovered SpvC effector protein from Salmonella typhimurium interferes with the host immune response by dephosphorylating mitogen-activated protein kinases (MAPKs) with a β-elimination mechanism. The dynamics of the enzyme substrate complex of the SpvC effector is investigated with a 3.2 ns molecular dynamics simulation, which reveals that the phosphorylated peptide substrate is tightly held in the active site by a hydrogen bond network and the lysine general base is positioned for the abstraction of the alpha hydrogen. The catalysis is further modeled with density functional theory (DFT) in a truncated active-site model at the B3LYP/6-31 G(d,p) level of theory. The truncated model suggested the reaction proceeds via a single transition state. After including the enzyme environment in ab initio QM/MM studies, it was found to proceed via an E1cB-like pathway, in which the carbanion intermediate is stabilized by an enzyme oxyanion hole provided by Lys104 and Tyr158 of SpvC.

Project 2 - Human Enzyme CDK2

Phosphorylation reactions catalyzed by kinases and phosphatases play an indispensable role in cellular signaling, and their malfunctioning is implicated in many diseases. Ab initio quantum mechanical/molecular mechanical studies are reported for the phosphoryl transfer reaction catalyzed by a cyclin-dependent kinase, CDK2. Our results suggest that an active-site Asp residue, rather than ATP as previously proposed, serves as the general base to activate the Ser nucleophile. The corresponding transition state features a dissociative, metaphosphate-like structure, stabilized by the Mg(II) ion and several hydrogen bonds. The calculated free-energy barrier is consistent with experimental values.

Project 3 - Bacterial Enzyme Anthrax Lethal Factor

In this dissertation, we report a hybrid quantum mechanical and molecular mechanical study of the catalysis of anthrax lethal factor, an important first step in designing inhibitors to help treat this powerful bacterial toxin. The calculations suggest that the zinc peptidase uses the same general base-general acid mechanism as in thermolysin and carboxypeptidase A, in which a zinc-bound water is activated by Glu687 to nucleophilically attack the scissile carbonyl carbon in the substrate. The catalysis is aided by an oxyanion hole formed by the zinc ion and the side chain of Tyr728, which provide stabilization for the fractionally charged carbonyl oxygen.

Project 4 - Methanol Steam Reforming on PdZn alloy

Recent experiments suggested that PdZn alloy on ZnO support is a very active and selective catalyst for methanol steam reforming (MSR). Plane-wave density functional theory calculations were carried out on the initial steps of MSR on both PdZn and ZnO surfaces. Our calculations indicate that the dissociation of both methanol and water is highly activated on flat surfaces of PdZn such as (111) and (100), while the dissociation barriers can be lowered significantly by surface defects, represented here by the (221), (110), and (321) faces of PdZn. The corresponding processes on the polar Zn-terminated ZnO(0001) surfaces are found to have low or null barriers. Implications of these results for both MSR and low temperature mechanisms are discussed.

APA, Harvard, Vancouver, ISO, and other styles
32

Hamilton, Angela. "Simulations for Financial Literacy." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5235.

Full text
Abstract:
Financially literate consumers are empowered with the knowledge and skills necessary to make sound financial decisions that ensure their long-term economic well-being. Within the context of the range of cognitive, psychological, and social factors that influence consumer behavior, simulations enhance financial literacy by developing consumers' mental models for decision-making. Technical communicators leverage plain language and visual language techniques to communicate complex financial concepts in ways that consumers can relate to and understand. Simulations for financial education and decision support illustrate abstract financial concepts, provide a means of safe experimentation, and allow consumers to make informed choices based on a longitudinal comparison of decision outcomes. Technical communicators develop content based on best practices and conduct evaluations to ensure that simulations present information that is accessible, usable, and focused on the end-user. Potential simulation formats range from low- to high-fidelity. Low-fidelity simulations present static data in print or digital formats. Mid-fidelity simulations provide digital interactive decision support tools with dynamic user inputs. More complex high-fidelity simulations use narrative and dramatic elements to situate learning in applied contexts.
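To make the mid-fidelity category concrete, the following minimal Python sketch is the kind of decision-support calculation such a tool might run: it takes user-style inputs (a monthly deposit and two hypothetical product choices, both invented here) and prints a longitudinal comparison of outcomes over several horizons.

    def future_value(monthly_deposit, annual_rate, years):
        """Future value of an ordinary monthly savings plan with monthly compounding."""
        r = annual_rate / 12.0
        months = years * 12
        return monthly_deposit * (((1 + r) ** months - 1) / r) if r else monthly_deposit * months

    # Longitudinal comparison of two hypothetical choices a user might enter.
    for label, deposit, rate in [("savings account", 200, 0.01), ("index fund", 200, 0.05)]:
        outcomes = ", ".join(f"{y}y = {future_value(deposit, rate, y):,.0f}" for y in (5, 10, 20, 30))
        print(f"{label:15s}: {outcomes}")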
ID: 031001493; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Adviser: Dan Jones; Title from PDF title page (viewed July 25, 2013); Thesis (M.A.)--University of Central Florida, 2012; Includes bibliographical references (p. 76-80).
M.A.
Masters
English
Arts and Humanities
English; Technical Communications
APA, Harvard, Vancouver, ISO, and other styles
33

Raveendran, Karthik. "Control of liquid simulations." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53011.

Full text
Abstract:
Over the last decade, advances in fluid simulation and rendering have helped animators synthesize photorealistic shots for movies that would have been virtually impossible to create by manually animating the liquid. Despite the advent of these computational methods, fluid simulation in movie production still involves a large degree of trial and error. In this dissertation, we propose a set of techniques for creating animations of liquids that meet desired artistic criteria without the customary tuning of numerous physical parameters. The basis for our work is the mesh-based representation of the liquid surface which lends itself to efficient algorithms that can control the output of simulations. First, we show how an animator can create animated characters and shapes that behave as if they were made of water using our mesh-based control method. Our approach allows for multiple levels of control over the simulation, ranging from the overall tracking of the desired shapes to highly detailed secondary effects. Next, we present a novel technique for interpolating between fluid simulations with free surfaces. We construct 4D spacetime meshes from animations and register them using a non-rigid ICP algorithm. By incorporating user input to align visually important regions, we can produce plausible animations that look like a blend of the two input sequences, all without re-simulating the fluid. We demonstrate how this could have applications in pre-visualization and video games.
APA, Harvard, Vancouver, ISO, and other styles
34

Jennings, Vincent Louis. "Computational simulations of titanates." Thesis, University of Warwick, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.398737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Chappell, Helen Fiona. "Atomistic simulations of hydroxyapatite." Thesis, University of Cambridge, 2007. https://www.repository.cam.ac.uk/handle/1810/290022.

Full text
Abstract:
The aim of this work was to provide a deeper understanding of the electronic and geometrical structures of HA when substituted by various ions considered important in the field of biomaterials. Calculations were carried out using Density Functional Theory (DFT) on bulk and surface substituted-HA structures. Particular attention is given to the substitution of phosphate ions by silicate ions. Bulk structures are investigated with supercells and the Virtual Crystal Approximation, which simulates low concentrations (up to 2.8 wt%) of silicon in the unit cell. The amount of silicon that can be substituted into a single cell is limited by the need for charge compensation, as the silicate ion has a formal charge of -4 and phosphate -3. Charge compensation is therefore explored, showing that hydroxyl-deficient HA is more favourable than stoichiometric HA when silicon is introduced. The HA-britholite-(Y) solid state series is also investigated, using geometry optimisation and theoretical NMR spectra, as a potential way of increasing the silicon content of a unit cell by charge compensating with the replacement of a +2 calcium ion by a +3 yttrium ion. Further substitutions of titanium and magnesium are also thoroughly investigated with the single unit cell model. An HA (100) surface slab is also constructed and electronically optimised. This model is used in the study of surface structures and interactions and is compared to previous experimental and theoretical results. Substitution of silicon into the surfaces is investigated in addition to protonation of surface phosphate and silicate ions and the adsorption of a glutamic acid fragment.
APA, Harvard, Vancouver, ISO, and other styles
36

Nagy, Lesleis. "Parallelisation of micromagnetic simulations." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20433.

Full text
Abstract:
The field of paleomagnetism attempts to understand in detail the processes of the Earth by studying naturally occurring magnetic samples. These samples are quite unlike those fabricated in the laboratory. They have irregular shapes; they have been squeezed and stretched, heated and cooled, and subjected to oxidation. However, micromagnetic modelling allows us to simulate such samples and gain some understanding of how a paleomagnetic signal is acquired and how it is retained. Micromagnetics provides a theory for understanding how the domain structure of a magnetic sample alters subject to what it is made from and the environment that it is in. It furnishes the mathematics that describe the energy of a given domain structure and how that domain structure evolves in time. Combining micromagnetics and ever increasing computer power, it has been possible to produce simulations of small to medium size grains within the so-called single to pseudo-single domain state range. However, processors are no longer built with increasing speed but with increasing parallelism, and it is this that must be exploited to model larger and larger paleomagnetic samples. The purpose of the work presented here is twofold. Firstly, a micromagnetics code that is parallel and scalable is presented. This code is based on FEniCS, an existing finite element framework, and is shown to run on ARCHER, the UK's national supercomputing service. The strategy of using existing libraries and frameworks allows future extension and inclusion of new science in the code base. In order to achieve scalability, a spatial mapping technique is used to calculate the demagnetising field - the most computationally intensive part of micromagnetic calculations. This allows grain geometries to be partitioned in such a way that no global communication is required between parallel processes - the source of favourable scaling behaviour. The second part of the thesis presents an exploration of domain state evolution in increasing sizes of magnetite grains. This simulation, whilst a first approximation that excludes magneto-elastic effects, is the first attempt to map out the transition from pseudo-single domain states to multi-domain states using a full micromagnetic simulation.
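None of the finite-element machinery fits in a short example, but the toy Python sketch below integrates the Landau-Lifshitz-Gilbert equation for a single macrospin in a constant effective field, purely to illustrate the kind of time evolution a micromagnetic code steps through; the reduced units, damping constant, field and time step are arbitrary assumptions and are unrelated to the ARCHER/FEniCS implementation described in the abstract.

    import numpy as np

    gamma, alpha, dt = 1.0, 0.1, 0.01           # reduced units; arbitrary damping and time step
    H = np.array([0.0, 0.0, 1.0])               # constant effective field along z (assumed)
    m = np.array([1.0, 0.0, 0.0])               # initial unit magnetisation along x

    for step in range(5000):
        mxH = np.cross(m, H)
        dmdt = -gamma / (1.0 + alpha**2) * (mxH + alpha * np.cross(m, mxH))
        m = m + dt * dmdt
        m /= np.linalg.norm(m)                  # renormalise: |m| = 1

    print("final magnetisation:", m)            # relaxes toward the field direction (z)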
APA, Harvard, Vancouver, ISO, and other styles
37

Öqvist, Mona. "Numerical simulations of wear." Licentiate thesis, Luleå tekniska universitet, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26185.

Full text
Abstract:
The objective of this licentiate thesis was to study the effect of tool wear in sheet metal forming and how the wear process can be simulated in an efficient manner. Three papers are appended to this licentiate thesis. Paper A covers the influence of tool geometry in deep drawing. Paper B describes how wear can be calculated with finite element analysis: the wear of a steel cylinder oscillating against a steel plate was studied experimentally, and the worn shape of the cylinder was then compared with a numerical simulation of the shape. Paper C shows how numerical simulations can be used to simulate wear of deep drawing tools. The wear of two different deep drawing tools has been investigated. The shapes of the tools before and after wear have been compared, as well as the stresses and strains in the formed cups.
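The appended papers detail the finite-element approach; as a rough illustration of the incremental wear update that such simulations typically perform, the Python sketch below applies Archard's wear law to an assumed set of nodal contact pressures. The wear coefficient, pressures and sliding increment are invented for the example, and the geometry/pressure update that a coupled FE simulation would carry out each cycle is only indicated by a comment.

    import numpy as np

    # Archard's law in incremental form: dh = k * p * ds
    # (h: wear depth, p: contact pressure, ds: sliding distance)
    k_wear = 1.0e-7                                           # illustrative wear coefficient [1/MPa]
    pressures = np.array([120.0, 90.0, 60.0, 30.0, 10.0])     # assumed nodal contact pressures [MPa]
    slide_per_cycle = 2.0                                      # assumed sliding distance per cycle [mm]
    n_cycles = 1000

    wear_depth = np.zeros_like(pressures)
    for cycle in range(n_cycles):
        wear_depth += k_wear * pressures * slide_per_cycle     # accumulate wear at each node
        # A coupled simulation would update the tool geometry here and recompute
        # the contact pressures with a new finite element solution.

    print("wear depth per node after", n_cycles, "cycles [mm]:", wear_depth)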
Approved; 2000; 20070317 (ysko)
APA, Harvard, Vancouver, ISO, and other styles
38

Penny, Matthew Thomas. "Simulations of gravitational microlensing." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/simulations-of-gravitational-microlensing(ddc24ee8-c2a6-432a-a168-7d56662a9247).html.

Full text
Abstract:
Gravitational microlensing occurs when a massive lens (typically a star) deflects light from a more distant source, creating two unresolvable images that are magnified. The effect is transient due to the motions of the lens and source, and the changing magnification gives rise to a characteristic lightcurve. If the lensing object is a binary star or planetary system, more images are created and the lightcurve becomes more complicated. Detection of these lightcurve features allows the lens companion's presence to be inferred. Orbital motion of the binary lens can be detected in some microlensing events, but the expected fraction of events which show orbital motion has not been known previously. We use simulations of orbiting-lens microlensing events to determine the fraction of binary-lens events that are expected to show orbital motion. We also use the simulations to investigate the factors that affect this detectability. Following the discovery of some rapidly-rotating lenses in the simulations, we investigate the conditions necessary to detect lenses that undergo a complete orbit during a microlensing event. We find that such events are detectable and that they should occur at a low but detectable rate. We also derive approximate expressions to estimate the lens parameters, including the period, from the lightcurve. Measurement of the orbital period can in some cases allow the lens mass to be measured. Finally we develop a comprehensive microlensing simulator, MaBμLS, that uses the output of the Besançon Galaxy model to produce synthetic images of Galactic starfields. Microlensing events are added to the images and photometry of their lightcurves simulated. We apply these simulations to a proposed microlensing survey by the Euclid space mission to estimate its planet detection yield.
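The standard point-source point-lens magnification behind such lightcurves is compact enough to sketch in Python; the event parameters below (impact parameter, Einstein time, epoch of peak) are arbitrary and unrelated to the MaBμLS simulator itself.

    import numpy as np

    def pspl_magnification(t, t0, tE, u0):
        """Paczynski point-source point-lens magnification A(t)."""
        u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)   # lens-source separation in Einstein radii
        return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

    t = np.linspace(-40.0, 40.0, 9)                 # days, arbitrary sampling
    print(pspl_magnification(t, t0=0.0, tE=20.0, u0=0.1))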
APA, Harvard, Vancouver, ISO, and other styles
39

Tara, Sylvia. "Computer simulations of acetylcholinesterase." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9908501.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Chi-Shao. "Equatorial entrainment zone simulations." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA237234.

Full text
Abstract:
Thesis (M.S. in Physical Oceanography)--Naval Postgraduate School, June 1990.
Thesis Advisor(s): Garwood, Roland W. Second Reader: Chu, Pecheng. "June 1990." Description based on signature page as viewed on October 19, 2009. DTIC Identifier(s): Air water interactions, ocean models, ocean currents, entrainment, ocean circulation, heat flux, wind velocity, mathematical prediction, equatorial regions, theses. Author(s) subject terms: Air-sea interaction, equatorial circulation. Includes bibliographical references (p. 73-75). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
41

Prière, Céline Poinsot Thierry. "Simulations aux grandes échelles." Toulouse : INP Toulouse, 2005. http://ethesis.inp-toulouse.fr/archive/00000098.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ramakrishnan, Siddharth. "Proving Ground Durability Simulations." Thesis, KTH, Maskinkonstruktion (Avd.), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259664.

Full text
Abstract:
Virtual durability simulations have been explored in the automotive industry to complement physical testing in designing durable vehicles. Simulations are useful for checking the validity of a design before even building a prototype of the vehicle. They are also useful for checking the effect of changes in vehicle design on the durability of the vehicle. Buses are designed and tested for durability before they are sold to customers. Bus manufacturers use special test tracks consisting of different kinds of maneuvers/obstacles to test the buses for durability. Proving ground durability test schedules define the combination of different test track maneuvers/obstacles on which the bus is to be run. The test schedules are created to achieve accelerated fatigue damage in the bus comparable with the fatigue damage occurring in typical customer usage. This thesis is an attempt to check whether a proving ground durability test schedule can be simulated in a computer. A multibody dynamic model of the bus with its constituent subsystems is built in the multibody simulation software MSC ADAMS. Subsystems such as the bus chassis frame and axles are modeled as flexible, as their dynamic properties are assumed to influence the simulation results. The virtual bus is run on virtual versions of the test tracks. Loads at suspension torque rods, anti-roll bars and axles, and the displacements of the dampers, are extracted from the simulation. The load signals are post-processed to derive fatigue damage. The simulation model is compared with the test results of a single standard test track maneuver and is tuned by adjusting its parameters to match the test results of that maneuver. Finally, the tuned model is used to run the bus through a test schedule. The results achieved at the end of the thesis show that a well-tuned simulation model is necessary for simulating test schedules with sufficient accuracy. Comparisons with test results are to be treated with caution, as the condition of the test bus should be exactly the same as that of the simulation model, which is difficult to achieve. Future extension of the work involves improving the accuracy of the simulations and using simulations to iterate new kinds of maneuvers/obstacles to improve existing test schedules.
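The damage numbers mentioned above come from post-processing the simulated load signals; the Python sketch below shows only the final damage-summation step, applying an assumed Basquin-type S-N curve and the Palmgren-Miner rule to stress ranges and cycle counts of the kind that rainflow counting of those signals would produce. The S-N parameters, ranges and counts are illustrative, not values from the thesis.

    import numpy as np

    # Assumed Basquin S-N curve: N_f = C * S^(-m)
    C, m = 2.0e12, 3.0

    def miner_damage(stress_ranges, cycle_counts):
        """Linear (Palmgren-Miner) damage sum for counted stress ranges [MPa]."""
        n_allowed = C * np.asarray(stress_ranges, dtype=float) ** (-m)
        return float(np.sum(np.asarray(cycle_counts, dtype=float) / n_allowed))

    # Stress ranges and counts as they might come out of rainflow counting (invented).
    ranges = [200.0, 120.0, 60.0]
    counts = [50, 400, 5000]
    print("damage per test-schedule lap:", miner_damage(ranges, counts))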
APA, Harvard, Vancouver, ISO, and other styles
43

Beersing-Vasquez, Kiran. "Suturing in Surgical Simulations." Thesis, KTH, Numerisk analys, NA, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260254.

Full text
Abstract:
The goal of this project is to develop virtual surgical simulation software in order to simulate the suturing and knot-tying processes associated with surgical thread. State equations are formulated using Lagrangian mechanics, which is useful for the conservation of energy. Solver methods are developed using the theory of Differential Algebraic Equations (DAEs), which combine governing Ordinary Differential Equations (ODEs) with Algebraic Equation (AE) constraints. An implicit integration scheme and Newton's method are used to solve the system in each step. Furthermore, a collision response process based on the Linear Complementarity Problem (LCP) is implemented to handle collisions and measure their forces. Models have been developed to represent the different types of objects. A spline model is used to represent the suture and a mass-spring model for the tissue. Both were selected for their efficiency and their basis in real physical properties. The spline model was also chosen because it is continuous and can be evaluated at any point along its length. Other objects, such as rigid bodies, are also defined. The Lagrange multiplier method is used to define the constraints in the model, which allows for the construction of complex models. An important constraint is the suturing constraint, which is created when a sufficient force is applied by the suture tip onto the tissue. This constraint allows only a sliding point along the suture to pass through a specific point on the tissue. The result is a virtual suturing model which can be built on for use in surgical simulations. Further work could increase the performance, accuracy and scope of the simulator.
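The spline suture and tissue models are beyond a short example, but the same numerical machinery (a position-level constraint enforced by a Lagrange multiplier, an implicit step, and Newton's method) can be shown on a minimal toy problem: a point mass constrained to a fixed rod length, discretised with backward Euler. Everything below, including the mass, step size, and the use of a finite-difference Jacobian, is an assumption made for the sketch, not a description of the thesis software.

    import numpy as np

    m, g, L, h = 1.0, 9.81, 1.0, 1e-3                 # mass, gravity, rod length, time step (assumed)

    def residual(z, x_old, v_old):
        """Backward-Euler residual; z = (x, y, vx, vy, lambda), constraint g(x) = x.x - L^2 = 0."""
        x, v, lam = z[:2], z[2:4], z[4]
        f_ext = np.array([0.0, -m * g])
        r = np.empty(5)
        r[:2] = x - x_old - h * v                                 # implicit position update
        r[2:4] = m * (v - v_old) - h * (f_ext + 2.0 * lam * x)    # momentum balance with constraint force
        r[4] = x @ x - L**2                                       # algebraic constraint
        return r

    def implicit_step(x_old, v_old, z_guess, tol=1e-10):
        """Solve the nonlinear system with Newton's method and a finite-difference Jacobian."""
        z = z_guess.copy()
        for _ in range(20):
            r = residual(z, x_old, v_old)
            if np.linalg.norm(r) < tol:
                break
            J = np.empty((5, 5))
            eps = 1e-7
            for j in range(5):
                dz = np.zeros(5)
                dz[j] = eps
                J[:, j] = (residual(z + dz, x_old, v_old) - r) / eps
            z = z - np.linalg.solve(J, r)
        return z

    x, v = np.array([L, 0.0]), np.zeros(2)            # start horizontal, at rest
    z = np.concatenate([x, v, [0.0]])
    for step in range(2000):                          # simulate 2 seconds
        z = implicit_step(x, v, z)
        x, v = z[:2].copy(), z[2:4].copy()

    print("position:", x, " constraint violation:", x @ x - L**2)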
APA, Harvard, Vancouver, ISO, and other styles
44

Predari, Maria. "Load balancing for parallel coupled simulations." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0369/document.

Full text
Abstract:
Load balancing is an important step conditioning the performance of parallel applications. The goal is to distribute roughly equal amounts of computational load across a number of processors, while minimising interprocessor communication. A common approach to model the problem is based on graph structures and graph partitioning algorithms. Moreover, new challenges involve the simulation of more complex physical phenomena, where different parts of the computational domain exhibit different physical behavior. Such simulations follow the paradigm of multi-physics or multi-scale modeling approaches. Combining such different models in massively parallel computations while reaching high performance is still a challenge. Additionally, traditional load balancing algorithms are often inadequate, and more sophisticated solutions should be explored. In this thesis, we propose new graph partitioning algorithms that balance the load of such simulations, referred to as co-partitioning. We formulate this problem using graph partitioning with initially fixed vertices, which we believe efficiently represents the additional constraints of coupled simulations. We have therefore developed a direct algorithm for graph partitioning that successfully manages problems with fixed vertices. The algorithm is implemented inside the Scotch partitioner and a series of experiments were carried out on the DIMACS graph collection. Moreover, we propose three co-partitioning algorithms that respect the constraints of the respective coupled codes. We finally validated our algorithms by an experimental study comparing our methods with current strategies on artificial cases and on real-life coupled simulations.
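The multilevel algorithms implemented inside Scotch are far more sophisticated, but the constraint itself, namely that some vertices are pre-assigned to parts and must stay there, can be made concrete with a naive greedy sketch in Python: each part grows outward from its fixed vertices, parts take turns claiming an unassigned neighbour, and leftover (disconnected) vertices go to the least-loaded part. The toy graph and fixed assignment are invented.

    from collections import deque

    def grow_partition(adjacency, fixed, n_parts):
        """Greedy BFS growth from fixed vertices; fixed maps vertex -> part and is never changed."""
        part = dict(fixed)
        assigned = set(part)
        frontiers = [deque(v for v, p in fixed.items() if p == k) for k in range(n_parts)]
        while len(assigned) < len(adjacency):
            progressed = False
            for k in range(n_parts):
                while frontiers[k]:
                    u = frontiers[k][0]
                    free = [w for w in adjacency[u] if w not in assigned]
                    if free:
                        w = free[0]
                        part[w] = k
                        assigned.add(w)
                        frontiers[k].append(w)
                        progressed = True
                        break
                    frontiers[k].popleft()             # u has no unassigned neighbours left
            if not progressed:                         # disconnected leftovers: least-loaded part
                v = next(v for v in adjacency if v not in assigned)
                k = min(range(n_parts), key=lambda q: sum(1 for p in part.values() if p == q))
                part[v] = k
                assigned.add(v)
                frontiers[k].append(v)
        return part

    # Toy graph: a 6-vertex path with vertex 0 fixed to part 0 and vertex 5 fixed to part 1.
    graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
    print(grow_partition(graph, fixed={0: 0, 5: 1}, n_parts=2))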
APA, Harvard, Vancouver, ISO, and other styles
45

Falcigno, Steven V. "The Simulation Engine, a platform for developing industrial process knowledge-based discrete event simulations." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0009/MQ33370.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Balasubramanian, Sivaramakrishnan. "A Novel Approach for the Direct Simulation of Subgrid-Scale Physics in Fire Simulations." NCSU, 2010. http://www.lib.ncsu.edu/theses/available/etd-12212009-122246/.

Full text
Abstract:
A Lagrangian framework for computing subgrid-scale combustion physics in Large Eddy Simulations (LES) of fire is formulated and validated. The framework is based on coupling an LES formulation, based on the Fire Dynamics Simulator (FDS), with the One-Dimensional Turbulence (ODT) model. The ODT model involves reaction-diffusion and turbulent transport along one-dimensional domains. The one-dimensional domains are attached to the flame brush positions computed in LES and are allowed to propagate along its surface. The Lagrangian LES-ODT framework involves various implementations, including a) the solution of momentum, energy, and species along the one-dimensional ODT domains, b) tracking of ODT domains through their anchor points, c) filtering of ODT solutions onto the LES grid, d) inverse filtering (interpolation) of LES velocity fields onto ODT domains, and e) the management of ODT domains at the flow inlets and as they reach the flame tip. Comparison of LES-ODT solutions with FDS solutions shows that the LES-ODT implementation reproduces the flame topology and structure reasonably well.
APA, Harvard, Vancouver, ISO, and other styles
47

Guesnet, Étienne. "Modélisation du comportement mécanique et thermique des silices nano-architecturées." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAI075/document.

Full text
Abstract:
Nanostructured silicas are ultra-porous materials (more than 80% porosity) used to make Vacuum Insulation Panels (VIP). They have exceptional thermal properties, but poor mechanical properties. The goal of this thesis is to study these materials at the scale of the particle (a few nm), the aggregate of particles (a few tens of nm) and the agglomerate of aggregates (a few hundred nm), in order to better understand the mechanical and thermal behaviour using simulations, and to propose ways to improve the thermal/mechanical compromise. The particulate nature of the material and its multi-scale nature justify the use of the Discrete Element Method (DEM). An original model allowing the generation of aggregates with controlled morphology (fractal dimension, radius of gyration, porosity) is proposed. The compaction behaviour of the aggregates is then studied by DEM. A low-density cycling approach has been developed to obtain realistic initial aggregate arrangements. The preponderance of adhesive phenomena in the system makes it very sensitive to the initial arrangement. The tensile response of structures generated by compaction is also evaluated. The influence of aggregate morphology, adhesion and friction was studied. Emphasis is placed on the comparison of two types of silica (pyrogenic and precipitated) with different morphologies and for which experimental data allow a comparison with simulations. The simulations presented provide answers on the origin of the differences in mechanical behaviour observed experimentally for these two types of silica. A model of the thermal conductivity of the material, with a focus on solid conductivity, is also proposed.
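The aggregate generation algorithm is the contribution of the thesis and is not reproduced here; the Python sketch below only computes the two morphology descriptors it controls, the radius of gyration and a mass-radius estimate of the fractal dimension, for an arbitrary set of particle centres. A freely jointed chain of touching spheres is used purely as stand-in data.

    import numpy as np

    rng = np.random.default_rng(1)

    def radius_of_gyration(pos):
        """Rg of a set of equal-mass particle centres."""
        centred = pos - pos.mean(axis=0)
        return np.sqrt((centred**2).sum(axis=1).mean())

    def mass_radius_dimension(pos, n_shells=8):
        """Fractal dimension Df from the mass-radius scaling N(r) ~ r^Df."""
        d = np.linalg.norm(pos - pos.mean(axis=0), axis=1)
        radii = np.linspace(0.25 * d.max(), d.max(), n_shells)
        counts = np.array([(d <= r).sum() for r in radii])
        mask = counts > 0
        slope, _ = np.polyfit(np.log(radii[mask]), np.log(counts[mask]), 1)
        return slope

    # Stand-in aggregate: a freely jointed chain of 500 touching particles (diameter 1).
    steps = rng.normal(size=(500, 3))
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)
    positions = np.cumsum(steps, axis=0)

    print("Rg =", radius_of_gyration(positions))
    print("mass-radius Df ~", mass_radius_dimension(positions))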
APA, Harvard, Vancouver, ISO, and other styles
48

Abdel-Momen, Sherif Samir. "Dynamic Resource Balancing Between Two Coupled Simulations." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1060893659.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Linke, Gunnar Torsten. "Eigenschaften fluider Vesikeln bei endlichen Temperaturen." PhD thesis, Universität Potsdam, 2005. http://opus.kobv.de/ubp/volltexte/2005/583/.

Full text
Abstract:
In this thesis, the properties of closed fluid membranes or vesicles are studied at finite temperatures. The work contains investigations of the shape of free vesicles, studies of the adhesion behavior of vesicles to planar substrates, and investigations of the properties of fluid vesicles in confined geometries. The investigations have been performed with Monte Carlo simulations of triangulated vesicles. The statistical properties of fluctuating vesicles have been analyzed in detail by means of free energy profiles. In this context, a new histogram method was developed.

The shape of minimum configurational energy for a free vesicle without volume constraint at zero temperature is a sphere. It is shown by means of Monte Carlo simulations and a model which can be analyzed analytically, that this result does not apply to finite temperatures. Instead, prolate and oblate shapes prevail and the probability for a prolate shape is slightly larger than that for an oblate shape. This spontaneous asphericity is of entropic origin and cannot be observed in two dimensions. Osmotic pressures inside the vesicle that are larger than in the surrounding liquid may reduce or even compensate the asphericity. The transitions between the observed prolate and oblate states occur on the time scale of milliseconds in the absence of osmotically active particles and on the time scale of seconds in the presence of osmotically active particles.

As far as the adhesion behavior of fluid vesicles to planar homogeneous substrates is concerned, Monte Carlo simulations reveal a strong dependence of the properties of the contact area on its driving force. In the case of a dominating attractive interaction between the vesicle membrane and the substrate, as well as for a mass density difference between the liquids inside and outside the vesicle, which pushes the vesicle against the substrate, the distribution of the distance between the vesicle membrane and the substrate is homogeneous. If the vesicle is pushed against the substrate by a difference between the mass densities of the membrane and the surrounding liquid, neglecting all osmotic effects, one gets a distance distribution between the vesicle membrane and the substrate which varies with the distance from the rim of the contact area. Moreover, this effect is temperature-dependent.

Furthermore, the adhesion of fluid vesicles to chemically structured planar substrates has been studied. The interplay between entropic effects and configurational energies causes a complex dependence of the vesicle shape on the bending rigidity, osmotic conditions, and the geometry of the attractive domains.

There are several experimental methods for measuring the bending rigidity of vesicle membranes which lead to rather different results for the numerical value. Monte Carlo simulations of Evans' micropipette method show that the difference between the measured bending rigidity and the a priori chosen bending rigidity is small.

The passage of fluid vesicles through narrow pores has some relevance to medical/pharmaceutical applications. In Monte Carlo simulations it is shown that a spontaneous transport of vesicles can be induced by a concentration gradient of osmotically active particles which corresponds to the physiological conditions. The necessary osmotic conditions and the characteristic time scales are calculated. For real experiments, penetration into the pore should occur within a few minutes. Moreover, it was observed that vesicles with a homogeneous positive spontaneous curvature can be deformed more easily into prolate shapes than vesicles with zero spontaneous curvature. This effect leads to a decrease of the energy barrier for penetration into a wide pore, which has a radius slightly smaller than that of the vesicle.
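The triangulated-membrane energies and the histogram analysis do not fit in a short example, but the Metropolis update at the heart of such Monte Carlo simulations does. The Python sketch below moves the vertices of a closed loop one at a time and accepts each move with probability min(1, exp(-dE/kBT)); the harmonic toy energy stands in for the bending, adhesion and constraint energies of the actual model, and all parameters are invented.

    import numpy as np

    rng = np.random.default_rng(2)
    kBT = 1.0                                     # energies measured in units of k_B T

    def toy_energy(vertices):
        """Stand-in energy: harmonic springs of rest length 1 between consecutive loop vertices."""
        diffs = vertices - np.roll(vertices, 1, axis=0)
        return 0.5 * ((np.linalg.norm(diffs, axis=1) - 1.0) ** 2).sum()

    def metropolis_sweep(vertices, energy_fn, step=0.1):
        """One sweep: propose a random displacement of every vertex and apply the Metropolis rule."""
        e_old = energy_fn(vertices)
        for i in range(len(vertices)):
            trial = vertices.copy()
            trial[i] += rng.uniform(-step, step, size=3)
            e_new = energy_fn(trial)
            if e_new <= e_old or rng.random() < np.exp(-(e_new - e_old) / kBT):
                vertices, e_old = trial, e_new
        return vertices, e_old

    # Closed loop of 20 vertices on a circle as a toy starting configuration.
    theta = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
    verts = np.column_stack([3.0 * np.cos(theta), 3.0 * np.sin(theta), np.zeros_like(theta)])
    for sweep in range(200):
        verts, energy = metropolis_sweep(verts, toy_energy)
    print("energy after 200 sweeps:", energy)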
APA, Harvard, Vancouver, ISO, and other styles
50

Jelinek, Bohumir. "Molecular dynamics simulations of metals." Diss., Mississippi State : Mississippi State University, 2008. http://library.msstate.edu/etd/show.asp?etd=etd-11072008-130216.

Full text
APA, Harvard, Vancouver, ISO, and other styles