To view other types of publications on this topic, follow this link: Numerical understanding.

Dissertations on the topic "Numerical understanding"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a type of source:

Consult the top 50 dissertations for research on the topic "Numerical understanding".

Next to every work in the bibliography, the "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse dissertations from a wide range of fields and compile your bibliography correctly.

1

Tan, Lynne S. C. „Numerical understanding in infancy“. Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388999.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Collins, Benjamin Forster Stevenson David John Sari Re'em. „Understanding the solar system with numerical simulations and Lévy flights /“. Diss., Pasadena, Calif. : California Institute of Technology, 2009. http://resolver.caltech.edu/CaltechETD:etd-05292009-130440.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Park, Hyekyung. „Toward a Comprehensive Developmental Theory for Symbolic Magnitude Understanding“. The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu159136679184101.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Lachky, Stephen Thomas. „Understanding patterns of rural decline : a numerical analysis among Kansas counties“. Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/3746.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Green, Daniel. „Understanding urban rainfall-runoff responses using physical and numerical modelling approaches“. Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/33530.

The full text of the source
Annotation:
This thesis provides a novel investigation into rainfall-runoff processes occurring within a unique two-tiered depth-driven overland flow physical modelling environment, as well as within a numerical model context where parameterisation and DEM/building resolution influences have been investigated using an innovative de-coupled methodology. Two approaches to simulating urban rainfall-runoff responses were used. Firstly, a novel 9 m² physical modelling environment was used, consisting of (i) a low-cost rainfall simulator component able to simulate consistent, uniformly distributed rainfall events of varying duration and intensity, and (ii) a modular plot surface layer. Secondly, a numerical hydroinundation model (FloodMap2D-HydroInundation) was used to simulate a short-duration, high-intensity surface water flood event (28th June 2012, Loughborough University campus). The physical model showed sensitivities to a number of meteorological and terrestrial factors. Results demonstrated intuitive model sensitivity to increasing the intensity and duration of rainfall, resulting in higher peak discharges and larger outflow volumes at the model outflow unit, as well as increases in the water depth within the physical model plot surface. Increases in percentage permeability were also shown to alter outflow flood hydrograph shape, volume, magnitude and timing due to storages within the physical model plot. Thus, a reduction in the overall volume of water received at the outflow hydrograph and a decrease in the peak of the flood event were observed with an increase in permeability coverage. Increases in the density of buildings resulted in a more rapid receding limb of the hydrograph and a steeper rising limb, suggesting a more rapid hydrological response. This indicates that buildings can have a channelling influence on surface water flows as well as a blockage effect.
The layout and distribution of permeable elements was also shown to affect the rainfall-runoff response recorded at the model outflow, with downstream concentrated permeability resulting in statistically different hydrograph outflow data, but the layout of buildings was not seen to result in significant changes to the outflow flood hydrographs; outflow hydrographs appeared to only be influenced by the actual quantity and density of buildings, rather than their spatial distribution and placement within the catchment. Parameterisation of hydraulic (roughness) and hydrological (drainage rate, infiltration and evapotranspiration) model variables, and the influence of mesh resolution of elevation and building elements on surface water inundation outputs, both at the global and local level, were studied. Further, the viability of crowdsourced approaches to provide external model validation data in conjunction with dGPS water depth data was assessed. Parameterisation demonstrated that drainage rate changes within the expected range of parameter values resulted in considerable losses from the numerical model domain at global and local scales. Further, the model was also shown to be moderately sensitive to hydraulic conductivity and roughness parameterisation at both scales of analysis. Conversely, the parameterisation of evapotranspiration demonstrated that the model was largely insensitive to any changes of evapotranspiration rates at the global and local scales. Detailed analyses at the hotspot level were critical to calibrate and validate the numerical model, as well as allowing small-scale variations to be understood using at-a-point hydrograph assessments. A localised analysis was shown to be especially important to identify the effects of resolution changes in the DEM and buildings which were shown to be spatially dependent on the density, presence, size and geometry of buildings within the study site. 
The resolution of the topographic elements of a DEM was also shown to be crucial in altering the flood characteristics at the global and localised hotspot levels. A novel de-coupled investigation of the elevation and building components of the DEM in a strategic matrix of scenarios was used to understand the independent influence of building and topographic mesh resolution effects on surface water flood outputs. Notably, the inclusion of buildings on a DEM surface was shown to have a considerable influence on the distribution of flood waters through time (regardless of resolution), with the exclusion of buildings from the DEM grid being shown to produce less accurate results than altering the overall resolution of the horizontal DEM grid cells. This suggests that future surface water flood studies should focus on the inclusion and representation of buildings and structural features present on the DEM surface as these have a crucial role in modifying rainfall-runoff responses. Focus on building representation was shown to be more vital than concentrating on advances in the horizontal resolution of the grid cells which make up a DEM, as a DEM resolution of 2 m was shown to be sufficiently detailed to conduct the urban surface water flood modelling undertaken, supporting previous inundation research.
APA, Harvard, Vancouver, ISO, and other citation styles
6

Nowak, Stephanie Beth. „Understanding Time-Variant Stress-Strain in Turkey: A Numerical Modeling Approach“. Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/26072.

The full text of the source
Annotation:
Over the past century, a series of large (> 6.5) magnitude earthquakes have struck along the North Anatolian Fault Zone (NAFZ) in Turkey in a roughly east-to-west progression. The progression of this earthquake sequence began in 1939 with the Ms 8.0 earthquake near the town of Erzincan and continued westward, with two of the most recent ruptures occurring near the Sea of Marmara in 1999. The sequential nature of ruptures along this fault zone implies that there is a connection between the location of the previous rupture and that of the future rupture zones. This study focuses on understanding how previous rupture events and tectonic influences affect the stress regime of the NAFZ, and how these stress changes affect the probability of future rupture along any unbroken segments of the fault zone, using a two-dimensional finite element modeling program. In this study, stress changes due to an earthquake are estimated using the slip history of the event, estimates of rock and fault properties along the fault zone (elastic parameters), and the far-field tectonic influence due to plate motions. Stress changes are not measured directly. The stress regime is then used to calculate the probability of rupture along another segment of the fault zone. This study found that when improper estimates of rock properties are utilized, the stress changes may be under- or over-estimated by as much as 350% or more. Because these calculated stress changes are used in probability calculations, the estimates of probability can be off by as much as 20%. A two-dimensional model was built to reflect the interpreted geophysical and geological variations in elastic parameters, and the 1939 through 1999 rupture sequence was modeled. The far-field tectonic influence due to plate motions contributed between 1 and 4 bars of stress to the unbroken segments of the fault zone, while earthquake events transferred up to 50 bars of stress to the adjacent portions of the fault zone.
The 1999 rupture events near Izmit and Düzce have increased the probability of rupture during the next ten years along faults in the Marmara Sea to 38% while decreasing the probability of rupture along the faults near the city of Bursa by ~6%. Large amounts of strain accumulation are interpreted along faults in the Marmara Sea, further compounding the case for a large rupture event occurring in that area in the future.
Ph. D.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Williams, Randolph T. „A Combined Experimental and Numerical Approach to Understanding Quartz Cementation in Sandstones“. Bowling Green State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1339354653.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Gotow, Drusilla Frey. „Identification of numerical principles prerequisite to a functional understanding of place value“. Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/52293.

The full text of the source
Annotation:
The purpose of this study was to find some remedy to the frustrations engendered when children fail to grasp the essential principle of place value after several attempts at reteaching. It was hypothesized that these children must have failed to acquire understanding of some numerical principle(s) prerequisite to understanding the place value aspect of the numeration system. Four plausible prerequisite principles were identified: (1) synthesis of ordinal and cardinal properties of the numeration system, (2) both the addition and subtraction operations, (3) understanding of counting by groups, and (4) understanding of exchange equivalences such as one ten for ten ones, etc. It was hypothesized that understanding of analog clock reading was also dependent upon understanding of the same four prerequisite principles. By conducting four pilot studies, six interview protocol instruments were developed to measure levels of understanding for the four prerequisite principles and the place value and clock reading criterion principles. Three levels of understanding (no understanding, transitional understanding, and competence) were designated to correspond with Piagetian stages in the development of a new operation. Forty-eight children, twenty with second grade completed and twenty-eight with third grade completed, were tested on all six instruments. Hypotheses tested were: (1) if the four identified prerequisite principles are necessary to understanding of place value, then subjects will demonstrate a level of understanding on the place value measure no higher than their lowest level of understanding achieved on the four prerequisite measures; and (2) if the four identified prerequisite principles are necessary to understanding of clock reading, then subjects will demonstrate a level of understanding on the clock reading measure no higher than their lowest level of understanding achieved on the four prerequisite measures.
The findings were that both hypotheses were supported at the .01 probability level. Analysis of the research design and examiner observations suggested possible explanations for anomalous aspects of the obtained data. Limitations, directions for further research, and implications for teachers were also discussed.
Ph. D.
APA, Harvard, Vancouver, ISO, and other citation styles
9

Davis, Clayton Paul. „Understanding and Improving Moment Method Scattering Solutions“. Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd620.pdf.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Burke, Lisa Michelle. „Numerical Modeling for Increased Understanding of the Behavior and Performance of Coal Mine Stoppings“. Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/32692.

The full text of the source
Annotation:
To date, research has not focused on the behavior of concrete block stoppings subjected to excessive vertical loading due to roof-to-floor convergence. For this reason, the failure mechanism of stoppings under vertical loading has not been fully understood. Numerical models were used in combination with physical testing to study the failure mechanisms of concrete block stoppings. Initially, the behavior of a single standard CMU block was observed and simulated using FLAC. Full-scale stoppings were then tested in the Mine Roof Simulator and modeled using UDEC. Through a combination of physical testing and numerical modeling, a failure mechanism for concrete block stoppings was established. This failure mechanism consists of the development of stress concentrations where a height difference as small as 1/32 in. exists between adjacent blocks. These stress concentrations lead to tensile cracking and, ultimately, premature failure of the wall.
Master of Science
APA, Harvard, Vancouver, ISO, and other citation styles
11

Vickers, Andrew William. „Improve the understanding of uncertainties in numerical analysis of moored floating wave energy converters“. Thesis, University of Exeter, 2012. http://hdl.handle.net/10871/12142.

The full text of the source
Annotation:
The wave energy industry, still in its infancy compared to similar activities offshore, must look to the oil and gas industry for guidelines on design criteria for survival, safety and operational optimisation for installations at sea. Numerical analysis tools for predicting the response of floating moored structures have become an important part of the design task for the offshore industry, offering a low-cost and low-risk option compared to scale tank testing. However, rather than having only a task of station keeping and survival, the moorings for a wave energy converter (WEC) would also be required not to adversely affect the power capture task. The main aim of this work is to gain an understanding of, and reduce, the uncertainties in the numerical modelling of WECs. Experimental work designed and performed under the HydraLab III project, of which the author was a member, was used to evaluate the response characteristics of a 1:20 scale “generic WEC” device with a 3-point mooring system. The investigation was enhanced through further tests implemented by the author at the Heriot-Watt wave tank using a single WEC device. The outcomes from these experiments were used to aid in the implementation of the aim identified above. Two numerical model categories were set up to understand the uncertainties inherent in the mooring simulations. The first category included only the calculation of the mooring line response, using experimental data to inform the motion of the floating body. The second category included the motion response of the floating body, coupling the complex behaviour to the moored system. The mooring tension results for the first category show an error between the numerical prediction and the experimental results of up to 16 times the experimental value. This was mainly during slack conditions, where the mooring line tension was lower than the pretension in the line at still water.
During the higher tension events the average error was 26%. For the second category it was found that the numerical predictions of the WEC motion response in six degrees of freedom (6DOF) were generally over-predicted. The tension predictions for the coupled simulations identified an error of between 1.4 and 4.5%. The work presented here contributed to the understanding of uncertainties in numerical simulations for WEC mooring designs. The disparity between the simulation and experimental results reinforced the requirement for a better understanding of highly dynamic responding moored coupled systems. From this work it is clear that the numerical models used to approximate the response of moored WECs could provide a good first design step. Whilst this work contributed to the understanding of uncertainties and consequently reduced some of these, further work is recommended in chapter 6 to investigate the definition of some of the mechanical and hydrodynamic properties of the mooring line. It is also suggested that external functions should be included that would allow the coupled effect of the Power-Take-Off (PTO) system to be modelled. It is intended to conduct future work deriving a fully dynamic mooring simulation including the effects of the PTO.
APA, Harvard, Vancouver, ISO, and other citation styles
12

Jachec, Steven Michael. „Understanding the evolution and energetics of internal tides within Monterey Bay via numerical simulations /“. May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
13

Friedlander, David J. „Understanding the Flow Physics of Shock Boundary-Layer Interactions Using CFD and Numerical Analyses“. University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1367928417.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
14

Malkeson, Sean. „Fundamental understanding and modelling of turbulent combustion in stratified mixtures using Direct Numerical Simulations(DNS)“. Thesis, University of Liverpool, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539482.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
15

Wu, Chun-Chieh. „Understanding hurricane movement from a potential vorticity perspective : a numerical model and an observational study“. Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/57832.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
16

Haghighi, Roozbeh [Verfasser], Johannes [Akademischer Betreuer] Janinicka und Peter [Akademischer Betreuer] Stephan. „Towards understanding multicomponent chemistry interaction using direct numerical simulation / Roozbeh Haghighi. Betreuer: Johannes Janinicka ; Peter Stephan“. Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2015. http://d-nb.info/1112044493/34.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
17

Tomlinson, Richard Henry. „Understanding hydrothermal fluid flow within faults and associated mineralization using a combined field and numerical approach“. Thesis, Imperial College London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.440456.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
18

Lee, Jinkee. „Developing microfluidic routes for understanding transport of complex and biological fluids : experimental, numerical and analytical approaches“. View abstract/electronic edition; access limited to Brown University users, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3319102.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
19

Lai, Jiawei. „Fundamental understanding and modelling of turbulent premixed flame wall interaction : a direct numerical simulation based analysis“. Thesis, University of Newcastle upon Tyne, 2018. http://hdl.handle.net/10443/4100.

The full text of the source
Annotation:
This thesis focuses on fundamental physical understanding and modelling of turbulent premixed flame-wall interaction by using Direct Numerical Simulation (DNS) data. Three-dimensional compressible simulations of turbulent premixed flame-wall interaction have been carried out for head-on quenching (HOQ) of statistically planar flames by an isothermal inert wall and also for oblique quenching of a V-flame by two isothermal inert sidewalls (top and bottom walls). Simulations have been conducted for different values of Damköhler, Karlovitz and global Lewis numbers (i.e. Da, Ka and Le), and the chemical mechanism is simplified by a single-step Arrhenius type irreversible chemistry for the sake of computational economy in the interest of a detailed parametric analysis. The flame-wall interaction has been characterised in terms of wall heat flux magnitude and wall Peclet number (i.e. normalised wall normal distance). It has been found that the maximum wall heat flux magnitude decreases, whereas the minimum wall Peclet number (which quantifies the flame quenching distance) increases with increasing Lewis number in the case of laminar head-on quenching of planar flames. However, the minimum wall Peclet number for Le < 1.0 turbulent premixed flames has been found to be smaller than the corresponding laminar value, whereas the minimum Peclet number in the case of turbulent flames with Le ≥ 1.0 remains comparable to the corresponding laminar values. It has been found that heat loss through the wall and flame quenching in the vicinity of the wall significantly affect the dilatation rate distribution in the near-wall region and influence the behaviours of the invariants of the velocity gradient tensor, which in turn influences the statistical behaviours of flow topology and enstrophy distribution in the near-wall region.
The statistical behaviours of vorticity and enstrophy transports in the near-wall region and the distribution of flow topologies within the flame, and their evolution with flame quenching, have been analysed in detail using DNS data, and important fundamental physical insights have been gained regarding the flame-quenching processes associated with the flame-wall interaction. The DNS data has been explicitly Reynolds averaged to analyse the statistical behaviours of turbulent kinetic energy, scalar variance, turbulent scalar flux, Flame Surface Density (FSD) and scalar dissipation rate (SDR) and their transport in the near-wall regions. It has been found that existing closures of these quantities do not adequately capture their near-wall behaviours, and in this thesis modifications to the existing closures have been proposed based on a-priori DNS analysis to account for the wall effects in such a manner that the modified closures perform well both near to and away from the wall. Furthermore, it has been found that both FSD- and SDR-based conventional reaction rate closures do not adequately capture the mean reaction rate close to the wall, and the current analysis offers alternative reaction rate closure expressions in the contexts of both FSD- and SDR-based modelling approaches. Thus, the current thesis offers a unified modelling strategy for premixed flame-wall interaction in the context of Reynolds Averaged Navier-Stokes (RANS) simulations for the very first time. Finally, in order to validate the findings based on simple chemistry DNS, a limited number of DNS calculations of head-on quenching has been conducted using a multistep chemical mechanism for methane-air combustion. It has been found that the statistics of wall heat flux magnitude and wall Peclet number obtained from detailed chemistry simulations are in good qualitative and quantitative agreement with the corresponding results from simple chemistry DNS.
However, detailed chemistry DNS reveals the presence of heat release at the wall during early stages of flame quenching, whereas heat release remains identically zero at the wall for simple chemistry DNS. In spite of this difference, an FSD based reaction rate closure which was proposed based on a-priori analysis of simple chemistry DNS has been found to work also for detailed chemistry DNS data without any modification. This provides the confidence in the models which have been proposed based on the analysis of simple chemistry DNS data.
APA, Harvard, Vancouver, ISO, and other citation styles
20

Hirons, Linda Catherine. „Understanding advances in the simulation of the Madden-Julian Oscillation in a numerical weather prediction model“. Thesis, University of Reading, 2012. http://centaur.reading.ac.uk/57772/.

The full text of the source
Annotation:
The Madden-Julian Oscillation (MJO) is the dominant mode of intraseasonal variability in the Tropics. It can be characterised as a planetary-scale coupling between the atmospheric circulation and organised deep convection that propagates east through the equatorial Indo-Pacific region. The MJO interacts with weather and climate systems on a near-global scale and is a crucial source of predictability for weather forecasts on medium to seasonal timescales. Despite its global significance, accurately representing the MJO in numerical weather prediction (NWP) and climate models remains a challenge. This thesis focuses on the representation of the MJO in the Integrated Forecasting System (IFS) at the European Centre for Medium-Range Weather Forecasts (ECMWF), a state-of-the-art NWP model. Recent modifications to the model physics in Cycle 32r3 (Cy32r3) of the IFS led to advances in the simulation of the MJO; for the first time the observed amplitude of the MJO was maintained throughout the integration period. A set of hindcast experiments, which differ only in their formulation of convection, have been performed between May 2008 and April 2009 to assess the sensitivity of MJO simulation in the IFS to the Cy32r3 convective parameterization. Unique to this thesis is the attribution of the advances in MJO simulation in Cy32r3 to the modified convective parameterization, specifically, the relative-humidity-dependent formulation for organised deep entrainment. Increasing the sensitivity of the deep convection scheme to environmental moisture is shown to modify the relationship between precipitation and moisture in the model. Through dry-air entrainment, convective plumes ascending in low-humidity environments terminate lower in the atmosphere. As a result, there is an increase in the occurrence of cumulus congestus, which acts to moisten the mid-troposphere.
Due to the modified precipitation-moisture relationship more moisture is able to build up which effectively preconditions the tropical atmosphere for the transition to deep convection. Results from this thesis suggest that a tropospheric moisture control on convection is key to simulating the interaction between the physics and large-scale circulation associated with the MJO.
APA, Harvard, Vancouver, ISO, and other citation styles
21

Haghighi, Roozbeh [Verfasser], Johannes [Akademischer Betreuer] Janinicka und Peter [Akademischer Betreuer] Stephan. „Towards understanding multicomponent chemistry interaction using direct numerical simulation / Roozbeh Haghighi. Betreuer: Johannes Janinicka ; Peter Stephan“. Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2015. http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-51370.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
22

Squires, Timothy Richard. „Efficient numerical methods for ultrasound elastography“. Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:332c7b2b-10c3-4dff-b875-ac1ee2c5d4fb.

The full text of the source
Annotation:
In this thesis, two algorithms are introduced for use in ultrasound elastography. Ultrasound elastography is a technique developed in the last 20 years by which anomalous regions in soft tissue are located and diagnosed without the need for biopsy. Because of this, the relatively low cost of ultrasound imaging, and the high level of accuracy of the methods, ultrasound elastography has shown great potential for the diagnosis of cancer in soft tissues. The algorithms introduced in this thesis represent an advance in this field. The first algorithm is a two-step iteration procedure consisting of two minimization problems, displacement estimation and elastic parameter calculation, that allow for diagnosis of any anomalous regions within soft tissue. The algorithm represents an improvement on existing methods in several ways. A weighting factor is introduced for each point in the tissue dependent on the confidence in the accuracy of the data at that point, an exponential substitution is made for the elasticity modulus, an adjoint method is used for efficient calculation of the gradient vector, and a total variation regularization technique is used. Most importantly, an adaptive mesh refinement strategy is introduced that allows highly efficient calculation of the elasticity distribution of the tissue through using a number of degrees of freedom several orders of magnitude lower than methods that use a uniform mesh refinement strategy. Results are presented that show the algorithm is robust even in the presence of significant noise and that it can locate a tumour of 4 mm in diameter within a 5 cm square region of tissue. The algorithm is also extended into 3 dimensions, and results are presented that show that it can calculate a 3-dimensional elasticity distribution efficiently. This extension into 3-d is a significant advance in the field.
The second algorithm is a one-step algorithm that seeks to combine the two problems of elasticity distribution and displacement calculation into one. As in the two-step algorithm, a weighting factor, exponential substitution for the elasticity parameter, adjoint method for calculation of the gradient vector, total variation regularization and adaptive mesh refinement strategy are incorporated. Results are presented that show that this original approach can locate tumours of varying sizes and shapes in the presence of varying levels of added artificial noise and that it can determine the presence of a tumour in images taken from breast tissue in vivo.
APA, Harvard, Vancouver, ISO, and other citation styles
23

Ford, Amanda Brady. „Understanding the Circumgalactic Medium Through Hydrodynamic Simulations and Hubble's Cosmic Origins Spectrograph“. Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/332661.

The full text of the source
Annotation:
My dissertation focuses on a relatively new field of study: the region immediately around galaxies known as the circumgalactic medium (CGM). The CGM holds vast quantities of mass and metals, yet its connection to galaxies is not well understood. My work uses cosmological hydrodynamic simulations and comparisons to data from Hubble's Cosmic Origins Spectrograph (COS) to understand the CGM's connection to galaxy evolution, gas accretion, outflows, star formation, and baryon cycling. This includes studies of the CGM's extent and physical conditions; the cause and nature of outflows; gas dynamics, including the first comprehensive study of tracers of inflowing and outflowing gas at low redshift (z=0.25); and direct comparison of theoretical results to observational data. Chapter 1 introduces my research and shows its connection to galaxy evolution. Chapter 2 investigates hydrogen and metal line absorption around low-redshift galaxies in cosmological hydrodynamic simulations. This chapter studies different models for stellar outflows, physical conditions, and dependencies on halo mass. Chapter 3 examines the flow of gas into, out of, and around galaxies using a novel particle tracking technique. This chapter examines the baryon cycle in detail for our preferred model of stellar outflows. Chapter 4 compares our model results, including two separate prescriptions for outflows, with data from COS. We contrast these wind models, showing how they cycle baryons differently, and show degeneracies in observational diagnostics. In Chapter 5, I summarize and discuss plans for future research in this field, and how it can be more fully leveraged to understand galaxy evolution.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Blackett, Norman. „Developing understanding of trigonometry in boys and girls using a computer to link numerical and visual representations“. Thesis, University of Warwick, 1990. http://wrap.warwick.ac.uk/2307/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Vorpe, Katherine. „Understanding a Population Model for Mussel-Algae Interaction“. Wittenberg University Honors Theses / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wuhonors1617970789779916.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Bertozzi, Barbara. „Feasibility study for understanding ice cave microclimate through thermo-fluid dynamics approaches“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Den vollen Inhalt der Quelle finden
Annotation:
Ice caves are classified as sporadic permafrost phenomena and consist of lava tubes or cave systems in which perennial ice forms. Ice within caves can be very old and can carry important information on permafrost conditions, climate change and past climates. Until now, these systems have been investigated mainly with an experimental approach. A critical topic in ice cave studies is understanding how the internal environment interacts with the external one and how these systems react to changes in external conditions. In this thesis, a new numerical approach to understanding ice cave microclimate is proposed. Numerical studies can contribute greatly to a better understanding of the processes involved in the formation and preservation of ice in caves. Furthermore, computational fluid dynamics methods can be a valuable support in defining new experimental setups and interpreting experimental results. The cave studied in this work is the Leupa ice cave, located in the Friuli Venezia Giulia region. Air flows inside the Leupa ice cave were characterized with an integrated approach using both experimental and numerical methods. A general approach was initially adopted and three representative days were identified to investigate which circulation patterns can develop under different environmental conditions. The comparison of numerical and experimental data made it possible to evaluate the quality of the simulations and to identify the main issues that need further investigation. Deeper investigations were then performed for a single day to study the effect of temperature and boundary conditions on the flow thermodynamics inside the cave. New insights into the fluid-dynamic behavior of the Leupa ice cave were achieved, showing that numerical methods can represent a powerful tool to study ice caves, improving and integrating the information that can be obtained from standard experimental measurements.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Frazer, Miles. „Advances in understanding the evolution of diagenesis in Carboniferous carbonate platforms : insights from simulations of palaeohydrology, geochemistry, and stratigraphic development“. Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/advances-in-understanding-the-evolution-of-diagenesis-in-carboniferous-carbonate-platforms-insights-from-simulations-of-palaeohydrology-geochemistry-and-stratigraphic-development(9a1caa9d-6f16-472a-8b38-42aca657a9b9).html.

Der volle Inhalt der Quelle
Annotation:
Carbonate diagenesis encapsulates a wide range of water-rock interactions that can occur within many environments and act to modify rock properties such as porosity, permeability, and mineralogical composition. These rock modification processes occur by the supply of reactant-laden fluids to areas where geochemical reactions are thermodynamically and kinetically favoured. As such, understanding the development of diagenesis requires an understanding of both palaeohydrology and geochemistry, both of which have their own complexities. However, within geological systems, both the conditions that control fluid migration and the distribution of thermodynamic conditions can change through time in response to external factors. Furthermore, they are often coupled, with rock modification exercising a control on fluid flow by altering the permeability of sediments. Numerical methods allow the coupling of multiple complex processes within a single mathematical formulation. As such, they are well suited to investigations into carbonate diagenesis, where multiple component subsystems interact. This thesis details the application of four separate types of numerical forward modelling to investigations of diagenesis within two Carboniferous carbonate platforms, the Derbyshire Platform (Northern England) and the Tengiz Platform (Western Kazakhstan). Investigations of Derbyshire Platform diagenesis are primarily concerned with explaining the presence of Pb-mineralisation and dolomitisation observed within the Dinantian carbonate succession. A coupled palaeohydrology and basin-development simulation and a series of geochemical simulations were used to investigate the potential for these products to form as a result of basin-derived fluids being driven into the platform by compaction.
The results of these models suggest that this mechanism is appropriate for explaining Pb-mineralisation, but dolomitisation requires Mg concentrations within the basin-derived fluids that cannot be attained. Geothermal convection of seawater was thus proposed as an alternative hypothesis to explain the development of dolomitisation. This was tested using an advanced reactive transport model, capable of considering both platform growth and dolomitisation. The results suggest that significant dolomitisation may have occurred earlier in the life of the Derbyshire Platform than has previously been recognised. An updated framework for the development of diagenesis in the Derbyshire Platform is proposed to incorporate these new insights. The Tengiz Platform forms an important carbonate oil reservoir at the northeastern shore of the Caspian Sea. The effective exploitation of any reservoir relies on an understanding of its internal distributions of porosity and permeability. Within carbonate systems, this is critically controlled by the distribution of diagenetic products. A model of carbonate sedimentation and meteoric diagenesis is used to produce a framework of early diagenesis within a sequence stratigraphic context. The studies mentioned above provide a broad overview of the capabilities and applicability of forward numerical models to two data-limited systems. They reveal the potential for these methods to guide the ongoing assessment and development of our understanding of diagenetic systems and also help identify key questions for the progression of our understanding in the future.
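The reactive-transport reasoning described above (a fluid advects a reactant that converts the host rock) can be illustrated with a toy 1-D advection-reaction sketch. The grid, rates and boundary values below are invented placeholders, not the thesis model:

```python
import numpy as np

# Toy 1-D reactive transport: fluid carrying dissolved Mg advects through a
# calcite column; a first-order reaction consumes Mg and grows a dolomite
# fraction. All parameters are illustrative placeholders.
nx, dx, dt = 100, 1.0, 0.1
v, k = 1.0, 0.05              # pore velocity, reaction rate constant
mg = np.zeros(nx)             # dissolved Mg concentration
dolomite = np.zeros(nx)       # dolomite fraction of the rock

for _ in range(500):
    # first-order upwind advection with a fixed-concentration inlet
    mg[1:] = mg[1:] - v * dt / dx * (mg[1:] - mg[:-1])
    mg[0] = 1.0
    # reaction: Mg consumed in proportion to remaining calcite
    reacted = k * dt * mg * (1.0 - dolomite)
    mg -= reacted
    dolomite += reacted

# Dolomitisation is strongest near the inlet, where Mg supply is highest.
assert dolomite[0] > dolomite[-1]
```

Even this toy version shows the coupling the abstract emphasizes: the reaction front can only advance as fast as the fluid supplies reactant.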
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Leyton, Alex Ovando. „Understanding flooding processes of large wetlands of the Bolivian Amazon through in situ observation, remote sensing and numerical modeling“. Instituto Nacional de Pesquisas Espaciais (INPE), 2017. http://urlib.net/sid.inpe.br/mtc-m21b/2017/04.18.17.29.

Der volle Inhalt der Quelle
Annotation:
The Amazonian wetlands of Bolivia, known as the Llanos de Moxos, are believed to play a crucial role in regulating the hydrological cycle of the upper Madeira, the most important southern tributary of the Amazon River. In addition to its rich natural diversity, the Llanos were the setting for many complex pre-Columbian societies. Because the area is vast and sparsely populated, the hydrological functioning of the wetlands is poorly known. In this thesis we show the feasibility of using multi-temporal flood mapping, based on optical imagery (MODIS M*D09A1) and satellite altimetry (the ENVISAT RA-2 and SARAL AltiKa altimeters), to characterize and monitor flood dynamics and to optimize floodplain simulations within a hydrological model (the MHD-INPE model). Initially we analyzed the hydrometeorological configurations that led to the major floods of 2007, 2008 and 2014 in the upper Madeira Basin. Then, with the inclusion of altimetric information, which provided a vertical component for the two-dimensional flood maps, we analyzed the flood dynamics for the whole 2001-2014 period, including both extent and water-stage variations, which allowed initial estimates of surface water storage. Finally, we critically analyzed how numerical modeling of the wetlands can be improved using additional remote sensing techniques. Our results showed that large floods result from the superimposition of flood waves from the major sub-basins of the region, strongly influenced by the occurrence of intense rainfall over saturated areas. We identified relevant features of the flood regime, distinguishing three groups with particular characteristics as a function of their connectivity and dependence on the Andes and piedmont or on local processes, and classified the hydraulic function of the wetlands based on remotely sensed imagery. Finally, we demonstrate that remote sensing information is of major importance for improving floodplain simulations using hydrological models.
However, there are still clear limitations in the existing remotely sensed products for achieving seamless predictions of the hydrological behavior of the Llanos under a changing climate.
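Combining a 2-D flood-extent mask (from optical imagery) with a vertical water stage (from altimetry) yields a first-order storage estimate, in essence area times depth. A minimal sketch with invented numbers:

```python
import numpy as np

# First-order surface-water storage estimate: flooded area (from an optical
# flood mask) times water stage (from altimetry). All numbers are invented.
pixel_area_km2 = 0.25                 # ~500 m MODIS-like pixel
flood_mask = np.array([[1, 1, 0],
                       [1, 0, 0],
                       [1, 1, 1]])    # 1 = flooded pixel
stage_m = 1.2                         # water depth above ground, from altimetry

flooded_area_km2 = flood_mask.sum() * pixel_area_km2
storage_km3 = flooded_area_km2 * (stage_m / 1000.0)
```

With per-pixel stages, as the altimetry-augmented flood maps provide, the single product becomes a sum of area times depth over pixels.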
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Andrae, René, und Peter Köhler. „Methoden zur Absicherung simulationsgerechter Produktmodelle“. Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-215085.

Der volle Inhalt der Quelle
Annotation:
Introduction: Ever higher demands on the interdisciplinarity of virtual product development (VPE) require qualified product models that ensure the complete integration and linkage of all relevant sub-processes. At the same time, the user's understanding of the product, as well as the quality of the product and the process, should be increased. Consequences are short innovation cycles and greater process transparency. The application of numerical simulation methods has established itself as the third essential component of VPE, alongside design and testing (Pährisch et al. 2012). Validation through virtual prototypes in an early concept phase supports the design process. One drawback is that the use of virtual prototypes is still insufficiently integrated into the remaining process steps, and an awareness of forward-looking model creation is therefore not yet in place. Likewise, a study found that simulation engineers must spend on average 50% of their working time on data acquisition and only 10% each on model preparation (Sendler et al. 2011). This is due, among other things, to the so-called communication barrier between design and simulation. One solution is a deeper integration of these two disciplines into a single product model. One approach is to perform design-accompanying simulations, which can be carried out with simulation modules integrated into CAD systems. However, the depth of integration of the available links is usually very limited. This contribution deals with techniques that ensure the systematic construction of a simulation-oriented product model. This is implemented through the use of simulation-oriented components, features and analyses, which support an automated model transformation in the CAD process at the interface between design and simulation.
This shortens the design-simulation process chain. Likewise, the integration of deep inference mechanisms enables advanced simulation techniques, as well as the definition and transfer of boundary and load conditions and further details at a higher level.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Evans, Rachel Catherine [Verfasser]. „Application of Sensitive API-Based Indicators and Numerical Simulation Tools to Advance Hot-Melt Extrusion Process Understanding / Rachel Catherine Evans“. Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1198933763/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Gargh, Prashant Pawan. „INVESTIGATING AND UNDERSTANDING THE MECHANICAL RESPONSE OF LINKED STRUCTURES OF HARD AND SOFT METALS USING CONSTANT DISPLACEMENT APPROACH: A NUMERICAL STUDY“. University of Akron / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=akron1467977759.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Mahee, Durude. „Numerical Simulation and Graphical Illustration of Ionization by Charged Particles as a Tool toward Understanding Biological Effects of Ionizing Radiation“. University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1535381068931831.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Baluwala, Habib. „Physically motivated registration of diagnostic CT and PET/CT of lung volumes“. Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:797c00c0-1efa-43e2-8268-e3d09ced0e06.

Der volle Inhalt der Quelle
Annotation:
Lung cancer is a disease affecting millions of people every year and poses a serious threat to global public health. Accurate lung cancer staging is crucial to choose an appropriate treatment protocol and to determine prognosis; this requires the acquisition of contrast-enhanced diagnostic CT (d-CT) that is usually followed by a PET/CT scan. Information from both the d-CT and the PET scan is used by the clinician in the staging process; however, these images are not intrinsically aligned because they are acquired on different days and on different scanners. Establishing anatomical correspondence, i.e., aligning the d-CT and the PET images, is an inherently difficult task due to the absence of a direct relationship between the intensities of the images. The CT acquired during the PET/CT scan is used for attenuation correction (AC-CT) and is implicitly aligned with the PET image as they are acquired at the same time using a hybrid scanner. Patients are required to maintain shallow breathing for both scans. In contrast to that, the d-CT image is acquired after the injection of a contrast agent, and patients are required to maximally inhale, for a better view of the lungs. Differences in the AC-CT and d-CT image volumes are thus due to differences in breath-hold positions and image contrast. Nonetheless, both images are from the same modality. In this thesis, we present a new approach that aligns the d-CT with the PET image through an indirect registration process that uses the AC-CT. The deformation field obtained after the registration of the AC-CT to the d-CT is used to align the PET image to the d-CT. Conventional image registration techniques deform the entire image using homogeneous regularization without taking into consideration the physical properties of the various anatomical structures. This homogeneous regularization may lead to physiologically and physically implausible deformations.
To register the d-CT and AC-CT images, we developed a 3D registration framework based on a fluid transformation model including three physically motivated properties: (i) sliding motion of the lungs against the pleura; (ii) preservation of rigid structures; and (iii) preservation of topology. The sliding motion is modeled using a direction dependent regularization that decouples the tangential and the normal components of the external force term. The rigid shape of the bones is preserved using a spatially varying filter for the deformations. Finally, the topology is maintained using the concept of log-unbiased deformations. To solve the multi-modal registration problem due to the contrast injected for the d-CT, but lack thereof in the AC-CT, we use local cross correlation (LCC) as the similarity measure. To illustrate and validate the proposed registration framework, different intra-patient CT datasets are used, including the NCAT phantom, EMPIRE10 and POPI datasets. Results show that our proposed registration framework provides improved alignment and physically motivated deformations when compared to the classic elastic and fluid registration techniques. The final goal of our work was to demonstrate the clinical utility of our new approach that aligns d-CT and PET/AC-CT images for fusion. We apply our method to ten real patients. Our results show that the PET images have much improved alignment with the d-CT images using our proposed registration technique. Our method was successful in providing a good overlap of the lungs, improved alignment of the tumours and a lower target registration error for landmarks in comparison to the classic fluid registration. The main contribution of this thesis is the development of a comprehensive registration framework that integrates important physical properties into a state-of-the-art transformation model with application to lung imaging in cancer.
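The local cross correlation (LCC) similarity used here to handle contrast differences can be sketched as a windowed, normalized correlation. The naive implementation below is only an illustration of the idea; production registration code typically computes LCC from smoothed image moments rather than explicit window loops:

```python
import numpy as np

def local_cross_correlation(a, b, w=3):
    """Mean windowed normalized cross-correlation of two 2-D images."""
    hw = w // 2
    scores = []
    for i in range(hw, a.shape[0] - hw):
        for j in range(hw, a.shape[1] - hw):
            pa = a[i - hw:i + hw + 1, j - hw:j + hw + 1].ravel()
            pb = b[i - hw:i + hw + 1, j - hw:j + hw + 1].ravel()
            pa = pa - pa.mean()
            pb = pb - pb.mean()
            denom = np.sqrt((pa ** 2).sum() * (pb ** 2).sum())
            if denom > 0:
                scores.append(float(pa @ pb) / denom)
    return float(np.mean(scores))

rng = np.random.default_rng(0)
img = rng.random((16, 16))
# LCC is invariant to local linear intensity changes: 2*img + 1 matches img.
assert local_cross_correlation(img, 2 * img + 1) > 0.999
```

This local intensity invariance is what makes LCC usable across contrast-enhanced and non-enhanced CT of the same anatomy.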
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Ponnuchamy, Veerapandian. „Towards A Better Understanding of Lithium Ion Local Environment in Pure, Binary and Ternary Mixtures of Carbonate Solvents : A Numerical Approach“. Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENY004/document.

Der volle Inhalt der Quelle
Annotation:
Due to the increasing global energy demand, eco-friendly and sustainable green resources including solar, or wind energies must be developed, in order to replace fossil fuels. These sources of energy are unfortunately discontinuous, being correlated with weather conditions and their availability is therefore strongly fluctuating in time. As a consequence, large-scale energy storage devices have become fundamental, to store energy on long time scales with a good environmental compatibility. Electrochemical energy conversion is the key mechanism for alternative power sources technological developments. Among these systems, Lithium-ion (Li+) batteries (LIBs) have demonstrated to be the most robust and efficient, and have become the prevalent technology for high-performance energy storage systems. These are widely used as the main energy source for popular applications, including laptops, cell phones and other electronic devices. The typical LIB consists of two (negative and positive) electrodes, separated by an electrolyte. This plays a very important role, transferring ions between the electrodes, therefore providing the electrical current. This thesis work focuses on the complex materials used as electrolytes in LIBs, which impact Li-ion transport properties, power densities and electrochemical performances. Usually, the electrolyte consists of Li-salts and mixtures of organic solvents, such as cyclic or linear carbonates. It is therefore indispensable to shed light on the most important structural (coordination) properties, and their implications on transport behaviour of Li+ ion in pure and mixed solvent compositions. We have performed a theoretical investigation based on combined density Functional Theory (DFT) calculations and Molecular Dynamics (MD) simulations, and have focused on three carbonates, cyclic ethylene carbonate (EC) and propylene carbonate (PC), and linear dimethyl carbonate (DMC). 
DFT calculations have provided a detailed picture of the optimized structures of isolated carbonate molecules and the Li+ ion, including pure clusters Li+(S)n (S=EC, PC, DMC and n=1-5), mixed binary clusters Li+(S1)m(S2)n (S1, S2=EC, PC, DMC, with m+n=4), and ternary clusters Li+(EC)l(DMC)m(PC)n with l+m+n=4. Pure solvent clusters were also studied including the effect of the PF6- anion. We have investigated in detail the structure of the coordination shell around Li+ for all cases. Our results show that clusters such as Li+(EC)4, Li+(DMC)4 and Li+(PC)3 are the most stable, according to Gibbs free energy values, in agreement with previous experimental and theoretical studies. The calculated Gibbs free energies of reactions in binary mixtures suggest that the addition of EC and PC molecules to the Li+-DMC clusters is more favourable than the addition of DMC to Li+-EC and Li+-PC clusters. In most cases, solvent substitutions in binary mixtures are unfavourable. In the case of ternary mixtures, the DMC molecule cannot replace EC and PC, while PC can easily substitute both EC and DMC molecules. Our study shows that PC tends to substitute EC in the solvation shell. We have complemented our ab-initio studies with MD simulations of a Li-ion immersed in the pure solvents and in solvent mixtures of particular interest for batteries applications, e.g. EC:DMC (1:1) and EC:DMC:PC (1:1:3). MD is a very powerful tool and has allowed us to clarify the relevance of the cluster structures discovered by DFT when the ion is surrounded by bulk solvents. Indeed, DFT provides information about the most stable structures of isolated clusters but no information about their stability or multiplicity (entropy) when immersed in an infinite solvent environment. The MD data, together with the DFT calculations, have allowed us to build a very comprehensive picture of the local structure of solvent mixtures around the lithium ion, which substantially improves on previous work.
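The substitution-energy argument rests on reaction free energies of the form ΔG_rxn = ΣG(products) − ΣG(reactants), with negative values indicating a favourable substitution. A toy sketch with placeholder energies (not the thesis values):

```python
# Reaction free energies for cluster substitution:
#   dG_rxn = sum G(products) - sum G(reactants)
# Cluster names and energies are illustrative placeholders only.
G = {  # hypothetical Gibbs free energies, kcal/mol
    "Li(DMC)4": -50.0,
    "Li(DMC)3(EC)1": -54.0,
    "EC": -10.0,
    "DMC": -8.0,
}

def delta_g(products, reactants):
    """dG of reaction from tabulated species free energies."""
    return sum(G[s] for s in products) - sum(G[s] for s in reactants)

# Li(DMC)4 + EC -> Li(DMC)3(EC)1 + DMC
dg = delta_g(["Li(DMC)3(EC)1", "DMC"], ["Li(DMC)4", "EC"])
# Negative dG: substituting EC into the DMC cluster is favourable here.
assert dg < 0
```

Comparing such ΔG values across all binary and ternary combinations is what lets the thesis rank which solvent displaces which in the Li+ solvation shell.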
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Priestley, Kory James. „Use of First-Principle Numerical Models to Enhance the Understanding of the Operational Analysis of Space-Based Earth Radiation Budget Instruments“. Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30662.

Der volle Inhalt der Quelle
Annotation:
NASA's Clouds and the Earth's Radiant Energy System (CERES) program is a key component of the Earth Observing System (EOS). The CERES Proto-Flight Model (PFM) instrument is to be launched on NASA's Tropical Rainfall Measuring Mission (TRMM) spacecraft in November 1997. Each CERES instrument will contain three scanning thermistor bolometer radiometers to monitor the longwave, 5.0 to >100 microns, and shortwave, 0.3 to 5.0 microns, components of the Earth's radiative energy budget. High-level, first-principle dynamic electrothermal models of the CERES radiometric channels have been completed under NASA sponsorship. These first-principle models consist of optical, thermal and electrical modules. Optical characterization of the channels is ensured by Monte-Carlo-based ray-traces. Accurate thermal and electrical characterization is assured by transient finite-difference formulations. This body of research presents the evolution of these models by outlining their development and validation. Validation of the models is accomplished by simulating the ground calibration process of the actual instruments and verifying that the models accurately predict instrument performance. The result of this agreement is high confidence in the models' ability to predict other aspects of instrument performance.
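The transient finite-difference idea behind such electrothermal models can be illustrated with a single lumped thermal node, C dT/dt = P_abs − G(T − T_sink), stepped explicitly. All parameter values below are invented, not CERES instrument quantities:

```python
# Single lumped-node transient thermal sketch of a bolometer element:
#   C dT/dt = P_abs - G_link * (T - T_sink), stepped with explicit Euler.
# All values are illustrative placeholders, not CERES parameters.
C = 1e-8        # heat capacity, J/K
G_link = 1e-6   # thermal conductance to the heat sink, W/K
T_sink = 300.0  # sink temperature, K
P_abs = 1e-6    # absorbed radiative power, W
dt = 1e-4       # time step, s (dt << C/G_link for stability)

T = T_sink
for _ in range(50000):
    T += dt / C * (P_abs - G_link * (T - T_sink))

# Steady state: the temperature rise approaches P_abs / G_link = 1 K.
assert abs(T - (T_sink + P_abs / G_link)) < 1e-3
```

A real radiometric-channel model couples many such nodes and adds the electrical readout, but the per-node update is of this form.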
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Escudier, Romain. „Mesoscale eddies in the western Mediterranean Sea: characterization and understanding from satellite observations and model simulations“. Doctoral thesis, Universitat de les Illes Balears, 2015. http://hdl.handle.net/10803/310417.

Der volle Inhalt der Quelle
Annotation:
Mesoscale eddies are relatively small structures that dominate ocean variability and have a large impact on the large-scale circulation, heat fluxes and biological processes. In the western Mediterranean Sea, a high number of eddies has been observed and studied in the past with in-situ observations. Yet, a systematic characterization of these eddies is still lacking due to the small scales involved in this region, where the Rossby deformation radius that characterizes the horizontal scale of the eddies is small (10-15 km). The objective of this thesis is to perform a characterization of mesoscale eddies in the western Mediterranean. For this purpose, we propose to develop tools to study the fine scales of the basin. First, we develop an eddy-resolving simulation of the region for the last 20 years. This simulation shows that existing altimetry maps underestimate the mesoscale signal. Therefore, we attempt to improve existing satellite altimetry products to better resolve mesoscale eddies. We show that this improvement is possible, but at the cost of the homogeneity of the fields; the resolution can only be improved at times and locations where altimetric observations are densely distributed. In a second part, we apply three different eddy detection and tracking methods to extract eddy characteristics from the outputs of the high-resolution simulation, a coarser simulation and altimetry maps. The results allow the determination of some characteristics of the detected eddies. The size of the eddies varies greatly but is around 25-30 km. About 30 eddies are detected per day in the region, with a very heterogeneous spatial distribution. Unlike in other areas of the open ocean, they are mainly advected by the currents of the region. Eddies can be separated according to their lifespan: long-lived eddies are larger in amplitude and scale and have a seasonal cycle with a peak in late summer, while short-lived eddies are smaller and more present in winter.
The penetration depth of detected eddies also has a large variance, but the mean depth is around 300 meters. Anticyclones extend deeper into the water column and have a more conical shape than cyclones.
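The abstract does not name the three detection methods used; one widely used approach for detecting eddies in velocity fields is the Okubo-Weiss parameter, W = s_n² + s_s² − ω², where W < 0 marks vorticity-dominated (eddy-core) regions. A minimal sketch on an idealized solid-body eddy:

```python
import numpy as np

def okubo_weiss(u, v, dx=1.0):
    """Okubo-Weiss parameter W = s_n^2 + s_s^2 - omega^2 on a regular grid.
    W < 0 flags vorticity-dominated regions, i.e. candidate eddy cores."""
    du_dx = np.gradient(u, dx, axis=1); du_dy = np.gradient(u, dx, axis=0)
    dv_dx = np.gradient(v, dx, axis=1); dv_dy = np.gradient(v, dx, axis=0)
    s_n = du_dx - dv_dy        # normal strain
    s_s = dv_dx + du_dy        # shear strain
    omega = dv_dx - du_dy      # relative vorticity
    return s_n**2 + s_s**2 - omega**2

# Solid-body rotation (an idealized eddy): pure vorticity, so W < 0.
y, x = np.mgrid[-10:11, -10:11].astype(float)
u, v = -y, x
W = okubo_weiss(u, v)
assert W[10, 10] < 0
```

Tracking then associates W < 0 patches (or closed contours of sea-surface height) between consecutive fields.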
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

WOLDIE, Daniel Werede. „Understanding the Role of a Less-permeable Surface in Water Dynamics of Headwater Catchments based on Various Monitoring, Analytical Methods and a Numerical Model“. 京都大学 (Kyoto University), 2011. http://hdl.handle.net/2433/142387.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Shao, Zhiyu S. „TWO-DIMENSIONAL HYDRODYNAMIC MODELING OF TWO-PHASE FLOW FOR UNDERSTANDING GEYSER PHENOMENA IN URBAN STORMWATER SYSTEM“. UKnowledge, 2013. http://uknowledge.uky.edu/ce_etds/5.

Der volle Inhalt der Quelle
Annotation:
During intense rain events a stormwater system can fill rapidly and undergo a transition from open-channel flow to pressurized flow. This transition can create large discrete pockets of trapped air in the system. These pockets are pressurized in the horizontal reaches of the system and then released through vertical vents. In extreme cases, the transition and release of air pockets can create a geyser. Current models are inadequate for simulating mixed flows with complicated air-water interactions, such as geysers; moreover, the escape of air through the vertical dropshaft is greatly simplified, or ignored entirely, in existing models. In this work a two-phase numerical model solving the Navier-Stokes equations is developed to investigate the key factors that form geysers. A projection method is used to solve the Navier-Stokes equations, and an advanced two-phase flow model, Volume of Fluid (VOF), is implemented in the solver to capture and advance the interface. The model has been validated against standard two-phase flow test problems involving significant interface topology changes, air entrainment and violent free-surface motion, and the results demonstrate its capability to handle complicated two-phase interactions. The numerical results are compared with experimental data and theoretical solutions; the comparisons consistently show satisfactory performance of the model. The model is applied to a real stormwater system and accurately simulates the pressurization process in a horizontal channel. It is then applied to simulate the rise and release of air pockets in a vertical riser, demonstrating the dominant factors that contribute to geyser formation: air pocket size, pressurization of the main pipe and the surcharged state of the vertical riser.
It captures the key dynamics of two-phase flow in the vertical riser, consistent with experimental results, suggesting that the code has excellent potential for extension to practical applications.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Schindler, Jennifer. „Estuarine Dynamics as a Function of Barrier Island Transgression and Wetland Loss: Understanding the Transport and Exchange Processes“. ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1260.

Der volle Inhalt der Quelle
Annotation:
The Northern Gulf of Mexico and coastal Louisiana are experiencing accelerated rates of relative sea-level rise; the region is therefore ideal for modeling the global effects of sea-level rise (SLR) on estuarine dynamics in a transgressive barrier island setting. The field methods and numerical modeling in this study show that as barrier islands are converted to inner shoals, tidal exchange increases between the estuary and the coastal ocean. If marshes are unable to accrete at a pace comparable to SLR, wetlands will deteriorate and the tidal exchange and tidal prism will further increase. Second only to hurricanes, winter storms are a primary driver of coastal morphology in this region, and this study shows that wind direction and magnitude, as well as changes in atmospheric pressure, greatly affect estuarine exchange. Significant wetland loss and winter storm events produce changes in local and regional circulation patterns, thereby affecting the hydrodynamic exchange and resulting transport.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Chen, Chao. „Understanding social and community dynamics from taxi GPS data“. Phd thesis, Institut National des Télécommunications, 2014. http://tel.archives-ouvertes.fr/tel-01048662.

Der volle Inhalt der Quelle
Annotation:
Taxis equipped with GPS sensors are an important sensory device for examining people's movements and activities, since they are not constrained to a pre-defined schedule or route. Big taxi GPS data recording the spatio-temporal traces left by taxis provide a rich and detailed glimpse into the motivations, behaviours, and resulting dynamics of a city's mobile population across the road network. In this dissertation, we aim to uncover the "hidden facets" of social and community dynamics encoded in taxi GPS data, to better understand how the urban population behaves and the resulting dynamics in the city. As some "hidden facets" concern similar aspects of social and community dynamics, we formally define three categories for study (social dynamics, traffic dynamics, and operational dynamics) and explore them to fill the wide gap between raw taxi GPS data and innovative applications and smart urban services. Specifically: 1. To enable real-time taxi fraud alerts, we propose the iBOAT algorithm, which detects anomalous trajectories "on-the-fly" and identifies which parts of a trajectory are responsible for its anomalousness, by comparing them against historical trajectories with the same origin and destination. 2. To introduce cost-effective and environmentally friendly transport services to citizens, we propose B-Planner, a two-phase approach to planning bi-directional night bus routes that leverages big taxi GPS data. 3. To offer a personalized, interactive, and traffic-aware trip route planning system to users, we propose the TripPlanner system, which contains both offline and online procedures and leverages a combination of Location-Based Social Network (LBSN) and taxi GPS data sets. Finally, some promising directions for future work are pointed out, mainly the fusion of taxi GPS data with other data sets to provide smarter and more personalized urban services for citizens.
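The core idea behind this style of detection can be sketched as follows. This is a minimal illustration of support-based anomaly scoring, not the published iBOAT algorithm; the grid-cell abstraction and function names are assumptions for the example:

```python
from collections import Counter

def cell_support(historical):
    """Map each grid cell to the fraction of historical trips visiting it."""
    counts = Counter()
    for traj in historical:
        for cell in set(traj):          # count each trip at most once per cell
            counts[cell] += 1
    n = len(historical)
    return {cell: c / n for cell, c in counts.items()}

def anomalous_parts(trajectory, support, threshold=0.1):
    """Return indices of trajectory points with insufficient historical support."""
    return [i for i, cell in enumerate(trajectory)
            if support.get(cell, 0.0) < threshold]

# Toy example: three historical trips follow the same corridor; the test
# trip detours through cells (5, 9) and (6, 9) that no past trip visited.
hist = [[(0, 0), (1, 0), (2, 0), (3, 0)],
        [(0, 0), (1, 0), (2, 0), (3, 0)],
        [(0, 0), (1, 1), (2, 0), (3, 0)]]
support = cell_support(hist)
test_trip = [(0, 0), (1, 0), (5, 9), (6, 9), (3, 0)]
print(anomalous_parts(test_trip, support))   # → [2, 3]
```

Comparing against trips with the same origin and destination means each new trajectory is judged only against directly comparable history, which is what allows both on-the-fly flagging and localization of the anomalous sub-path.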
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Vogt, Ivo [Verfasser], Christian [Akademischer Betreuer] Boit, Christian [Gutachter] Boit, Ingrid de [Gutachter] Wolf und Dean [Gutachter] Lewis. „Optical interactions for internal signal tracking in ICs : a deeper understanding through numerical simulations, spectral investigations and specific digital & analog test structures / Ivo Vogt ; Gutachter: Christian Boit, Ingrid de Wolf, Dean Lewis ; Betreuer: Christian Boit“. Berlin : Technische Universität Berlin, 2018. http://d-nb.info/1161461841/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Bishop, Courtney Alexandra. „Development and application of image analysis techniques to study structural and metabolic neurodegeneration in the human hippocampus using MRI and PET“. Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:2549bad2-432f-4d0e-8878-be9cce6ae0d2.

Der volle Inhalt der Quelle
Annotation:
Despite the association between hippocampal atrophy and a vast array of highly debilitating neurological diseases, such as Alzheimer’s disease and frontotemporal lobar degeneration, tools to accurately and robustly quantify the degeneration of this structure still largely elude us. In this thesis, we firstly evaluate previously-developed hippocampal segmentation methods (FMRIB’s Integrated Registration and Segmentation Tool (FIRST), Freesurfer (FS), and three versions of a Classifier Fusion (CF) technique) on two clinical MR datasets, to gain a better understanding of the modes of success and failure of these techniques, and to use this acquired knowledge for subsequent method improvement (e.g., FIRSTv3). Secondly, a fully automated, novel hippocampal segmentation method is developed, termed Fast Marching for Automated Segmentation of the Hippocampus (FMASH). This combined region-growing and atlas-based approach uses a 3D Sethian Fast Marching (FM) technique to propagate a hippocampal region from an automatically-defined seed point in the MR image. Region growth is dictated by both subject-specific intensity features and a probabilistic shape prior (or atlas). Following method development, FMASH is thoroughly validated on an independent clinical dataset from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), with an investigation of the dependency of such atlas-based approaches on their prior information. In response to our findings, we subsequently present a novel label-warping approach to effectively account for the detrimental effects of using cross-dataset priors in atlas-based segmentation. Finally, a clinical application of MR hippocampal segmentation is presented, with a combined MR-PET analysis of wholefield and subfield hippocampal changes in Alzheimer’s disease and frontotemporal lobar degeneration. 
This thesis therefore contributes both novel computational tools and valuable knowledge for further neurological investigations in both the academic and the clinical field.
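The fast-marching region growth at the heart of FMASH is not reproduced here, but its principle can be illustrated with a simplified sketch: first-arrival times from a seed point are computed with a Dijkstra-style update (an approximation of the full Eikonal solver), bright voxels with high "speed" are reached sooner, and thresholding the arrival time yields a segmentation. All names and the toy image are assumptions for the example:

```python
import heapq
import numpy as np

def fast_march(speed, seed):
    """First-arrival time from `seed` on a 2D grid with the given speed map."""
    T = np.full(speed.shape, np.inf)
    T[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > T[i, j]:
            continue                     # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < speed.shape[0] and 0 <= nj < speed.shape[1]:
                nt = t + 1.0 / speed[ni, nj]   # time to enter the neighbor
                if nt < T[ni, nj]:
                    T[ni, nj] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return T

# Toy "image": a bright disc (speed 1.0) on a dark background (speed 0.01),
# standing in for a hippocampus-like structure around the seed.
img = np.full((40, 40), 0.01)
Y, X = np.ogrid[:40, :40]
img[(Y - 20) ** 2 + (X - 20) ** 2 <= 100] = 1.0   # radius-10 disc
T = fast_march(img, (20, 20))
region = T <= 12.0            # arrival-time threshold segments the disc
print(region[20, 25], region[0, 0])
```

In FMASH the speed map additionally folds in a probabilistic shape prior, so region growth is driven by both subject-specific intensities and the atlas; the sketch above uses intensity alone.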
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Buchanan, Aeron Morgan. „Tracking non-rigid objects in video“. Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:82efb277-abc9-4725-9506-5d114a83bd96.

Der volle Inhalt der Quelle
Annotation:
Video is a sequence of 2D images of the 3D world generated by a camera. As the camera moves relative to the real scene and elements of that scene themselves move, correlated frame-to-frame changes in the video images are induced. Humans easily identify such changes as scene motion and can readily assess attempts to quantify it. For a machine, the identification of the 2D frame-to-frame motion is difficult. This problem is addressed by the computer vision process of tracking. Tracking underpins the solution to the problem of augmenting general video sequences with artificial imagery, a staple task in the visual effects industry. The problem is difficult because tracking in general video sequences is complicated by the presence of non-rigid motion, repeated texture and arbitrary occlusions. Existing methods provide solutions that rely on imposing limitations on the scenes that can be processed or that rely on human artistry and hard work. I introduce new paradigms, frameworks and algorithms for overcoming the challenges of processing general video and thus provide solutions that fill the gap between the 'automated' and 'manual' approaches. The work is easily sectioned into three parts, which can be considered separately or taken together for dealing with video without limitations. The initial focus is on directly addressing practical issues of human interaction in the tracking process: a new solution is developed by explicitly incorporating the user into an interactive algorithm. It is a novel tracking system based on fast full-frame patch searching and high-speed optimal track determination. This approach makes only minimal assumptions about motion and appearance, making it suitable for the widest variety of input video. I detail an implementation of the new system using k-d trees and dynamic programming. The second distinct contribution is an important extension to tracking algorithms in general.
It can be noted that existing tracking algorithms occupy a spectrum in their use of global motion information. Local methods are easily confused by occlusions, repeated texture and image noise. Global motion models offer strong predictions to see through these difficulties and have been used in restricted circumstances, but are defeated by scenes containing independently moving objects or modest levels of non-rigid motion. I present a well principled way of combining local and global models to improve tracking, especially in these highly problematic cases. By viewing rank-constrained tracking as a probabilistic model of 2D tracks instead of 3D motion, I show how one can obtain a robust motion prior that can be easily incorporated in any existing tracking algorithm. The development of the global motion prior is based on rank-constrained factorization of measurement matrices. A common difficulty comes from the frequent occurrence of occlusions in video, which means that the relevant matrices are often not complete due to missing data. This defeats standard factorization algorithms. To fully explain and understand the algorithmic complexities of factorization in this practical context, I present a common notation for the direct comparison of existing algorithms and propose a new family of hybrid approaches that combine the superb initial performance of alternation methods with the convergence power of the Newton algorithm. Together, these investigations provide a wide-ranging, yet coherent exploration of tracking non-rigid objects in video.
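The alternation family of factorization methods mentioned above can be sketched concisely. This is a generic alternating-least-squares baseline for rank-constrained factorization with missing data, under the assumption of a simple boolean observation mask; it is not the thesis's hybrid algorithm, and the Newton refinement it combines with is omitted:

```python
import numpy as np

def alternate(M, mask, r, iters=100, seed=0):
    """Rank-r factorization M ~ A @ B by alternating least squares,
    fitting only the observed entries indicated by boolean `mask`."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    A = rng.standard_normal((m, r))
    B = rng.standard_normal((r, n))
    for _ in range(iters):
        for j in range(n):                       # solve for each column of B
            rows = mask[:, j]
            B[:, j] = np.linalg.lstsq(A[rows], M[rows, j], rcond=None)[0]
        for i in range(m):                       # solve for each row of A
            cols = mask[i]
            A[i] = np.linalg.lstsq(B[:, cols].T, M[i, cols], rcond=None)[0]
    return A, B

# Toy measurement matrix: exactly rank 2, with roughly 30% of entries missing,
# mimicking tracks that vanish under occlusion.
rng = np.random.default_rng(1)
M = rng.standard_normal((15, 2)) @ rng.standard_normal((2, 12))
mask = rng.random(M.shape) > 0.3
A, B = alternate(M, mask, r=2)
fit = np.abs((A @ B - M)[mask]).max()   # residual on observed entries only
print(fit)
```

Each half-step is a closed-form least-squares solve, which is why alternation starts fast; its well-known slow tail convergence is what motivates hybridizing with Newton steps.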
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Sagara, Namika. „Consumer understanding and use of numeric information in product claims“. Thesis, University of Oregon, 2009. http://hdl.handle.net/1794/10588.

Der volle Inhalt der Quelle
Annotation:
xiii, 109 p. : ill. A print copy of this thesis is available through the UO Libraries. Search the library catalog for the location and call number.
Numeric information is often presented to consumers in order to communicate important and precise information that is not well conveyed non-numerically. The assumption of marketers, then, seems to be that numeric information is useful to consumers in evaluating products. But do consumers understand and use such numeric information in product claims? Recent research suggests that many people are "innumerate": about half of Americans lack the minimal mathematical skills needed to use numbers embedded in printed materials, which suggests that many also lack the skills needed to use numbers embedded in product claims and other marketing communications. In a series of five experiments, I investigated whether and how consumers understand and use numeric information presented in product claims when evaluating consumer goods. The results demonstrated that participants, and especially less numerate individuals, were susceptible to an Illusion-of-Numeric-Truth effect: they judged false claims as true when the numeric meaning was inaccurately translated (e.g., "30% of consumers" inaccurately translated to "most consumers"). Mediation analysis suggested that highly numerate participants were better at developing affective reactions toward numeric information in product claims and at using these affective reactions as information when faced with truth judgments. Highly numerate individuals were also more sensitive to different levels of numeric information in their product evaluations; this sensitivity likewise seemed to depend on drawing affective meaning from numbers and number comparisons and using this information in product evaluations. Although less numerate individuals reported that numeric information is important, they were less sensitive to it unless encouraged to process it more systematically.
The results of this dissertation indicate that not all numeric information will be used by, or be useful to, all consumers. Simply presenting numeric information may therefore not be sufficient to make it useful for all consumers.
Committee in charge: Peter Wright, Chairperson, Marketing; Lynn Kahle, Member, Marketing; Ellen Peters, Member, Not from U of O; Robert Madrigal, Member, Marketing; Paul Slovic, Outside Member, Psychology
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Rifai, Bassel. „Cavitation-enhanced delivery of therapeutics to solid tumors“. Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:374b2ee1-0711-4994-8434-bf90358d9e47.

Der volle Inhalt der Quelle
Annotation:
Poor drug penetration through tumor tissue has emerged as a fundamental obstacle to cancer therapy. The solid tumor microenvironment presents several physiological abnormalities which reduce the uptake of intravenously administered therapeutics, including leaky, irregularly spaced blood vessels, and a pressure gradient which resists transport of therapeutics from the bloodstream into the tumor. Because of these factors, a systemically administered anti-cancer agent is unlikely to reach 100% of cancer cells at therapeutic dosages, which is the efficacy required for curative treatment. The goal of this project is to use high-intensity focused ultrasound (HIFU) to enhance drug delivery via phenomena associated with acoustic cavitation. 'Cavitation' is the formation, oscillation, and collapse of bubbles in a sound field, and can be broadly divided into two types: 'inertial' and 'stable'. Inertial cavitation involves violent bubble collapse and is associated with phenomena such as heating, fluid jetting, and broadband noise emission. Stable cavitation occurs at lower pressure amplitudes, and can generate liquid microstreaming in the bubble vicinity. It is this combination of fluid jetting and microstreaming that the thesis attempts to explore, control, and apply to the drug delivery problem in solid tumors. First, the potential for cavitation to enhance the convective transport of a model therapeutic into obstructed vasculature in a cell-free in vitro tumor model is evaluated. Transport is quantified using post-treatment image analysis of the distribution of a dye-labeled macromolecule, while cavitation activity is quantified by analyzing passively recorded acoustic emissions. The introduction of exogenous cavitation nuclei into the acoustic field is found to dramatically enhance both cavitation activity and convective transport.
The strong correlation between inertial cavitation activity and drug delivery in this study suggested both a mechanism of action and the clinical potential for non-invasive treatment monitoring. Next, a flexible and efficient method to simulate numerically the microstreaming fields instigated by cavitating microbubbles is developed. The technique is applied to the problem of quantifying convective transport of a scalar quantity in the vicinity of acoustically cavitating microbubbles of various initial radii subject to a range of sonication parameters, yielding insight regarding treatment parameter choice. Finally, in vitro and in vivo models are used to explore the effect of HIFU on delivery and expression of a biologically active adenovirus. The role of cavitation in improving the distribution of adenovirus in porous media is established, as well as the critical role of certain sonication parameters in sustaining cavitation activity in vivo. It is shown that following intratumoral or intravenous co-injection of ultrasound contrast agents and adenovirus, both the distribution and expression of viral transgenes are enhanced in the presence of inertial cavitation. This ultrasound-based drug delivery system has the potential to be applied in conjunction with a broad range of macromolecular therapeutics to augment their bioavailability for cancer treatment. In order to reach this objective, further developmental work is recommended, directed towards improving therapeutic transducer design, using transducer arrays for treatment monitoring and mapping, and continuing the development of functionalized monodisperse cavitation nuclei.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Haghighi, Roozbeh. „Towards understanding multicomponent chemistry interaction using direct numerical simulation“. Phd thesis, 2015. https://tuprints.ulb.tu-darmstadt.de/5137/1/dissertation%20-%20final%20inkl%20lebenslauf%282%29.pdf.

Der volle Inhalt der Quelle
Annotation:
In recent years the automobile and aviation industries, as well as other energy-intensive branches, have moved towards downsizing as a new trend. Downsizing essentially means reducing the dimensions of a device while keeping power and efficiency at least the same. The immediate consequence of downsizing for combustion devices is a higher area-to-volume ratio, in which case near-wall combustion attains greater influence on the overall behavior of the device. Although flame and wall do not interact everywhere in the combustor, their interaction always has a significant influence on the combustion process, especially in terms of unwanted byproducts such as unburnt hydrocarbons and NOx. For these reasons, near-wall combustion and flame-wall interaction (FWI) have become an increasingly attractive area of scientific investigation, with particular interest in the quenching of the flame at the wall. Based on the orientation of the flame relative to the wall, two configurations of flame-wall interaction can be distinguished in laminar or turbulent flows: the flame front parallel to the wall, i.e. the flame propagating normal to the wall, called head-on quenching; and the flame front normal to the wall, i.e. the flow running parallel to the wall, called side-wall quenching. Numerous experimental studies have been conducted in this area, but they have serious limitations. Flame-wall interaction has very short time scales and small characteristic lengths, which require high-precision measurements, highly sophisticated setups, and costly material; even so, the results fall short because of present technical limitations. The flame quenching distance at atmospheric pressure is of the order of less than 100 micrometers, and there are few reliable methods to capture flame-wall interaction at such small scales.
This leaves CFD as the only feasible option, which has only become possible with the arrival of recent computing technology. Although there is a vast theoretical, experimental, and numerical body of research on the head-on configuration of flame-wall interaction, there are few works concerning the side-wall configuration, in either laminar or turbulent flame regimes. The main reason is the cost of computation: while HOQ simulations can be performed with a one-dimensional finite-difference code, side-wall quenching has to be simulated in a two-dimensional domain for the laminar combustion regime and a three-dimensional domain for turbulent flame-wall interactions. The improved computational capacity of research centers has made it possible to delve into this topic and better understand the SWQ configuration. Gruber et al. recently performed a three-dimensional direct numerical simulation to investigate the interaction between the wall and a turbulent hydrogen-air flame using a multi-component reaction mechanism; prior to that work, studies were mostly experimental or theoretical, and numerical simulations were based on simple chemistry only. The lack of sufficient theoretical and numerical work in this area makes it difficult to acquire enough information to characterize flame-wall interaction in side-wall quenching. These points provide ample motivation to perform sophisticated direct numerical simulations of combustion in the near-wall region using detailed chemistry. The goal of this work is, first, to investigate flame-wall interaction in the head-on quenching configuration across the different phases of the simulation, in which the flame behaves differently, from the transient phase at the start of the simulation until after quenching. Second, the amount of carbon monoxide produced at different distances from the wall is examined and compared to experiments.
The last step is to gain better insight into the side-wall quenching of stoichiometric methane-air mixtures in configurations where the wall is parallel to the flow. Near-wall combustion, i.e. flame-wall interaction, is explored using the two completely different quenching configurations, and their similarities and differences are studied. To make this possible, two quasi-three-dimensional simulations of side-wall quenching were performed. To realize this, the readily available code (FASTEST-3D) was modified to calculate the mixture-averaged flow properties of a gas mixture, and then extended to include mass-fraction and energy transport equations for a multi-component gas mixture. In the simulations for both SWQ and HOQ configurations, the Smooke multi-component mechanism and the mixture-averaged transport-coefficient calculation recommended by Hirschfelder are used.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Collins, Benjamin Forster. „Understanding the Solar System with Numerical Simulations and Lévy Flights“. Thesis, 2009. https://thesis.library.caltech.edu/2275/1/thesis.pdf.

Der volle Inhalt der Quelle
Annotation:

This thesis presents several investigations into the formation of planetary systems and the dynamical evolution of the small bodies left over from this process.

We develop a numerical integration scheme with several optimizations for studying the late stages of planet formation: adaptive time steps, accurate treatment of close encounters between particles, and the ability to add non-conservative forces. Using this code, we simulate the phase of planet formation known as "oligarchic growth." We find that when the dynamical friction from planetesimals is strong, the annular feeding zones of the protoplanets are inhabited by not one but several oligarchs on nearly the same semimajor axis. We systematically determine the number of co-orbital protoplanets sharing a feeding zone and the width of these zones as a function of the strength of dynamical friction and the total mass of the disk. The increased number of surviving protoplanets at the end of this phase qualitatively affects their subsequent evolution into full-sized planets.

We also investigate the distribution of the eccentricities of the protoplanets in the runaway growth phase of planet formation. Using a Boltzmann equation, we find a simple analytic solution for the distribution function followed by the eccentricity. We show that this function is self-similar: it has a constant shape while the scale is set by the balance between mutual excitation and dynamical friction. The type of evolution described by this distribution function is known as a Lévy flight.

We use the Boltzmann equation framework to study the nearly circular orbits of Kuiper belt binaries and the nearly radial orbits of comets during the formation of the Oort cloud. We calculate the distribution function of the eccentricity of Kuiper belt systems, like the moons of Pluto, given the stochastic perturbations caused by close encounters with other Kuiper belt objects. For Oort cloud comets, we find the distribution function of the angular momentum as it is excited by perturbations from passing stars in the Galaxy. Both systems evolve as Lévy flights. This work unifies the effects of stochastic stellar encounters and the smooth torque from the Galactic potential on Oort cloud comets.

APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Hwang, Jihyun. „Children's Understanding of Compositionality of Complex Numerals“. 2021. https://scholarworks.umass.edu/masters_theses_2/1015.

Der volle Inhalt der Quelle
Annotation:
Counting is children's first formal exposure to numerals, which are constructed with a set of syntactic rules. Young children undergo many stages of rote memorization of the sequence and eventually count through 100. What core knowledge is necessary to extend their number knowledge to higher numbers? The compositionality of numerals is key to understanding the natural number system, as in learning languages. Higher numbers (e.g., two hundred five) are constructed from lexical items such as the earlier numbers (e.g., one to nine) and multipliers. If children develop an understanding of the compositionality of numerals, they may comprehend complex numerals far beyond their count list. In a novel task, the Number Word Comparison task, we tested whether children's skill at comparing the ones (e.g., five versus eight) extends to complex numerals (e.g., two hundred five versus two hundred eight). Sixty-eight preschoolers completed three tasks, which measured counting fluency, number word comparison skill, and cardinal principle knowledge. Children who were capable of comparing the ones performed above chance, on average, in comparing complex numerals, and their performance was strongly associated with counting fluency. Based on these empirical results, we discuss a linguistic account of number acquisition in early childhood, proposing a link between learning the syntax of numerals and understanding the meaning behind them.
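The compositionality at issue can be made concrete with a tiny grammar. This illustration is not from the thesis: a complex numeral is a sum of (digit × multiplier) terms, so "two hundred five" and "two hundred eight" share every term except the ones, which is exactly what the comparison task exploits:

```python
# Minimal additive numeral grammar: digits set a current value, multipliers
# scale and accumulate it. Word lists here cover only what the example needs.
ONES = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
        "six": 6, "seven": 7, "eight": 8, "nine": 9}
MULTIPLIERS = {"hundred": 100, "thousand": 1000}

def numeral_value(words):
    """Evaluate a simple numeral phrase like 'two hundred five'."""
    total, current = 0, 0
    for w in words.split():
        if w in ONES:
            current = ONES[w]
        elif w in MULTIPLIERS:
            total += current * MULTIPLIERS[w]
            current = 0
        else:
            raise ValueError(f"unknown word: {w}")
    return total + current

print(numeral_value("two hundred five"))                      # → 205
print(numeral_value("two hundred five") <
      numeral_value("two hundred eight"))                     # → True
```

A child who has internalized this compositional structure can compare numerals far beyond the count list by comparing matched terms, without ever having rote-memorized "two hundred five".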
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Mao, Xiaolin. „Understanding Lithosphere and Mantle Dynamics with Numerical Models Constrained by Observations“. Thesis, 2019. https://thesis.library.caltech.edu/11097/34/xiaolin-mao-2019-final.pdf.

Der volle Inhalt der Quelle
Annotation:

Numerical studies play an important role in understanding lithospheric and mantle dynamics. In this thesis, we first develop and use multiphysics geodynamic models to study the evolution of subduction. Our geodynamic models are constrained by different geological and geophysical observations, including topography. We then use 3D numerical simulations of dynamic rupture with off-fault inelastic deformation to study the scaling between damage zone thickness and fault width. Finally, we study the mechanical strength and anisotropy in the continental collision region with flexural models and gravity and topography data.

Topography provides valuable data for investigating lithosphere and mantle dynamics and for constraining numerical studies. Topography prediction with forward models is well established for plate interiors, while it is still difficult to predict realistic topography at subduction zones. We use multiphysics geodynamic models to tackle this problem. Our models incorporate a true free surface, phase changes, and elasto-visco-plastic rheology; we also include surface processes, water migration, and water weakening. We study the influence of different geophysical, petrological, and geochemical processes on topography and subduction zone evolution and show that surface geometry, surface processes, elasticity, and oceanic crust all strongly influence the stress state and deformation within plates; that water weakening decouples the overriding plate from the subducting slab in the mantle wedge region and contributes to the initiation of overriding plate failure; and that oceanic crust has an effect similar to that of sediments, lubricating the subduction interface. Free-slip and free-surface topography differ substantially, and free-surface topography is influenced by the different processes through adjustments of the force balance. Application to the New Hebrides subduction zone suggests that deformation within a detached slab segment, caused by the impact of the slab segment on the strong lower mantle, explains the origin of the isolated deep earthquakes in the transition zone beneath the North Fiji Basin, and that the difference in seismic intensity between the northern and southern deep earthquake clusters is caused by the transition from strong to weak deformation after the impact.

We apply our multiphysics approach to investigate the influence of inherited lithospheric heterogeneity on subduction initiation at the Puysegur Incipient Subduction Zone (PISZ) south of New Zealand. Our predictions fit the morphology of the Puysegur Trench and Ridge and the deformation history on the overriding plate. We show how a new thrust fault forms and evolves into a smooth subduction interface, and how a preexisting weak zone can become a vertical fault inboard of the thrust fault during subduction initiation, consistent with the two-fault system at the PISZ. The model suggests that the PISZ may not yet be self-sustaining. We propose that the Snares Zone (or Snares Trough) is caused by plate-coupling differences between its shallower and deeper parts, that the tectonic sliver between the two faults experiences strong rotation, and that low-density material accumulates beneath the Snares Zone.

We then turn to the scaling between damage zone thickness and fault width. Field observations indicate that damage zone thickness scales with accumulated fault displacement at short displacements but saturates at a few hundred meters for displacements larger than a few kilometers. To explain this transition of scaling behavior, we conduct 3D numerical simulations of dynamic rupture with off-fault inelastic deformation on long strike-slip faults. We find that the distribution of coseismic inelastic strain is controlled by the transition from crack-like to pulse-like rupture propagation associated with saturation of the seismogenic depth. The yielding zone reaches its maximum thickness when the rupture becomes a stable pulse-like rupture. Considering fracture mechanics theory, we show that seismogenic depth controls the upper bound of damage zone thickness on mature faults by limiting the efficiency of stress concentration near earthquake rupture fronts. We obtain a quantitative relation between limiting damage zone thickness, background stress, dynamic fault strength, off-fault yield strength, and seismogenic depth, which agrees with first-order field observations. Our results help link dynamic rupture processes with field observations and contribute to a fundamental understanding of damage zone properties.

Finally, we investigate the interactions between mechanical strength and lithospheric deformation. Variation of lithospheric strength controls the distribution of stress and strain within plates and at plate boundaries; simultaneously, deformation caused by localized stress and strain reduces the lithospheric strength. We calculate the effective elastic thickness, Te, a proxy for lithospheric strength, and its anisotropy at the Zagros-Himalaya belt and surrounding regions. Te varies from under 5 km to over 100 km and correlates well with geological boundaries. Along plate boundaries, mountain belts, and major faults, Te is usually smaller than 30 km; in basins, Te is between 30 and 60 km; in stable cratons, Te is larger than 60 km. In regions of low and intermediate strength (Te < 60 km), the extent of Te anisotropy is usually large, and the weak direction of Te anisotropy agrees well with the directions of GPS data and crustal stress. In stable cratons, the extent of Te anisotropy is usually small. Our results suggest that mechanical weakening is the dominant mechanism reducing lithospheric strength in regions where Te is smaller than 60 km. In stable cratons, the effects of mechanical weakening can be ignored, and only thermal weakening resulting from mantle processes can modify the lithospheric strength substantially.

APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Gao, Fan. „Children's understanding of cardinal equivalence in large discrete sets /“. 2000. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:9978027.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
