Dissertations / Theses on the topic 'Lagrangian particle tracking'




Consult the top 24 dissertations / theses for your research on the topic 'Lagrangian particle tracking.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Elmasdotter, Ajla. "An Interactive Eye-tracking based Adaptive Lagrangian Water Simulation Using Smoothed Particle Hydrodynamics." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281978.

Full text
Abstract:
Many water animations and simulations depend on time-consuming algorithms that create realistic water movement and visualization. However, interest in realistic, real-time, interactive simulations is steadily growing in, among others, the game and virtual-reality industries. A common method for particle-based water simulation is Smoothed Particle Hydrodynamics, which also allows refinement and adaptivity that focuses the computational power on the parts of the simulation that require it the most. This study proposes an eye-tracking based adaptive method for water simulations using Smoothed Particle Hydrodynamics, which adapts the simulation according to where a user is looking, on the assumption that what a user cannot see or perceive is of lesser importance. Its performance is evaluated by comparing the proposed method to a surface-based adaptive method, measuring frames per second, the number of particles in the simulation, and the execution time. It is concluded that the eye-tracking based adaptive method performs better than the surface-based adaptive method in four out of five scenarios and should hence be considered a method to evaluate further and possibly use when creating applications or simulations requiring real-time water simulation, with the restriction that eye-tracking hardware is necessary for the method to work.
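The gaze-driven adaptivity described above can be illustrated with a minimal sketch that assigns an SPH refinement level from each particle's distance to the gaze point. The rule, the radii and the particle/gaze data below are illustrative assumptions, not the thesis' actual splitting criteria.

import numpy as np

def refinement_level(positions, gaze_point, r_fine=0.2, r_coarse=0.6):
    """Assign a refinement level per particle from its distance to the gaze point:
    2 = fully refined near the gaze, 1 = intermediate, 0 = coarse far away."""
    d = np.linalg.norm(positions - gaze_point, axis=1)
    levels = np.zeros(len(positions), dtype=int)
    levels[d < r_coarse] = 1
    levels[d < r_fine] = 2
    return levels

# hypothetical data: 1000 particles in a unit box, gaze at the centre of the domain
rng = np.random.default_rng(0)
positions = rng.random((1000, 3))
gaze = np.array([0.5, 0.5, 0.5])
print(np.bincount(refinement_level(positions, gaze), minlength=3))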
APA, Harvard, Vancouver, ISO, and other styles
2

Huck, Peter Dearborn. "Particle dynamics in turbulence : from the role of inhomogeneity and anisotropy to collective effects." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEN073/document.

Full text
Abstract:
Turbulence is well known for its ability to efficiently disperse matter, whether it be atmospheric pollutants or gasoline in combustion motors. Two considerations are fundamental when considering such situations. First, the underlying flow may have a strong influence on the behavior of the dispersed particles. Second, the local concentration of particles may enhance or impede the transport properties of turbulence. This dissertation addresses these points separately through the experimental study of two different turbulent flows. The first experimental device is the so-called von Kármán flow, which consists of an enclosed vessel filled with water forced by two counter-rotating disks, creating a strongly inhomogeneous and anisotropic turbulence. Two high-speed cameras permitted the creation of a Lagrangian trajectory database of particles that were either isodense with water or heavier than water, but smaller than the smallest turbulent scales. These trajectories permitted a study of the turbulent kinetic energy budget, which was shown to be directly related to the transport properties of the turbulent flow. The heavy particles illustrate the role of flow anisotropy in the dispersive dynamics of particles dominated by effects related to their inertia. The second flow studied was a wind tunnel seeded with micrometer-sized water droplets, which was used to study the effects of local concentration on the settling velocities of these particles. A model based on theoretical multi-phase methods was developed in order to take into account the role of collective effects on sedimentation in a turbulent flow. The theoretical and experimental results emphasize the role of polydispersity and of coupling between the underlying flow and the dispersed phase in the enhancement of droplet settling.
APA, Harvard, Vancouver, ISO, and other styles
3

Heide, Jakob. "Numerical analysis of Urea-SCR sprays under cross-flow conditions." Thesis, KTH, Mekanik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-194497.

Full text
Abstract:
The mixing and evaporation of Diesel Exhaust Fluid (DEF) inside a Urea Selective Catalytic Reduction (SCR) chamber have been numerically investigated. The first task in this work was to examine the numerical framework and assess the models available in a commercial CFD software (ANSYS Fluent 14.5). Secondly, the knowledge gained from the model sensitivity analysis was applied to the practical case of a Urea-SCR mixing chamber. The effects of exhaust-gas mass flow rate and temperature on the mixing and evaporation of the DEF spray have been investigated. The results indicate that evaporation rates inside the mixing chamber depend on the mass flow rate of the exhaust gas but not on its temperature, due to compressibility effects of the exhaust gas. For a constant mass flow rate, an increase in temperature decreases the residence time of droplets (due to compressibility) by roughly the same order of magnitude as it increases the evaporation rate of the individual droplets (due to the higher temperature), so the two effects balance each other. The results could potentially contribute to the development and optimization of current SCR systems.
APA, Harvard, Vancouver, ISO, and other styles
4

Szwaykowska, Klementyna. "Controlled Lagrangian particle tracking: analyzing the predictability of trajectories of autonomous agents in ocean flows." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50357.

Full text
Abstract:
Use of model-based path planning and navigation is a common strategy in mobile robotics. However, navigation performance may degrade in complex, time-varying environments under model uncertainty because of loss of prediction ability for the robot state over time. Exploration and monitoring of ocean regions using autonomous marine robots is a prime example of an application where use of environmental models can have great benefits in navigation capability. Yet, in spite of recent improvements in ocean modeling, errors in model-based flow forecasts can still significantly affect the accuracy of predictions of robot positions over time, leading to impaired path-following performance. In developing new autonomous navigation strategies, it is important to have a quantitative understanding of error in predicted robot position under different flow conditions and control strategies. The main contributions of this thesis include development of an analytical model for the growth of error in predicted robot position over time and theoretical derivation of bounds on the error growth, where error can be attributed to drift caused by unmodeled components of ocean flow. Unlike most previous works, this work explicitly includes spatial structure of unmodeled flow components in the proposed error growth model. It is shown that, for a robot operating under flow-canceling control in a static flow field with stochastic errors in flow values returned at ocean model gridpoints, the error growth is initially rapid, but slows when it reaches a value of approximately twice the ocean model gridsize. Theoretical values for mean and variance of error over time under a station-keeping feedback control strategy and time-varying flow fields are computed. Growth of error in predicted vehicle position is modeled for ocean models whose flow forecasts include errors with large spatial scales. Results are verified using data from several extended field deployments of Slocum autonomous underwater gliders, in Monterey Bay, CA in 2006, and in Long Bay, SC in 2012 and 2013.
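As a toy illustration of the error-growth statistics analysed above, the sketch below integrates a predicted trajectory (model flow only) and a 'true' trajectory that also feels a smooth unmodeled flow component, and records their separation over time. The flow fields, correlation length and parameters are invented for illustration and are not taken from the thesis.

import numpy as np

L = 20.0              # assumed correlation length of the unmodeled flow component
dt, n_steps = 0.1, 500

def model_flow(x, t):
    """Flow forecast returned by a (hypothetical) ocean model."""
    return np.array([0.2 * np.sin(0.1 * t), 0.1])

def unmodeled_flow(x):
    """Smooth, spatially structured flow error with length scale L (toy model)."""
    return 0.05 * np.array([np.sin(2 * np.pi * x[0] / L + 1.3),
                            np.cos(2 * np.pi * x[1] / L - 0.7)])

x_pred = np.zeros(2)  # position predicted with the model flow only
x_true = np.zeros(2)  # 'true' position, which also drifts with the unmodeled flow
error = []
for k in range(n_steps):
    t = k * dt
    x_pred = x_pred + dt * model_flow(x_pred, t)
    x_true = x_true + dt * (model_flow(x_true, t) + unmodeled_flow(x_true))
    error.append(np.linalg.norm(x_true - x_pred))

print(f"separation after {n_steps * dt:.0f} time units: {error[-1]:.3f}")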
APA, Harvard, Vancouver, ISO, and other styles
5

Dimou, Konstantina. "3-D hybrid Eulerian-Lagrangian / particle tracking model for simulating mass transport in coastal water bodies." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/28011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Küchler, Christian [Verfasser]. "Measurements of Turbulence at High Reynolds Numbers : From Eulerian Statistics Towards Lagrangian Particle Tracking / Christian Küchler." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2021. http://d-nb.info/1230138072/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Caraghiaur, Garrido Diana. "Experimental Study and Modelling of Spacer Grid Influence on Flow in Nuclear Fuel Assemblies." Licentiate thesis, KTH, Physics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9983.

Full text
Abstract:

The work focuses on experimental study and modelling of the influence of spacer grids on single- and two-phase flow. In the experimental study, a mock-up of a realistic fuel bundle with five spacer grids of thin plate-spring construction was investigated. A special pressure-measuring technique was used to measure the pressure distribution inside the spacer. Five pressure taps were drilled in one of the rods, which could exchange position with the other rods, providing a large degree of freedom. Laser Doppler Velocimetry was used to measure the mean local axial velocity and its fluctuating component upstream and downstream of the spacer in several subchannels with differing spacer parts. The experimental study revealed an interesting behaviour: subchannels in the interior of the bundle show a different effect on the flow downstream of the spacer compared to subchannels close to the box wall, even if the spacer part is the same. This behaviour is not reflected in modern correlations. The modelling part first consisted of comparing the present experimental data to Computational Fluid Dynamics calculations. It was shown that stand-alone subchannel models can predict the local velocity, but are unreliable in predicting the turbulence enhancement due to the spacer. The second part of the modelling consisted of developing a model for the increase in droplet deposition due to the spacer. In this study, Lagrangian Particle Tracking (LPT) coupled to a Discrete Random Walk (DRW) technique was used to model droplet motion through the turbulent flow. The LPT technique has the advantage of modelling the influence of the turbulence structure on droplet deposition, thus giving a model that generalizes across changes in spacer geometry. Verification of the applicability of the LPT-DRW method to model deposition in annular flow at Boiling Water Reactor conditions showed that the method is unreliable in its present state: the model calculations compare reasonably well to air-water deposition data, but display the wrong trend if the fluids have a density ratio different from that of air-water.
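The LPT-DRW combination mentioned above can be sketched as follows: the droplet sees a fluctuating gas velocity sampled from the local turbulent kinetic energy, resampled every eddy interaction time, and relaxes toward it through Stokes drag. All constants and property values below are illustrative assumptions rather than the calibration used in the thesis.

import numpy as np

rng = np.random.default_rng(1)

def track_droplet(k_turb, eps, tau_p, dt, n_steps):
    """Toy discrete random walk for one droplet in homogeneous turbulence."""
    sigma = np.sqrt(2.0 * k_turb / 3.0)      # rms of one fluctuating velocity component
    tau_eddy = 0.3 * k_turb / eps            # assumed eddy interaction time
    x = np.zeros(3)                          # droplet position
    v = np.zeros(3)                          # droplet velocity
    u_fluct = rng.normal(0.0, sigma, size=3) # currently sampled eddy velocity
    t_in_eddy = 0.0
    for _ in range(n_steps):
        t_in_eddy += dt
        if t_in_eddy > tau_eddy:             # droplet enters a new eddy
            u_fluct = rng.normal(0.0, sigma, size=3)
            t_in_eddy = 0.0
        v += (u_fluct - v) / tau_p * dt      # Stokes drag toward the sampled gas velocity
        x += v * dt
    return x

print(track_droplet(k_turb=0.5, eps=10.0, tau_p=5e-3, dt=1e-4, n_steps=20000))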

APA, Harvard, Vancouver, ISO, and other styles
8

Kim, Ho Jun. "Theoretical and numerical studies of chaotic mixing." Diss., Texas A&M University, 2008. http://hdl.handle.net/1969.1/85940.

Full text
Abstract:
Theoretical and numerical studies of chaotic mixing are performed to circumvent the difficulties of efficient mixing, which come from the lack of turbulence in microfluidic devices. In order to carry out efficient and accurate parametric studies and to identify a fully chaotic state, a spectral element algorithm for solution of the incompressible Navier-Stokes and species transport equations is developed. Using Taylor series expansions in time marching, the new algorithm employs an algebraic factorization scheme on multi-dimensional staggered spectral element grids, and extends classical conforming Galerkin formulations to nonconforming spectral elements. Lagrangian particle tracking methods are utilized to study particle dispersion in the mixing device using spectral element and fourth-order Runge-Kutta discretizations in space and time, respectively. Comparative studies of five different techniques commonly employed to identify the chaotic strength and mixing efficiency in microfluidic systems are presented to demonstrate the competitive advantages and shortcomings of each method. These are the stirring index based on the box counting method, Poincare sections, finite time Lyapunov exponents, the probability density function of the stretching field, and the mixing index inverse, based on the standard deviation of the scalar species distribution. A series of numerical simulations is performed by varying the Peclet number (Pe) at fixed kinematic conditions. The mixing length (lm) is characterized as a function of the Pe number, and lm ∝ ln(Pe) scaling is demonstrated for fully chaotic cases. Employing the aforementioned techniques, optimum kinematic conditions and the actuation frequency of the stirrer that result in the highest mixing/stirring efficiency are identified in a zeta-potential-patterned straight micro-channel, where a continuous flow is generated by superposition of a steady pressure-driven flow and a time-periodic electroosmotic flow induced by a stream-wise AC electric field. Finally, it is shown that the invariant manifold of a hyperbolic periodic point determines the geometry of fast mixing zones in oscillatory flows in a two-dimensional cavity.
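The particle tracking step mentioned above can be illustrated with a classical fourth-order Runge-Kutta advection of a passive tracer in a prescribed two-dimensional, time-periodic velocity field. The field below is a generic stand-in (a standard double-gyre test flow), not the electroosmotic flow computed by the spectral element solver.

import numpy as np

def velocity(x, t, eps=0.25, omega=2.0 * np.pi):
    """Generic incompressible, time-periodic 2-D test field (double gyre)."""
    a = eps * np.sin(omega * t)
    b = 1.0 - 2.0 * a
    f = a * x[0] ** 2 + b * x[0]
    dfdx = 2.0 * a * x[0] + b
    u = -np.pi * np.sin(np.pi * f) * np.cos(np.pi * x[1])
    v = np.pi * np.cos(np.pi * f) * np.sin(np.pi * x[1]) * dfdx
    return np.array([u, v])

def rk4_step(x, t, dt):
    """Fourth-order Runge-Kutta step for the tracer equation dx/dt = u(x, t)."""
    k1 = velocity(x, t)
    k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(x + dt * k3, t + dt)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

x, t, dt = np.array([0.3, 0.4]), 0.0, 1e-2
for _ in range(1000):
    x = rk4_step(x, t, dt)
    t += dt
print(x)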
APA, Harvard, Vancouver, ISO, and other styles
9

Grabel, Michael Z. "A Lagrangian/Eulerian Approach for Capturing Topological Changes in Moving Interface Problems." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563527241172213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sharma, Gaurav. "Direct numerical simulation of particle-laden turbulence in a straight square duct." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/155.

Full text
Abstract:
Particle-laden turbulent flow through a straight square duct at Reτ = 300 is studied using direct numerical simulation (DNS) and Lagrangian particle tracking. A parallelized 3-D particle tracking direct numerical simulation code has been developed to perform the large-scale turbulent particle transport computations reported in this thesis. The DNS code is validated by demonstrating good agreement with published DNS results for the same flow and Reynolds number. Lagrangian particle transport computations are carried out using a large ensemble of passive tracers and finite-inertia particles under the assumption of one-way fluid-particle coupling. Using four different types of initial particle distributions, Lagrangian particle dispersion, concentration and deposition are studied in the turbulent straight square duct. Particles are released in a uniform distribution on a cross-sectional plane at the duct inlet, released as particle pairs in the core region of the duct, distributed randomly in the domain, or distributed uniformly in planes at certain heights above the walls. One- and two-particle dispersion statistics are computed and discussed for the low Reynolds number inhomogeneous turbulence present in a straight square duct. New detailed statistics on particle number concentration and deposition are also obtained and discussed.
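A minimal sketch of the one-way-coupled Lagrangian update used in such computations, with Stokes drag as the only force. The fluid velocity below is a simple placeholder shear; in the thesis it would be interpolated from the DNS field at the particle position.

import numpy as np

def advance_particle(x, v, u_fluid, tau_p, dt):
    """One step of dx/dt = v, dv/dt = (u_fluid(x) - v) / tau_p (Stokes drag only)."""
    v = v + dt * (u_fluid(x) - v) / tau_p
    x = x + dt * v
    return x, v

# placeholder carrier-phase velocity: a simple laminar shear u = (y, 0, 0)
u_fluid = lambda x: np.array([x[1], 0.0, 0.0])

x, v = np.array([0.0, 0.5, 0.0]), np.zeros(3)
for _ in range(1000):
    x, v = advance_particle(x, v, u_fluid, tau_p=0.1, dt=1e-3)
print(x, v)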
APA, Harvard, Vancouver, ISO, and other styles
11

Chiti, Fabio. "Lagrangian studies of turbulent mixing in a vessel agitated by a Rushton turbine : positron emission particle tracking (PEPT) and computational fluid dynamics (CFD)." Thesis, University of Birmingham, 2008. http://etheses.bham.ac.uk//id/eprint/1607/.

Full text
Abstract:
Stirred vessels are used in a wide variety of process industries such as fine chemicals, pharmaceuticals, polymers and foods. In order to design efficient mixing vessels, a deep understanding of the blending processes is required. In cases where the fluid is not completely transparent, traditional optical laser-based techniques are ineffective. One of the most promising techniques for studying opaque systems is based on the detection of a tracer that emits gamma rays. Positron Emission Particle Tracking (PEPT) has been developed at the University of Birmingham and has been used in a wide range of applications including stirred tanks. However, for agitated vessels, no attempt to validate the PEPT technique against other techniques can be found. Hence, this work aims to validate and explore the potential of Lagrangian data in a well-known mixing system: a standard baffled vessel stirred by a Rushton turbine. As part of the validation, a comparison with Eulerian PIV/LDA data has also been undertaken, and some underestimation of the high velocities in the impeller region was found. By using a selective interpolation algorithm for the tracer locations, this problem was greatly reduced, although a perfect match with the optical techniques is not feasible. As a further contribution to Lagrangian studies of mixing processes, Computational Fluid Dynamics (CFD) simulations have been undertaken to give both Eulerian and Lagrangian velocities and particle paths. However, it has been shown that traditional approaches to Lagrangian numerical simulation are unable to produce good trajectories that can be compared to experimental data. A novel three-step approach was suggested and implemented in order to obtain good paths, which were then compared to the experimental trajectories. Qualitative and quantitative analysis of the experimental Lagrangian data showed that the trajectories are erratic and follow random paths; furthermore, frequency analysis applied to portions of trajectories does not reveal any dominant low frequency in the system. Finally, circulation studies were undertaken in order to characterise mixing processes. This focused on tracking the tracer every time it leaves and returns to a control volume, proving the value of analysing time and return-length distributions, since it was possible to compare the circulation times obtained with PEPT to published work. The trajectography approach used in this work is the first attempt at using trajectories from PEPT as a tool to characterise mixing performance rather than only using the data to find Eulerian velocities and vector plots.
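The frequency analysis of trajectory portions mentioned above can be sketched with a discrete Fourier transform of one tracer coordinate. The sampling rate and the synthetic trajectory below are placeholders for the actual PEPT data; the point is simply how one would look for a dominant low frequency.

import numpy as np

fs = 50.0                                   # assumed location sampling rate (Hz)
t = np.arange(0.0, 60.0, 1.0 / fs)          # one minute of synthetic trajectory
rng = np.random.default_rng(2)
# synthetic tracer coordinate: a slow circulation plus broadband 'turbulent' noise
x = 0.02 * np.sin(2.0 * np.pi * 0.8 * t) + 0.01 * rng.normal(size=t.size)

power = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
dominant = freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin
print(f"dominant frequency: {dominant:.2f} Hz")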
APA, Harvard, Vancouver, ISO, and other styles
12

Majal, Ghulam. "On the Agglomeration of Particles in Exhaust Gases." Licentiate thesis, KTH, MWL Marcus Wallenberg Laboratoriet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235704.

Full text
Abstract:
Particulate emissions from road transportation are known to have an adverse impact on human health as well as the environment. As the effects become more palpable, stricter legislation has been proposed by regulating bodies. This poses a challenge for the automotive industry to develop after-treatment technologies that fulfil the progressively stricter legislation. At present, the most common after-treatment technologies used for particulates are the diesel and gasoline particulate filters. The typical size distribution of the particles is such that the smallest particles are the most numerous, although they do not contribute significantly to the total particle mass. The most recent legislation includes restrictions on particle number as well as particle mass. In this thesis, numerical tools for studying the transport and interaction of particles in an exhaust flow are evaluated. The specific application is particle agglomeration as a means to reduce the number of particles and manipulate the size distribution. As particles agglomerate, the particle number distribution is shifted and larger agglomerates are created, reducing the number of ultra-fine particles. The agglomeration is obtained by forcing sudden acceleration and deceleration of the host gas carrying the particles through variations in the cross-sectional area of the geometry it passes through. Initially, a simplified one-dimensional model is utilized to assess the governing parameters of particle grouping. Grouping here means that the particles form and are transported in groups, thus increasing the probability of agglomeration. The lessons learned from the 1D model are also used to design the three-dimensional geometry: an axisymmetric corrugated pipe. Two different geometries are studied; they both have the same main pipe diameter but different diameters on the corrugations. The purpose is to find the potential onset of flow instabilities and the influence of 3D effects such as recirculation on the agglomeration. The CFD simulations are performed using a DES methodology. First the simulations are run without particles in a non-pulsatile flow scenario. Later, particles are added to the setup in a one-way coupled approach (no particle-particle interaction). The main results were: 1) An additional criterion for grouping, beyond the ones given in previous work on the 1D model, is proposed. It is found that grouping is more likely if the combination of the pulse frequency and geometric wavelength is large. Furthermore, smooth pulse forms (modelling the modulation in the flow due to the geometry) yielded more grouping than other, more abrupt pulse shapes. However, idealised inlet pulses underestimate the extent of grouping compared to actual engine pulses. 2) For the geometry with the larger maximum cross-sectional area, stronger flow separation was observed along with higher turbulent kinetic energy. 3) Particles were added to the flow field and a reduction in the particle count was observed in the initial simulations for particles going from the first corrugated segment to the last. Natural extensions of the present work would be to consider pulsatile flow scenarios, particle-particle interaction and a polydisperse setup for the particles.
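The one-dimensional grouping mechanism referred to above can be caricatured with a purely kinematic toy model: evenly spaced particles are dragged by a pulsating carrier velocity and drift toward each other where the gas decelerates. The mean velocity, pulse amplitude, wavelength, frequency and response time below are all illustrative assumptions, not the values used in the thesis.

import numpy as np

U0, A, wavelength, freq = 10.0, 3.0, 0.5, 40.0   # toy carrier flow parameters

def gas_velocity(x, t):
    """Mean flow plus a travelling sinusoidal pulse (toy stand-in for the 1D model)."""
    return U0 + A * np.sin(2.0 * np.pi * (x / wavelength - freq * t))

tau_p, dt = 2e-3, 1e-5
x = np.linspace(0.0, 0.5, 200)        # initially evenly spaced particles
v = np.full_like(x, U0)
for k in range(20000):
    t = k * dt
    v += (gas_velocity(x, t) - v) / tau_p * dt   # Stokes drag toward the local gas velocity
    x += v * dt

spacing = np.diff(np.sort(x))
print(f"min/mean spacing ratio: {spacing.min() / spacing.mean():.3f}")  # values well below 1 hint at grouping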


APA, Harvard, Vancouver, ISO, and other styles
13

Shah, Anant Pankaj. "Development and application of a dispersed two-phase flow capability in a general multi-block Navier Stokes solver." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/36101.

Full text
Abstract:
Gas turbines for military applications, when operating in harsh environments like deserts, often encounter unexpected operational faults. Such performance deterioration of the gas turbine decreases the mission readiness of the Air Force and simultaneously increases maintenance costs. Some of the major factors responsible for the reduced performance are ingestion of debris during take-off and landing, distorted intake flows during low-altitude maneuvers, and hot gas ingestion during artillery firing. The focus of this thesis is the study of debris ingestion, specifically sand, in the internal cooling ribbed duct of the turbine blade. The presence of serpentine passages and strong localized cross-flow components makes this region prone to deposition, erosion, and corrosion (DEC) by sand particles. A Lagrangian particle tracking technique was implemented in a generalized-coordinate multi-block Navier-Stokes solver in a distributed parallel framework. The developed algorithm was validated by comparing the computed particle statistics for 28 micron lycopodium, 50 micron glass, and 70 micron copper particles with available data [2] for a turbulent channel flow at Reτ = 180. Computations were performed for a particle-laden turbulent flow through a stationary ribbed square duct (rib pitch / rib height = 10, rib height / hydraulic diameter = 0.1) using an Eulerian-Lagrangian framework. Particle sizes of 10, 50, and 100 microns with response times (normalized by friction velocity and hydraulic diameter) of 0.06875, 1.71875, and 6.875, respectively, are considered. The calculations are performed for a nominal bulk Reynolds number of 20,000 under fully developed conditions. The carrier phase was solved using Large Eddy Simulation (LES) with the Dynamic Smagorinsky Model [1]. Due to the low volume fraction of the particles, one-way fluid-particle coupling was assumed. It is found that at any given instant in time about 40% of the total number of 10 micron particles are concentrated in the vicinity (within 0.05 Dh) of the duct surfaces, compared to 26% of the 50 and 100 micron particles. The 10 micron particles are more sensitive to the flow features and are more prone to preferential concentration than the larger particles. At the side walls of the duct, the 10 micron particles exhibit a high potential to erode the region in the vicinity of the rib due to secondary flow impingement. The larger particles are more prone to eroding the area between the ribs and towards the center of the duct. At the ribbed walls, while the 10 micron particles exhibit a fairly uniform propensity for erosion, the 100 micron particles show a much higher tendency to erode the surface in the vicinity of the reattachment region. The rib face facing the flow is by far the most susceptible to erosion and deposition for all particle sizes. While the top of the rib does not exhibit a large propensity to be eroded, the back of the rib is as susceptible as the other duct surfaces because of particles which are entrained into the recirculation zone behind the rib.
Master of Science
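The normalized response times quoted above follow from the Stokes response time tau_p = rho_p d^2 / (18 mu), scaled by the friction velocity and the hydraulic diameter. The property values below are assumptions chosen only to illustrate the calculation and its d^2 scaling; they do not reproduce the thesis' exact numbers.

# Stokes response time and its normalization by friction velocity and hydraulic diameter
rho_p = 2500.0     # particle density, kg/m^3 (assumed)
mu = 1.8e-5        # gas dynamic viscosity, kg/(m s) (assumed)
u_tau = 1.5        # friction velocity, m/s (assumed)
D_h = 0.05         # hydraulic diameter, m (assumed)

for d in (10e-6, 50e-6, 100e-6):
    tau_p = rho_p * d ** 2 / (18.0 * mu)     # Stokes response time, s
    tau_plus = tau_p * u_tau / D_h           # response time normalized by D_h / u_tau
    print(f"d = {d * 1e6:5.1f} um   tau_p = {tau_p:.2e} s   tau+ = {tau_plus:.5f}")
# tau+ scales with d^2, consistent with the 1 : 25 : 100 ratio of the values quoted above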
APA, Harvard, Vancouver, ISO, and other styles
14

Ikardouchene, Syphax. "Analyses expérimentale et numérique de l'interaction de particules avec un jet d'air plan impactant une surface. Application au confinement particulaire." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC1046.

Full text
Abstract:
The thesis aims to qualify the confinement performance of air curtains with respect to particulate pollution. More precisely, it aims to develop, characterise and improve particulate confinement barriers formed by plane air jets placed at the periphery of abrasive rotating machines used to strip asbestos-containing surfaces.
APA, Harvard, Vancouver, ISO, and other styles
15

Gnanaselvam, Pritheesh. "Modeling Turbulent Dispersion and Deposition of Airborne Particles in High Temperature Pipe Flows." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1598016744932462.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Pachler, Klaus, Thomas Frank, and Klaus Bernert. "Simulation of Unsteady Gas-Particle Flows including Two-way and Four-way Coupling on a MIMD Computer Architectur." Universitätsbibliothek Chemnitz, 2002. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200200352.

Full text
Abstract:
The transport or separation of solid particles or droplets suspended in a fluid flow is a common task in mechanical and process engineering. To improve machinery and physical processes (e.g. coal combustion, reduction of NO_x and soot), optimization of these complex phenomena by simulation applying the fundamental conservation equations is required. Fluid-particle flows are characterized by the density ratio of the two phases gamma = rho_P / rho_F, by the Stokes number St = tau_P / tau_F and by the loading in terms of void and mass fraction. Those numbers (Stokes number, gamma) define the flow regime and which forces acting on the particle are relevant. Depending on the geometrical configuration, the particle-wall interaction might have a strong impact on the mean flow structure. The occurrence of particle-particle collisions also becomes more and more important with increasing local void fraction of the particulate phase. With increasing particle loading, the interaction with the fluid phase cannot be neglected, and two-way or even four-way coupling between the continuous and disperse phases has to be taken into account. For dilute to moderately dense particle flows, the Euler-Lagrange method is capable of resolving the main flow mechanisms. Unfortunately, an accurate computation needs a high number of numerical particles (1,...,10^7) to obtain reliable statistics for the underlying modelling correlations. Because a Lagrangian algorithm cannot be vectorized for complex meshes, the only way to finish such simulations in a reasonable time is parallelization using the message-passing paradigm. Frank et al. describe the basic ideas for a parallel Eulerian-Lagrangian solver, which uses multigrid for acceleration of the flow equations. The performance figures are quite good, though only steady problems are tackled. The presented paper is aimed at the numerical prediction of time-dependent fluid-particle flows using the simultaneous particle tracking approach based on the Eulerian-Lagrangian and the particle-source-in-cell (PSI-Cell) approach. It is shown in the paper that for unsteady flow prediction, efficiency and load balancing of the parallel numerical simulation are an even more pronounced problem in comparison with steady flow calculations, because the time steps for the time integration along one particle trajectory are very small within one time step of the fluid flow integration, and so the floating-point workload on a single processor node is usually rather low. Much time is spent on communication and waiting time of the processors, because for cold-flow particle convection not very extensive calculations are necessary. One remedy might be a high-speed switch like Myrinet or Dolphin PCI/SCI (500 MByte/s), which could balance the relatively high floating-point performance of Intel PIII processors against the weak capacity of the Fast-Ethernet communication network (100 Mbit/s) of the Chemnitz Linux Cluster (CLIC) used for the presented calculations. Calculation times and parallel performance are presented for the discussed examples. Another point is the communication of many small packages, which should be aggregated into bigger messages, because each message requires a startup time independent of its size.
Summarising the potential of such a parallel algorithm, it is shown that a Beowulf-type cluster computer is a highly competitive alternative to the classical mainframe computer for the investigated Eulerian-Lagrangian simultaneous particle tracking approach.
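The particle-source-in-cell (PSI-Cell) coupling referred to above can be sketched on a one-dimensional mesh: each tracked particle deposits the reaction to its drag force into the cell it currently occupies, and the accumulated source later enters the gas-phase momentum equation. The mesh and particle data below are invented purely for illustration.

import numpy as np

def psi_cell_sources(x_p, v_p, u_gas, m_p, tau_p, x_faces):
    """Accumulate the drag reaction of all particles into their host cells:
    source_i = -sum over particles in cell i of m_p * (u_gas_i - v_p) / tau_p."""
    n_cells = len(x_faces) - 1
    src = np.zeros(n_cells)
    cell = np.clip(np.searchsorted(x_faces, x_p) - 1, 0, n_cells - 1)
    drag_on_particles = m_p * (u_gas[cell] - v_p) / tau_p
    np.add.at(src, cell, -drag_on_particles)   # reaction force acting on the gas phase
    return src

x_faces = np.linspace(0.0, 1.0, 11)            # 10 uniform cells
u_gas = np.full(10, 5.0)                       # toy gas velocity per cell
rng = np.random.default_rng(3)
x_p = rng.random(1000)                         # particle positions
v_p = rng.normal(4.0, 0.5, size=1000)          # particle velocities
print(psi_cell_sources(x_p, v_p, u_gas, m_p=1e-9, tau_p=1e-3, x_faces=x_faces))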
APA, Harvard, Vancouver, ISO, and other styles
17

Bonnier, Florent. "Algorithmes parallèles pour le suivi de particules." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLV080/document.

Full text
Abstract:
Particle tracking methods are widely used in fluid mechanics because of their unique ability to reconstruct long trajectories with high spatial and temporal resolution. Many industrial applications involving gas-particle flows, such as aeronautical turbines, therefore rely on an Euler-Lagrange formalism. The rapid growth in the computing power of massively parallel machines, and the arrival of petaflops-scale systems, opens the way to simulations that were prohibitive only a decade ago. Implementing an efficient parallel code that maintains good performance on a large number of processors must therefore be studied, with particular attention to keeping the load well balanced across the processors. Careful attention must also be paid to the data structures, in order to preserve simplicity as well as the portability and adaptability of the code to different architectures and different problems using a Lagrangian approach; some algorithms have to be rethought to account for these constraints. The computing power needed to solve these problems is offered by new distributed architectures with a large number of cores. However, exploiting these architectures efficiently is a delicate task that requires mastering the target architectures, the associated programming models and the intended applications. The complexity of these new generations of distributed architectures is essentially due to a very large number of multi-core nodes, some of which may be heterogeneous and sometimes remote. The approach of most parallel libraries (PBLAS, ScaLAPACK, P_ARPACK) consists in implementing the distributed version of their basic operations, which means that the subroutines of these libraries cannot adapt their behaviour to the data types: they must be defined once for use in the sequential case and again for the parallel case. A component-based approach allows the modularity and extensibility of some numerical libraries (such as PETSc) while offering the reuse of sequential and parallel code. This recent approach to modelling sequential/parallel numerical libraries is very promising thanks to its reusability and its lower maintenance cost. In industrial applications, the need for software-engineering techniques in scientific computing, of which reusability is one of the most important elements, is increasingly evident; however, these techniques are not yet mastered and the models are not yet well defined. The search for methodologies to design and build reusable libraries is motivated, among other things, by the needs of industry in this field. The main objective of this thesis is to define strategies for designing a parallel numerical library for Lagrangian particle tracking using a component-based approach. These strategies should allow the reuse of sequential code in the parallel versions while allowing performance to be optimised. The study is based on a separation between control flow and data-flow management, and extends to parallelism models allowing a large number of cores to be exploited in shared and distributed memory.
APA, Harvard, Vancouver, ISO, and other styles
18

Domingues, Catia Motta. "Kinematics and Heat Budget of the Leeuwin Current." Flinders University. SOCPES, 2006. http://catalogue.flinders.edu.au./local/adt/public/adt-SFU20060612.211358.

Full text
Abstract:
This study investigates the upper ocean circulation along the west Australian coast, based on recent observations (WOCE ICM6, 1994/96) and numerical output from the 1/6 degree Parallel Ocean Program model (POP11B 1993/97). Particularly, we identify the source regions of the Leeuwin Current, quantify its mean and seasonal variability in terms of volume, heat and salt transports, and examine its heat balance (cooling mechanism). This also leads to further understanding of the regional circulation associated with the Leeuwin Undercurrent, the Eastern Gyral Current and the southeast Indian Subtropical Gyre. The tropical and subtropical sources of the Leeuwin Current are understood from online numerical particle tracking. Some of the new findings are the Tropical Indian Ocean source of the Leeuwin Current (in addition to the Indonesian Throughflow/Pacific); the Eastern Gyral Current as a recirculation of the South Equatorial Current; the subtropical source of the Leeuwin Current fed by relatively narrow subsurface-intensified eastward jets in the Subtropical Gyre, which are also a major source for the Subtropical Water (salinity maximum) as observed in the Leeuwin Undercurrent along the ICM6 section at 22 degrees S. The ICM6 current meter array reveals a rich vertical current structure near North West Cape (22 degrees S). The coastal part of the Leeuwin Current has dominant synoptic variability and occasionally contains large spikes in its transport time series arising from the passage of tropical cyclones. On average, it is weaker and shallower compared to further downstream, and it only transports Tropical Water, of a variable content. The Leeuwin Undercurrent carries Subtropical Water, South Indian Central Water and Antarctic Intermediate Water equatorward between 150/250 to 500/750 m. There is a poleward flow just below the undercurrent which advects a mixed Intermediate Water, partially associated with outflows from the Red Sea and Persian Gulf. Narrow bottom-intensified currents are also observed. The 5-year mean model Leeuwin Current is a year-round poleward flow between 22 degrees S and 34 degrees S. It progressively deepens, from 150 to 300 m depth. Latitudinal variations in its volume transport are a response to lateral inflows/outflows. It has double the transport at 34 degrees S (-2.2 Sv) compared to at 22 degrees S (-1.2 Sv). These model estimates, however, may underestimate the transport of the Leeuwin Current by 50%. Along its path, the current becomes cooler (6 degrees C), saltier (0.6 psu) and denser (2 kg m-3). At seasonal scales, a stronger poleward flow in May-June advects the warmest and freshest waters along the west Australian coast. This advection is apparently spun up by the arrival of a poleward Kelvin wave in April, and reinforced by a minimum in the equatorward wind stress during July. In the model heat balance, the Leeuwin Current is significantly cooled by the eddy heat flux divergence (4 degrees C out of 6 degrees C), associated with mechanisms operating at submonthly time scales. However, exactly which mechanisms are responsible is not yet clear. Air-sea fluxes only account for ~30% of the cooling and seasonal rectification is negligible. The eddy heat divergence, originating over a narrow region along the outer edge of the Leeuwin Current, is responsible for a considerable warming of a vast area of the adjacent ocean interior, which is then associated with strong heat losses to the atmosphere.
The model westward eddy heat flux estimates are considerably larger than those associated with long lived warm core eddies detaching from the Leeuwin Current and moving offshore. This suggests that these mesoscale features are not the main mechanism responsible for the cooling of the Leeuwin Current. We suspect instead that short lived warm core eddies might play an important role.
APA, Harvard, Vancouver, ISO, and other styles
19

Wredh, Simon. "Neural Network Based Model Predictive Control of Turbulent Gas-Solid Corner Flow." Thesis, Uppsala universitet, Signaler och system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420056.

Full text
Abstract:
Over the past decades, attention has been brought to the importance of indoor air quality and the serious threat of bio-aerosol contamination to human health. A novel idea to transport hazardous particles away from sensitive areas is to automatically control bio-aerosol concentrations by utilising airflows from ventilation systems. Regarding this, computational fluid dynamics (CFD) may be employed to investigate the dynamical behaviour of airborne particles, and data-driven methods may be used to estimate and control the complex flow simulations. This thesis presents a methodology for machine-learning-based control of particle concentrations in turbulent gas-solid flow. The aim is to reduce concentration levels at a 90 degree corner through systematic manipulation of the underlying two-phase flow dynamics, where an energy-constrained inlet airflow rate is used as the control variable. A CFD experiment of turbulent gas-solid flow in a two-dimensional corner geometry is simulated using the SST k-omega turbulence model for the gas phase and a drag-force-based discrete random walk for the solid phase. Validation of the two-phase methodology is performed against a backwards-facing step experiment, with a 12.2% error correspondence in maximum negative particle velocity downstream of the step. Based on simulation data from the CFD experiment, a linear auto-regressive with exogenous inputs (ARX) model and a non-linear ARX-based neural network (NN) are used to identify the temporal relationship between inlet flow rate and corner particle concentration. The results suggest that NN is the preferred approach for output predictions of the two-phase system, with roughly four times higher simulation accuracy compared to ARX. The identified NN model is used in a model predictive control (MPC) framework with linearisation in each time step. It is found that the output concentration can be minimised together with the input energy consumption by means of tracking specified target trajectories. Control signals from NN-MPC also show good performance in controlling the full CFD model, with improved particle removal capabilities compared to randomly generated signals. In terms of maximal reduction of particle concentration, the NN-MPC scheme is, however, outperformed by a manually constructed sine signal. In conclusion, CFD-based NN-MPC is a feasible methodology for efficient reduction of particle concentrations in a corner area; in particular, a novel application for removal of indoor bio-aerosols is presented. More generally, the results show that NN-MPC may be a promising approach to turbulent multi-phase flow control.
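The linear ARX identification step described above can be sketched as an ordinary least-squares fit of y_t = a1*y_(t-1) + a2*y_(t-2) + b1*u_(t-1). The model orders, the synthetic plant and its coefficients are illustrative assumptions, not those identified in the thesis.

import numpy as np

rng = np.random.default_rng(4)

# synthetic "plant": corner concentration y responding to inlet flow rate u, plus noise
n = 500
u = rng.random(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] + 0.2 * y[t - 2] + 0.5 * u[t - 1] + 0.01 * rng.normal()

# build the ARX regressor matrix [y_(t-1), y_(t-2), u_(t-1)] and solve least squares
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b1]:", np.round(theta, 3))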
APA, Harvard, Vancouver, ISO, and other styles
20

Dépée, Alexis. "Etude expérimentale et théorique des mécanismes microphysiques mis en jeu dans la capture des aérosols radioactifs par les nuages." Thesis, Université Clermont Auvergne‎ (2017-2020), 2019. http://www.theses.fr/2019CLFAC057.

Full text
Abstract:
Atmospheric particles are a key topic in many social issues. Their presence in the atmosphere is a meteorological and climatic subject as well as a public health concern, since these particles are correlated with the increase of cardiovascular diseases. In particular, radioactive particles emitted as a result of a nuclear accident can jeopardise ecosystems for decades. The recent accident at the Fukushima Daiichi nuclear power plant in 2011 reminds us that the risk, even extremely unlikely, exists. After a release of nuclear material into the atmosphere, nanometric particles diffuse and coagulate, while micrometric particles settle due to gravity. Nevertheless, intermediate-size particles can be transported on a global scale, and the main mechanism involved in their scavenging comes from the interaction with clouds and their precipitation. To enhance knowledge of ground contamination after such accidental releases, understanding of in-cloud particle collection is thus essential. For this purpose, a microphysical model is implemented in this work, including all the microphysical mechanisms acting on particle collection by cloud droplets, such as electrostatic forces, since radionuclides are well known to become significantly charged. Laboratory measurements are then conducted with In-CASE (In-Cloud Aerosols Scavenging Experiment), a novel experiment built in this work, to compare modelling and observations, again at a microphysical scale where every parameter influencing in-cloud particle collection is controlled. Furthermore, two systems to electrically charge particles and droplets are constructed to set the electric charges carefully, while the relative humidity level is also regulated. These new results on particle collection by cloud droplets, including the effect of electrostatic forces among others, are then incorporated into the convective cloud model DESCAM (Detailed SCAvenging Model). This detailed microphysical model describes a cloud from its formation to precipitation, allowing a meso-scale study of the impact of the new data on particle scavenging. Moreover, some changes are made in DESCAM to extend the study to stratiform clouds, since most French precipitation comes from stratiform clouds. Finally, this work paves the way for improved modelling of atmospheric particle scavenging, including ground contamination in the crisis model used by the French Institute for Radiological Protection and Nuclear Safety.
APA, Harvard, Vancouver, ISO, and other styles
21

More, Colin. "The role of North Atlantic Current water in exchanges across the Greenland-Scotland Ridge from the Nordic Seas." Master's thesis, 2011. http://hdl.handle.net/10048/1681.

Full text
Abstract:
The circulation and gradual transformation in properties of oceanic water masses is a matter of great interest for short-term weather and biological forecasting, as well as long-term climate change. It is usually agreed that the Nordic Seas between Greenland and Norway are key to these transformations since they are an important producer of dense water, a process central to the theory of the global thermohaline circulation. In this study, one component of this deep water is examined – that formed in the Nordic Seas themselves from the inflowing North Atlantic Current. Using Lagrangian particle tracking applied to a 50-year global ocean hindcast simulation, it is concluded that only about 6% of the inflowing North Atlantic Current is thus transformed, and that most of these transformations occur in boundary currents. Furthermore, it is found that the densified North Atlantic water attains only medium depths instead of joining the deep overflows. The model’s poor representation of vertical mixing, however, limits the applicability of this study to deep water formation.
APA, Harvard, Vancouver, ISO, and other styles
22

Katta, Ajay. "Particle Trajectories in Wall-Normal and Tangential Rocket Chambers." 2011. http://trace.tennessee.edu/utk_gradthes/989.

Full text
Abstract:
The focus of this study is the prediction of trajectories of solid particles injected into either a cylindrically shaped solid rocket motor (SRM) or a bidirectional vortex chamber (BV). The Lagrangian particle trajectory is assumed to be governed by drag, virtual mass, Magnus, Saffman lift, and gravity forces in a Stokes flow regime. For the conditions in a solid rocket motor, it is determined that either the drag or the gravity force will dominate, depending on whether the sidewall injection velocity is high (drag) or low (gravity). Using a one-way coupling paradigm in a solid rocket motor, the effects of particle size, sidewall injection velocity, and particle-to-gas density ratio are examined. The particle size and sidewall injection velocity are found to have a greater impact on particle trajectories than the density ratio. Similarly, for conditions associated with a bidirectional vortex engine, it is determined that the drag force dominates. Using a one-way particle tracking Lagrangian model, the effects of particle size, geometric inlet parameter, particle-to-gas density ratio, and initial particle velocity are examined. All but the initial particle velocity are found to have a significant impact on particle trajectories. The proposed models can assist in reducing slag retention and identifying fuel injection configurations that will ensure proper confinement of combusting droplets to the inner vortex in solid rocket motors and bidirectional vortex engines, respectively.
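The drag-versus-gravity dominance argument can be made concrete with an order-of-magnitude comparison in the Stokes regime. The gas viscosity, particle density and diameter, and the range of sidewall injection velocities below are assumptions for illustration, not the chamber conditions analysed in the thesis.

import numpy as np

def stokes_drag(mu, d, u_rel):
    """Magnitude of the Stokes drag force on a sphere: F_D = 3 * pi * mu * d * |u_rel|."""
    return 3.0 * np.pi * mu * d * abs(u_rel)

def weight(rho_p, d, g=9.81):
    """Particle weight: F_g = rho_p * (pi / 6) * d^3 * g."""
    return rho_p * np.pi / 6.0 * d ** 3 * g

mu, rho_p, d = 1.8e-5, 2700.0, 20e-6     # assumed gas viscosity, particle density, diameter
for u_inj in (0.01, 0.1, 1.0, 10.0):     # assumed sidewall injection velocities, m/s
    ratio = stokes_drag(mu, d, u_inj) / weight(rho_p, d)
    print(f"u_inj = {u_inj:5.2f} m/s   drag/gravity = {ratio:8.2f}")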
APA, Harvard, Vancouver, ISO, and other styles
23

Di, Lorenzo Fabio. "Scale-dependent Response of Fluid Turbulence under Variation of the Large-scale Forcing." Doctoral thesis, 2015. http://hdl.handle.net/11858/00-1735-0000-0028-86B1-C.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Mori, M. "Modelling oceanic transport of planktonic species in the Southern Ocean." Thesis, 2018. https://eprints.utas.edu.au/30045/1/Mori_whole_thesis.pdf.

Full text
Abstract:
The Southern Ocean ecosystem is unique on a global scale because of complex ocean dynamics (e.g. fronts, eddies and their interaction with topography), seasonal sea ice advance and retreat, and biochemical cycles relating to all these physical factors. These ecosystems have been affected by climate change over at least the last three decades, and the assessment and evaluation of these changes - both at present and for the future - are urgently required tasks. A key element in understanding and managing Southern Ocean ecosystem change is a better understanding of the physical processes that influence oceanic transport and spatial patterns of recruitment. Simulation models provide a powerful approach to address questions regarding transport processes, including testing alternative theories and predicting conditions under future climate change scenarios. This thesis develops methods for dispersal modelling to investigate transport of larval-juvenile stages of two important species: Patagonian toothfish and Antarctic krill. These approaches treat larval-juvenile stages as passive particles whose transport by ocean currents can be modelled using Lagrangian particle tracking. Key datasets used to inform this work are: i) remotely sensed surface geostrophic velocity data, ii) ocean model output and iii) physical parameters such as oceanic front position, topography and sea ice concentration. This thesis is composed of a general introduction (Chapter 1), three main dispersal modelling studies (Chapters 2-4) and a discussion and conclusion (Chapter 5). The second chapter focuses on egg and larval transport of Patagonian toothfish on the Kerguelen Plateau, with results suggesting that successful spawning grounds and transport patterns are coincident with in situ observed data. The third and fourth chapters focus on Antarctic krill. In particular, Chapter 3 develops a biologically relevant measure of retention time, and the results indicate a significant relationship between the length of retention time and observed krill population size. Here retention time is defined as the time that a particle remains in a particular region. Chapter 4 develops a model to test hypotheses for the formation of the observed phenomenon of surface krill patches. Such surface patches are composed mainly of young krill. Results from this chapter support the hypothesis that surface patches can form as a result of the release and transport of juvenile krill from the sea ice edge zone during spring, and that the distribution of these patches may have shifted to the south due to reductions in sea ice extent over the last 80 years. The tools and methods developed in this thesis have broader applications for other biological particles and will be made available for use by other researchers as an R package.
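The retention-time measure developed in Chapter 3 can be sketched as the time a tracked particle spends inside a given region before first leaving it. The rectangular region and the synthetic daily positions below are placeholders for the actual polygonal regions and Lagrangian trajectories.

import numpy as np

def retention_time(traj, dt, lon_bounds, lat_bounds):
    """Time from release until the particle first leaves a rectangular region."""
    inside = ((traj[:, 0] >= lon_bounds[0]) & (traj[:, 0] <= lon_bounds[1]) &
              (traj[:, 1] >= lat_bounds[0]) & (traj[:, 1] <= lat_bounds[1]))
    if inside.all():
        return len(traj) * dt        # never left during the tracking period
    return np.argmax(~inside) * dt   # time index of the first position outside the region

# synthetic daily positions (longitude, latitude) drifting slowly eastwards
days = np.arange(200)
traj = np.column_stack([170.0 + 0.05 * days, -62.0 + 0.01 * days])
print(retention_time(traj, dt=1.0, lon_bounds=(168.0, 175.0), lat_bounds=(-65.0, -60.0)))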
APA, Harvard, Vancouver, ISO, and other styles
