Dissertations / Theses on the topic 'Particle-based method'

Consult the top 50 dissertations / theses for your research on the topic 'Particle-based method.'


1

Shahadat, Sharif. "Improving a Particle Swarm Optimization-based Clustering Method." ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2357.

Abstract:
This thesis discusses clustering-related work with an emphasis on Particle Swarm Optimization (PSO) principles. Specifically, we review in detail the PSO clustering algorithm proposed by Van Der Merwe & Engelbrecht, the particle swarm clustering (PSC) algorithm proposed by Cohen & de Castro, Szabo's modified PSC (mPSC), and Georgieva & Engelbrecht's Cooperative Multi-Population PSO (CMPSO). An improvement over Van Der Merwe & Engelbrecht's PSO clustering is then proposed and tested on standard datasets. The improvements observed in these experiments range from slight to moderate, both in minimizing the cost function and in run time.
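The PSO clustering scheme reviewed above lends itself to a compact sketch: each swarm particle encodes a full set of K centroids, and the swarm minimizes the quantization error (mean distance from each point to its nearest centroid). This is an illustrative toy implementation with textbook PSO coefficients (w, c1, c2), not the code developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantization_error(centroids, data):
    # Mean distance from each data point to its nearest centroid.
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).mean()

def pso_cluster(data, k=2, n_particles=10, iters=50, w=0.72, c1=1.49, c2=1.49):
    # Each swarm particle is a (k, dim) set of centroids, seeded on data points.
    pos = data[rng.choice(len(data), (n_particles, k))]
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([quantization_error(p, data) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([quantization_error(p, data) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()
```

On two well-separated Gaussian blobs this reliably drives the quantization error close to the within-cluster spread.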
2

NAKAMURA, FABIO ISSAO. "FLUID INTERACTIVE ANIMATION BASED ON PARTICLE SYSTEM USING SPH METHOD." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=10087@1.

Abstract:
This work investigates the use of particle-based systems for fluid animation. Based on the proposals of Müller et al., the goal of this dissertation is to investigate and fully understand the use of the Lagrangian method known as Smoothed Particle Hydrodynamics (SPH) for fluid simulation. A library was implemented in order to validate the method for fluid animation at interactive rates. To demonstrate the method's effectiveness and efficiency, the resulting library allows the instantiation of different configurations, including the treatment of fluid-obstacle collisions, interaction between two distinct fluids, and fluid-user interaction.
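For context, the density summation at the heart of an SPH fluid step can be written in a few lines. The poly6 kernel below is the standard choice from Müller et al.'s interactive-fluids papers; all parameter values are illustrative and do not reflect the API of the library described in the thesis:

```python
import numpy as np

def poly6(r2, h):
    # Müller et al. poly6 smoothing kernel, evaluated on squared distances.
    coef = 315.0 / (64.0 * np.pi * h**9)
    return coef * np.clip(h**2 - r2, 0.0, None) ** 3

def sph_density(positions, mass, h):
    # rho_i = sum_j m_j * W(|x_i - x_j|, h), summed over all neighbours
    # (a real solver would restrict the sum with a spatial grid).
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = (diff ** 2).sum(axis=2)
    w = np.where(r2 <= h**2, poly6(r2, h), 0.0)
    return mass * w.sum(axis=1)
```

On a uniform particle grid, interior particles get higher density than corner particles, which is the boundary deficiency that drives much of the correction machinery in SPH solvers.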
3

Zhu, Ting. "Color-Based Fingertip Tracking Using Modified Dynamic Model Particle Filtering Method." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1306863054.

4

Ramli, Muhammad Zahir Bin. "A particle based method for flow simulations in hydrodynamics and hydroelasticity." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/412639/.

Abstract:
Seakeeping analysis involving violent flows is still quite challenging because conventional Reynolds-Averaged Navier-Stokes (RANS) approaches are not effective for such flow simulations. Different techniques and numerical tools are required to obtain approximate solutions. This research aims to apply Smoothed Particle Hydrodynamics (SPH), a fully Lagrangian meshless method, to investigate the behaviour of ships in realistic waves. SPH has been used in a wide variety of hydrodynamic problems, overcoming the limitations of finite volume or finite element methods. This makes it a suitable alternative for simulating a range of hydrodynamic problems, especially those involving severe flow discontinuities, such as deformable boundaries, wave breaking and fluid fragmentation, around complex hull shapes. The main goal of this research is to investigate the possibility of implementing SPH in 3-dimensional problems for the seakeeping analysis of ships treated as rigid and flexible bodies operating in reasonably rough seas. The outcomes of the research focus on predicting wave-induced motions, distortions and loads, with particular reference to responses in waves of reasonably large amplitude. The initial work deals with modifying the standard Incompressible SPH (ISPH) formulation to generate free-surface waves. It was observed that the kernel summation of the standard ISPH formulation is not sufficiently accurate in obtaining the velocity and pressure fields. Therefore, a range of solutions was proposed to improve the prediction, and the following were considered: i) employing collision control, ii) a shifting technique to maintain a uniform particle distribution, iii) improving the accuracy of gradient estimations up to 2nd order with a kernel renormalization technique, iv) applying an artificial free-surface viscosity and v) adopting a new arc method for accurate free-surface recognition.
In addition, the weakly compressible SPH (WCSPH) from DualSPHysics was applied to similar problems. It was found that WCSPH performed better in accuracy and was therefore adopted for the subsequent analysis of hydrodynamics and hydroelasticity. The research was extended to investigate 2-D problems of radiation, diffraction and wave-induced motion. Comparisons were made with available potential flow solutions, numerical results and experimental data. Overall, satisfactory agreement was achieved in determining i) added mass and damping coefficients and ii) responses of fixed and floating bodies in waves. Convergence studies were carried out for particle density influences, as well as for the sensitivity and stability of the implemented parameters. In the extension of the model to a 3-D framework, both floating rigid and flexible barges in regular waves were modelled. For this particular case, the vertical bending moment (VBM) was obtained using a one-way coupling approach. Comparisons with two other numerical methods and with experimental data in the prediction of RAOs, motion responses and vertical bending moments have shown the consistent performance of WCSPH. Finally, the success of WCSPH was highlighted by solving the hydrodynamic coefficients for a 3-D flexible structure oscillating in the rigid-body motions of heave and pitch, as well as in 2-node and 3-node distortion modes.
5

Zhang, Hao. "Numerical investigation of particle-fluid interaction system based on discrete element method." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284833.

Abstract:
This thesis focuses on the numerical investigation of particle-fluid systems based on the Discrete Element Method (DEM). The thesis consists of three parts; in each part the DEM is coupled with a different scheme/solver for the fluid phase. In the first part, DEM is coupled with Direct Numerical Simulation (DNS) to study particle-laden turbulent flow. The effect of collisions on particle behaviour in fully developed turbulent flow in a straight square duct was numerically investigated. Three particle sizes were considered, with diameters of 50 µm, 100 µm and 500 µm. Firstly, particle transport by the turbulent flow was studied in the absence of gravitational effects. Then, particle deposition was studied under the effect of the wall-normal gravity force, and the influence of collisions on the particle resuspension rate and on the final distribution of particles on the duct floor was discussed. In the second part, DEM is coupled with the Lattice Boltzmann Method (LBM) to study particle sedimentation in Newtonian laminar flow. A novel combined LBM-IBM-DEM scheme was presented and applied to model the sedimentation of two-dimensional circular particles in incompressible Newtonian flows. Case studies of a single sphere settling in a cavity and of two particles settling in a channel were carried out, and the velocity characteristics of the particles during settling and near the bottom were examined. A numerical example of sedimentation involving 504 particles was then presented to demonstrate the capability of the combined scheme. Furthermore, a Particulate Immersed Boundary Method (PIBM) for simulating fluid-particle multiphase flow was presented and assessed in both two- and three-dimensional applications.
Compared with the conventional IBM, a speedup of dozens of times in two-dimensional simulations and hundreds of times in three-dimensional simulations can be expected for the same particle and mesh numbers. Numerical simulations of particle sedimentation in Newtonian flows were conducted with the combined LBM-PIBM-DEM scheme, showing that the PIBM can capture the features of particulate flows and is indeed a promising scheme for the solution of fluid-particle interaction problems. In the last part, DEM is coupled with the averaged Navier-Stokes equations (NS) to study particle transport and the wear process on pipe walls. A pneumatic conveying case was used to demonstrate the capability of the coupled model. The concrete pumping process was then simulated, and the hydraulic pressure and velocity distribution of the fluid phase were obtained. The frequency of particles impacting on the bent pipe was monitored, and a new time-averaged collision intensity model based on impact force was proposed to investigate the wear process of the elbow. The location of maximum erosive wear damage in the elbow was predicted. Furthermore, the influences of slurry velocity, bend orientation and elbow angle on the puncture point location were discussed.
6

Kulasegaram, S. "Development of particle based meshless method with applications in metal forming simulations." Thesis, Swansea University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637828.

Abstract:
Finite element formulations dealing with geometric and material non-linearities are well developed, and a significant amount of work has been accomplished on the numerical simulation of metal forming processes. Nevertheless, standard finite element approaches can be ineffective in handling bulk material deformation owing to severe mesh distortion or mesh entanglement. In the past, some finite element methods such as the Arbitrary Lagrangian Eulerian (ALE) method were introduced to allow continuous remeshing during computation. Though rather effective in handling large deformation and keeping track of moving boundaries, these methods require extensive computational effort. In this thesis an attempt is made to address the aforementioned problems by using particle-based Lagrangian techniques in the numerical simulation of large-deformation metal forming processes. For this purpose a particle method called Corrected Smooth Particle Hydrodynamics (CSPH) is considered. CSPH is developed from the Smooth Particle Hydrodynamics (SPH) technique, which originated twenty years ago. Like most particle methods, CSPH requires no explicit mesh for the computation and therefore avoids mesh distortion difficulties in large-deformation analysis. In addition, CSPH can achieve a similar order of accuracy to other modern meshless methods while retaining the simplicity of the original SPH technique. The simplicity and robustness of the SPH method are demonstrated in the first few chapters of this thesis. As a first step of the present research, the SPH method is studied to evaluate its consistency, accuracy and other characteristics. As a consequence of these analyses, various correction procedures are introduced into the original SPH method to enhance its performance. The resulting method is referred to here as the Corrected SPH technique.
The CSPH is then used to formulate viscoplastic forming problems with the aid of the flow formulation technique.
7

Sarakini, Timon. "Image-based characterization of small papermaking particles - method development and particle classification." Thesis, KTH, Tillämpad fysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-181778.

8

Ahlman, Björn. "Coarse-Graining Fields in Particle-Based Soil Models." Thesis, Umeå universitet, Institutionen för fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-173534.

Abstract:
In soil, where trees and crops grow, heavy vehicles shear and compact the soil, leading to reduced plant growth and diminished nutrient recycling. Computer simulations offer the possibility to improve the understanding of these undesired phenomena. In this thesis, soils were modelled as large collections of contacting spherical particles using the Discrete Element Method (DEM) and the physics engine AGX Dynamics, and these entities were analyzed. In the first part of the thesis, soils, which were considered to be continua, were subjected to various controlled deformations and fields for quantities such as stress and strain were visualized using coarse graining (CG). These fields were then compared against analytical solutions. The main goal of the thesis was to evaluate the usefulness, accuracy, and precision of this plotting technique when applied to DEM-soils. The general behaviour of most fields agreed well with analytical or expected behaviour. Moreover, the fields presented valuable information about phenomena in the soils. Relative errors varied from 1.2 to 27 %. The errors were believed to arise chiefly from non-uniform displacement (due to the inherent granularity in the technique), and unintended uneven particle distribution. The most prominent drawback with the technique was found to be the unreliability of the plots near the boundaries. This is significant, since the behaviour of a soil at the surface where it is in contact with e.g. a vehicle tyre is of interest. In the second part of the thesis, a vehicle traversed a soil and fields were visualized using the same technique. Following a limited analysis, it was found that the stress in the soil can be crudely approximated as the stress in a linear elastic solid.
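The coarse-graining idea evaluated in the abstract above can be illustrated in one dimension: particle masses are smeared with a smooth kernel to produce a continuous density field that can be plotted and compared against analytical solutions. The Gaussian kernel, its width, and the grid below are illustrative; the CG tooling used with AGX Dynamics in the thesis differs:

```python
import numpy as np

def cg_density_1d(x_particles, masses, grid, w):
    # phi(x) = sum_i m_i * W(x - x_i), with W a 1-D normalized Gaussian
    # of width w, evaluated at every grid point.
    norm = 1.0 / (np.sqrt(2.0 * np.pi) * w)
    diff = grid[:, None] - x_particles[None, :]
    kernel = norm * np.exp(-0.5 * (diff / w) ** 2)
    return (kernel * masses[None, :]).sum(axis=1)
```

Because the kernel is normalized, integrating the coarse-grained field over the domain recovers the total particle mass, a useful sanity check for any CG implementation.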
9

Tilki, Umut. "Imitation Of Human Body Poses And Hand Gestures Using A Particle Based Fluidics Method." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615140/index.pdf.

Abstract:
In this thesis, a new approach is developed that avoids the correspondence problem caused by the difference in embodiment between imitator and demonstrator in imitation learning. In our work, the imitator is a fluidic system whose dynamics are totally different from those of the imitatee, a human performing hand gestures and body postures. The fluidic system is composed of fluid particles, which are used for the discretization of the problem domain. We demonstrate fluidic formation control so as to imitate, by observation, initially given human body poses and hand gestures. Our fluidic formation control is based on setting suitable parameters of Smoothed Particle Hydrodynamics (SPH), a particle-based Lagrangian method, according to imitation learning. For the controller, we developed three approaches. In the first, we used Artificial Neural Networks (ANN) to train the input-output pairs of the fluidic imitation system: shape-based feature vectors extracted from human hand gestures serve as inputs, and the fluid dynamics parameters as outputs. In the second approach, we employed the Principal Component Analysis (PCA) method for human hand gesture and body pose classification and imitation. Lastly, we developed a region-based controller that assigns the fluid parameters according to the human body poses and hand gestures. In this controller, our algorithm determines the best-fitting ellipses on human body regions and hand finger positions and maps the ellipse parameters to fluid parameters. The fluid parameters adjusted by the fluidic imitation controller are the body force (f), density, stiffness coefficient and velocity of particles (V), so as to lead the fluidic swarm formations to the human body poses and hand gestures.
10

Velmurugan, Rajbabu. "Implementation Strategies for Particle Filter based Target Tracking." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14611.

Abstract:
This thesis contributes new algorithms and implementations for particle filter-based target tracking. From an algorithmic perspective, modifications that improve a batch-based acoustic direction-of-arrival (DOA), multi-target, particle filter tracker are presented. The main improvements are reduced execution time and increased robustness to target maneuvers. The key feature of the batch-based tracker is an image template-matching approach that handles data association and clutter in measurements. The particle filter tracker is compared to an extended Kalman filter (EKF) and a Laplacian filter and is shown to perform better for maneuvering targets. Using an approach similar to the acoustic tracker, a radar range-only tracker is also developed. This includes developing the state update and observation models, and proving observability for a batch of range measurements. From an implementation perspective, this thesis provides new low-power and real-time implementations for particle filters. First, to achieve a very low-power implementation, two mixed-mode implementation strategies that use analog and digital components are developed. The mixed-mode implementations use analog, multiple-input translinear element (MITE) networks to realize nonlinear functions. The power dissipated in the mixed-mode implementation of a particle filter-based, bearings-only tracker is compared to a digital implementation that uses the CORDIC algorithm to realize the nonlinear functions. The mixed-mode method that uses predominantly analog components is shown to provide a factor of twenty improvement in power savings compared to a digital implementation. Next, real-time implementation strategies for the batch-based acoustic DOA tracker are developed. The characteristics of the digital implementation of the tracker are quantified using digital signal processor (DSP) and field-programmable gate array (FPGA) implementations.
The FPGA implementation uses a soft-core or hard-core processor to implement the Newton search in the particle proposal stage. A MITE implementation of the nonlinear DOA update function in the tracker is also presented.
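The particle filter at the core of such trackers can be illustrated with a generic bootstrap filter (sequential importance sampling plus resampling) for a one-dimensional random-walk state observed in Gaussian noise. The state and measurement models and noise levels here are illustrative stand-ins, not the DOA tracker described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=500, q=0.1, r=0.5):
    # Bootstrap particle filter: propagate, weight, estimate, resample.
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        # Propagate through the random-walk state model x_k = x_{k-1} + v.
        particles = particles + rng.normal(0.0, q, n_particles)
        # Weight by the Gaussian measurement likelihood p(z | x).
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)
        w /= w.sum()
        # Weighted-mean state estimate.
        estimates.append(np.dot(w, particles))
        # Multinomial resampling to combat weight degeneracy.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

Fed a sequence of noisy observations of a constant state, the filter's estimate settles near the true value after a few steps.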
11

Scott, Stephen John. "A PDF based method for modelling polysized particle laden turbulent flows without size-class discretisation." Thesis, Imperial College London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434835.

12

Orchard, Marcos Eduardo. "A Particle Filtering-based Framework for On-line Fault Diagnosis and Failure Prognosis." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19752.

Abstract:
This thesis presents an on-line particle-filtering-based framework for fault diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology assumes the definition of a set of fault indicators, which are appropriate for monitoring purposes, the availability of real-time process measurements, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. The incorporation of particle-filtering (PF) techniques in the proposed scheme not only allows for the implementation of real time algorithms, but also provides a solid theoretical framework to handle the problem of fault detection and isolation (FDI), fault identification, and failure prognosis. Founded on the concept of sequential importance sampling (SIS) and Bayesian theory, PF approximates the conditional state probability distribution by a swarm of points called particles and a set of weights representing discrete probability masses. Particles can be easily generated and recursively updated in real time, given a nonlinear process dynamic model and a measurement model that relates the states of the system with the observed fault indicators. Two autonomous modules have been considered in this research. On one hand, the fault diagnosis module uses a hybrid state-space model of the plant and a particle-filtering algorithm to (1) calculate the probability of any given fault condition in real time, (2) estimate the probability density function (pdf) of the continuous-valued states in the monitored system, and (3) provide information about type I and type II detection errors, as well as other critical statistics. Among the advantages offered by this diagnosis approach is the fact that the pdf state estimate may be used as the initial condition in prognostic modules after a particular fault mode is isolated, hence allowing swift transitions between FDI and prognostic routines. 
The failure prognosis module, on the other hand, computes (in real time) the pdf of the remaining useful life (RUL) of the faulty subsystem using a particle-filtering-based algorithm. This algorithm consecutively updates the current state estimate for a nonlinear state-space model (with unknown time-varying parameters) and predicts the evolution in time of the fault indicator pdf. The outcome of the prognosis module provides information about the precision and accuracy of long-term predictions, RUL expectations, 95% confidence intervals, and other hypothesis tests for the failure condition under study. Finally, inner and outer correction loops (learning schemes) are used to periodically improve the parameters that characterize the performance of FDI and/or prognosis algorithms. Illustrative theoretical examples and data from a seeded fault test for a UH-60 planetary carrier plate are used to validate all proposed approaches. Contributions of this research include: (1) the establishment of a general methodology for real time FDI and failure prognosis in nonlinear processes with unknown model parameters, (2) the definition of appropriate procedures to generate dependable statistics about fault conditions, and (3) a description of specific ways to utilize information from real time measurements to improve the precision and accuracy of the predictions for the state probability density function (pdf).
13

Winer, Michael Hubert. "A three-dimensional (3D) defocusing-based particle tracking method and applications to inertial focusing in microfluidic devices." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/50194.

Abstract:
Three-dimensional analysis of particles in flows within microfluidic devices is a necessary technique in the majority of current microfluidics research. One method that allows for accurate determination of particle positions in channels is defocusing-based optical detection. This thesis investigates the use of the defocusing method for particles ranging in size from 2-18 μm without the use of a three-hole aperture. Using a calibration-based analysis motivated by previous work, we were able to relate the particle position in space to its apparent size in an image. This defocusing method was then employed in several studies in order to validate its effectiveness in a wide range of particle/flow profiles. An initial study of gravitational effects on particles in low Reynolds number flows was conducted, showing that the method is accurate for particles with sizes equal to or greater than approximately 2 μm. We also found that the resolution of particle position accuracy was within 1 μm of expected theoretical results. Further studies were conducted in inertial focusing conditions, where viscous drag and inertial lift forces balance to create unique particle focusing positions in straight channels. Steady-state inertial studies in both rectangular and cylindrical channel geometries showed focusing of particles to positions similar to previous work, further verifying the defocusing method. A new regime of inertial focusing, coined transient flow, was also investigated with the use of the 3D defocusing method. This study established new regimes of particle focusing due to the effects of a transient flow on inertial forces. Within the transient study, the effects of fluid and particle density on particle focusing positions were also investigated. Finally, we provide recommendations for future work on the defocusing method and transient flows, including potential applications.
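A calibration-based defocusing lookup of the kind described reduces to inverting a monotonic curve relating apparent particle image diameter to distance from the focal plane. The calibration values below are hypothetical, purely for illustration; the thesis derives its own curves experimentally:

```python
import numpy as np

# Hypothetical calibration: apparent diameter (px) grows monotonically
# with defocus distance z (um) for a particle above the focal plane.
calib_z = np.array([0.0, 5.0, 10.0, 20.0, 40.0])       # um from focal plane
calib_diam = np.array([10.0, 14.0, 20.0, 34.0, 60.0])  # apparent diameter, px

def z_from_apparent_diameter(diam_px):
    # Invert the calibration by piecewise-linear interpolation;
    # calib_diam must be monotonically increasing for np.interp.
    return np.interp(diam_px, calib_diam, calib_z)
```

Given a measured apparent diameter, the particle's out-of-plane position follows directly, which is what allows 3D tracking from a single camera view.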
14

Agagliate, Jacopo. "A Mie-based flow cytometric size and real refractive index determination method for natural marine particle populations." Thesis, University of Strathclyde, 2017. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=28674.

Abstract:
Following the path of combining Mie theory and flow cytometry to assign size and refractive index to suspended particles, in the steps of Ackleson & Spinrad (1988) and, more recently, Green et al. (2003a, 2003b), a Mie-based flow cytometry (FC) method was developed to retrieve particle size distributions (PSDs) and real refractive index (rRI) information in natural waters. The need for a technique capable of directly assessing both the size and real refractive index of particles was first established by carrying out a sensitivity analysis of the effect that a spectrally complex refractive index and log-normal variations of commonly employed PSD models have on the optical behaviour of a particle population. The Mie-based FC method proper was then developed and tested, initially against standard particles of known diameter and rRI, and secondly on two datasets: one of algal culture samples (AC dataset) and one of natural seawater samples collected in UK coastal waters (UKCW dataset). The method retrieved PSDs and real refractive index distributions (PRIDs) for both datasets. FC PSDs were validated against known algal sizes for the AC samples and against independent PSDs measured via laser diffractometry for the UKCW samples. PRIDs were then combined with FC PSDs and fed into Mie-based forward optical modelling to reconstruct bulk IOPs. These achieved broad agreement with independent IOP measurements, lending further support to the results of the FC method and to the employment of Mie theory in the optical modelling of natural particle populations. Furthermore, the unique insight offered by the FC method in terms of PSD and PRID determination allowed the assessment of the individual contributions of particle subpopulations to the bulk IOPs, both by size (small/large particle fractions) and by particle type (inorganic/organic/fluorescent fractions).
Lastly, PSDs and PRIDs were combined with literature-derived models of particle density, cell organic carbon and chlorophyll-A content, in an effort to explore the biogeochemical properties of the particle populations within the UKCW dataset. The models successfully estimated independent measurements of particulate suspended matter and (after an optimisation procedure) of organic carbon and chlorophyll-A content.
APA, Harvard, Vancouver, ISO, and other styles
15

Frisch, Adam Arthur. "Development, test and application of a new method of particle shape analyses based on the concept of the fractal dimension." W&M ScholarWorks, 1988. https://scholarworks.wm.edu/etd/1539616650.

Full text
Abstract:
Shape analysis methods based on the concept of the fractal dimension are emerging as a new method of quantifying complex particle shapes. The concept of the fractal dimension stems from the approximately linear relationship between the logarithm of the particle perimeter and the logarithm of the step length or unit of measure: as step length is shortened, the resulting particle perimeter is lengthened. The fractal dimension (D) is defined as 1 − b, where b is the slope of the log/log plot of perimeter against step length. Using two-dimensional projected particle outlines measured by a video image digitizing system, the fractal dimensions of at least 400 randomly chosen quartz particles are calculated and used to characterize a specific sediment sample. The shape population from which these samples have been drawn is characterized via a fractal dimension density histogram. These histograms are used to make statistical comparisons of particle shape information contained in different samples. The fractal shape method has several advantages over the more widely used Fourier shape method. The fractal method has greater precision and sensitivity, with particle shape discriminative power approximately 4.5 times that of the Fourier method. The fractal and Fourier methods were used in two applied shape analysis studies in order to compare shape method performance and recognize potential interpretive differences arising from the use of one method over the other. The first application investigated contrasting sediment sources and their distribution in Twofold Bay, New South Wales, Australia. These results correspond well with the conclusions of other independent sedimentological investigations of Twofold Bay. The Fourier method did not distinguish clearly between the terrestrial and marine compartments. The second application tested the resolution of shape change which occurs during the process of abrasion.
The fractal method proved to be highly sensitive to small scale changes in particle roughness that occur initially as the result of particle to particle collisions, whereas the Fourier method did not. The fractal method detected far greater changes in overall shape than the Fourier method which appeared insensitive to fine-scale changes in particle shape. (Abstract shortened with permission of author.)
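The perimeter/step-length relation described in this abstract lends itself to a compact numerical sketch. The divider-style estimator below is illustrative only (the function name, vertex-subsampling scheme and step choices are assumptions, not Frisch's implementation): it fits the slope b of the log/log plot of perimeter against step length and returns D = 1 − b, so a smooth outline such as a circle should yield D close to 1.

```python
import numpy as np

def fractal_dimension(outline, steps):
    """Divider-style estimate of the fractal dimension of a closed
    2-D outline: D = 1 - b, where b is the slope of log(perimeter)
    against log(step length)."""
    logs, logp = [], []
    n = len(outline)
    for k in steps:
        # walk the outline with a coarser "divider" by keeping every k-th vertex
        pts = outline[np.arange(0, n, k)]
        segs = np.diff(np.vstack([pts, pts[:1]]), axis=0)  # close the loop
        lengths = np.hypot(segs[:, 0], segs[:, 1])
        logs.append(np.log(lengths.mean()))   # mean chord = effective step length
        logp.append(np.log(lengths.sum()))    # measured perimeter at this step
    b = np.polyfit(logs, logp, 1)[0]
    return 1.0 - b

# smooth circle -> perimeter barely changes with step length, so D ~ 1
theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
D = fractal_dimension(circle, steps=[1, 2, 4, 8, 16, 32])
```

For genuinely rough outlines the measured perimeter grows as the step shortens, b becomes negative, and D rises above 1, which is what makes D usable as a shape descriptor.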
APA, Harvard, Vancouver, ISO, and other styles
16

Nestler, Franziska. "Efficient Computation of Electrostatic Interactions in Particle Systems Based on Nonequispaced Fast Fourier Transforms." Universitätsverlag der Technischen Universität Chemnitz, 2017. https://monarch.qucosa.de/id/qucosa%3A23376.

Full text
Abstract:
The present thesis is dedicated to the efficient computation of electrostatic interactions in particle systems, which is of great importance in the field of molecular dynamics simulations. In order to compute the required physical quantities with only O(N log N) arithmetic operations, so-called particle-mesh methods make use of the well-known Ewald summation approach and the fast Fourier transform (FFT). Typically, such methods are able to handle systems of point charges subject to periodic boundary conditions in all spatial directions. However, periodicity is not always desired in all three dimensions and, moreover, interactions with dipoles also play an important role in many applications. Within the scope of the present work, we consider the particle-particle NFFT method (P²NFFT), a particle-mesh approach based on the fast Fourier transform for nonequispaced data (NFFT). An extension of this method for mixed periodic as well as open boundary conditions is presented. Furthermore, the method is appropriately modified in order to treat particle systems containing both charges and dipoles. Consequently, an efficient algorithm for mixed charge-dipole systems, which additionally allows a unified handling of various types of periodic boundary conditions, is presented for the first time. Appropriate error estimates as well as parameter tuning strategies are developed and verified by numerical examples.
APA, Harvard, Vancouver, ISO, and other styles
17

Patsora, Iryna, Dmytro Tatarchuk, Henning Heuer, and Susanne Hillmann. "Study of a Particle Based Films Cure Process by High-Frequency Eddy Current Spectroscopy." Multidisciplinary Digital Publishing Institute, 2016. https://tud.qucosa.de/id/qucosa%3A30206.

Full text
Abstract:
Particle-based films are today an important part of various designs and they are implemented in structures as conductive parts, e.g., conductive paste printing in the manufacture of Li-ion batteries, solar cells or resistive paste printing in ICs. Recently, particle-based films were also implemented in the 3D printing technique, and are particularly important for use in aircraft, wind power, and the automotive industry when incorporated onto the surface of composite structures for protection against damage caused by a lightning strike. A crucial issue for the lightning protection area is to realize films with high homogeneity of electrical resistance, where an in-situ noninvasive method has to be elaborated for quality monitoring to avoid undesirable financial and time costs. In this work the drying process of particle-based films was investigated by high-frequency eddy current (HFEC) spectroscopy in order to work out an automated in-situ quality monitoring method with a focus on the electrical resistance of the films. Different types of particle-based films deposited on dielectric and carbon fiber reinforced plastic substrates were investigated in the present study, and results show that the HFEC method offers a good opportunity to monitor the overall drying process of particle-based films. Based on that, an algorithm was developed, allowing prediction of the final electrical resistance of the particle-based films throughout the drying process, and was successfully implemented in a prototype system based on the EddyCus® HFEC device platform presented in this work. This prototype is the first solution for a portable system allowing HFEC measurement on huge and uneven surfaces.
APA, Harvard, Vancouver, ISO, and other styles
18

Patsora, Iryna, Dmytro Tatarchuk, Henning Heuer, and Susanne Hillmann. "Study of a Particle Based Films Cure Process by High-Frequency Eddy Current Spectroscopy." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-220609.

Full text
Abstract:
Particle-based films are today an important part of various designs and they are implemented in structures as conductive parts, e.g., conductive paste printing in the manufacture of Li-ion batteries, solar cells or resistive paste printing in ICs. Recently, particle-based films were also implemented in the 3D printing technique, and are particularly important for use in aircraft, wind power, and the automotive industry when incorporated onto the surface of composite structures for protection against damage caused by a lightning strike. A crucial issue for the lightning protection area is to realize films with high homogeneity of electrical resistance, where an in-situ noninvasive method has to be elaborated for quality monitoring to avoid undesirable financial and time costs. In this work the drying process of particle-based films was investigated by high-frequency eddy current (HFEC) spectroscopy in order to work out an automated in-situ quality monitoring method with a focus on the electrical resistance of the films. Different types of particle-based films deposited on dielectric and carbon fiber reinforced plastic substrates were investigated in the present study, and results show that the HFEC method offers a good opportunity to monitor the overall drying process of particle-based films. Based on that, an algorithm was developed, allowing prediction of the final electrical resistance of the particle-based films throughout the drying process, and was successfully implemented in a prototype system based on the EddyCus® HFEC device platform presented in this work. This prototype is the first solution for a portable system allowing HFEC measurement on huge and uneven surfaces.
APA, Harvard, Vancouver, ISO, and other styles
19

Avancini, Giovane. "Análise numérica bidimensional de interação fluido-estrutura: uma formulação posicional baseada em elementos finitos e partículas." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/18/18134/tde-23042018-103653/.

Full text
Abstract:
Problems involving fluid-structure interaction are challenging for engineering and, while involving two different materials with distinct physical properties, they require a compatible mathematical description for both solid and fluid domain in order to allow the coupling. Thus, this work introduces a formulation, under Lagrangian description, for the solution of solid, incompressible fluid dynamics and fluid-structure interaction (FSI). In FSI problems, the structure usually presents large displacements thus making mandatory a geometric non-linear analysis. Considering it, we adopt a position based formulation of the finite element method (FEM) which has been shown to be very robust when applied to large displacement solid dynamics. For the fluid mechanics problem it is well known that a Lagrangian description eliminates the convective terms from the Navier-Stokes equations and thus, no stabilization technique is required. However, the difficulty is then transferred to the need of efficient re-meshing, mesh quality and external boundary identification techniques, since the fluid presents no resistance to shear stresses and may deform indefinitely. In this sense, we employ a combination of finite element and particle methods in which the particle interaction forces are computed by mean of a finite element mesh which is re-constructed at every time step. Free surface flows are simulated by a boundary recognition technique enabling large domain distortions or even the particles separation from the main domain, representing for instance a water drop. Finally, the fluid-structure coupling is simplified due to the Lagrangian description adopted for both materials, with no need for extra adaptive mesh-moving technique for the fluid computational domain to follow the structure motion.
APA, Harvard, Vancouver, ISO, and other styles
20

Nasar, Abouzied. "Eulerian and Lagrangian smoothed particle hydrodynamics as models for the interaction of fluids and flexible structures in biomedical flows." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/eulerian-and-lagrangian-smoothed-particle-hydrodynamics-as-models-for-the-interaction-of-fluids-and-flexible-structures-in-biomedical-flows(507cd0db-0116-4258-81f2-8d242e8984fa).html.

Full text
Abstract:
Fluid-structure interaction (FSI), which occurs in many areas of engineering and in the natural world, has been the subject of much research using a wide range of modelling strategies. However, problems with high levels of structural deformation are difficult to resolve, and this is particularly the case for biomedical flows. A Lagrangian flow model coupled with a robust model for nonlinear structural mechanics seems a natural candidate since large distortion of the computational geometry is expected. Smoothed Particle Hydrodynamics (SPH) has been widely applied for nonlinear interface modelling and this approach is investigated here. Biomedical applications often involve thin flexible structures and a consistent approach for modelling the interaction of fluids with such structures is also required. The Lagrangian weakly compressible SPH method is investigated in its recent delta-SPH form, utilising inter-particle density fluxes to improve stability. Particle shifting is also used to maintain particle distributions sufficiently close to uniform to enable stable computation. The use of artificial viscosity is avoided since it introduces unphysical dissipation. First, solid boundary conditions are studied using a channel flow test. Results show that when the particle distribution is allowed to evolve naturally, instabilities are observed and deviations are noted from the expected order of accuracy. A parallel development in the SPH group at Manchester has considered SPH in Eulerian form (for different applications). The Eulerian form is applied to the channel flow test, resulting in improved accuracy and stability due to the maintenance of a uniform particle distribution. A higher-order accurate boundary model is developed and applied for the Eulerian SPH tests and third-order convergence is achieved. The well-documented case of flow past a thin plate is then considered. The immersed boundary method (IBM) is now a natural candidate for the solid boundary.
Again, it quickly becomes apparent that the Lagrangian SPH form has limitations in terms of numerical noise arising from anisotropic particle distributions. This corrupts the predicted flow structures for moderate Reynolds numbers (O(10²)). Eulerian weakly compressible SPH is applied to the problem with the IBM and is found to give accurate and convergent results without any numerical stability problems (given the time step limitation defined by the Courant condition). Modelling highly flexible structures using the discrete element model is investigated where granular structures are represented as bonded particles. A novel vector-based form (the V-Model) is identified as an attractive approach and developed further for application to solid structures. This is shown to give accurate results for quasi-static and dynamic structural deformation tests. The V-model is applied to the decay of structural vibration in a still fluid modelled using Eulerian SPH with no artificial stabilising techniques. Again, results are in good agreement with predictions of other numerical models. A more demanding case representative of pulsatile flow through a deep leg vein valve is also modelled using the same form of Eulerian SPH. The results are free of numerical noise and complex FSI features are captured such as vortex shedding and non-linear structural deflection. Reasonable agreement is achieved with direct in-vivo observations despite the simplified two-dimensional numerical geometry. A robust, accurate and convergent method has thus been developed, at present for laminar two-dimensional low Reynolds number flows but this may be generalised. In summary a novel robust and convergent FSI model has been established based on Eulerian SPH coupled to the V-Model for large structural deformation.
While these developments are in two dimensions the method is readily extendible to three-dimensional, laminar and turbulent flows for a wide range of applications in engineering and the natural world.
APA, Harvard, Vancouver, ISO, and other styles
21

Mudiyanselage, Charith Malinga Rathnayaka. "Meshfree-based numerical modelling of three-dimensional (3-D) microscale deformations of plant food cells during drying." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/118069/1/Charith_Malinga_Rathnayaka_Mudiyanselage_Thesis.pdf.

Full text
Abstract:
Numerical modelling has been a helpful tool for analysing plant cellular structure and associated dynamics. It generally consumes less time, money and other resources compared to experimenting with real plant structures. In this context, investigating the morphological changes that take place in the plant cellular structure under different circumstances has recently been an important application. Drying is one of the most common and cost effective techniques for extending the shelf life of food-plant materials (for instance, fruits and vegetables). During the drying process, food-plant cellular structure undergoes structural deformations that influence drying operations in terms of performance as well as food quality. To engineer effective and efficient food drying processes, it is important to establish a good understanding of cell morphological changes and underlying mechanisms. Grid-based approaches and meshfree methods are the two main categories of numerical modelling techniques used to analyse food-plant drying phenomena. Grid-based methods encounter drawbacks in some applications due to the inherent 'grid' behaviour and subsequent inability to successfully model problems with large deformations and multiphase phenomena. To overcome these drawbacks, meshfree (or meshless) based numerical modelling and simulation methods have been developed. There are recently reported efforts to numerically model the micro mechanics of food-plant matter using coupled Smoothed Particle Hydrodynamics (SPH) and Discrete Element Method (DEM)-based approaches. Some of these studies focus only on fresh plant cellular structures and their behaviour under external mechanical loading. There are other studies considering both fresh and dried plant cellular structures in two dimensions (2-D) along with their morphological characteristics. The overall computational approach in those investigations show a promising capacity to be further extended towards more realistic scales. 
However, it is difficult to describe a truly 3-D phenomenon like cellular scale drying phenomena by means of a 2-D approach. Thus, in order to approximate the morphological changes of cellular scale food-plant drying phenomena in a more detailed manner, there is a requirement to extend that approach into the 3-D level. In addition, there are conceptual constraints in using the Discrete Element Method (DEM) to represent the cell wall membrane in a completely meshfree numerical model. The literature suggests that conceptually, a Coarse-Grained (CG) approach could be more suited for this application, as there is a stronger conceptual and fundamental matching in an SPH-CG coupling than in an SPH-DEM coupling. Within this background, this investigation aimed to develop a 3-D Smoothed Particle Hydrodynamics (SPH) and Coarse-Grained (CG) method coupled numerical model, which could successfully approximate the morphological behaviour of food-plant cells during drying. Initially, the fundamentals of microscale plant cellular drying phenomena were studied. The applicability of a coupled SPH-CG 3-D approach was evaluated through a basic 3-D plant cell drying model. Next, an experimental investigation was carried out to observe the real morphological changes taking place in plant cellular structure during drying. Through the learning gleaned from both the basic numerical and experimental studies, an improved 3-D SPH-CG cell drying model was developed. The 3-D nature of this model allows it to predict the morphological changes on a more realistic scale compared to the previous 2-D models developed using an SPH-DEM coupling. The numerical results are found to be well comparable, both qualitatively and quantitatively, with the experimental findings. As the next step, the developed 3-D numerical approach was successfully applied to model different types of food-plant cells (e.g. apple, potato, grape and carrot).
The agreement between the model predictions and the experimental findings was found to be favourable for all four food-plant categories selected. The 3-D SPH-CG numerical model investigated in this study can successfully model dryness states of food-plant cells in a larger moisture content range with stable results compared to the recently reported Finite Element Modelling (FEM)-based and meshfree-based plant cell drying models. The computational accuracy of the numerical modelling scheme has been maintained at a high value through limiting the percentage model consistency error to less than 1%. This developed 3-D model will provide a source of guidance for industrial practitioners to optimise food drying operations in terms of final product quality, nutritious value and overall process performance. In addition, the developed computational framework has potential future applications in modelling a wide range of plant cells and animal cells.
APA, Harvard, Vancouver, ISO, and other styles
22

Zhang, Ningning. "A micromechanical study of the Standard Penetration Test." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/668841.

Full text
Abstract:
This thesis explores the potential of models based on the discrete element method (DEM) to study dynamic probing of granular materials, considering realistic particle-scale properties. The virtual calibration chamber technique, based on the discrete element method, is applied to study the standard penetration test (SPT). A macro-element approach is used to represent a rod driven with an impact like those applied to perform SPT. The rod is driven into a chamber filled with a scaled discrete analogue of a quartz sand. The contact properties of the discrete analogue are calibrated by simulating two low-pressure triaxial tests. The rod is driven with varying input energy, controlling initial density and confinement stress. Energy-based blowcount normalization is shown to be effective. Results obtained are in good quantitative agreement with well-accepted experimentally-based relations between blowcount, density and overburden. A comprehensive energetic balance of the virtual calibration chamber is conducted. Energy balance is applied separately to the driven rod and the chamber system, giving a detailed account of all the different energy terms. The characterization of the evolution and distribution of each energy component is investigated. It appears that the SPT test input energy is mainly dissipated in friction. The energy-based interpretation of SPT dynamic response proposed by Schnaid et al. (2017) is then validated in comparisons between static and dynamic penetration results. Moreover, microscale investigation provides important information on energy dissipation mechanisms. A well-established DEM crushing contact model and a rough Hertzian contact model are combined to incorporate both effects in a single contact model. The efficient user-defined contact model (UDCM) technique is used for the contact model implementation. Parametric studies explore the effect of particle roughness on single-particle crushing events.
The model is then used to recalibrate the contact properties of the quartz sand, being able to use realistic contact properties and then correctly capture both load-unload behaviour and particle size distribution evolution. The calibration chamber results are exploited to investigate the relation between static and dynamic penetration test. This is done first for unbreakable materials and later for crushable and rough-crushable ones. It is shown that the tip resistance measured under impact dynamic penetration conditions is very close to that under constant velocity conditions, hence supporting recent proposals to relate CPT and SPT results. It is also shown that penetration resistance reduces if particles are allowed to break, particularly when roughness is also considered.
APA, Harvard, Vancouver, ISO, and other styles
23

Situ, Peter D. "Voxel based beta particle dosimetry methods in mice." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/5897.

Full text
Abstract:
Thesis (Ph. D.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on August 14, 2007). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
24

Barla-Szabo, Daniel. "A study of gradient based particle swarm optimisers." Diss., University of Pretoria, 2010. http://hdl.handle.net/2263/29927.

Full text
Abstract:
Gradient-based optimisers are a natural way to solve optimisation problems, and have long been used for their efficacy in exploiting the search space. Particle swarm optimisers (PSOs), when using reasonable algorithm parameters, are considered to have good exploration characteristics. This thesis proposes a specific way of constructing hybrid gradient PSOs. Heterogeneous, hybrid gradient PSOs are constructed by allowing the gradient algorithm to optimise local best particles, while the PSO algorithm governs the behaviour of the rest of the swarm. This approach allows the distinct algorithms to concentrate on performing the separate tasks of exploration and exploitation. Two new PSOs, the Gradient Descent PSO, which combines the Gradient Descent and PSO algorithms, and the LeapFrog PSO, which combines the LeapFrog and PSO algorithms, are introduced. The GDPSO represents arguably the simplest hybrid gradient PSO possible, while the LeapFrog PSO incorporates the more sophisticated LFOP1(b) algorithm, exhibiting a heuristic algorithm design and dynamic time step adjustment mechanism. The strong tendency of these hybrids to prematurely converge is examined, and it is shown that by modifying algorithm parameters and delaying the introduction of gradient information, it is possible to retain strong exploration capabilities of the original PSO algorithm while also benefiting from the exploitation of the gradient algorithms.
Dissertation (MSc)--University of Pretoria, 2010.
Computer Science
unrestricted
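The hybrid scheme described in this abstract can be illustrated with a minimal sketch. This is not the author's implementation; the function and parameter values are illustrative assumptions. Standard PSO velocity and position updates drive the swarm's exploration, while a gradient descent step refines the global best position each iteration, mirroring the exploration/exploitation split the thesis proposes.

```python
import random

def sphere(x):
    """Illustrative test function: f(x) = sum(x_i^2), minimum at the origin."""
    return sum(xi * xi for xi in x)

def sphere_grad(x):
    """Analytic gradient of the sphere function."""
    return [2.0 * xi for xi in x]

def gdpso(f, grad, dim=5, n_particles=20, iters=200,
          w=0.72, c1=1.49, c2=1.49, eta=0.05, seed=0):
    """Minimal GDPSO-style sketch: PSO moves the swarm, while gradient
    descent (step size eta) refines the best-known position."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
        # exploitation step: refine the best position with gradient descent
        candidate = [gi - eta * gj for gi, gj in zip(gbest, grad(gbest))]
        if f(candidate) < gbest_val:
            gbest, gbest_val = candidate, f(candidate)
    return gbest, gbest_val

best, val = gdpso(sphere, sphere_grad)
```

In this sketch the gradient step is applied only to the global best; the thesis applies gradient optimisation to local best particles, which generalises the same idea.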
APA, Harvard, Vancouver, ISO, and other styles
25

Fernandez, Comesana Daniel. "Scan-based sound visualisation methods using sound pressure and particle velocity." Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/366935/.

Full text
Abstract:
Sound visualisation techniques have played a key role in the development of acoustics throughout history. Progress in measurement apparatus and the techniques used to display sound and vibration phenomena has provided excellent tools for understanding specific acoustic problems. Traditional methods, however, such as step-by-step measurements or simultaneous multichannel systems, require a significant trade-off between time requirements, flexibility, and cost. This thesis explores the foundations of a novel sound field mapping procedure. The proposed technique, Scan and Paint, is based on the acquisition of sound pressure and particle velocity by manually moving a p-u probe (pressure-particle velocity sensor) across a sound field, whilst filming the event with a camera. The sensor position is extracted by applying automatic colour tracking to each frame of the recorded video. It is then possible to directly visualise sound variations across the space in terms of sound pressure, particle velocity or acoustic intensity. The high flexibility, high resolution, and low cost characteristics of the proposed measurement methodology, along with its short time requirements, define Scan and Paint as an efficient sound visualisation technique for stationary sound fields. A wide range of specialised applications have been studied, proving that the measurement technique is not only suitable for near-field source localisation purposes but also for vibro-acoustic problems, panel noise contribution analysis, source radiation assessment, intensity vector field mapping and far field localisation.
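The intensity mapping mentioned in this abstract rests on a simple relation: the time-averaged active sound intensity is the mean of the product of the simultaneously sampled pressure and particle velocity signals from the p-u probe. A minimal sketch with a synthetic in-phase tone (the signal values are illustrative, not measured data):

```python
import math

def active_intensity(p, u):
    """Time-averaged active sound intensity from simultaneously sampled
    pressure p(t) [Pa] and particle velocity u(t) [m/s] signals."""
    assert len(p) == len(u)
    return sum(pi * ui for pi, ui in zip(p, u)) / len(p)

# synthetic 1 kHz tone sampled at 48 kHz over exactly 100 periods;
# pressure and velocity are assumed in phase (purely active field)
fs, f0, n = 48000, 1000.0, 4800
p = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
u = [0.005 * math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
I = active_intensity(p, u)  # 0.5 * 1.0 * 0.005 = 2.5e-3 W/m^2
```

In the Scan and Paint method, such a value is computed per tracked probe position and painted onto the camera image to build the intensity map.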
APA, Harvard, Vancouver, ISO, and other styles
26

Borovies, Drew A. "Particle filter based tracking in a detection sparse discrete event simulation environment." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion.exe/07Mar%5FBorovies.pdf.

Full text
Abstract:
Thesis (M.S. in Modeling, Virtual Environment, and Simulation (MOVES))--Naval Postgraduate School, March 2007.
Thesis Advisor(s): Christian Darken. "March 2007." Includes bibliographical references (p. 115). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
27

Copplestone, Stephen [Verfasser]. "Particle-Based Numerical Methods for the Simulation of Electromagnetic Plasma Interactions / Stephen Copplestone." München : Verlag Dr. Hut, 2019. http://d-nb.info/1202169473/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

ALVES, JOAO FELIPE BARBOSA. "A COMPARATIVE ANALYSIS OF THE MAIN PARTICLE-BASED METHODS USED FOR FLOW SIMULATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=12970@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
Neste trabalho, foi realizado um estudo comparativo de eficiência e acurácia dos métodos de partículas Moving Particle Semi-implicit Method (MPS) e Smoothed Particle Hydrodynamics (SPH). A acurácia dos métodos de partículas foi determinada tomando-se como referência os métodos dos Volumes Finitos e Volume of Fluid (VOF). A comparação de acurácia entre os métodos MPS e SPH foi realizada através da simulação dos problemas de quebra de barragem e de descarga de água. Além disso, o problema de escoamento laminar em uma cavidade quadrada e o problema do tubo de choque foram simulados com sucesso pelo método SPH. A análise de eficiência foi realizada pela determinação do tempo total de processamento em função do número de partículas. Adicionalmente, uma análise da influencia do número de partículas na solução foi realizada. Os resultados obtidos mostram que ambos os métodos podem ser considerados como boas ferramentas para a simulação de fluidos.
This work comprises a comparative study of the particle methods Moving Particle Semi-implicit (MPS) and Smoothed Particle Hydrodynamics (SPH) in terms of their efficiency and accuracy. The methods of Finite Volume and Volume of Fluid (VOF) were used as reference for determining the accuracy of the particle methods. The methods MPS and SPH were compared with each other by means of simulations of the problems of dam collapse and water discharge. In addition, the problems of shear-driven cavity and shock tube were successfully simulated using SPH. In order to analyze the methods' efficiency, the total processing time as a function of the number of particles was calculated. Finally, an analysis of the influence of the number of particles on the solution was performed. The results obtained in this work show that both the MPS and SPH methods can be considered as good tools for fluid simulation.
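The SPH side of the comparison rests on kernel-weighted summation over neighbouring particles. A minimal one-dimensional sketch of the SPH density estimate with the standard cubic spline kernel (a generic textbook formulation, not the thesis's code):

```python
def cubic_spline_kernel(r, h):
    """Standard cubic spline smoothing kernel W(r, h) in 1D,
    with normalisation sigma = 2 / (3h) so that W integrates to 1."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(positions, masses, h):
    """SPH density at each particle: rho_i = sum_j m_j * W(x_i - x_j, h)."""
    return [sum(m_j * cubic_spline_kernel(x_i - x_j, h)
                for x_j, m_j in zip(positions, masses))
            for x_i in positions]

# uniformly spaced particles carrying mass dx each represent a fluid of
# unit linear density; interior particles should recover rho = 1
dx = 0.1
xs = [i * dx for i in range(100)]
rho = sph_density(xs, [dx] * len(xs), h=2 * dx)
```

Interior particles recover the uniform density almost exactly, while particles near the ends of the line show the well-known boundary deficiency of the plain summation, one of the practical issues such comparative studies must address.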
APA, Harvard, Vancouver, ISO, and other styles
29

Yildirim, Berkin. "A Comparative Evaluation Of Conventional And Particle Filter Based Radar Target Tracking." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12609043/index.pdf.

Full text
Abstract:
In this thesis the radar target tracking problem is studied in a Bayesian estimation framework. Traditionally, linear or linearized models, where the uncertainty in the system and measurement models is typically represented by Gaussian densities, are used in this area. Therefore, classical sub-optimal Bayesian methods based on linearized Kalman filters can be used. The sequential Monte Carlo methods, i.e. particle filters, make it possible to utilize the inherent non-linear state relations and non-Gaussian noise models. Given sufficient computational power, the particle filter can provide better results than Kalman filter based methods in many cases. A survey of the relevant radar tracking literature is presented, including aspects such as estimation and target modeling. Particle filtering algorithms are presented for various estimation applications related to target tracking.
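The sequential Monte Carlo approach this abstract contrasts with Kalman filtering can be sketched as a bootstrap (sampling-importance-resampling) particle filter. The minimal example below tracks a 1D random-walk target from noisy position measurements; the model and noise parameters are illustrative assumptions, not the thesis's radar models.

```python
import math
import random

def bootstrap_filter(measurements, n_particles=1000, q=1.0, r=2.0, seed=1):
    """Bootstrap (SIR) particle filter for a 1D random-walk target:
    x_k = x_{k-1} + N(0, q^2),  z_k = x_k + N(0, r^2)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 5.0) for _ in range(n_particles)]
    estimates = []
    for z in measurements:
        # predict: propagate each particle through the motion model
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # update: weight each particle by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # point estimate: weighted posterior mean
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # resample (multinomial) to counteract weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# track a slowly drifting target through noisy measurements
rng = random.Random(0)
truth = [0.1 * k for k in range(50)]
zs = [x + rng.gauss(0.0, 2.0) for x in truth]
est = bootstrap_filter(zs)
```

Because the weighting step accepts any likelihood and the prediction step any motion model, the same skeleton accommodates the non-linear, non-Gaussian radar models the thesis studies, which is exactly where linearized Kalman filters lose accuracy.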
APA, Harvard, Vancouver, ISO, and other styles
30

Boukef, Hela. "Sur l’ordonnancement d’ateliers job-shop flexibles et flow-shop en industries pharmaceutiques : optimisation par algorithmes génétiques et essaims particulaires." Thesis, Ecole centrale de Lille, 2009. http://www.theses.fr/2009ECLI0007/document.

Full text
Abstract:
Pour la résolution de problèmes d’ordonnancement d’ateliers de type flow-shop en industries pharmaceutiques et d’ateliers de type job-shop flexible, deux méthodes d’optimisation ont été développées : une méthode utilisant les algorithmes génétiques dotés d’un nouveau codage proposé et une méthode d’optimisation par essaim particulaire modifiée pour être exploitée dans le cas discret. Les critères retenus dans le cas de lignes de conditionnement considérées sont la minimisation des coûts de production ainsi que des coûts de non utilisation des machines pour les problèmes multi-objectifs relatifs aux industries pharmaceutiques et la minimisation du Makespan pour les problèmes mono-objectif des ateliers job-shop flexibles.Ces méthodes ont été appliquées à divers exemples d’ateliers de complexités distinctes pour illustrer leur mise en œuvre. L’étude comparative des résultats ainsi obtenus a montré que la méthode basée sur l’optimisation par essaim particulaire est plus efficace que celle des algorithmes génétiques, en termes de rapidité de la convergence et de l’approche de la solution optimale
For the resolution of flexible job-shop and pharmaceutical flow-shop scheduling problems, two optimization methods are considered: a genetic algorithm using a newly proposed coding, and a particle swarm optimization method modified for use in discrete cases. The criteria retained for the multi-objective packaging-line problems in the pharmaceutical industry are production cost minimization and total stopping cost minimization. For the flexible job-shop scheduling problems treated, the criterion taken into account is makespan minimization. These two methods have been applied to various workshops of distinct complexity to show their efficiency. After comparison of these methods, the obtained results allowed us to observe the superior efficiency of the particle swarm optimization based method in terms of convergence speed and of approaching the optimal solution.
APA, Harvard, Vancouver, ISO, and other styles
31

Plinke, Burkhard. "Größenanalyse an nicht separierten Holzpartikeln mit regionenbildenden Algorithmen am Beispiel von OSB-Strands." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-98518.

Full text
Abstract:
Bei strukturorientierten, aus relativ großen Holzpartikeln aufgebauten Holzwerkstoffen wie z.B. OSB (oriented strand board) addieren sich die gerichteten Festigkeiten der einzelnen Lagen je nach Orientierung der Partikel und der Verteilung ihrer Größenparameter. Wünschenswert wäre eine Messung der Partikelgeometrie und Orientierung möglichst im Prozess, z.B. am Formstrang vor der Presse direkt durch den „Blick auf das Vlies“. Bisher sind regelmäßige on-line-Messungen der Spangeometrie aber nicht möglich, und Einzelspanmessungen werden nicht vorgenommen, weil sie zu aufwändig wären. Um die Partikelkonturen zunächst hinreichend für die Vermessung zu restaurieren und dann zu vermessen, muss ein mehrstufiges Verfahren angewendet werden, das eine Szene mit Strands und mehr oder weniger deutlichen Kanten zunächst als „Grauwertgebirge“ auffasst. Zur Segmentierung reicht ein Watershed-Algorithmus nicht aus. Auch ein zweistufiger Kantendetektor nach Canny liefert allein noch kein ausreichendes Ergebnis, weil sich keine geschlossenen Objektkonturen ergeben. Hinreichend dagegen ist ein komplexes Verfahren auf der Grundlage der Höhenschichtzerlegung und nachfolgenden Synthese: Nach einer Transformation der Grauwerte des Bildes in eine reduzierte, gleichverteilte Anzahl von Höhenschichten werden zwischen diesen die lokalen morphologischen Gradienten berechnet und herangezogen für die Rekonstruktion der ursprünglichen Spankonturen. Diese werden aus den Höhenschichten aufaddiert, wobei allerdings nur Teilflächen innerhalb eines für die gesuchten Spangrößen plausiblen Größenintervalls einbezogen werden, um Störungen zu unterdrücken. Das Ergebnis der Rekonstruktion wird zusätzlich verknüpft mit den bereits durch einen Canny-Operator im Originalbild detektierten deutlichen Kanten und morphologisch bereinigt. Diese erweiterte Höhenschichtanalyse ergibt ausreichend segmentierte Bilder, in denen die Objektgrenzen weitgehend den Spankonturen entsprechen. 
Bei der nachfolgenden Vermessung der Objekte werden Standard-Algorithmen eingesetzt, wobei sich die Approximation von Spankonturen durch momentengleiche Ellipsen als sinnvoll erwies. Verbliebene Fehldetektionen können bei der Vermessung unterdrückt werden durch Formfaktoren und zusätzliche Größenintervalle. Zur Darstellung und Charakterisierung der Größenverteilungen für die Länge und die Breite wurden die nach der Objektfläche gewichtete, linear skalierte Verteilungsdichte (q2-Verteilung), die Verteilungssumme und verschiedene Quantile verwendet. Zur Umsetzung und Demonstration des Zusammenwirkens der verschiedenen Algorithmen wurde auf der Basis von MATLAB das Demonstrationsprogramm „SizeBulk“ entwickelt, das Bildfolgen verarbeiten kann und mit dem die verschiedenen Varianten der Bildaufbereitung und Parametrierung durchgespielt werden können. Das Ergebnis des Detektionsverfahrens enthält allerdings nur die vollständigen Konturen der ganz oben liegenden Objekte; Objekte unterhalb der Außenlage sind teilweise verdeckt und können daher nur unvollständig vermessen werden. Zum Test wurden daher synthetische Bilder mit vereinzelten und überlagerten Objekten bekannter Größenverteilung erzeugt und dem Detektions- und Messverfahren unterworfen. Dabei zeigte sich, dass die Größenstatistiken durch den Überlagerungseffekt und auch die Spanorientierung zwar beeinflusst werden, dass aber zumindest die Modalwerte der wichtigsten Größenparameter Länge und Breite meist erkennbar bleiben. Als Versuchsmaterial dienten außer den synthetischen Bildern verschiedene Sortimente von OSB-Strands aus Industrie- und Laborproduktion. Sie wurden sowohl manuell vereinzelt als auch zu einem Vlies arrangiert vermessen. Auch bei realen Strands zeigten sich gleiche Einflüsse der Überlagerung auf die Größenverteilungen wie in der Simulation. Es gilt aber auch hier, dass die Charakteristika verschiedener Spankontingente bei gleichen Aufnahmebedingungen und Auswerteparametern gut messbar sind bzw. 
dass Änderungen in der gemessenen Größenverteilung eindeutig den geometrischen Eigenschaften der Späne zugeordnet werden können. Die Eignung der Verarbeitungsfolge zur Charakterisierung von Spangrößenverteilungen bestätigte sich auch an Bildern, die ausschließlich am Vlies auf einem Formstrang aufgenommen wurden. Zusätzlich wurde nachgewiesen, dass mit der erweiterten Höhenschichtanalyse auch Bilder von Spanplattenoberflächen ausgewertet werden könnten und daraus auf die Größenverteilung der eingesetzten Deckschichtspäne geschlossen werden kann. Das vorgestellte Verfahren ist daher eine gute und neuartige Möglichkeit, prozessnah an Teilflächen von OSB-Vliesen anhand von Grauwertbildern die Größenverteilungen der Strands zu charakterisieren und eignet sich grundsätzlich für den industriellen Einsatz. Geeignete Verfahren waren zumindest für Holzpartikel bisher nicht bekannt. Diese Möglichkeit, Trends in der Spangrößenverteilung automatisch zu erkennen, eröffnet daher neue Perspektiven für die Prozessüberwachung
The strength of wood-based materials made of several layers of big and oriented particles like OSB (oriented strand board) is a superposition of the strengths of the layers according to the orientation of the particles and depending from their size distribution. It would be desirable to measure particle geometry and orientation close to the production process, e.g. with a “view onto the mat”. Currently, continuous on-line measurements of the particle geometry are not possible, while measurements of separated particles would be too costly and time-consuming. Before measuring particle shapes they have to be reconstructed in a multi-stage procedure which considers an image scene with strands as “gray value mountains”. Segmentation using a watershed algorithm is not sufficient. Also a two-step edge detector according to Canny does not yield closed object shapes. A multi-step procedure based on threshold decomposition and recombination however is successful: The gray values in the image are transformed into a reduced and uniformly distributed set of threshold levels. The local morphological gradients between these levels are used to re-build the original particle shapes by adding the threshold levels. Only shapes with a plausible size corresponding to real particle shapes are included in order to suppress noise. The result of the reconstruction from threshold levels is then matched with the result of the strong edges in the original image, which had been detected using a Canny operator, and is finally cleaned with morphological operators. This extended threshold analysis produces sufficiently segmented images with object shapes corresponding extensively to the particle shapes. Standard algorithms are used to measure geometric features of the objects. An approximation of particle shapes with ellipses of equal moments of inertia is useful. Remaining incorrectly detected objects are removed by form factors and size intervals. 
Size distributions for the parameters length and width are presented and characterized as density distribution histograms, weighted by the object area and linearly scaled (q2 distribution), as well as the cumulated distribution and different quantiles. A demonstration software “SizeBulk” based on MATLAB has been developed to demonstrate the computation and the interaction of algorithms. Image sequences can be processed and different variations of image preprocessing and parametrization can be tested. However, the detection procedure yields complete shapes only for those particles in the top layer. Objects in lower layers are partially hidden and cannot be measured completely. Artificial images with separated and with overlaid objects with a known size distribution were generated to study this effect. It was shown that size distributions are influenced by this covering effect and also by the strand orientation, but that at least the modes of the most important size parameters length and width remain in evidence. Artificial images and several samples with OSB strands from industrial and laboratory production were used for testing. They were measured as single strands as well as arrangements similar to an OSB mat. For real strands, the same covering effects on the size distributions were revealed as in the simulation. Under stable image acquisition conditions and using similar processing parameters the characteristics of these samples can be measured well, and changes in the size distributions are definitely due to the geometric properties of the strands. The suitability of the processing procedure for the characterization of strand size distributions could also be confirmed for images acquired from OSB mats in a production line. Moreover, it could be shown that the extended threshold analysis is also suitable to evaluate images of particle board surfaces and to draw conclusions about the size distribution of the top layer particles.
Therefore, the method presented here is a novel possibility to measure size distributions of OSB strands through the evaluation of partial gray value images of the mat surface. In principle, this method is suitable to be transferred to an industrial application. So far, methods that address the problem of detecting trends of the strand size distribution were not known, and this work shows new perspectives for process monitoring
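One stage of the pipeline described above, discarding detected regions whose area falls outside the plausible particle-size interval to suppress noise, can be sketched in a few lines. This is a generic connected-component filter, not the thesis's MATLAB implementation; the image and size interval are illustrative.

```python
from collections import deque

def keep_plausible_regions(mask, min_area, max_area):
    """Label 4-connected regions of a binary mask by flood fill and keep
    only those whose area lies in the plausible particle-size interval."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                region, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    region.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if min_area <= len(region) <= max_area:
                    for ry, rx in region:
                        out[ry][rx] = 1
    return out

# toy scene: one strand-like region (area 20) and one single-pixel speck
img = [[0] * 8 for _ in range(8)]
for y in range(2, 6):
    for x in range(1, 6):
        img[y][x] = 9
img[0][7] = 9
mask = [[1 if v > 4 else 0 for v in row] for row in img]
clean = keep_plausible_regions(mask, min_area=4, max_area=40)
```

In the thesis this filtering is applied per threshold level before the levels are recombined, so that only area fractions compatible with real strand sizes contribute to the reconstructed contours.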
APA, Harvard, Vancouver, ISO, and other styles
32

Weissmann, Simon [Verfasser], and Claudia [Akademischer Betreuer] Schillings. "Particle based sampling and optimization methods for inverse problems / Simon Weissmann ; Betreuer: Claudia Schillings." Mannheim : Universitätsbibliothek Mannheim, 2021. http://d-nb.info/122529522X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Weissmann, Simon [Verfasser], and Claudia [Akademischer Betreuer] Schillings. "Particle based sampling and optimization methods for inverse problems / Simon Weissmann ; Betreuer: Claudia Schillings." Mannheim : Universitätsbibliothek Mannheim, 2021. http://d-nb.info/122529522X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Staib, Joachim [Verfasser], Stefan [Gutachter] Gumhold, and Gerik [Gutachter] Scheuermann. "Focus and Context Methods for Particle-Based Data / Joachim Staib ; Gutachter: Stefan Gumhold, Gerik Scheuermann." Dresden : Technische Universität Dresden, 2019. http://d-nb.info/1226946348/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ge, Xiaowei. "Nonlinear Microscopy Based on Femtosecond Fiber Laser." University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1556914609069399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

ALONGI, Francesco. "River flow monitoring: LS-PIV technique, an image-based method to assess discharge." Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/575301.

Full text
Abstract:
The measurement of the river discharge within a natural or artificial channel is still one of the most challenging tasks for hydrologists and the scientific community. Although discharge is a physical quantity that theoretically can be measured with very high accuracy, since the volume of water flows in a well-defined domain, there are numerous critical issues in obtaining a reliable value. Discharge cannot be measured directly, so its value is obtained by coupling a measurement of a quantity related to the volume of flowing water and the area of a channel cross-section. Direct measurements of current velocity are made, traditionally with instruments such as current meters. Although measurements with current meters are sufficiently accurate and even if there are universally recognized standards for the current application of such instruments, they are often unusable under specific flow conditions. In flood conditions, for example, due to the need for personnel to dive into the watercourse, it is impossible to ensure adequate safety conditions for operators carrying out flow measurements. A critical issue arising from the use of current meters has been partially addressed thanks to technological development and the adoption of acoustic sensors. In particular, with the advent of Acoustic Doppler Current Profilers (ADCPs), flow measurements can take place without personnel having direct contact with the flow, performing measurements either from the bridge or from the banks. This made it possible to extend the available range of discharge measurements. However, the flood conditions of a watercourse also limit the technology of ADCPs. The introduction of the instrument into the current with high velocities and turbulence would put the instrument itself at serious risk, making it vulnerable and exposed to damage. In the most critical case, the instrument could be torn away by the turbulent current.
On the other hand, considering smaller discharges, both current meters and ADCPs are technologically limited in their measurement as there are no adequate water levels for the use of the devices. The difficulty in obtaining information on the lowest and highest values of discharge has important implications on how to define the relationships linking flows to water levels. The stage-discharge relationship is one of the tools through which it is possible to monitor the flow in a specific section of a watercourse. Through this curve, a discharge value can be obtained from knowing the water stage. Curves are site-specific and must be continuously updated to account for changes in geometry that the sections for which they are defined may experience over time. They are determined by making simultaneous discharge and stage measurements. Since instruments such as current meters and ADCPs are traditionally used, stage-discharge curves suffer from instrumental limitations. So, rating curves are usually obtained by interpolation of field-measured data and by extrapolating them for the highest and the lowest discharge values, with a consequent reduction in accuracy. This thesis aims to identify a valid alternative to traditional flow measurements and to show the advantages of using new methods of monitoring to support traditional techniques, or to replace them. Optical techniques represent the best solution for overcoming the difficulties arising from the adoption of a traditional approach to flow measurement. Among these, the most widely used techniques are the Large-Scale Particle Image Velocimetry (LS-PIV) and the Large-Scale Particle Tracking Velocimetry. They are able to estimate the surface velocity fields by processing images representing a moving tracer, suitably dispersed on the liquid surface. By coupling velocity data obtained from optical techniques with geometry of a cross-section, a discharge value can easily be calculated.
In this thesis, the study of the LS-PIV technique was deepened, analysing the performance of the technique, and studying the physical and environmental parameters and factors on which the optical results depend. As the LS-PIV technique is relatively new, there are no recognized standards available for the proper application of the technique. A preliminary numerical analysis was conducted to identify the factors on which the technique is significantly dependent. The results of these analyses enabled the development of specific guidelines through which the LS-PIV technique could subsequently be applied in open field during flow measurement campaigns in Sicily. In this way it was possible to observe experimentally the criticalities involved in applying the technique on real cases. These measurement campaigns provided the opportunity to carry out analyses on field case studies and structure an automatic procedure for optimising the LS-PIV technique. In all case studies it was possible to observe how the turbulence phenomenon is a worsening factor in the output results of the LS-PIV technique. A final numerical analysis was therefore performed to understand the influence of turbulence factor on the performance of the technique. The results obtained represent an important step for future development of the topic.
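At the core of LS-PIV is the estimation of tracer displacement between consecutive frames by cross-correlating interrogation windows; the displacement divided by the frame interval, after pixel-to-metre calibration, gives the surface velocity. A minimal integer-shift sketch on a tiny synthetic frame pair (not the thesis's processing chain; the frame data and search radius are illustrative):

```python
def best_shift(frame_a, frame_b, max_shift=3):
    """Find the integer (dy, dx) shift that best maps frame_a onto
    frame_b by maximising the direct cross-correlation score."""
    h, w = len(frame_a), len(frame_a[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score, count = 0.0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        score += frame_a[y][x] * frame_b[yy][xx]
                        count += 1
            score /= count  # normalise by overlap size
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# synthetic tracer blob moved by (1, 2) pixels between the two frames
a = [[0.0] * 10 for _ in range(10)]
a[4][4] = a[4][5] = a[5][4] = 1.0
b = [[a[(y - 1) % 10][(x - 2) % 10] for x in range(10)] for y in range(10)]
dy, dx = best_shift(a, b)
# surface velocity = shift * (pixel size / frame interval), once calibrated
```

Production LS-PIV codes refine this idea with FFT-based correlation, sub-pixel peak fitting, and many interrogation windows per frame, but the displacement-by-correlation principle is the same.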
APA, Harvard, Vancouver, ISO, and other styles
37

Sabat, Macole. "Modèles euleriens et méthodes numériques pour la description des sprays polydisperses turbulents." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC086.

Full text
Abstract:
De nos jours, la simulation des écoulements diphasiques a de plus en plus d’importance dans les chambres de combustion aéronautiques en tant qu’un des éléments requis pour analyser et maîtriser le processus complet de combustion, afin d’améliorer la performance du moteur et de mieux prédire les émissions polluantes. Dans les applications industrielles, la modélisation du combustible liquide trouvé en aval de l’injecteur sous forme de brouillard de gouttes polydisperse, appelé spray, est de préférence faite à l’aide de méthodes Eulériennes. Ce choix s’explique par les avantages qu’offrent ces méthodes par rapport aux méthodes Lagrangiennes, notamment la convergence statistique intrinsèque, le couplage aisé avec la phase gazeuse ainsi que l’efficacité pour le calcul haute performance. Dans la présente thèse, on utilise une approche Eulérienne basée sur une fermeture au niveau cinétique de type distribution Gaussienne Anisotrope (AG). L’AG résout des moments de vitesse jusqu’au deuxième ordre et permet de capter les croisements des trajectoires (PTC) à petite échelle de manière statistique. Le système d’équations obtenu est hyperbolique, le problème est bien-posé et satisfait les conditions de réalisabilité. L’AG est comparé au modèle monocinétique (MK) d’ordre 1 en vitesse. Il est approprié pour la description des particules faiblement inertielles. Il mène à un système faiblement hyperbolique qui peut générer des singularités. Plusieurs schémas numériques, utilisés pour résoudre les systèmes hyperboliques et faible- ment hyperboliques, sont évalués. Ces schémas sont classifiés selon leur capacité à traiter les singularités naturellement présentes dans les modèles Eulériens, sans perdre l’ordre global de la méthode ni rompre les conditions de réalisabilité. L’AG est testé sur un champ turbulent 3D chargé de particules dans des simulations numériques directes. Le code ASPHODELE est utilisé pour la phase gazeuse et l’AG est implémenté dans le code MUSES3D pour le spray. 
Les résultats sont comparés aux de simulations Lagrangiennes de référence et aux modèle MK. L’AG est validé pour des gouttes modérément inertielles à travers des résultats qualitatifs et quantitatifs. Il s’avère prometteur pour les applications complexes comprenant des PTC à petite échelle. Finalement, l’AG est étendu à la simulation aux grandes échelles nécessaire dans les cas réels turbulents dans le domaine industriel en se basant sur un filtrage au niveau cinétique. Cette stratégie aide à garantir les conditions de réalisabilités. Des résultats préliminaires sont évalués en 2D pour tester la sensibilité des résultats LES sur les paramètres des modèles de fermetures de sous mailles
In aeronautical combustion chambers, the ability to simulate two-phase flows gains increasing importance nowadays since it is one of the elements needed for the full understanding and prediction of the combustion process. This matter is motivated by the objective of improving the engine performance and better predicting the pollutant emissions. On the industrial scale, the description of the fuel spray found downstream of the injector is preferably done through Eulerian methods. This is due to the intrinsic statistical convergence of these methods, their natural coupling to the gas phase and their efficiency in terms of High Performance Computing compared to Lagrangian methods. In this thesis, the use of a Kinetic-Based Moment Method (KBMM) with an Anisotropic Gaussian (AG) closure is investigated. By solving all velocity moments up to second order, this model reproduces statistically the main features of small-scale Particle Trajectory Crossing (PTC). The resulting hyperbolic system of equations is mathematically well-posed and satisfies the realizability properties. This model is compared to the first-order model in the KBMM hierarchy, the monokinetic (MK) model, which is suitable for low-inertia particles. The latter leads to a weakly hyperbolic system that can generate δ-shocks. Several schemes are compared for the resolution of the hyperbolic and weakly hyperbolic systems of equations. These methods are assessed based on their ability to handle the naturally encountered singularities due to the moment closures, especially without globally degenerating to lower order or violating the realizability constraints. The AG is evaluated for the Direct Numerical Simulation of 3D turbulent particle-laden flows by using the ASPHODELE solver for the gas phase, and the MUSES3D solver for the Eulerian spray in which the new model is implemented. The results are compared to the reference Lagrangian simulation as well as the MK results.
Through the qualitative and quantitative results, the AG is found to be a predictive method for the description of moderately inertial particles and is a good candidate for complex simulations in realistic configurations where small scale PTC occurs. Finally, within the framework of industrial turbulence simulations a fully kinetic Large Eddy Simulation formalism is derived based on the AG model. This strategy of directly applying the filter on the kinetic level is helpful to devise realizability conditions. Preliminary results for the AG-LES model are evaluated in 2D, in order to investigate the sensitivity of the LES result on the subgrid closures
APA, Harvard, Vancouver, ISO, and other styles
38

Mucs, Daniel. "Computational methods for prediction of protein-ligand interactions." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/computational-methods-for-prediction-of-proteinligand-interactions(33ad0b24-ef7b-4dff-8e28-597a2f34e079).html.

Full text
Abstract:
This thesis contains three main sections. In the first section, we examine methodologies to discriminate Type II protein kinase inhibitors from the Type I inhibitors. We have studied the structure of 55 Type II kinase inhibitors and have noticed specific descriptive geometric features. Using this information we have developed a pharmacophore and a shape based screening approach. We have found that these methods did not effectively discriminate between the two inhibitor types when used independently, but that combining them consecutively (pharmacophore search first, then shape based screening) successfully filtered out all Type I molecules. The effects of protonation states and of using different conformer generators were studied as well. This method was then tested on a freely available database of decoy molecules and again shown to be discriminative. In the second section of the thesis, we implement and assess swarm-based docking methods. We implement a repulsive particle swarm optimization (RPSO) based conformational search approach into Autodock 3.05. The performance of this approach with different parameters was then tested on a set of 51 protein ligand complexes. The effect of using different factoring for the cognitive, social and repulsive terms and the importance of the inertia weight were explored. We found that the RPSO method gives similar performance to the particle swarm optimization method. Compared to the genetic algorithm approach used in Autodock 3.05, our RPSO method gives better results in terms of finding lower energy conformations. In the final, third section we have implemented a Monte Carlo (MC) based conformer searching approach into Gaussian03. This enables high level quantum mechanics/molecular mechanics (QM/MM) potentials to be used in docking molecules in a protein active site. This program was tested on two Zn2+ ion-containing complexes, carbonic anhydrase II and cytidine deaminase.
The effects of different QM region definitions were explored in both systems. A consecutive and a parallel docking approach were used to study the volume of the active site explored by the MC search algorithm. In case of the carbonic anhydrase II complex, we have used 1,2-difluorobenzene as a ligand to explore the favourable interactions within the binding site. With the cytidine deaminase complex, we have evaluated the ability of the approach to discriminate the native pose from other higher energy conformations during the exploration of the active site of the protein. We find from our initial calculations, that our program is able to perform a conformational search in both cases, and the effect of QM region definition is noticeable, especially in the description of the hydrophobic interactions within the carbonic anhydrase II system. Our approach is also able to find poses of the cytidine deaminase ligand within 1 Å of the native pose.
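The repulsive PSO search mentioned in the second section can be roughly sketched as one velocity/position update. This is not the thesis' implementation: the coefficients `w`, `c1`, `c2`, `c3` and the choice of a randomly assigned other particle's personal best as the repulsion point are illustrative assumptions.

```python
import random

def rpso_step(positions, velocities, pbest, gbest, rand_best,
              w=0.7, c1=1.5, c2=1.5, c3=0.5):
    """One repulsive-PSO velocity/position update (illustrative sketch).

    Besides the usual cognitive (pbest) and social (gbest) pulls, each
    particle is pushed AWAY from the personal best of a randomly chosen
    other particle (rand_best[i]), which helps maintain swarm diversity.
    """
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2, r3 = random.random(), random.random(), random.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])       # cognitive term
                    + c2 * r2 * (gbest[d] - x[d])          # social term
                    - c3 * r3 * (rand_best[i][d] - x[d]))  # repulsive term
            x[d] += v[d]
    return positions, velocities
```

A docking code would evaluate the scoring function at each new position and update `pbest`/`gbest` accordingly; only the update rule above differs from plain PSO.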
APA, Harvard, Vancouver, ISO, and other styles
39

Bähr, Steffen [Verfasser], and J. [Akademischer Betreuer] Becker. "Real-Time Trigger and online Data Reduction based on Machine Learning Methods for Particle Detector Technology / Steffen Bähr ; Betreuer: J. Becker." Karlsruhe : KIT-Bibliothek, 2021. http://d-nb.info/1238147771/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Mühlbauer, Sebastian J. [Verfasser], Thorsten [Akademischer Betreuer] Pöschel, Thorsten [Gutachter] Pöschel, and Jens [Gutachter] Harting. "Multiscale modeling of heterogeneous catalysis in porous metal foam structures using particle-based simulation methods / Sebastian Josef Mühlbauer ; Gutachter: Thorsten Pöschel, Jens Harting ; Betreuer: Thorsten Pöschel." Erlangen : FAU University Press, 2019. http://nbn-resolving.de/urn:nbn:de:bvb:29-opus4-126488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Mühlbauer, Sebastian Josef [Verfasser], Thorsten [Akademischer Betreuer] Pöschel, Thorsten [Gutachter] Pöschel, and Jens [Gutachter] Harting. "Multiscale modeling of heterogeneous catalysis in porous metal foam structures using particle-based simulation methods / Sebastian Josef Mühlbauer ; Gutachter: Thorsten Pöschel, Jens Harting ; Betreuer: Thorsten Pöschel." Erlangen : FAU University Press, 2019. http://d-nb.info/1203375018/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Raudenská, Lenka. "Metriky a kriteria pro diagnostiku sociotechnických systémů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-233879.

Full text
Abstract:
This doctoral thesis is focused on metrics and criteria for socio-technical system diagnostics, a high-profile topic for companies wanting to ensure the best product quality. More and more customers require suppliers to prove reliability in the production and supply of quality products according to given specifications. Consequently, the ability to produce quality goods corresponding to customer requirements has become a fundamental condition for remaining competitive. The thesis first lays out the basic strategies and rules which are prerequisites for a successful company to ensure the provision of quality goods at competitive costs. Next, methods and tools for planning are discussed; planning is important for its impact on budgets, time schedules, and the quantification of necessary resources. Risk analysis is also included to help define preventive actions and reduce the probability of error and potential breakdown of the entire company. The next part of the thesis deals with optimisation problems solved by swarm-based optimisation. Algorithms and their use in industry are described, in particular the vehicle routing problem and the travelling salesman problem, used as tools for solving specialist problems within manufacturing corporations. The final part of the thesis deals with qualitative modelling, where solutions can be achieved with less exact quantitative information about the surveyed model. The text includes a description of a qualitative algebra that discerns only three possible values (positive, constant and negative), which are sufficient to demonstrate trends. The results can also be conveniently represented using graph-theory tools.
APA, Harvard, Vancouver, ISO, and other styles
43

Flayac, Emilien. "Coupled methods of nonlinear estimation and control applicable to terrain-aided navigation." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLY014/document.

Full text
Abstract:
During this PhD, the general problem of designing coupled control and estimation methods for nonlinear dynamical systems was investigated. The main target application was terrain-aided navigation (TAN), where the problem is to guide a drone flying over a known area while estimating its 3D position. In this application, it is assumed that the only available data are the speed of the system, a measurement of the difference between the absolute altitude of the drone and the altitude of the ground flown over, and a map of the ground. TAN is a good example of a nonlinear application where the separation principle cannot be applied: the quality of the observations depends on the control, and more precisely on the area flown over by the drone. Therefore, there is a need for coupled estimation and control methods. Note that the estimation problem raised by TAN is in itself difficult to analyse and solve. In particular, the following topics have been treated:
• nonlinear observer design and output-feedback control for TAN with analytical ground maps in a deterministic continuous-time framework;
• the joint modelling of nonlinear optimal filtering and discrete-time stochastic optimal control with imperfect information;
• the design of output-feedback explicit dual stochastic MPC schemes coupled with a particle filter, and their numerical implementation for TAN.
APA, Harvard, Vancouver, ISO, and other styles
44

Robinson, Elinirina Iréna. "Filtering and uncertainty propagation methods for model-based prognosis." Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1189/document.

Full text
Abstract:
In this manuscript, contributions to the development of methods for online model-based prognosis are presented. Model-based prognosis aims at predicting the time before the monitored system reaches a failure state, using a physics-based model of the degradation. This time before failure is called the remaining useful life (RUL) of the system. Model-based prognosis is divided into two main steps: (i) current degradation state estimation and (ii) future degradation state prediction to predict the RUL. The first step, which consists in estimating the current degradation state from the measurements, is performed with filtering techniques. The second step is realised with uncertainty propagation methods. The main challenge in prognosis is to take the different sources of uncertainty into account in order to obtain a measure of the RUL uncertainty: mainly model uncertainty, measurement uncertainty and future uncertainty (loading, operating conditions, etc.). Thus, probabilistic and set-membership methods for model-based prognosis are investigated in this thesis to tackle these uncertainties. The ability of an extended Kalman filter and a particle filter to perform RUL prognosis in the presence of model and measurement uncertainty is first studied using a nonlinear fatigue crack growth model based on Paris' law and synthetic data. Then, the particle filter combined with a detection algorithm (cumulative sum algorithm) is applied to a more realistic case study: fatigue crack growth prognosis in composite materials under variable-amplitude loading. This time, model uncertainty, measurement uncertainty and future loading uncertainty are taken into account, and real data are used. Then, two set-membership model-based prognosis methods, based on constraint satisfaction and on an unknown-input interval observer for linear discrete-time systems, are presented.
Finally, an extension of a reliability analysis method to model-based prognosis, namely the inverse first-order reliability method (inverse FORM), is presented. In each case study, performance evaluation metrics (accuracy, precision and timeliness) are calculated in order to compare the proposed methods.
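As a rough illustration of particle-based RUL prediction with a Paris-law degradation model, the sketch below propagates a cloud of parameter particles until a critical crack length is reached, yielding a distribution over the RUL. All numerical values (the C distribution, the exponent m, and the ΔK stand-in) are invented for illustration and are not the thesis' calibrated model.

```python
import math
import random

def rul_particles(a0, a_crit, dK, n_particles=500,
                  C_mean=1e-10, C_spread=0.3, m=3.0, max_cycles=10**6):
    """Crude particle-based RUL estimate for Paris-law crack growth,
    da/dN = C * (dK_eff)**m, with dK_eff = dK * sqrt(a) as a simple
    stand-in for the geometry dependence of the stress intensity range.

    Each particle carries its own C (model uncertainty, lognormal here);
    its RUL is the number of cycles until the crack reaches a_crit.
    The cloud of per-particle RULs approximates the RUL distribution.
    """
    ruls = []
    for _ in range(n_particles):
        C = C_mean * math.exp(C_spread * random.gauss(0.0, 1.0))
        a, n = a0, 0
        while a < a_crit and n < max_cycles:
            a += C * (dK * math.sqrt(a)) ** m  # one load cycle of growth
            n += 1
        ruls.append(n)
    return ruls
```

In a full prognosis scheme the particles would first be conditioned on crack measurements (the filtering step) before this forward propagation.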
APA, Harvard, Vancouver, ISO, and other styles
45

Ungan, Cahit Ugur. "Nonlinear Image Restoration." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606796/index.pdf.

Full text
Abstract:
This thesis analyzes the process of deblurring degraded images generated by space-variant nonlinear imaging systems with Gaussian observation noise. The restoration of blurred images is performed using two methods: a modified version of the Optimum Decoding Based Smoothing Algorithm, and the Bootstrap Filter Algorithm, a variant of particle filtering methods. The image-estimation simulations are performed in MATLAB, and results for various observation and image models are presented.
APA, Harvard, Vancouver, ISO, and other styles
46

Galindo, Muñoz Natalia. "Development of direct measurement techniques for the in-situ internal alignment of accelerating structures." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/100488.

Full text
Abstract:
In the next generation of linear particle accelerators, challenging alignment tolerances are required in the positioning of the components focusing, accelerating and detecting the beam over the accelerator length in order to achieve the maximum machine performance. In the case of the Compact Linear Collider (CLIC), accelerating structures, beam position monitors and quadrupole magnets need to be aligned in their support with respect to their reference axes with an accuracy of 10 um. To reach such objective, the PACMAN (Particle Accelerator Components Metrology and Alignment to the Nanometer Scale) project strives for the improvement of the current alignment accuracy by developing new methods and tools, whose feasibility should be validated using the major CLIC components. This Ph.D. thesis concerns the investigation, development and implementation of a new non-destructive intracavity technique, referenced here as 'the perturbative method', to determine the electromagnetic axes of accelerating structures by means of a stretched wire, acting as a reference of alignment. Of particular importance is the experimental validation of the method through the 5.5 mm iris-mean aperture CLIC prototype known as TD24, with complex mechanical features and difficult accessibility, in a dedicated test bench. In the first chapter of this thesis, the alignment techniques in particle accelerators and the novel proposals to be implemented in the future linear colliders are introduced, and a detailed description of the PACMAN project is provided. The feasibility study of the method, carried out with extensive electromagnetic fields simulations, is described in chapter 2, giving as a result, the knowledge of the theoretical accuracy expected in the measurement of the electromagnetic axes and facilitating the development of a measurement algorithm. 
The conceptual design, manufacturing and calibration of the automated experimental set-up, integrating the solution developed to measure the electromagnetic axes of the TD24, are covered in chapter 3. The future lines of research and developments of the perturbative method are also explored. In chapter 4, the most significant results obtained from an extensive experimental work are presented, analysed and compared with simulations. The proof-of-principle is completed, the measurement algorithm is optimised and the electromagnetic centre is measured in the TD24 with a precision less than 1 um and an estimated error less than 8.5 um. Finally, in chapter 5, the developments undertaken along this research work are summarised, the innovative achievements accomplished within the PACMAN project are listed and its impact is analysed.
Galindo Muñoz, N. (2018). Development of direct measurement techniques for the in-situ internal alignment of accelerating structures [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/100488
APA, Harvard, Vancouver, ISO, and other styles
47

Klement, Nathalie. "Planification et affectation de ressources dans les réseaux de soin : analogie avec le problème du bin packing, proposition de méthodes approchées." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22517/document.

Full text
Abstract:
The presented work is about optimisation of the hospital system. An existing solution is the pooling of resources within the same territory, which may involve different forms of cooperation between several hospitals. Various problems are defined at the decision level (strategic, tactical or operational) and at the modelling level (macroscopic, mesoscopic and microscopic). Problems of sizing, planning and scheduling may be considered. We define the problem of activities planning with resource allocation. Several cases are distinguished: either human resources have infinite capacity; or they have limited capacity and their assignment to a place is given; or they have limited capacity and their assignment is a variable. These problems are specified and mathematically formalised. All these problems are compared to a bin packing problem: the classical bin packing problem for the case where human resources have infinite capacity, and the bin packing problem with interdependencies in the two other cases. The bin packing problem with incompatibilities is also defined. Many resolution methods have already been proposed for the bin packing problem. We make several propositions, including a hierarchical coupling between a heuristic and a metaheuristic. Single-solution-based metaheuristics and a population-based metaheuristic, particle swarm optimisation, are used. This proposition requires a new encoding inspired by permutation problems. The method gives very good results on instances of the bin packing problem and is easy to apply, since it combines already-known methods. With the proposed coupling, new constraints need to be integrated only at the heuristic level; the metaheuristic runs unchanged. Thus, our method is easily adaptable to the problem of activities planning with resource allocation. For big instances, the solver used as a reference returns only an interval of solutions.
The results of our method are once again very promising: the obtained solutions are better than the upper bound returned by the solver. It is possible to adapt our method to more complex problems by integrating the new constraints into the heuristic. It would be particularly interesting to test these methods on real hospital instances to assess their practical significance.
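The hierarchical coupling described above, where a metaheuristic searches over permutations and a heuristic decodes each permutation into a feasible packing, can be sketched with a first-fit decoder. The decoder shown is a standard textbook heuristic used here for illustration, not necessarily the thesis' exact one.

```python
def first_fit(order, sizes, capacity):
    """Decode a permutation of item indices into a packing: place each
    item in the first open bin with enough room, opening a new bin when
    none fits. Returns the bins as lists of item indices."""
    bins = []   # each bin is a list of item indices
    space = []  # remaining capacity of each bin
    for i in order:
        for b, free in enumerate(space):
            if sizes[i] <= free:
                bins[b].append(i)
                space[b] -= sizes[i]
                break
        else:  # no open bin fits: open a new one
            bins.append([i])
            space.append(capacity - sizes[i])
    return bins
```

In the coupling, a PSO (or other metaheuristic) would only propose new orderings and score them by the number of bins used, while any additional constraint, such as incompatibilities between items, is handled entirely inside the decoder.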
APA, Harvard, Vancouver, ISO, and other styles
48

Barbarroux, Loïc. "Contributions à la modélisation multi-échelles de la réponse immunitaire T-CD8 : construction, analyse, simulation et calibration de modèles." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC026/document.

Full text
Abstract:
Upon infection by an intracellular pathogen, the organism triggers a specific immune response, mainly driven by CD8 T cells. These cells are responsible for the eradication of this type of infection and for the constitution of the individual's immune repertoire. The immune response consists of many processes acting over several interconnected physical scales (intracellular scale, single-cell scale, cell-population scale); it is therefore a complex process, for which it is difficult to observe or measure the links between the different processes involved. We propose three multiscale mathematical models of the CD8 immune response, built with different formalisms but related by the same idea: to make the behaviour of the CD8 T cells depend on their intracellular content. For each model, we present, where possible, its construction from selected biological hypotheses, its mathematical study, and its ability to reproduce the immune response in numerical simulations. The proposed models successfully reproduce the CD8 immune response both qualitatively and quantitatively, and thus constitute useful tools for further investigating this biological phenomenon.
APA, Harvard, Vancouver, ISO, and other styles
49

Chuang, Chun-Chen, and 莊淳臻. "A Particle Filter Based Tracking Method for Vehicles in Campus." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/bsbk56.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Science and Technology
Department of Computer and Communication Engineering
Academic year 107 (2018–2019)
The Internet of Things is widely used in many areas: the industrial internet, connected cars, smart homes, smart cities, connected health, smart farming, etc. For connected cars, V2X technology has been developed to improve driving safety and assist traffic management. Managing the vehicles in a surrounding district includes visible-range and zone tracking and detection. This thesis proposes a map-based tracking and detection method for vehicles on a campus using the particle filter algorithm. In this method, measurements of the relative distance and angle of the target vehicle within the visual range of a roadside-unit sensor are transmitted to the control centre, where the possible corresponding locations on the map are marked by particles. The algorithm predicts the moving paths and velocities of the vehicle from the displacement vectors between consecutively measured positions. Each particle updates its weight according to new measurements of relative distance and angle, after which the weights are normalised. A resampling process eliminates particles with small weights and concentrates on particles with large weights. The centroid of the particles at each time index is the estimated position of the vehicle, and the sequence of centroids forms its moving trajectory. Simulation results show that the proposed particle-filter-based tracking method can promptly track a vehicle in the campus area and obtain its moving trajectory, with a high tracking rate and low root-mean-squared error (RMSE).
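The cycle listed in the abstract (predict, weight by the range/bearing measurement, normalise, resample, take the centroid) can be sketched as a single particle filter step. The Gaussian sensor model, the noise levels and the stratified resampling scheme are illustrative assumptions, not the thesis' exact design.

```python
import math
import random

def pf_step(particles, weights, velocity, sensor, meas_range, meas_bearing,
            sigma_r=1.0, sigma_b=0.05, motion_noise=0.5):
    """One predict/update/resample cycle of a range-bearing particle filter.
    particles: list of [x, y]; velocity: assumed [vx, vy] per step;
    sensor: [x, y] of the roadside unit. Noise levels are illustrative."""
    n = len(particles)
    # 1. predict: move each particle along the estimated velocity plus noise
    for p in particles:
        p[0] += velocity[0] + random.gauss(0, motion_noise)
        p[1] += velocity[1] + random.gauss(0, motion_noise)
    # 2. weight by the likelihood of the range/bearing measurement
    new_w = []
    for p, w in zip(particles, weights):
        r = math.hypot(p[0] - sensor[0], p[1] - sensor[1])
        b = math.atan2(p[1] - sensor[1], p[0] - sensor[0])
        db = (b - meas_bearing + math.pi) % (2 * math.pi) - math.pi
        lik = math.exp(-0.5 * ((r - meas_range) / sigma_r) ** 2
                       - 0.5 * (db / sigma_b) ** 2)
        new_w.append(w * lik + 1e-300)  # guard against total underflow
    # 3. normalise
    total = sum(new_w)
    new_w = [w / total for w in new_w]
    # 4. resample (stratified) to concentrate on high-weight particles
    particles = [list(particles[_pick(new_w, (k + random.random()) / n)])
                 for k in range(n)]
    weights = [1.0 / n] * n
    # 5. centroid of the cloud = position estimate at this time index
    est = [sum(p[0] for p in particles) / n, sum(p[1] for p in particles) / n]
    return particles, weights, est

def _pick(weights, u):
    """Index of the first particle whose cumulative weight exceeds u."""
    c = 0.0
    for i, w in enumerate(weights):
        c += w
        if u <= c:
            return i
    return len(weights) - 1
```

Repeating `pf_step` over successive measurements and collecting the centroids yields the estimated trajectory.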
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Chun-Jen, and 陳俊仁. "The PID Controller Design based on Modified Particle Swarm Optimization Method." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/63176645305540303924.

Full text
Abstract:
Master's thesis
National Kaohsiung Marine University
Graduate Institute of Marine Engineering
Academic year 99 (2010–2011)
In this research, a modified particle swarm optimization (PSO) method is investigated. The traditional PSO method is prone to becoming trapped in local minima; consequently, the genetic algorithm (GA) is combined with PSO to avoid this behaviour. The basic GA elements, including distribution, selection and mutation, are incorporated into a PSO based on the constriction factor. Three benchmark problems and two control systems are used to verify the proposed method. Computer simulations illustrate that the proposed method outperforms the traditional GA and PSO.
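A minimal sketch of such a hybrid, a constriction-factor PSO whose particles additionally undergo GA-style random-reset mutation, is given below. The constriction coefficient, acceleration constants and mutation rate are illustrative choices, not the thesis' tuned values.

```python
import random

def hybrid_pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, pm=0.05):
    """Minimise f over [lo, hi]^dim with constriction-factor PSO plus
    GA-style random-reset mutation to escape local minima.
    chi = 0.7298, c1 = c2 = 2.05 (Clerc's constriction); pm is the
    per-dimension mutation probability. Illustrative values only."""
    chi, c1, c2 = 0.7298, 2.05, 2.05
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [list(x) for x in xs]          # personal bests
    pf = [f(x) for x in xs]             # personal best values
    g = list(pb[min(range(n), key=lambda i: pf[i])])
    gf = min(pf)                        # global best value
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = chi * (vs[i][d]
                                  + c1 * random.random() * (pb[i][d] - xs[i][d])
                                  + c2 * random.random() * (g[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
                if random.random() < pm:          # GA-style mutation
                    xs[i][d] = random.uniform(lo, hi)
            fi = f(xs[i])
            if fi < pf[i]:
                pf[i], pb[i] = fi, list(xs[i])
                if fi < gf:
                    gf, g = fi, list(xs[i])
    return g, gf
```

Because personal and global bests only ever improve, the mutation step adds exploration without degrading the best solution found so far.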
APA, Harvard, Vancouver, ISO, and other styles
