Dissertations / Theses on the topic 'Particle Design'


Consult the top 50 dissertations / theses for your research on the topic 'Particle Design.'


1

Galagali, Nikhil. "Algorithms for particle remeshing applied to smoothed particle hydrodynamics." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55074.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 57-59).
This thesis outlines adaptivity schemes for particle-based methods for the simulation of nearly incompressible fluid flows. As with the remeshing schemes used in mesh and grid-based methods, there is a need to use localized refinement in particle methods to reduce computational costs. Various forms of particle refinement have been proposed for particle-based methods such as Smoothed Particle Hydrodynamics (SPH). However, none of the techniques that exist currently are able to retain the original degree of randomness among particles. Existing methods reinitialize particle positions on a regular grid. Using such a method for region localized refinement can lead to discontinuities at the interfaces between refined and unrefined particle domains. In turn, this can produce inaccurate results or solution divergence. This thesis outlines the development of new localized refinement algorithms that are capable of retaining the initial randomness of the particles, thus eliminating transition zone discontinuities. The algorithms were tested through SPH simulations of Couette Flow and Poiseuille Flow with spatially varying particle spacing. The determined velocity profiles agree well with theoretical results. In addition, the algorithms were also tested on a flow past a cylinder problem, but with a complete domain remeshing. The original and the remeshed particle distributions showed similar velocity profiles. The algorithms can be extended to 3-D flows with few changes, and allow the simulation of multi-scale flows at reduced computational costs.
by Nikhil Galagali.
S.M.
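The abstract above concerns refinement schemes that must work on irregularly spaced (random) particle distributions. As context only — this is not the thesis's remeshing algorithm — the following is a minimal sketch of the SPH summation-density estimate that such schemes must preserve on irregular particle positions; the kernel choice, particle count, and smoothing length are illustrative.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1-D cubic spline SPH kernel with smoothing length h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalisation constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(x, m, h):
    """Summation density: rho_i = sum_j m_j * W(x_i - x_j, h)."""
    dx = x[:, None] - x[None, :]          # pairwise separations
    return (m[None, :] * cubic_spline_kernel(dx, h)).sum(axis=1)

# Irregularly spaced (randomised) particles on the unit interval
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 100))
m = np.full(100, 1.0 / 100)               # equal masses, unit total mass
rho = sph_density(x, m, h=0.05)           # interior values near 1.0
```

Reinitialising `x` onto a regular grid (as in the prior schemes the thesis criticises) changes the local neighbour statistics that feed this kernel sum, which is why transition-zone discontinuities arise when only part of the domain is regularised.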
2

Woodside, Steven Murray. "Spatial distribution of acoustic forces on particles : implications for particle separation and resonator design." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0007/NQ34646.pdf.

3

Watson, Paul David Julian. "Geotextile filter design and particle bridge formation." Thesis, Queen Mary, University of London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307520.

4

Yao, Wang. "Particle swarm optimization aided MIMO transceiver design." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/301206/.

Abstract:
In this treatise, we design Particle Swarm Optimization (PSO) aided MIMO transceivers. The employment of multiple antennas leads to the concept of multiple-input multiple-output (MIMO) systems, which constitute an effective way of achieving an increased capacity. When multiple antennas are employed at the Base Station (BS), it is possible to employ Multiuser Detection (MUD) in the uplink. However, in the downlink (DL), owing to the size and power consumption constraints of mobile devices, so-called Multiuser Transmission (MUT) techniques may be employed at the BS for suppressing the multiuser interference before transmission, provided that the DL channel to be encountered can be accurately predicted. The MUT scheme using the classic MMSE criterion is popular owing to its simplicity. However, since the BER is the ultimate system performance indicator, in this treatise we are more interested in the Minimum BER MUT (MBER-MUT) design. Unlike the MBER-MUD, the MBER-MUT design encounters a constrained nonlinear optimization problem due to the associated total transmit power constraint. Sequential Quadratic Programming (SQP) algorithms may be used to obtain the precoder's coefficients. However, the computational complexity of the SQP-based MBER-MUT solution may be excessive for high-rate systems. Hence, as an attractive design alternative, continuous-valued PSO was invoked to find the MBER-MUT's precoder matrix in order to reduce the computational complexity. Two PSO aided MBER-MUTs were designed and explained. The first may be referred to as a symbol-specific MBER-MUT, while the other may be termed the average MBER-MUT. Our simulation results showed that both designs achieve an improvement over conventional linear MUT schemes, while imposing a reduced complexity compared to the state-of-the-art SQP-based MBER-MUT.
Later, we introduced discrete multi-valued PSO into the context of MMSE Vector Precoding (MMSE-VP) to find the optimal perturbation vector. As a nonlinear MUT scheme, VP provides an attractive BER performance. However, the computational complexity imposed during the search for the optimal perturbation vector may be deemed excessive, hence it becomes necessary to find reduced-complexity algorithms that maintain a reasonable BER performance. Lattice-Reduction-aided (LRA) VP is the most popular approach to reducing the complexity imposed. However, LRA VP is only capable of achieving a suboptimum BER performance, although its complexity is reduced. Another drawback of LRA VP is that its complexity is fixed, which is beneficial for real-time implementations, but it is unable to strike a trade-off between the target BER and the required complexity. Therefore, we developed a discrete multi-valued PSO aided MMSE-VP design, which has a flexible complexity and is capable of iteratively improving the achievable BER. In Chapter 5, our contributions in the field of Minimum Bit Error Rate Vector Precoding (MBER-VP) are unveiled. Zero-Forcing Vector Precoding (ZF-VP) and MMSE Vector Precoding (MMSE-VP) had already been proposed in the literature. However, to the best of our knowledge, no VP algorithm had been proposed to date based on the direct minimisation of the BER. Our improved MMSE-VP design based on the MBER criterion first invokes a regularised channel inversion technique and then superimposes a discrete-valued perturbation vector for minimising the BER of the system. To further improve the system's BER performance, an MBER-based generalised continuous-valued VP algorithm was also proposed. Assuming knowledge of the information symbol vector and the CIR matrix, we consider the generation of the effective symbol vector to be transmitted by directly minimising the BER of the system.
Our simulation results show the advantage of these two VP schemes based on the MBER criterion, especially for rank-deficient systems, where the number of BS transmit antennas is lower than the number of MSs supported. The robustness of these two designs to CIR estimation errors is also investigated. Finally, the computational complexity imposed is also quantified in this chapter. With this understanding of the BER criterion of VP schemes, we then considered a new transceiver design combining uniform channel decomposition and MBER vector precoding, which leads to a joint transmitter and receiver design referred to as the UCD-MBER-VP scheme. In our proposed UCD-MBER-VP scheme, the precoding and equalisation matrices are calculated by the UCD method, while the perturbation vector is chosen directly based on the MBER criterion. We demonstrated that the proposed algorithm outperforms the existing benchmark schemes, especially for rank-deficient systems, where the number of users supported is higher than the number of transmit antennas employed. Moreover, our proposed joint design imposes a similar computational complexity to the existing benchmark schemes.
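The abstract notes that the MBER-MUT search is a constrained optimisation problem because of the total transmit power budget. One simple way a population-based search such as PSO can respect such a budget — an illustration only, not necessarily the thesis's constraint handling — is to project every candidate precoder back onto the power sphere before evaluating it:

```python
import numpy as np

def project_to_power(precoder_flat, total_power):
    """Scale a (nonzero) candidate precoder so that it meets the total
    transmit power constraint ||P||_F^2 = total_power exactly."""
    norm = np.linalg.norm(precoder_flat)
    return precoder_flat * np.sqrt(total_power) / norm

rng = np.random.default_rng(0)
candidate = rng.normal(size=8)            # e.g. a flattened 4x2 precoder (illustrative)
scaled = project_to_power(candidate, total_power=2.0)
power = np.sum(scaled**2)                 # equals 2.0 up to rounding
```

Applying this projection after each swarm update keeps every particle feasible, so the unconstrained PSO machinery can be reused unchanged.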
5

Chen, Chi. "Engineering of inhalation aerosols combining theophylline and budesonide." Thesis, University of Bradford, 2014. http://hdl.handle.net/10454/14072.

Abstract:
In asthma therapy, theophylline is widely indicated to prevent bronchial spasm, and glucocorticoids to decrease inflammation. Beyond the acute asthma attack, oral theophylline is used in chronic therapy to minimise inflammation, enhance the efficiency of corticosteroids and restore the steroids' anti-inflammatory action in COPD treatment. The preferred application route for respiratory disease treatment is inhalation, with dry powder inhalers (DPIs) being the delivery system of first choice. As recently shown, administering the drugs simultaneously has an advantageous effect, caused by a synergistic action at the same target cell in the lung epithelia. It therefore seems rational to combine both substances in one particle. Such a particle has an advantage over a combination product containing both drugs in a physical mixture, in which rather random deposition leads to API segregation and poor dose uniformity. DPIs are therapeutic pharmaceutical formulations usually present in the solid form. Because of this solid state, an understanding of the chemical and physical properties must be established to obtain optimum performance from the active pharmaceutical ingredients (APIs). To date, DPI production has typically relied on destructive size-reduction procedures to reach the required micron size; such processes are inefficient and difficult to control. Moreover, current research on combination API formulations shows that this type of DPI exhibits greater variability in the delivered dose of each active, leading to poor bioavailability and limited clinical efficacy. These results suggest that combination formulations require particles of advanced quality and functionality with suitable physicochemical properties.
Hence, in order to produce binary and combination DPI products, the aim of this study was to develop spray-drying and ultrasonic processes for engineering combination drug particles that can be delivered to the lung more efficiently and independently of dose variations. Microparticles were produced by spray drying and/or an ultrasonic technique. The processing parameters and the addition of excipients (polymers) were optimised using a full factorial design such that microparticles were produced in a narrow size range suitable for inhalation. Employing excipients resulted in a highly saturated environment, yielding smaller spherical particles compared with the conventional solvent. Solid-state characterisation of the microparticles using powder X-ray diffraction and differential scanning calorimetry indicated that the particles were crystalline but contained no cocrystal. The combination particles performed comparably to or better than micronised drug when formulated as a powder blend with lactose. It was concluded that the use of HPMC enhanced crystallinity suitable for inhalation, and that the combination particles improved the uniformity of distribution across the stages of the NGI.
6

Salihu, A. "Design and optimisation of a spring particle sizer." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1344083/.

Abstract:
This thesis describes the results of a series of investigations examining the operational performance, followed by the fundamental re-development, of two analytical instruments, namely a Pneumatic Spring Particle Sizer (PSPS) and a handheld Spring Particle Sizer (handheld SPS), for size distribution analysis of dry powders in the 100 – 2000 μm size range. Each instrument shares the same basic principle of operation, involving the use of a closed-coil helical extension spring which is partly filled with the test powder. Particle size distribution data are obtained by stretching the spring to known lengths and measuring the mass of the particles discharged from the spring's coils. In the case of the handheld SPS, aimed at on-the-spot quality control applications, the test particles are discharged from the spring by manual shaking. In the case of the PSPS, on the other hand, the particles are discharged using pulsating pressurised air. The design, development and evaluation of two methods for the in-situ measurement of the sample mass within the PSPS are discussed. These include a full-bridge strain gauge assembly and an investigation of the correlation between the minimum fluidisation velocity and the mass of the test sample within the spring. The strain gauge proved to be a successful method, producing a mass resolution of ± 1 % for a total sample mass of 280 g. The second method was unsuccessful, as it was found that the minimum fluidisation velocity in most cases did not follow a clear trend with mass. Detailed investigations are conducted aimed at understanding the processes governing the mass discharge rate from the PSPS, and hence the sample analysis time, by studying the particle migration behaviour. A pulsating fluidised bed of similar dimensions to the spring is used to mimic the behaviour in the spring.
Tests involved pulsating fluidisation followed by particle size distribution analysis at equal distances along the length of the bed containing poly-dispersed and mono-dispersed particles. It was observed that any operating or design parameter that promoted the degree of mixing, for example, increasing the fluidising air pulse frequency would reduce the test analysis time. The analysis time also increased with the sample poly-dispersity. In an attempt to reduce the sampling time, the handheld SPS was rotated using a variable speed tumbler as an alternative to manual shaking. Despite the marked reduction in the sampling time, this method resulted in the discharge of particles larger than spring coil openings thereby producing erroneous results. Calibration experiments for the same types of powders revealed a linear relationship between the discharge sample volume and its mass, independent of the particle size in the range 212 – 1000 μm. This allows in-situ measurement of the discharge sample mass in the handheld unit by reading the sample volume collected in the integrated graduated collection cylinder and reference to a previously generated calibration line.
7

Sträng, Kalle. "Design of a new type of particle separator." Thesis, Umeå universitet, Institutionen för fysik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-146395.

8

Kuttarath, Veettil Deepak. "Thermal Design Optimization of a Miniature Condensate Particle Counter." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1250651342.

9

Sheehy, Suzanne Lyn. "Design of a non-scaling fixed field alternating gradient accelerator for charged particle therapy." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:d9cd977c-35db-45cc-ad33-67710fc3e82f.

Abstract:
This thesis describes the design of a novel type of particle accelerator for charged particle therapy. The accelerator is called a non-scaling Fixed Field Alternating Gradient (ns-FFAG) accelerator, and will accelerate both protons and carbon ions to energies required for clinical use. The work is undertaken as part of the PAMELA project. An existing design for a ns-FFAG is taken as a starting point and analysed in terms of its suitability for the charged particle therapy application. It is found that this design is particularly sensitive to alignment errors and would be unable to accelerate protons and carbon ions at the proposed acceleration rate due to betatron resonance crossing phenomena. To overcome this issue, a new type of non-linear ns-FFAG is developed which avoids resonance crossing and meets the requirements set by clinical considerations. Two accelerating rings are required: one for protons up to 250 MeV and fully stripped carbon ions to 68 MeV/u, the other to accelerate the carbon ions up to 400-430 MeV/u. Detailed studies are undertaken to show that this new type of accelerator is suitable for the application. An alignment accuracy of 50 micrometres will not have a detrimental effect on the beam, and the dynamic aperture for most lattice configurations is found to be greater than 50 π mm mrad (normalised) in both the horizontal and vertical planes. Verification of the simulation code used in the PAMELA lattice design is carried out using experimental results from EMMA, the world's first ns-FFAG, built for 10-20 MeV electrons at Daresbury Laboratory, UK. Finally, it is shown that the described lattice can translate into realistic designs for the individual components of the accelerator. The integration of these components into the PAMELA facility is discussed.
10

Alshebaily, Khalid H. "Design of particle separator for a helicopter engine inlet." Thesis, Queen Mary, University of London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.416464.

11

Raylor, Benjamin. "Pipe design for improved particle distribution and improved wear." Thesis, University of Nottingham, 1998. http://eprints.nottingham.ac.uk/13448/.

Abstract:
This thesis describes the use of swirl-inducing pipes in water and water/mixture flows, with a particular emphasis on the production of swirl before a bend. The author draws on ideas for imparting a swirling action to particle-laden liquids that have appeared in one form or another throughout the 20th century. The aim of the project was to reduce wear and produce a better particle distribution throughout a bend. In the present investigation two methods were used in the examination of swirl-inducing pipes, namely experimental and numerical. The experimental method made use of a Swirly-flo pipe, which is normally found in marine boilers, where it is used to improve heat exchanger efficiency. The Swirly-flo was placed in an experimental test rig specifically designed to provide insight into the use of swirl-inducing pipes. The numerical method used a commercial Computational Fluid Dynamics (CFD) package, which allowed the author to examine various pipe shapes and provided information on the flow fields in a swirl-inducing pipe. The experimental results showed that swirling the flow before a bend produced a smaller pressure drop across the bend than non-swirling flow. However, the Swirly-flo pipe produced a greater pressure loss across its length than the standard pipe. By swirling the particles before the bend, the particles were more evenly distributed throughout the bend, which has the potential to remove the characteristic wear zones. Computational Fluid Dynamics was used to investigate various Swirly-flo designs. These studies indicated an optimum pitch-to-diameter ratio of 8 for a constant-pitch Swirly-flo pipe, consistent with previous work.
12

Andreadis, Apostolos. "An optimal nephelometric model design method for particle characterisation." Thesis, Loughborough University, 2002. https://dspace.lboro.ac.uk/2134/33959.

Abstract:
Scattering nephelometry is a particle characterisation method applicable to fluid suspensions containing impurities. Solutions derived by the method feature particle classification by size (diameter), volume or texture, as well as continuous on-line and in-situ monitoring. The replacement of turbidimeters with nephelometers in many existing turbidity applications could suppress side effects caused by limitations and uncontrolled parameter drifts, and satisfy problem-defined constraints at virtually no change in implementation cost. A major issue in nephelometric model design is the selection of a mathematical tool suitable for modelling the data analysis system.
13

Brown, John. "A SPACE BASED PARTICLE DAMPER DEMONSTRATOR." DigitalCommons@CalPoly, 2011. https://digitalcommons.calpoly.edu/theses/501.

Abstract:
The structure and payload of a CubeSat flight experiment investigating the performance of particle dampers in a micro-gravity environment were designed, built, and tested, and will provide on-orbit data for model validation and improved performance predictions for space applications of particle damping. A 3-D solid model of the integrated CubeSat structure and payload was created, satisfying all constraints from CubeSat and the System Dynamics Department at Northrop Grumman Aerospace Systems. The model was verified using commercially available Finite Element Analysis (FEA) software, and a prototype structure part was fabricated. The prototype was tested, and the tests verified the FEA. A complete subassembly ready for flight was manufactured as an engineering unit and tested to space qualification loads in both launch vibration and thermal vacuum. Two additional units were contracted out for manufacture to serve as the flight unit and backup, and are currently ready for launch.
14

Vogt, Carsten. "Ultrafine particles in concrete : Influence of ultrafine particles on concrete properties and application to concrete mix design." Doctoral thesis, KTH, Betongbyggnad, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-12161.

15

Silvestor, Ian Malcolm. "The design of the ZEUS tracking trigger and studies of b quark fragmentation." Thesis, University of Oxford, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.236181.

16

Mitra, Subhadeep. "Particle filtering with Lagrangian data in a point vortex model." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/72873.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 131-138).
Particle filtering is a technique used for state estimation from noisy measurements. In fluid dynamics, a popular problem called Lagrangian data assimilation (LaDA) uses Lagrangian measurements in the form of tracer positions to learn about the changing flow field. Particle filtering can be applied to LaDA to track the flow field over a period of time. As opposed to techniques like the Extended Kalman Filter (EKF) and Ensemble Kalman Filter (EnKF), particle filtering does not rely on linearization of the forward model and can provide very accurate estimates of the state, as it represents the true Bayesian posterior distribution using a large number of weighted particles. In this work, we study the performance of various particle filters for LaDA using a two-dimensional point vortex model; this is a simplified fluid dynamics model wherein the positions of vortex singularities (point vortices) define the state. We consider various parameters associated with the algorithm and examine their effect on filtering performance under several vortex configurations. Further, we study the effect of different tracer release positions on filtering performance. Finally, we relate the problem of optimal tracer deployment to the Lagrangian coherent structures (LCS) of the point vortex system.
by Subhadeep Mitra.
S.M.
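The weighted-particle posterior representation described in the abstract can be illustrated with a minimal bootstrap particle filter (propagate, weight by likelihood, resample) on a toy scalar random-walk model — not the point vortex model of the thesis; all noise levels, seeds, and particle counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy state-space model: x_t = x_{t-1} + process noise, y_t = x_t + obs noise
T, N = 50, 500                      # time steps, number of particles
true_x = np.cumsum(rng.normal(0, 0.1, T))
obs = true_x + rng.normal(0, 0.2, T)

particles = rng.normal(0, 1, N)     # initial ensemble
estimates = []
for y in obs:
    particles = particles + rng.normal(0, 0.1, N)   # propagate through the model
    w = np.exp(-0.5 * ((y - particles) / 0.2)**2)   # Gaussian likelihood weights
    w /= w.sum()
    estimates.append(w @ particles)                 # posterior-mean estimate
    idx = rng.choice(N, size=N, p=w)                # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - true_x)**2))
```

Unlike the EKF, no linearisation of the forward model is needed: the propagation step simply pushes every particle through the (possibly nonlinear) dynamics, and the weighted ensemble approximates the full Bayesian posterior.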
17

Gathmann-Hüttemann, Stefan. "Untersuchungen über objektorientierte Design-Patterns für massiv-parallele Teilchensimulationsverfahren anhand von smoothed particle hydrodynamics." [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=964104091.

18

Broderick, David J. Hung John Y. "Particle swarm optimization applied to the design of a nonlinear control." Auburn, Ala., 2006. http://repo.lib.auburn.edu/2006%20Spring/master's/BRODERICK_DAVID_45.pdf.

19

Joubert, Matthew James Stuart. "Optimal design of Orthotropic Piezoelectric membranes and plates using particle swarms." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86415.

Abstract:
Thesis (MEng)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: Over the past 50 years smart materials have made their appearance in many structures. The thermopiezoelectric ceramic is one of these smart materials. When thermal effects are considered negligible, the materials are classified as piezo-ceramic and piezoelectric materials. These so-called piezo-ceramics are used as actuator and sensor components in many structures. The use of these components with composite materials is significant due to their application in the aerospace and aeronautics fields. The interaction that the piezoelectric material has with a composite body can be improved in order to reduce the energy the material requires for deformation. An objective in the optimisation of composite material structures is to minimise compliance or maximise stiffness u^T f, with the laminate ply orientations as design variables, where u and f are displacement and force vectors, respectively. Here, the objective is not the maximisation of stiffness but the maximisation of compliance, with typical constraints being failure criteria. These failure criteria can include theories such as the maximum principal stress, the Tsai-Hill or Tsai-Wu failure theories. The compliance is maximised to accentuate any piezoelectric movement and is for theoretical treatment only. Once polarised, piezoelectric materials become quasi-isotropic. The piezoelectric materials are isotropic in the plane normal to the direction of the voltage being applied and have altered properties normal to this plane. This change in the material properties can be exploited so that the layup can be altered in orientation to improve performance. The idea is to improve the mechanical capabilities of the structure subject to an electrical input or vice versa. In the works by both Carrera et al. and Piefort, First Order Shear Deformation Theory (FSDT) is used in finite element analysis to characterise the structural and electrical behaviour of a plate or shell.
FSDT, also known as the Mindlin-Reissner theory, is a plate bending theory that assumes a transverse shear distribution through the thickness of the plate. This theory is considered an improvement on standard theories such as the Kirchhoff or Timoshenko theories. Many optimisation techniques exist and are classed as either direct search or gradient-based methods. Particle Swarm Optimisation (PSO) is a direct search method. It mimics the behaviour of a flock of birds or a school of fish in their attempt to find food. The PSO's mathematical statement characterises a set of initial unknown particles within a designated search space that are compared to a set of local best particles and a single global best particle. This comparison is used to update the swarm each run cycle. Regression is a procedure whereby a set of testing data is used to fit a pseudo-function that represents the form the data should take in practice. The aim of this work is to optimise the piezoelectric-composite layer interaction to improve the overall compliance of a structure. Extensive modelling is performed and tested against peer-reviewed literature to demonstrate its accuracy.
AFRIKAANSE OPSOMMING: Oor die afgelope 50 jaar het slim materiale hulle verskyning gemaak in verskeie strukture. Termopiezo-elektriese keramieke is een van hierdie nuwe materiale. Wanneer termiese effekte onbeduidend is, word hierdie materiale as piezo-elektriese materiale geklassifiseer. Hierdie sogenaamde piezo-keramieke word gebruik as aandrywers en sensoriese onderdele in verskeie strukture. Die kombinasie van hierdie onderdele met saamgestelde materiale het belangrike toepassings in die ruimte- en lugvaartkunde. Die interaksie van die piezo-elektriese materiale met die saamgestelde materiaal strukture kan verbeter word om die energie-vereistes van die materiaal vir vervorming te verminder. ’n Tipiese doel in die optimering van saamgestelde materiaalstrukture is om styfheid u^T f te maksimeer met die gelamineerde laag-oriëntasies as ontwerpsveranderlikes, waar u en f onderskeidelik verplasing en kragvektor voorstel. In teenstelling met die optimering van die samestelling wat voorheen gedoen is, is die doel hier nie die maksimering van styfheid nie, maar die minimering van styfheid, met falingskriteria as tipiese beperkings. Die falingskriteria sluit die volgende in: die maksimum spanningsteorie, en die Tsai-Hill of Tsai-Wu falingsteorieë. Die styfheid word geminimeer om piezo-elektriese verplasing te versterk, maar word hierin net teoreties bekyk. Sodra piezo-elektriese materiale gepolariseer word, word hulle quasi-isotropies. Die piezoelektriese materiale is isotropies in die vlak gelyk aan die rigting van die stroomspanning wat daarop toegepas word en het ander eienskappe normaal tot die vlak. Die verandering in die materiaal se eienskappe kan gebruik word sodat beide die saamgestelde materiaal en die piezoelektriese laag se oriëntasie aangepas kan word vir verbeterde werkverrigting. Die idee is om die meganiese vermoëns te verbeter van ’n struktuur wat onderwerp word aan ’n elektriese inset of vice versa. In die literatuur van beide Carrera et al.
en Piefort word Eerste Orde Skuifvervormings Teorie (EOST) gebruik in eindige element analises om die strukturele en elektriese gedrag van ’n plaat of dop te karakteriseer. EOST, ook bekend as Mindlin-Reissner teorie, is ’n plaat buigings-teorie wat ’n dwarsvervormingverspreiding aanneem deur die dikte van die plaat. Hierdie teorie word gesien as ’n verbetering op die standaard teorieë soos bv. Kirchhoff of Timoshenko se teorieë. Daar bestaan baie optimeringstegnieke wat geklassifiseer word as ’direkte soek’ of ’hellinggebaseerde’ metodes. Partikel swerm-optimering (PSO) is ’n direkte soekmetode. Dit boots die gedrag van ’n swerm voëls of ’n skool visse in hulle poging om kos te vind, na. PSO se wiskundige stelling karakteriseer ’n aanvanklike stel onbekende partikels binne ’n afgebakende soekgebied wat vergelyk word met ’n stel van die beste plaaslike partikels sowel as ’n enkele beste globale partikel. Die vergelykings word gebruik om die swerm met elke siklus op te dateer. Regressie is ’n metode waarin toetsdata gebruik word om ’n benaderde funksie te konstrueer wat ongeveer voorspel hoe die regte funksie lyk. Die doel van hierdie werk is om die piezoelektriese saamgestelde laag te optimeer en die interaksie van die totale gedrag van die struktuur te verbeter. Uitgebreide modellering word uitgevoer en getoets met eweknie-beoordeelde literatuur om die akkuraatheid en korrektheid te bewys.
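The global-best PSO update described in the English abstract above — an initial random swarm compared against per-particle local bests and a single global best, with the comparison driving the update each cycle — can be sketched as follows; the inertia and acceleration coefficients are illustrative values, not parameters from the thesis:

```python
import numpy as np

def pso_minimise(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO: each particle keeps its personal best,
    the swarm keeps one global best, and velocities are updated from
    both on every cycle."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # initial random swarm
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best position
    w_in, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val              # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()     # update global best
    return g, pbest_val.min()

best, best_val = pso_minimise(lambda p: np.sum(p**2), dim=5)
```

Because only function values are compared, no gradients are required, which is what makes PSO a direct search method in the classification used above.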
20

Djerafi, Rania. "Particle design and coating of pharmaceutical ingredients using supercritical fluid techniques." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0141.

Full text
Abstract:
This thesis work was dedicated to the elaboration of drug formulations by the Supercritical Anti-Solvent (SAS) process. The study was divided into two sections: the production of drug/polymer co-precipitates using the SAS process, and the coating of micron-sized particles using a fluidized bed coupled with the SAS process. Ethyl cellulose was chosen as the biocompatible polymer for preparing both systems. The micronization of ethyl cellulose by the SAS process was performed successfully; submicron particles with a mean size of 300 nm were obtained. Micronized composite formulations of quercetin and rifampicin with ethyl cellulose were produced by co-precipitation at moderate pressure and temperature (10 MPa and 35 °C). A feasibility study of a new coating method using a fluidized bed in a supercritical medium coupled with the SAS process was carried out. This alternative green process makes it possible to coat micron-sized particles with little agglomeration and good coating-film quality. Coating experiments on glass beads were performed under varied operating conditions for two different injection configurations, spraying from the top or from the bottom of the autoclave. Better results were obtained in the top-spray experiments, notably in terms of coating quality. This thesis work brings new and relevant elements for better control of coating processes in supercritical media.
The elaboration of drug formulations using the supercritical anti-solvent (SAS) process was the subject of this thesis. The study was divided into two sections: production of drug/polymer co-precipitates using the SAS process, and coating of micron-sized particles using a fluidized bed coupled with the SAS process. Ethyl cellulose was chosen as the biocompatible polymer for preparing the two systems. The micronization of ethyl cellulose using the GRAS solvent ethyl acetate through a supercritical anti-solvent process was successfully performed; submicron particles with a mean size of 300 nm were obtained. Micronized drug composites of quercetin or rifampicin with ethyl cellulose were produced at moderate pressure and temperature (10 MPa and 35 °C) by co-precipitation using the supercritical anti-solvent process. Depending on the operating conditions, different particle sizes, particle size distributions and product morphologies, but also different crystallinities and drug loadings, were observed. A feasibility study of a novel coating method using fluidization in a supercritical medium coupled with the SAS process demonstrated it to be a good alternative green process for elaborating micron-sized coated particles with little agglomeration and good coating quality. Coating experiments on glass beads were carried out at different conditions for two different injection configurations, top and bottom spray. Better results were achieved in the top-spray experiments in terms of coating quality. This Ph.D. work brings new and relevant elements for a better control of coating processes in supercritical media.
APA, Harvard, Vancouver, ISO, and other styles
21

Burge, R. "RF control of the M9 separator at TRIUMF." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29463.

Full text
Abstract:
High voltage RF systems are used to accelerate proton beams for nuclear physics experiments. The acceleration process shapes the proton beam into a train of narrow pulses with the same period as the RF. This bunched beam structure is used to separate and identify secondary particles that are produced when the proton beam is directed at a "target". An RF controller for a system that separates secondary particles was built. Control of high power RF cavities that operate near resonance is discussed. The emphasis is on developing a control model for resonant systems and building a control system based on hardware and software modules that can be easily configured for different RF systems.
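For background on why controlling a cavity near resonance is delicate (this is textbook resonator behaviour, not code from the thesis): in the high-Q approximation, the steady-state field amplitude and phase follow a Lorentzian in the detuning, with tan ψ = 2Q·Δf/f0. A sketch with made-up cavity numbers:

```python
import math

def cavity_response(f_drive, f0, Q):
    """Normalized steady-state amplitude and phase (rad) of a resonant
    cavity driven off-resonance; standard high-Q Lorentzian approximation."""
    tan_psi = 2.0 * Q * (f_drive - f0) / f0    # detuning parameter
    psi = math.atan(tan_psi)                   # cavity field phase relative to drive
    amp = 1.0 / math.sqrt(1.0 + tan_psi ** 2)  # amplitude relative to on-resonance
    return amp, psi

# illustrative numbers only: a 23 MHz cavity with a loaded Q of 5000
amp_on, _ = cavity_response(23.0e6, 23.0e6, 5000)
half_bw = 23.0e6 / (2 * 5000)                  # detuning where amplitude drops to 1/sqrt(2)
amp_hb, psi_hb = cavity_response(23.0e6 + half_bw, 23.0e6, 5000)
```

A controller keeping the cavity tuned effectively regulates ψ toward zero, which is why amplitude and phase loops are the natural control variables for such systems.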
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
22

Silwal, Shrawani. "A Dynamic Taxi Ride Sharing System Using Particle Swarm Optimization." Miami University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=miami1588198872893409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Liu, Liyu. "Design and fabrication of microfluidic/microelectronic devices from nano particle based composites /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?NSNT%202008%20LIU.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Bhat, Siddharth. "Design and characterization in depleted CMOS technology for particle physics pixel detector." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0267.

Full text
Abstract:
The ATLAS experiment will start operating with the High-Luminosity LHC accelerator (HL-LHC) in 2026 to increase the probability of new discoveries. Monolithic depleted CMOS pixel detector technology has been one of the options considered for the outer layer of the upgraded ATLAS pixel detector and is a high-potential technology for future pixel detectors. In this thesis, several prototypes were developed using different depleted CMOS technologies, for instance LFoundry (LF) 150 nm, TowerJazz (TJ) 180 nm and austriamicrosystems AG (AMS) 180 nm. In a high-energy environment such as the HL-LHC, Single Event Upsets (SEUs) become a concern for reliable circuit operation. Several test chips in the AMS, TowerJazz and LFoundry technologies with different SEU-tolerant structures were prototyped and tested. An alternative powering scheme called serial powering is foreseen for the future Inner Tracker (ITk) detector of the ATLAS experiment. To meet the requirements of the ATLAS experiment for the environment of a pixelated layer in a high-radiation collider environment, new developments with depleted CMOS sensors were made in the Shunt-LDO regulator and the sensor biasing designed in the modified TowerJazz 0.18 µm CMOS imaging technology. In the modified TowerJazz process, two different voltage levels are used for sensor depletion. The bias voltages are generated using a negative charge pump circuit.
The ATLAS experiment will start operating at the High-Luminosity LHC accelerator (HL-LHC) in 2026 to increase the probability of new discoveries. Depleted CMOS monolithic pixel detector technology has been one of the options considered for the outer layer of an upgraded ATLAS pixel detector and is a high-potential technology for future pixel detectors. In this thesis, several prototypes have been developed using different depleted CMOS technologies, for instance LFoundry (LF) 150 nm, TowerJazz (TJ) 180 nm and austriamicrosystems AG (AMS) 180 nm. In a high-energy environment like the HL-LHC, Single Event Upsets (SEUs) become a concern for reliable circuit operation. Several test chips in the AMS, TowerJazz and LFoundry technologies with different SEU-tolerant structures have been prototyped and tested. The SEU-tolerant structures were designed with appropriate electronics simulations using Computer Aided Design (CAD) tools in order to study the sensitivity of injected charge to upsetting a memory state. An alternative powering scheme named serial powering is foreseen for the future Inner Tracker (ITk) detector of the ATLAS experiment. To meet the requirements of the ATLAS experiment for the environment of a pixelated layer in a high-radiation collider environment, new developments with depleted CMOS sensors have been made in the Shunt-LDO regulator and the sensor biasing, which are designed in the modified TowerJazz 180 nm CMOS imaging technology. In the modified TowerJazz process, two different voltage levels are used for the purpose of sensor depletion. The bias voltages are generated by a negative charge pump circuit.
APA, Harvard, Vancouver, ISO, and other styles
25

Fenning, Richard. "Novel FFAG gantry and transport line designs for charged particle therapy." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/6860.

Full text
Abstract:
This thesis describes the design of novel magnetic lattices for the transport line and gantry of a charged particle therapy complex. The designs use non-scaling Fixed Field Alternating Gradient (ns-FFAG) magnets and were made as part of the PAMELA project. The main contributions in this thesis are the near-perfect FFAG dispersion suppression design process and the designs of the transport line and the gantry lattices. The primary challenge when designing an FFAG gantry is that particles with different momenta take up different lateral positions within the magnets. This is called dispersion and causes problems at three points: the entrance to the gantry, which must be rotated without distortion of the beam; at the end of the gantry where reduced dispersion is required for entry to the scanning system; and a third of the way through the gantry, where a switch in curvature of the magnets is required. Due to their non-linear fields, dispersion suppression in conventional FFAGs is never perfect. However, as this thesis shows, a solution can be found through manipulation of the field components, meaning near-perfect dispersion suppression can be achieved using ns-FFAG magnets (although at a cost of irregular optics). The design process for an FFAG dispersion suppressor shown in this thesis is a novel solution to a previously unsolved problem. Other challenges in the gantry lattice design, such as height and the control of the optics, are tackled and a final gantry design presented and discussed. The starting point for the transport line is a straight FFAG lattice design. This is optimised and matched to a 45° bend. Fixed field solutions to the problem of extracting to the treatment room are discussed, but a time variable field solution is decided on for practical and patient safety reasons. A matching scheme into the gantry room is then designed and presented.
APA, Harvard, Vancouver, ISO, and other styles
26

Love, Christina Elena. "Design and Analysis for the DarkSide-10 Two-Phase Argon Time Projection Chamber." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/214821.

Full text
Abstract:
Physics
Ph.D.
Astounding evidence for invisible "dark" matter has been found from galaxy clusters, cosmic and stellar gas motion, gravitational lensing studies, cosmic microwave background analysis, and large scale galaxy surveys. Although all studies indicate that there is a dominant presence of non-luminous matter in the universe (about 22 percent of the total energy density with 5 times more dark matter than baryonic matter), its identity and its "direct" detection (through non-gravitational effects) has not yet been achieved. Dark matter in the form of massive, weakly interacting particles (WIMPs) could be detected through their collisions with target nuclei. This requires detectors to be sensitive to very low-energy (less than 100 keV) nuclear recoils with very low expected rates (a few interactions per year per ton of target). Reducing the background in a direct dark matter detector is the biggest challenge. A detector capable of seeing such low-energy nuclear recoils is difficult to build because of the necessary size and the radio- and chemical- purity. Therefore it is imperative to first construct small-scale prototypes to develop the necessary technology and systems, before attempting to deploy large-scale detectors in underground laboratories. Our collaboration, the DarkSide Collaboration, utilizes argon in two-phase time projection chambers (TPCs). We have designed, built, and commissioned DarkSide-10, a 10 kg prototype detector, and are designing and building DarkSide-50, a 50 kg dark matter detector. The present work is an account of my contribution to these efforts. The two-phase argon TPC technology allows powerful discrimination between dark matter nuclear recoils and background events. Presented here are simulations, designs, and analyses involving the electroluminescence in the gas phase from extracted ionization charge for both DarkSide-10 and DarkSide-50. 
This work involves the design of the HHV systems, including field cages, that are responsible for producing the electric fields that drift, accelerate, and extract ionization electrons. Detecting the ionization electrons is an essential element of the background discrimination and gives event location using position reconstruction. Based on using COMSOL multiphysics software, the TPC electric fields were simulated. For DarkSide-10 the maximum radial displacement a drifting electron would undergo was found to be 0.2 mm and 1 mm for DarkSide-50. Using the electroluminescence signal from an optical Monte Carlo, position reconstruction in these two-phase argon TPCs was studied. Using principal component analysis paired with a multidimensional fit, position reconstruction resolution for DarkSide-10 was found to be less than 0.5 cm and less than 2.5 cm for DarkSide-50 for events occurring near the walls. DarkSide-10 is fully built and has gone through several campaigns of operation and upgrading both at Princeton University and in an underground laboratory (Gran Sasso National Laboratory in Assergi, Italy). Key DarkSide two-phase argon TPC technologies, such as a successful HHV system, have been demonstrated. Specific studies from DarkSide-10 data including analysis of the field homogeneity and the field dependence on the electroluminescence signal are reported here.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Lu. "Design and numerical simulation of the real-time particle charge and size analyser." Thesis, University of South Wales, 2010. https://pure.southwales.ac.uk/en/studentthesis/design-and-numerical-simulation-of-the-realtime-particle-charge-and-size-analyser(fcbc8f66-9758-4c24-abbd-ccc14a28307f).html.

Full text
Abstract:
The electrostatic charge and size distribution of aerosol particles play a very important role in many industrial applications. Due to the complexity and the probabilistic nature of the different charging mechanisms often acting simultaneously, it is difficult to theoretically predict the charge distribution of aerosol particles or even estimate the relative effect of the different mechanisms. Therefore, it is necessary to measure the size and also the bipolar charge distribution on aerosol particles. The main aim of this research project was to design, implement and simulate a signal processing system for a novel, fully functional measurement instrument capable of simultaneously measuring in real time the bipolar charge and size distribution of medical aerosols. The Particle Size and Charge Analyser (PSCA), investigated in this thesis, uses the Phase Doppler Anemometry (PDA) technique. The PDA system was used to track the motion of charged particles in the presence of an electric field. By solving the equation of particle motion in a viscous medium, combined with the simultaneous measurement of its size and velocity, the magnitude as well as the polarity of the particle charge can be obtained. Different signal processing systems in different excitation fields have been designed and implemented. These systems include: a velocity estimation system using spectral analysis in a DC excitation field; a velocity estimation system based on the Phase Locked Loop (PLL) technique working in DC as well as sine-wave excitation fields; a velocity estimation system based on the Quadrature Demodulation (QD) technique under sine-wave excitation; a velocity estimation system using spectral analysis in a square-wave excitation field; and phase shift estimation based on the Hilbert transform and correlation techniques in both sine-wave and square-wave excitation fields.
The performances of these systems were evaluated using Monte Carlo (MC) simulations obtained from the synthesized Doppler burst signals generated from the mathematical models implemented in MATLAB. The synthesized Doppler Burst Signal (DBS) was subsequently corrupted with the added Gaussian noise. Cross validation of the results was performed using hardware signal processing system employing Arbitrary Waveform Generator and also NASA simulator to further confirm the validity of the estimation. It was concluded that the velocity estimation system based on spectral analysis in square-wave excitation field offers the best overall performance in terms of the working range, noise sensitivity and particle capture efficiency. The main reasons for the superiority of the square-wave excitation over the sine-wave excitation system are as follows: Firstly, in the square-wave field particles attain higher velocities and greater amplitudes of displacement, which increases their probability of crossing the measurement volume from various injection points. Secondly, the sine-wave excitation requires that the particle residence time in the measurement volume is at least equal to one period of the excitation, which effectively eliminates shorter and discontinuous burst. Thirdly, the signal processing based on FFT is less demanding in terms of the quality of DBS, which increases the likelihood of the detected particles to be successfully processed.
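The spectral-analysis route the abstract favours reduces, at its core, to estimating the Doppler frequency of a burst from its spectrum and scaling it by the fringe spacing to obtain velocity. The toy sketch below invents the burst model, sample rate, and fringe spacing, and uses a plain O(N²) DFT as a stand-in for the FFT:

```python
import cmath, math

def dft_mag(x):
    """Magnitude spectrum via a direct DFT (stand-in for an FFT)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
            for k in range(n)]

def doppler_velocity(samples, fs, fringe_spacing):
    """Peak spectral bin -> Doppler frequency -> particle velocity."""
    mags = dft_mag(samples)
    half = len(samples) // 2                      # positive frequencies only
    k = max(range(1, half), key=lambda i: mags[i])
    f_doppler = k * fs / len(samples)
    return f_doppler * fringe_spacing             # v = f_D * fringe spacing

# synthesize a Gaussian-windowed Doppler burst (toy numbers)
fs = 1.0e6                                        # 1 MHz sampling
f_true = 50.0e3                                   # 50 kHz Doppler shift
n = 256
burst = [math.exp(-((t - n / 2) / (n / 4)) ** 2)
         * math.cos(2 * math.pi * f_true * t / fs) for t in range(n)]
v = doppler_velocity(burst, fs, fringe_spacing=4.0e-6)  # 4 µm fringes
```

The bin-resolution limit visible here (fs/n per bin) is one reason interpolation or longer records are used in practice; the PSCA's square-wave result that FFT processing tolerates short, noisy bursts is the same observation made on real data.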
APA, Harvard, Vancouver, ISO, and other styles
28

Hiltl, Stephanie Gisela [Verfasser], Alexander [Akademischer Betreuer] Böker, and Andrij Z. [Akademischer Betreuer] Pich. "Wrinkle-assisted particle assembly and design / Stephanie Gisela Hiltl ; Alexander Böker, Andrij Pich." Aachen : Universitätsbibliothek der RWTH Aachen, 2015. http://d-nb.info/112823162X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Patiño, Padial Tania. "Unravelling cell-particle interactions for the design of new micro- and nanoengineered systems." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/321118.

Full text
Abstract:
Micro- and nanomaterials exhibit unique and controllable physico-chemical properties, which have enabled the creation of innovative tools to tackle some of the challenges posed by different branches of science and technology, including modern medicine. The exponential growth of micro- and nanotechnology in recent years has created the need to understand better how micro- and nanomaterials behave in physiological environments, especially at the cellular level, in order to ensure safe and efficient development. In this context, the main objective of this thesis has been to study the interaction between different types of microparticles and cells. To this end, three different studies were carried out. First, the impact of coating microparticles with cationic lipids and polymers on their internalization by non-phagocytic HeLa cells was evaluated. Specifically, it was observed that PEI at a concentration of 0.05 mM offers the best balance between internalization efficiency and cytotoxicity. Second, it was analysed whether the surface modification of the microparticles has a different effect depending on the cell line used. Specifically, tumoral (SKBR-3) and non-tumoral (MCF-10A) mammary epithelial cells showed significant differences, both in microparticle internalization efficiency and in the endocytic mechanisms involved. Finally, it was demonstrated that intracellular chips composed of polysilicon, chromium and gold are good candidates for biological applications, since they show high biocompatibility and are able to perform different functions through their bifunctionalization. 
Taken together, the results of this thesis highlight the importance of determining cytotoxicity, internalization efficiency, intracellular localization and the effect of the cell line when designing new micro- and nanosystems, in order to maximize their efficiency and minimize adverse effects.
Micro- and nanomaterials present unique and controllable physico-chemical properties that have allowed the creation of innovative tools to address some of the greatest challenges in different branches of science and technology, including modern medicine. The exponential growth of micro- and nanotechnology over recent years has generated the need to comprehend the behaviour of micro- and nanomaterials in physiological environments, particularly at the cellular level, to achieve an efficient and safe development. In this context, the main objective of this thesis has been to study the interactions between different types of microparticles and cells. For this purpose, three different studies were conducted. First, the impact of coating microparticles with cationic lipids and polymers on their internalization in the non-phagocytic HeLa cell line was evaluated; the use of PEI at a concentration of 0.05 mM was determined to present the best balance between internalization and cytotoxicity. Second, it was analysed whether the surface modification of the microparticles had a different effect depending on the cell line used; specifically, tumoral (SKBR-3) and non-tumoral (MCF-10A) mammary epithelial cells presented significant differences with respect to internalization efficiency and the endocytic mechanisms involved. Finally, it was demonstrated that intracellular chips composed of polysilicon, chromium and gold are good candidates for biological applications, since they show high biocompatibility and are able to carry out different functions through their bifunctionalization. 
Altogether, the results of this thesis underline the importance of determining the cytotoxicity, internalization efficiency, intracellular localization and cell-line effect of new micro- and nanosystems in order to maximize their efficiency and minimize their adverse effects.
The unique and controllable physico-chemical properties of micro- and nanomaterials have allowed the creation of ground-breaking approaches to overcome some of the major challenges of different branches of technology and science, including modern medicine. The fast advances in micro- and nanotechnology have lead to an increasing demand for understanding the behaviour of micro- and nanomaterials within the physiological environments as well as their interactions at the bio-interface, including the cellular level, for an efficient and safe development. In this scenario, the present Thesis aimed to provide an integrated understanding about microparticle interactions with cells. First, we assessed the impact of cationic lipids and polymer coating of microparticles on their uptake by non-phagocytic (HeLa) cells. We found that non-covalently conjugated PEI at a 0.05 mM concentration offers the best balance between uptake efficiency and cytotoxicity. Second, we observed that surface modified microparticles were differently uptaken by tumoral (SKBR-3) and non-tumoral (MCF-10A) human breast epithelial cells, not only in terms of uptake efficiency but also of their endocytic pathways. Third, we demonstrated that polysilicon-chromium-gold multi-material intracellular chips (MMICCs) are suitable for biological applications due to their biocompatibility and capability of developing multiple functions through their bi-functionalization using orthogonal chemistry. Collectively, our results highlight the importance of assessing cell-particle interactions, in terms of cytotoxicity, uptake, intracellular location and cell type effect of newly designed micro- and nanomaterials, not only with respect to the target cells but also the neighbouring cells, in order to ensure their safety and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
30

Kauzlarić, David. "Particle simulation of MEMS,NEMS components and processes - theory, software design and applications." Tönning Lübeck Marburg Der Andere Verl, 2009. http://d-nb.info/99703100X/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Misra, Satya Deb. "Particle breakage and material transport in the design of high-efficiency comminution device." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hiltl, Stephanie Gisela [Verfasser], Alexander [Akademischer Betreuer] Böker, and Andrij Z. [Akademischer Betreuer] Pich. "Wrinkle-assisted particle assembly and design / Stephanie Gisela Hiltl ; Alexander Böker, Andrij Pich." Aachen : Universitätsbibliothek der RWTH Aachen, 2015. http://d-nb.info/112823162X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Chung-Wei, and 陳中偉. "Variant Particle Swarm Optimizations for Exact Optimal Design." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/81484791434092368594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Yu, Meng-Yi, and 游孟怡. "Particle Swarm Optimization Algorithm Applied to Antenna Design." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/96748032333866295181.

Full text
Abstract:
Master's thesis
National Central University
Department of Communication Engineering, in-service master's program
Academic year 102 (2013-2014)
Electromagnetic simulation software is very important in microwave and antenna engineering, as it can analyse complex microwave circuits and antennas very accurately. In conventional simulation-based design, the designer must repeatedly tune parameters and re-run the simulation to obtain acceptable results, which is time-consuming and labour-intensive. Particle swarm optimization (PSO) can search for a global optimum. Applied to the electromagnetic simulation software IE3D or HFSS, and given the specification requirements of a wireless communication antenna, the designer simply enters the antenna parameters and the optimization finds the best antenna structure.
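Coupling PSO to a simulator such as IE3D or HFSS is typically done through a fitness function that penalizes violations of the antenna specification, for example |S11| above -10 dB in the operating band. The sketch below is hypothetical: the canned S11 curves stand in for simulator output at sampled in-band frequencies, and `fitness_from_s11` is an assumed penalty form, not the thesis's actual cost function.

```python
def fitness_from_s11(s11_db, spec_db=-10.0):
    """Sum of spec violations across the band: 0.0 means the design
    meets the return-loss spec at every sampled frequency."""
    return sum(max(0.0, s - spec_db) for s in s11_db)

# canned |S11| curves (dB) standing in for simulator output
passing = [-15.2, -18.7, -22.1, -17.4, -12.3]
failing = [-8.1, -15.0, -9.5, -11.0, -6.2]

good = fitness_from_s11(passing)   # 0.0: no violation anywhere in the band
bad = fitness_from_s11(failing)    # positive: PSO would steer away from this design
```

In an actual optimization loop, each PSO particle encodes geometry parameters, the simulator is invoked to produce the S11 curve, and this scalar penalty is what the swarm minimizes.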
APA, Harvard, Vancouver, ISO, and other styles
35

Huang, Zhi-Liang, and 黃智樑. "LQ Regulator Design Based on Particle Swarm Optimization." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/95075383637967356282.

Full text
Abstract:
Master's thesis
National Kaohsiung Marine University
Graduate Institute of Marine Engineering
Academic year 94 (2005-2006)
In this paper, a particle swarm optimization (PSO) based linear-quadratic (LQ) state-feedback regulator is investigated. The parameters of the LQ regulator are determined by the PSO method. A practical example of a rotating inverted pendulum is provided to demonstrate the effectiveness of the PSO-based LQ regulator. The performance of the rotating inverted pendulum controlled by the PSO-based LQ regulator is better than that obtained with a traditionally tuned LQ regulator. The goal of this study, stabilizing the system at an unstable operating point, can be achieved using the proposed controller.
APA, Harvard, Vancouver, ISO, and other styles
36

Lin, Hung-Sui, and 林宏穗. "Design of a Novel Orthogonal Particle Swarm Optimization." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/50747646390219093811.

Full text
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Information Engineering
Academic year 92 (2003-2004)
In this thesis, a novel orthogonal particle swarm optimization (OPSO) for solving large-scale parameter optimization problems is proposed. The high performance of OPSO arises mainly from replacing the conventional particle move strategy of particle swarm optimization (PSO) with an intelligent move mechanism (IMM) based on orthogonal experimental design, which enhances the search ability. The IMM can evenly sample from, and systematically analyse, the best experience of an individual particle and of the group, and then efficiently generate a good candidate solution for the next move of the particle. We used twelve benchmark functions to evaluate the performance of OPSO and compared it with PSO and some existing evolutionary algorithms. The experimental results and analysis show that OPSO performs well and can significantly improve on the performance of standard PSO.
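The core of the intelligent move mechanism, sampling combinations of the personal-best and global-best coordinates with an orthogonal array, can be sketched for three dimensions using the L4(2^3) array. This simplified version just returns the best tested combination, whereas the thesis's IMM additionally applies factor analysis over the trials to predict an even better one; the sphere objective and the example vectors are illustrative assumptions.

```python
# L4(2^3) orthogonal array: 4 balanced trials covering 3 two-level factors
OA_L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def sphere(x):
    return sum(v * v for v in x)

def orthogonal_move(pbest, gbest, f=sphere):
    """Per dimension, level 0 takes the coordinate from pbest, level 1 from
    gbest; the candidate for the particle's next move is the best OA trial."""
    trials = [[(gbest if level else pbest)[d] for d, level in enumerate(row)]
              for row in OA_L4]
    return min(trials, key=f)

cand = orthogonal_move([1.0, 0.0, 2.0], [0.0, 3.0, 0.0])
```

Because row (0, 0, 0) reproduces the personal best itself, the candidate can never be worse than the particle's own best experience, while the other rows probe mixed combinations with only 4 evaluations instead of the full 2^3.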
APA, Harvard, Vancouver, ISO, and other styles
37

QUAN, VU HONG, and 武紅軍. "Applications of Particle Swarm Optimization in Controller Design." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/5r66n4.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Department of Electronic Engineering, master's program
Academic year 104 (2015-2016)
Nowadays, particle swarm optimization (PSO), one of the most frequently used metaheuristic algorithms, can be found in many studies and applications. Features that make PSO superior to classical optimization algorithms are its ease of use, computational efficiency, robustness and simplicity of implementation. Recently, applications of PSO in model predictive control (MPC) have been researched extensively. MPC is an advanced control strategy that uses a prediction model to optimize the errors between the predicted states or outputs and their corresponding reference values. The design of an MPC controller demands solving an optimization problem at each control sampling instant, and classical analytic methods struggle to handle nonlinear objective functions subject to nonlinear constraints on the independent variables. This thesis presents the use of PSO to minimize the cost function of MPC. Controller design examples for three systems, an inverted pendulum, an induction motor and a mean arterial pressure system, are described. The control performance of the proposed designs is much better than that of previous methods, which demonstrates the efficiency of this approach.
APA, Harvard, Vancouver, ISO, and other styles
38

Cheng, Hsiang-Wei, and 鄭向為. "Particle Swarm Optimization Containing TANA3 for Engineering Design." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/80529723916423020175.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Mechanical and Electro-Mechanical Engineering, master's program
Academic year 104 (2015-2016)
In the past 20 years, multiple-point search optimization methods have been developed to find possible global solutions; however, their computation is recognized to be time-consuming. Traditional single-point search optimization methods spend less computational time but easily fall into local optima. Modern optimal engineering structural design very often requires finite element (FE) analysis, which results in a large amount of computation time and a loss of efficiency. This thesis explores a selected multiple-point search optimization method combined with an approximation technique that can reduce the FE analysis workload as well as the computation time. The multiple-point search method adopted in this thesis is particle swarm optimization (PSO). The improved fly-back strategy for constraint handling developed in our research group was combined with PSO, resulting in CPSO. An approximation technique called TANA3 is adopted and implemented in C++. CPSO combined with TANA3 is herein named ACPSO. A 10-bar truss, a 25-bar truss and a two-piece platform assembly for a machine tool are used as examples to demonstrate the research work and its performance. The mechanical analysis uses the FE software ANSYS, connected to ACPSO. The results with and without TANA3, and with multiple-point versus single-point search, are compared against each other. The final results show that the presented ACPSO can indeed improve computational efficiency while simultaneously obtaining a global design. It is concluded that ACPSO is suitable for general large-scale engineering structural design optimization.
APA, Harvard, Vancouver, ISO, and other styles
39

Hsieh, Guo-Sheng, and 謝國聖. "Particle filtering with applications in Navigation System Design." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/18859427556075266354.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Communications and Navigation Engineering
96
The purpose of this thesis is to apply particle filters to navigation system design. Most studies of particle filters are based on representing probability densities with particles. A proposal distribution that does not incorporate the latest measurement can cause particle degeneracy. In this thesis, the extended Kalman filter (EKF) is incorporated into the particle filter; one advantage is that the resulting filter can handle heavy-tailed probability distributions. In GPS or GPS/INS navigation, an appropriate probability distribution for the motion model is often hard to find, so many particles are wasted in low-likelihood regions. Generally speaking, particle filters outperform the EKF and UKF in GPS and GPS/INS navigation filter designs. The thesis first examines the roles the EKF can play within the particle filtering configuration, with performance comparisons and discussion of each role. Un-modeled errors in the EKF and UKF can be compensated by injecting artificial noise, i.e., enlarging the process noise covariance; in the particle filter, a corresponding strategy is to decrease the particle density, which allows particles to explore new regions of the state space. The relationship between the number of particles and the process noise covariance is compared, and the performance degradation caused by an insufficient number of particles is shown and discussed, with analyses for various particle counts. In addition, an MCMC move step based on the Metropolis-Hastings (M-H) algorithm is incorporated. Simulation experiments are presented and compared with the baseline filter.
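The EKF-proposal and MCMC-move variants studied in the thesis all start from the plain bootstrap (SIR) particle filter, which uses the motion model itself as the proposal — the very choice that wastes particles in low-likelihood regions. A minimal sketch for a 1-D random-walk model follows; the model, noise variances, and particle count are illustrative assumptions.

```python
import math
import random

def particle_filter(ys, n=500, q=1.0, r=1.0):
    """Bootstrap (SIR) particle filter for x_k = x_{k-1} + w_k, y_k = x_k + v_k,
    with w_k ~ N(0, q) and v_k ~ N(0, r). Returns the posterior-mean estimates."""
    parts = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in ys:
        # propagate through the motion model (the "prior" proposal)
        parts = [x + random.gauss(0.0, math.sqrt(q)) for x in parts]
        # weight each particle by the measurement likelihood
        ws = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        s = sum(ws) or 1.0                      # guard against total underflow
        ws = [w / s for w in ws]
        estimates.append(sum(w * x for w, x in zip(ws, parts)))
        # multinomial resampling to combat degeneracy
        parts = random.choices(parts, weights=ws, k=n)
    return estimates
```

The degeneracy the abstract describes shows up here whenever `y` lands far from the particle cloud: most weights underflow and a handful of particles dominate the resampling step, which is what a measurement-informed (e.g., EKF-based) proposal mitigates.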
APA, Harvard, Vancouver, ISO, and other styles
40

Samel, Mihir A. "Numerical Investigation of Gas-Particle Supersonic Flow." 2011. https://scholarworks.umass.edu/theses/716.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Wu, Zong-Xian, and 吳宗弦. "A FRACTIONAL FACTORIAL DESIGN APPLIED TO SYNTHESIS OF SILVER PARTICLES FOR PARTICLE SIZE AND RECOVERY OPTIMIZATION." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/54863059232798860682.

Full text
Abstract:
Master's thesis
Tatung University
Graduate Institute of Chemical Engineering
91
Product lifetimes are short in a competitive, fast-moving market, so improving and developing products with higher efficiency and lower cost is eagerly pursued. The purpose of experimental design is to identify the significant factors with the fewest experiments. In this work, a fractional factorial design is applied to the synthesis of silver particles to optimize particle size and recovery. An ammoniacal silver solution was used as the precursor in a batch reactor to synthesize silver powder. Six factors influence the synthesis: silver ion concentration (wt%), hydrazine hydrate concentration (wt%), surfactant concentration (wt%), initial pH of the ammoniacal silver solution, reaction temperature (°C), and material flow rate (g/min). These six factors were screened with a fractional factorial design requiring only 8 distinct experiments, each of which was duplicated. The resulting 16 runs were analyzed by ANOVA to assess the significance of the main effects on the silver powder properties. Regression models were then built for each output property, and the fold-over technique was used to separate main effects from interaction effects. The experimental results show that fractional factorial designs identify the significant main effects and interactions on silver powder quality (average size, surface area, tap density, and recovery) with the fewest experiments.

For the mean particle size, the significant main effects are the concentrations of silver ions, hydrazine hydrate, and surfactant, the reaction temperature, and the feeding rate; the significant interactions are between silver ion concentration and surfactant concentration, and between silver ion concentration and reaction temperature. For the recovery rate, the other focus of this work, the significant main effects are the concentrations of silver ions, hydrazine hydrate, and surfactant, and the initial pH of the ammoniacal silver solution; the significant interactions are between silver ion concentration and hydrazine hydrate concentration, between silver ion concentration and surfactant concentration, and between silver ion concentration and reaction temperature.
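A 2^(6-3) screening design of this kind (six two-level factors in eight runs) is generated by aliasing the three extra factors onto interactions of a full 2^3 base design. The generators below (D = AB, E = AC, F = BC) are a common textbook choice and an assumption — the thesis does not state its exact generators.

```python
from itertools import product

def fractional_factorial_2_6_3():
    """Eight-run 2^(6-3) design: full factorial in A, B, C,
    with generators D = AB, E = AC, F = BC (coded levels -1/+1)."""
    return [(a, b, c, a * b, a * c, b * c)
            for a, b, c in product((-1, 1), repeat=3)]

def main_effects(design, responses):
    """Main-effect estimate per column:
    (mean response at +1) minus (mean response at -1)."""
    n = len(design)
    return [2.0 * sum(row[j] * y for row, y in zip(design, responses)) / n
            for j in range(len(design[0]))]
```

Folding over (re-running the design with all signs reversed), as the abstract mentions, is what de-aliases the main effects from the two-factor interactions they are confounded with in the eight-run half.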
APA, Harvard, Vancouver, ISO, and other styles
42

Chao, Min-An, and 趙敏安. "Parallelized Particle Filter Design for CUDA Based Computing Platforms." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/56500381413546257911.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
98
Particle filtering is a sequential Monte Carlo (SMC) method that outperforms traditional Kalman-based filters in a wide range of real-world applications involving nonlinear/non-Gaussian Bayesian estimation, such as target tracking in surveillance systems, recognition in robot vision, positioning, and navigation. Because it demands a great deal of reconfigurability, fast prototyping, and online parallel signal processing, the emerging GPU platform called the compute unified device architecture (CUDA) is perhaps the most appealing implementation target. Since CUDA features a single-instruction multiple-thread (SIMT) execution model and a hierarchical memory model for fine-grained scalability, how to implement an efficient parallelized particle filter on CUDA is an essential but unsolved problem. The objective of this thesis is to provide an efficient implementation method for parallelized particle filters on CUDA-based computing platforms, supported by conceptual and quantitative analysis. Based on an analysis of parallelization degree and data locality, two design techniques are proposed: 1) finite-redraw importance-maximizing (FRIM) prior editing, and 2) localized resampling. These target the bottleneck stage of particle filtering, i.e., the resampling stage, which involves data-dependent global operations. Since CUDA favors fast, data-independent parallel computation over slow global operations, the proposed techniques reduce the time-consuming global operations at the cost of a small amount of additional local computation. The implementation results validate the analysis of parallelization degree and data locality, and verify the trade-off between the reduction in global operations and the local computation overhead.

Using the proposed techniques, particle filters can be implemented on CUDA-based platforms with smaller sample sizes and shorter execution times. On low- and mid-range CUDA-enabled platforms, the NVIDIA GeForce 9400M and GTS 250, the proposed techniques achieve speedups of 5.73x and 5.37x, respectively, compared with direct implementations on the same platforms.
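The FRIM and localized-resampling algorithms themselves are not reproduced here, but the global operation they target is standard: resampling needs a cumulative sum over all particle weights, a data-dependent prefix operation that serializes poorly on SIMT hardware. A plain systematic-resampling step is sketched below (a sequential CPU illustration of the baseline, not the thesis's GPU scheme); weights are assumed normalized.

```python
import bisect
import random

def systematic_resample(weights, u=None):
    """Systematic resampling: one uniform draw u, then n evenly spaced
    pointers into the cumulative weight distribution. The cumulative sum
    is the data-dependent global step that GPU schemes try to localize."""
    n = len(weights)
    u = random.random() if u is None else u
    cumulative, total = [], 0.0
    for w in weights:          # global prefix sum over all weights
        total += w
        cumulative.append(total)
    cumulative[-1] = 1.0       # guard against floating-point rounding
    return [bisect.bisect_left(cumulative, (u + i) / n) for i in range(n)]
```

Localized variants trade this exact global pass for per-block resampling with bounded weight exchange between neighbors, which is the reduction-versus-overhead trade-off the abstract quantifies.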
APA, Harvard, Vancouver, ISO, and other styles
43

Chung, Chen Po, and 陳柏仲. "The Team Character Design Based on Particle Swarm Optimization." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/52766818891642558926.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Computer Science and Information Engineering
94
Computer games are highly interactive and integrate various media, and playing them has become a popular form of entertainment. Realistic images and sound effects give players a rich game experience, but computer graphics technology has reached a bottleneck in recent years, so many game developers have turned their attention to the AI of game characters, hoping that smart and varied AI will make games more interesting. Most computer games today use a rule-based design approach because it is simple and easy to implement; however, once a player finds the weak point of a computer character, nothing stops the player from winning. Allowing the computer character to learn from its mistakes may solve this problem. Some researchers have applied learning algorithms to computer games, but we found that these require heavy computation and that collecting training data can be difficult. We also observed that teamwork is common in today's computer games, so we propose a new approach to the AI of team-play games. For game AI, the computation must be fast and stable. Particle swarm optimization (PSO) is a relatively recent optimization and machine learning technique in artificial intelligence; it is easy to implement and has few parameters to adjust, so we adopt PSO as the learning algorithm for game characters. In standard PSO, however, there is no coordination between particles, so it can only produce a single powerful character. We therefore propose a new learning strategy that places the emphasis on team learning. In summary, this thesis proposes a novel PSO-based method to support behavior design in computer games. Compared with traditional PSO, the proposed method creates more effective teams, without requiring heavy computation or training data, which suits computer game applications.

The new mechanism helps AI developers adjust behavioral parameters, saving the time otherwise spent testing different parameter combinations. In the experiments, the proposed mechanism was embedded in team bots that indeed exhibit more varied behavior and stable learning in the Quake III team-play mode Capture the Flag.
APA, Harvard, Vancouver, ISO, and other styles
44

Yang, Michael Yichung. "Development of master design curves for particle impact dampers." 2003. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-376/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Hsu, Ting-Wei, and 許庭維. "Study of Modular Design Flow-Zinc-Particle Fuel Cell." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/u7wng8.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Department of Vehicle Engineering
103
Building on the framework of previous researchers in AVPSL, this study focuses on mitigating cell weeping and enabling fuel to be added at any time, in order to improve the efficiency of the electrochemical reaction and the flow inside the cell. To make the electrical output of every single cell in a metal fuel cell stack more uniform and stable, we propose a modular flow-zinc-particle fuel cell design. Focusing on the factors that affect cell performance, such as the current collector material, the zinc-particle refilling mechanism, and the cell mechanism, we determine the best single-cell parameters by analyzing I-V polarization curves and AC impedance measurements. Finally, we compare the performance of a traditional plate-type metal fuel cell with the modular flow-zinc-particle fuel cell. Unlike the traditional design, which uses zinc plates, this work uses zinc particles made from 20 g of zinc powder and potassium hydroxide solutions of various concentrations, with a separator made of nonwoven fabric. With a flowing electrolyte and independent fuel-refilling channels, the cell can be refueled at any time, and the water, carbonate, and zinc oxide generated by the reaction can flow out of the cell. The results show that with the electrolyte flow rate fixed at 150 ml/min, the OH- concentration inside the cell is replenished more effectively. Copper plates and copper meshes were chosen as current collector materials and plated with gold to prevent reactions between the zinc and the electrolyte. Under high currents, copper plates collect current better than nickel sheets, and their use avoids the large quantities of H2 bubbles produced by the chemical reaction between nickel and zinc particles.

Currently, the maximum power of a single cell is 13.3 W; the energy density is about 528 Wh/kg; the current density is 684 mA/cm2; the power density is 553.5 mW/cm2; and the corresponding voltage is 0.8 V.
APA, Harvard, Vancouver, ISO, and other styles
46

Chien, Chung-Wei, and 簡仲唯. "Particle Swarm Technique To Optimal Design Of Structural Member." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/51580777350261012809.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Civil Engineering
100
Trial and error is the conventional approach to designing structural members. When a prescribed deformation under a given load is required, this approach may be inefficient, especially in an economic sense. This thesis therefore proposes using a particle swarm process to find appropriate solutions when selecting structural members. Examples show that the suggestion is valid and convenient for design practice.
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Ching-Ya, and 黃靜雅. "Design of Digital Filters Based on Particle Swarm Optimizations." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/35569848762909019671.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Department of Electronic Engineering
97
This thesis designs digital filters via particle swarm optimization (PSO). By emulating the collective behavior of living creatures, the algorithm avoids local optima and converges quickly while optimizing the filter's stopband attenuation over the search domain. Both a low-pass filter and a high-pass filter are designed with PSO combined with the frequency sampling method (FSM). Simulation results show that the proposed method outperforms a genetic algorithm (GA).
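The frequency sampling method referenced above fixes the filter's DFT samples to a desired magnitude with linear phase and inverse-transforms them into impulse-response coefficients; in hybrid designs of this kind, PSO typically tunes the transition-band samples to maximize stopband attenuation. Below is a sketch of the basic frequency-sampling step for odd filter length N (a textbook construction in pure Python, not the thesis's code).

```python
import cmath
import math

def fir_frequency_sampling(magnitude, N):
    """Linear-phase FIR design by frequency sampling (N odd).
    `magnitude` holds the desired |H| at the N DFT frequencies and must
    satisfy magnitude[k] == magnitude[N - k] so the result is real."""
    h = []
    for n in range(N):
        acc = 0j
        for k in range(N):
            # desired sample: magnitude with linear phase -pi*k*(N-1)/N
            H_k = magnitude[k] * cmath.exp(-1j * math.pi * k * (N - 1) / N)
            acc += H_k * cmath.exp(2j * math.pi * k * n / N)   # inverse DFT
        h.append((acc / N).real)
    return h
```

A PSO wrapper would treat one or two transition-band entries of `magnitude` as the particle's position and score each candidate by the worst-case stopband gain of the resulting `h`.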
APA, Harvard, Vancouver, ISO, and other styles
48

Shaw, Zheng-Chang, and 蕭正昌. "Design of Nanomagnetic Particle Hyperthermia System for Tumor Applications." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/93412042127338837069.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (master's and doctoral program)
94
Hyperthermia with magnetic fluids may act as a "doctor's hand" in biomedical science. Medical and engineering research has pursued this goal, in which synthetic iron oxide (Fe3O4) particles raise temperatures above 42 °C. For the magnetic fluid heating system, we improved a pseudo full-bridge DC/AC converter and verified the circuits with the simulation tool IsSpice. We built a new radiation coil, constructed several circuit models with an impedance analyzer, and verified the models with Ansoft Designer v1.1. To stabilize the system, we applied electromagnetic interference (EMI) suppression techniques to reduce noise and measured the noise level with a spectrum analyzer. From estimates of system efficiency, we quantified and analyzed the efficiency of the circuit and the radiation coil. To optimize the heating frequency for 10 nm synthetic iron oxide particles, the magnetic fluid heating system was designed to operate at 240 kHz based on the available physical theory. The experimental results provide a new perspective on heating efficiency: by comparing the specific absorption rate (SAR) with the source power, we can locate the optimum operating point of the system. Once the optimum operating point can be found for different samples, a wide variety of hyperthermia experiments in the biomedical field become feasible.
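The specific absorption rate mentioned above is conventionally estimated from the initial slope of the sample's heating curve, SAR = c_p (dT/dt) / m_Fe, where c_p is the fluid's specific heat and m_Fe the magnetic mass fraction. This formula is standard calorimetric practice, not the thesis's exact procedure; a sketch with a least-squares slope:

```python
def sar_from_heating_curve(times_s, temps_c, c_p, iron_mass_fraction):
    """SAR [W/g] from the initial heating slope:
    SAR = c_p * (dT/dt) / m_Fe, with c_p in J/(g*K) and times in seconds.
    The slope dT/dt is taken from an ordinary least-squares line fit."""
    n = len(times_s)
    t_mean = sum(times_s) / n
    T_mean = sum(temps_c) / n
    slope = (sum((t - t_mean) * (T - T_mean)
                 for t, T in zip(times_s, temps_c))
             / sum((t - t_mean) ** 2 for t in times_s))
    return c_p * slope / iron_mass_fraction
```

Comparing this SAR against the electrical input power at each drive frequency is one way to locate the optimum operating point the abstract describes.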
APA, Harvard, Vancouver, ISO, and other styles
49

Antunes, Christine Luz. "Adsorbent Particle Design for Application in Gas Adsorption Processes." Master's thesis, 2016. http://hdl.handle.net/10362/35809.

Full text
Abstract:
Metal-organic frameworks (MOFs) are novel materials showing great potential for a range of applications, in particular gas adsorption-based separation processes. MOFs have attracted growing scientific interest due to their framework versatility and their higher porosity and surface areas compared with traditional adsorbents. Since these materials are relatively new, they are still mostly studied in their primary powder form. To further assess the feasibility of applying MOFs in gas adsorption processes such as pressure swing adsorption (PSA), they must be shaped into bodies such as pellets or extrudates. One particular MOF, aluminum terephthalate (MIL-53(Al)), has a very high surface area and a large capacity to adsorb gases such as carbon dioxide (CO2); these characteristics motivate further study of this material in gas adsorption processes. The objective of this work is therefore to shape MIL-53(Al) with different techniques and study the characteristics of the formulated particles. MIL-53(Al) was shaped using two methods: binderless compression and extrusion with a binder. The binderless method produced two samples, one compressed at 1 ton-force and another at 0.5 ton-force. Polyvinyl alcohol (PVA) was used as the binder to shape four samples with binder contents between 2% and 15%. The shaped materials were characterized with several mechanical, structural, and physico-chemical techniques. Furthermore, CO2 adsorption equilibrium measurements were performed to determine the adsorption capacity of shaped MIL-53(Al) and compare it with the primary powder form. The shaped materials with the best characteristics for use in CO2 gas adsorption processes were the binderless sample compressed at 0.5 ton-force and the sample with 5% PVA binder.

Overall, both methods show good potential for shaping MIL-53(Al) and may be a good fit for future scale-up studies.
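When comparing the CO2 uptake of shaped pellets against the parent powder, single-component adsorption equilibria of this kind are often summarized with a Langmuir isotherm. The thesis does not state which model it fit, so the helper below is purely illustrative.

```python
def langmuir_loading(pressure, q_max, b):
    """Langmuir isotherm: q(p) = q_max * b * p / (1 + b * p),
    where q_max is the saturation capacity and b the affinity constant."""
    return q_max * b * pressure / (1.0 + b * pressure)

def half_loading_pressure(b):
    """Pressure at which the Langmuir loading reaches q_max / 2."""
    return 1.0 / b
```

Fitting q_max separately for powder and pellets gives a simple one-number measure of the capacity lost to shaping (e.g., to binder dilution or pore blocking).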
APA, Harvard, Vancouver, ISO, and other styles
50

徐育良. "The Character Design of Computer Game Using Particle Swarm Optimization." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/51423820034717352633.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Computer Science and Information Engineering
91
Computer games are highly interactive and integrate various media, and playing them has become a popular form of entertainment. This thesis covers both motion development and behavior design for computer game characters. In our gaming experience, today's games commonly have two faults: the same character always performs the same motion, and characters have no learning ability. Optimization technology can address both problems. Particle swarm optimization (PSO) is a relatively recent optimization and machine learning technique in artificial intelligence; compared with the genetic algorithm, PSO is easy to implement, has few parameters to adjust, and tends to converge to the best solution quickly in most cases. In this thesis, we propose using PSO to control the frame nodes of a character to generate varied character motion, and using PSO to adjust the weights that affect a character's behavior so that the character can learn. For motion design, we use skeletal animation and skin-mesh techniques to load a set of character models and animations from the animator, then apply PSO with interactive evolution to optimize the parameters of the original motion. For behavior design, we use fuzzy-state machines with machine learning as the behavioral mechanism, applying PSO with several fitness functions to dynamically adjust behavior weights so that characters gain learning ability, and apply this to bots in Quake III. In summary, this thesis proposes a novel method that uses particle swarm optimization with several related techniques to support motion and behavior design in computer games. We develop a motion production and optimization system that lets game animators generate varied character motion faster and more easily; the new mechanism also helps AI developers design character behavior rules more easily and gives game characters in-game learning ability.
APA, Harvard, Vancouver, ISO, and other styles