Dissertations / Theses on the topic 'General Purpose Simulation System'




Consult the top 50 dissertations / theses for your research on the topic 'General Purpose Simulation System.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Bishop, John Leslie. "General purpose visual simulation system." Thesis, Virginia Tech, 1989. http://hdl.handle.net/10919/44699.

Full text
Abstract:
The purpose of the research described herein is to prototype a software system that aids a simulationist in developing a general purpose discrete event simulation model. A literature review has shown the need for an integrated visual simulation system that provides for the graphical definition and interactive specification of the model while maintaining application independence. The General Purpose Visual Simulation System (GPVSS) prototyped in this research meets this need by assisting a simulationist to: (1) graphically design the model and its visualization, (2) interactively specify the model's logic, and (3) automatically generate the executable version of the model, while maintaining domain independence. GPVSS is prototyped on a Sun 3/160C computer workstation using the SunView graphical interface. It consists of over 11,000 lines of documented code. GPVSS has been successfully tested in three different case studies that are described in this work.
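The event-scheduling core underlying any discrete event model of this kind can be sketched as a time-ordered event queue; the class and handler names below are illustrative assumptions of this sketch, not taken from GPVSS.

```python
import heapq

class DiscreteEventSimulator:
    """Minimal event-scheduling engine: a time-ordered event queue and
    a loop that pops the earliest event and runs its handler."""

    def __init__(self):
        self._queue = []   # heap of (time, seq, handler)
        self._seq = 0      # tie-breaker so equal-time events stay FIFO
        self.now = 0.0
        self.log = []

    def schedule(self, delay, handler):
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler = heapq.heappop(self._queue)
            handler(self)

def arrival(sim):
    sim.log.append(("arrival", sim.now))
    sim.schedule(2.0, departure)   # an arrival triggers a later departure

def departure(sim):
    sim.log.append(("departure", sim.now))

sim = DiscreteEventSimulator()
sim.schedule(1.0, arrival)
sim.run(until=10.0)
```

A system such as GPVSS would generate the handler bodies from the graphically specified model logic; the queue-and-clock loop itself stays domain independent.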
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
2

Trimeloni, Thomas. "Accelerating Finite State Projection through General Purpose Graphics Processing." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/175.

Full text
Abstract:
The finite state projection algorithm provides modelers a new way of directly solving the chemical master equation. The algorithm utilizes the matrix exponential function, and so the algorithm’s performance suffers when it is applied to large problems. Other work has been done to reduce the size of the exponentiation through mathematical simplifications, but efficiently exponentiating a large matrix has not been explored. This work explores implementing the finite state projection algorithm on several different high-performance computing platforms as a means of efficiently calculating the matrix exponential function for large systems. This work finds that general purpose graphics processing can accelerate the finite state projection algorithm by several orders of magnitude. Specific biological models and modeling techniques are discussed as a demonstration of the algorithm implemented on a general purpose graphics processor. The results of this work show that general purpose graphics processing will be a key factor in modeling more complex biological systems.
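The core computation described here, propagating a truncated chemical master equation through the matrix exponential, can be sketched for a toy birth-death model; the rates, truncation size, and time point are illustrative assumptions, not the biological models of the thesis.

```python
import numpy as np
from scipy.linalg import expm

# Truncated state space {0, 1, ..., N} for a birth-death process:
# production at rate k_prod, degradation at rate k_deg * n.
N, k_prod, k_deg = 30, 2.0, 0.5
A = np.zeros((N + 1, N + 1))       # CME generator, columns sum to zero
for n in range(N + 1):
    if n < N:                      # birth n -> n+1
        A[n + 1, n] += k_prod
        A[n, n] -= k_prod
    if n > 0:                      # death n -> n-1
        A[n - 1, n] += k_deg * n
        A[n, n] -= k_deg * n

p0 = np.zeros(N + 1)
p0[0] = 1.0                        # start with zero molecules
p_t = expm(A * 5.0) @ p0           # probability distribution at t = 5
```

For a large truncated state space this dense `expm` call is exactly the bottleneck the thesis addresses; the GPGPU implementations parallelize the underlying matrix operations.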
3

Євсєєв, В. В., Н. П. Демська, and Ю. М. Олександров. "Моделювання виробничої лінії SMT-монтажу в кібер-фізичних виробничих системах." Thesis, Кременчуцький національний університет імені Михайла Остроградського, 2022. https://openarchive.nure.ua/handle/document/20422.

Full text
Abstract:
Industry 4.0 defines the vision and operating principles of Smart Manufacturing. Such an enterprise uses a modular structure; cyber-physical systems control physical and informational processes, creating a kind of virtual copy of the real world in which decentralized decisions are made. Through the Internet of Things (IoT), cyber-physical systems connect and interact with one another and with people in real time.
4

Lin, Jian. "General-purpose user-defined modelling system (GPMS)." Thesis, Lancaster University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335145.

Full text
5

Jensen, Justin Alain. "A General-Purpose Animation System for 4D." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6968.

Full text
Abstract:
Computer animation has been limited almost exclusively to 2D and 3D. The tools for 3D computer animation have been largely in place for decades and are well-understood. Existing tools for visualizing 4D geometry include minimal animation features. Few tools have been designed specifically for animation of higher-dimensional objects, phenomena, or spaces. None have been designed to be familiar to 3D animators. A general-purpose 4D animation system can be expected to facilitate more widespread understanding of 4D geometry and space, can become the basis for creating unique 3D visual effects, and may offer new insight into 3D animation concepts. We have developed a software package that facilitates general-purpose animation in four spatial dimensions. Standard features from popular 3D animation software have been included and adapted, where appropriate. Many adaptations are trivial; some have required novel solutions. Several features that are possible only in four or more dimensions have been included. The graphical user interface has been designed to be familiar to experienced 3D animators. Keyframe animation is provided by using a set of curves that defines movement in each dimension or rotation plane. An interactive viewport offers multiple visualization methods including slicing and projection. The viewport allows for both manipulation of 4D objects and navigation through virtual 4D space.
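Two operations such a system needs, rotation in a coordinate plane (4D rotations act in planes rather than about axes) and perspective projection from 4D to 3D, can be sketched as follows; the viewpoint convention is an assumption of this sketch, not the thesis's interface.

```python
import numpy as np

def rotation_4d(i, j, theta):
    """Rotation in the coordinate plane spanned by axes i and j.
    In 4D, rotations happen in planes (e.g. the xw-plane)."""
    R = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i], R[j, j] = c, c
    R[i, j], R[j, i] = -s, s
    return R

def project_to_3d(p4, viewer_w=3.0):
    """Perspective projection from 4D to 3D: scale x, y, z by the
    distance from a viewpoint sitting on the w-axis."""
    scale = viewer_w / (viewer_w - p4[3])
    return p4[:3] * scale

# A vertex of a tesseract, rotated a quarter turn in the xw-plane,
# then projected into 3D for display.
v = np.array([1.0, 1.0, 1.0, 1.0])
v_rot = rotation_4d(0, 3, np.pi / 2) @ v
v3 = project_to_3d(v_rot)
```

Slicing, the other visualization method mentioned, would instead intersect the 4D geometry with a hyperplane w = const and render the resulting 3D cross-section.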
6

Graas, Estelle Laure. "Exploration of alternatives to general-purpose computers in neural simulation." Thesis, Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/14815.

Full text
7

Childs, S. O. "Disk quality of service in a general-purpose operating system." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597603.

Full text
Abstract:
Users of general-purpose operating systems run a range of multimedia, productivity, and system housekeeping applications. Many of these applications are disk-bound, or have significant disk usage requirements. CPU scheduling is insufficient to ensure reliable performance for such applications, as it cannot control contention for the disk. User-controllable disk scheduling is necessary; the disk scheduler should respect Quality of Service (QoS) specifications defined by users when scheduling disk requests. Disks have a number of distinctive features that influence scheduler design: context-switches that involve seek operations are expensive, disk operations are non-preemptible, and the cost of data transfer varies according to the amount of seek overhead. Any new scheduler must recognise these factors if it is to provide acceptable performance. We present a new disk scheduler for Linux-SRT, a version of Linux with support for CPU QoS. This disk scheduler provides multiple scheduling classes: periodic allocation, static priority, best-effort, and idle. The scheduler makes disk QoS available as a low-level system service, independent of the particular file system used. Applications need not be modified to benefit from disk QoS. The structure of the Linux disk subsystem causes requests from different clients to be executed in an interleaved fashion. This results in many expensive seek operations. We implement laxity, a technique for batching together multiple requests from a single client. This feature greatly improves the performance of applications performing synchronous I/O, and provides better isolation between applications. We perform experiments to test the effectiveness of our research system in typical scenarios.
The results demonstrate that the system can be used to protect time-critical applications from the effects of contention, to regulate low-importance disk-bound tasks, and to limit the disk utilisation of particular processes (allowing resource partitioning). We use the accounting features of our disk scheduler to measure the disk resource usage of typical desktop applications. Based on these measurements, we classify applications and suggest suitable scheduling policies. We also present techniques for determining appropriate parameters for these policies. Scheduling features are of little use unless users can employ them effectively. We extend Linux-SRT's QoS architecture to accommodate control of disk scheduling; the resulting system provides a coherent interface for managing QoS across multiple devices. The disk scheduler exports status information to user space; we construct tools for monitoring and controlling processes' disk utilisation.
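The batching idea behind laxity, serving several requests from one client before switching and so avoiding a seek on every switch, can be illustrated with a toy round-robin scheduler; the class and policy below are a simplification of the idea, not Linux-SRT's actual implementation.

```python
from collections import deque

class BatchingDiskScheduler:
    """Toy scheduler illustrating request batching: instead of
    interleaving clients request-by-request (one seek per switch),
    serve up to `batch` consecutive requests from the same client."""

    def __init__(self, batch=4):
        self.batch = batch
        self.queues = {}           # client id -> FIFO of its requests
        self.order = deque()       # round-robin order over clients

    def submit(self, client, request):
        if client not in self.queues:
            self.queues[client] = deque()
            self.order.append(client)
        self.queues[client].append(request)

    def dispatch(self):
        """Yield requests, switching client only after a full batch."""
        while self.order:
            client = self.order.popleft()
            q = self.queues[client]
            for _ in range(min(self.batch, len(q))):
                yield client, q.popleft()
            if q:                  # client still has work: requeue it
                self.order.append(client)
            else:
                del self.queues[client]

sched = BatchingDiskScheduler(batch=2)
for blk in range(3):
    sched.submit("A", blk)
    sched.submit("B", blk)
order = [c for c, _ in sched.dispatch()]
```

With `batch=1` the same workload would alternate A and B on every request, which is exactly the seek-heavy interleaving the laxity mechanism avoids.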
8

Shah, Akash G. "The Morpheus Visualization System : a general-purpose RDF results browser." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53183.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 69-72).
As the amount of information available on the deep web grows, finding ways to make this information accessible is growing increasingly problematic. As some have estimated that the content in the deep web is several orders of magnitude greater than that in the shallow web, there is a clear need for an effective tool to search the deep web. While many have attempted a solution, none have been successful in effectively addressing the problem of deep web searching. The Morpheus project presents a unique approach to the problem as it integrates the deep web with the shallow web while preserving the semantics of the deep web sites it accesses. At the heart of Morpheus is its visualization system, which allows users to access deep web information. The visualization system makes use of clustering algorithms, information visualization techniques, and the semantics of the deep web sites stored by Morpheus to present deep web results to users in an effective manner. User testing was also conducted to identify problematic areas of the system during development as well as to evaluate the usability of the system's design. Results indicate that users find the Morpheus visualization system a highly usable and learnable interface for searching the deep web for results as well as for processing those results.
by Akash G. Shah.
M.Eng.
9

Carden, Steven James. "A mathematical framework for a general purpose constraint management system." Thesis, University of Leeds, 1998. http://etheses.whiterose.ac.uk/1272/.

Full text
Abstract:
The use of constraints in engineering for designing complex models is very popular. Current constraint solvers are divided into two broad classes: general and domain specific. Those that are general can handle very general constraint problems but are typically slow; while those that are domain specific can handle only a specific type of problem but are typically fast. For example, numerical algorithms are slow but general, whilst local propagation techniques are fast but limited to simple problems. It is generally acknowledged that there is a close coupling between engineering constraints and geometric constraints in the design process and so the solution of constraint problems consisting of engineering and geometric constraints is an important research issue. Some authors attempt to overcome the expressive limitations of domain specific solvers by using hybrid systems which try to find a balance between the speed of domain specific solvers and the generality of general solvers. Previous research at the University of Leeds has led to the development of a number of domain specific solvers that are capable of solving geometric and engineering constraint problems separately. In particular, the Leeds solvers are incremental and can find solutions very quickly when a new constraint is added. This thesis investigates the use of a hybrid of the various Leeds solvers with an aim to interactively solving constraint problems in engineering design. This hybrid would have the speed advantages of the domain specific solvers and the expressiveness of a more general solver. In order for the hybrid to be constructed, commonalities of existing engineering constraint solvers must be identified. A characterisation of existing constraint solvers leads to the identification of a number of issues that need to be addressed before the hybrid can be built.
In order to examine these issues, a framework for the constraint satisfaction process is presented that allows abstractions of constraint definition, constraint representation and constraint satisfaction. Using the constraint satisfaction framework, it is possible to study the quality of solution of constraint solvers. This leads to the identification of important problems in current constraint solvers. The constraint process framework leads to a study of the use of various paradigms of collaboration within the hybrid, such as sequential, parallel and concurrent. The study of the quality of solution allows concrete statements to be made about the hybrid collaborations. A new incremental constraint solver is presented that uses the hybrid collaboration paradigms and provides a first step towards a powerful engineering constraint solver.
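Local propagation, the "fast but limited" domain-specific technique mentioned above, can be sketched for arithmetic constraints; the class and variable names are hypothetical, and the technique only succeeds when each constraint can determine one unknown from the values already known.

```python
class Sum:
    """Constraint a + b = c: fires when any two of the three are known."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

    def propagate(self, values):
        a, b, c = (values.get(v) for v in (self.a, self.b, self.c))
        if a is not None and b is not None and c is None:
            values[self.c] = a + b; return True
        if a is not None and c is not None and b is None:
            values[self.b] = c - a; return True
        if b is not None and c is not None and a is None:
            values[self.a] = c - b; return True
        return False

def solve(constraints, values):
    """Local propagation: repeatedly fire any constraint that can
    determine a new value, until a fixed point is reached."""
    changed = True
    while changed:
        changed = any(con.propagate(values) for con in constraints)
    return values

# a + b = c and c + d = e; given a, b, d, propagation fills in c and e.
vals = solve([Sum("a", "b", "c"), Sum("c", "d", "e")],
             {"a": 1, "b": 2, "d": 4})
```

A system like x + y = z, z + y = w with only x and w known would stall under this scheme (no constraint ever has two known values), which is why general numerical solvers remain necessary for harder problems.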
10

Witt, Micah. "Proton Computed Tomography: Matrix Data Generation Through General Purpose Graphics Processing Unit Reconstruction." CSUSB ScholarWorks, 2014. https://scholarworks.lib.csusb.edu/etd/2.

Full text
Abstract:
Proton computed tomography (pCT) is an image modality that will improve treatment planning for patients receiving proton radiation therapy compared with the current techniques, which are based on X-ray CT. Images are reconstructed in pCT by solving a large and sparse system of linear equations. The size of the system necessitates matrix partitioning and parallel reconstruction algorithms implemented across a cluster computing architecture. The prototypical algorithm to solve the pCT system is the algebraic reconstruction technique (ART), which has been modified into parallel versions called block-iterative-projection (BIP) methods and string-averaging-projection (SAP) methods. General purpose graphics processing units (GPGPUs) have hundreds of stream processors for massively parallel calculations. A GPGPU cluster is a set of nodes, with each node containing a set of GPGPUs. This thesis describes a proton simulator that was developed to generate realistic pCT data sets. Simulated data sets were used to compare the performance of a BIP implementation against a SAP implementation on a single GPGPU with the data stored in a sparse matrix structure called the compressed sparse row (CSR) format. Both BIP and SAP algorithms allow for parallel computation by creating row partitions of the pCT linear system. The difference between these two general classes of algorithms is that BIP permits parallel computations within the row partitions yet sequential computations between the row partitions, whereas SAP permits parallel computations between the row partitions yet sequential computations within the row partitions. This thesis also introduces a general partitioning scheme to be applied to a GPGPU cluster to achieve a pure parallel ART algorithm while providing a framework for column partitioning of the pCT system, as well as showing sparsity patterns that can be revealed via specified orderings of the equations within the matrix.
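A single ART (Kaczmarz) sweep over a CSR-stored system, the sequential building block that BIP and SAP methods parallelize across row partitions, can be sketched as follows; the toy system is only shape-compatible, as real pCT systems are vastly larger.

```python
import numpy as np
from scipy.sparse import csr_matrix

def art_sweep(A, b, x, relax=1.0):
    """One ART (Kaczmarz) sweep: project x onto each row's hyperplane
    a_i . x = b_i in turn. Rows are stored in CSR format, so each
    update touches only that row's nonzero columns."""
    for i in range(A.shape[0]):
        lo, hi = A.indptr[i], A.indptr[i + 1]
        cols, vals = A.indices[lo:hi], A.data[lo:hi]
        norm2 = vals @ vals
        if norm2 == 0:
            continue
        resid = b[i] - vals @ x[cols]
        x[cols] += relax * (resid / norm2) * vals
    return x

# Tiny consistent system with solution x = [1, 3].
A = csr_matrix(np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]]))
b = np.array([2.0, 3.0, 4.0])
x = np.zeros(2)
for _ in range(50):
    x = art_sweep(A, b, x)
```

BIP-style methods would process a block of these row projections in parallel and then combine them, whereas SAP-style methods run separate row "strings" in parallel and average the results.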
11

Back, Adam. "Parallelization of general purpose programs using optimistic techniques from parallel discrete event simulation." Thesis, University of Exeter, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307171.

Full text
12

Blake, Carl David. "A REAL-TIME MULTI-TASKING OPERATING SYSTEM FOR GENERAL PURPOSE APPLICATIONS." Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/275400.

Full text
13

Rozier, David. "Qualitative modelling and simulation of physical systems for a diagnostic purpose." Thesis, De Montfort University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391563.

Full text
14

Bewaji-Adedeji, Eniola Olsimbo. "The development of a general-purpose dynamic simulator for food process design and simulation." Thesis, London South Bank University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245070.

Full text
15

Grgurich, Aaron James. "DESIGN AND SIMULATION OF A GENERAL PURPOSE, CLASS-A AMPLIFIER FOR HIGH TEMPERATURE APPLICATIONS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1588049969718819.

Full text
16

Abou-Rabia, Osman. "Multiprocessing in continuous system simulation." Thesis, University of Ottawa (Canada), 1986. http://hdl.handle.net/10393/4894.

Full text
17

Hershberger, John. "Exchanges for Complex Commodities: Toward a General-Purpose System for On-Line Trading." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000127.

Full text
18

Li, Chungwuu. "A general robot path verification simulation system: GRPVSS." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45813.

Full text
Abstract:
Collision-detection is a critical task for off-line robot path planning. A general robot path verification simulation system, GRPVSS, applicable to all industrial robots with open-looped links, is created to verify the intended robot path. The manipulator and obstacles are modeled by convex polyhedra to reduce the computation burden required by the collision detection algorithm. As a kinematic simulator, GRPVSS employs motion-time profiles, or ideal trapezoid profiles, which describe the position-vs-time relation of an individual joint, to generate the robot working trajectory. This approach makes the to-be-verified working path closer to the real one. Both point-to-point (PTP) and continuous path (CP) operations can be simulated by GRPVSS. Collision detection is conducted by performing geometric interference detection between the static configurations of the expanded moving robot and the static obstacles at each simulation step. In this case, the resolution of a simulation is critical to path verification. Simulations with low resolution take the risk of undetected collisions, while simulations with high resolution consume too much computing time. GRPVSS computes and employs the lowest resolution level that yields 100% path verification for the specified tolerances of manipulator dimensions. The tolerance value is specified by the user but should not be smaller than the positioning accuracy of the simulated industrial robot. The links of the manipulator are expanded by the amount of tolerance. GRPVSS is a graphic simulator. A systematic control supervisor is constructed for the simulator to request input and to carry out all functions interactively with users. The robot motion of a simulated path is animated on a 3-D graphical screen. All collision configurations and related information of the simulated path are stored in a file and shown on the screen. The graphical display works on graPHIGS, one of the 3-D graphical software packages published by IBM.
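The stepped static interference testing, and its sensitivity to simulation resolution, can be illustrated with axis-aligned boxes in place of general convex polyhedra; this substitution is an assumption of the sketch, since GRPVSS uses expanded convex polyhedra.

```python
def aabb_overlap(a, b):
    """Axis-aligned boxes as ((xmin, ymin, zmin), (xmax, ymax, zmax));
    they intersect iff their intervals overlap on every axis."""
    return all(a[0][k] <= b[1][k] and b[0][k] <= a[1][k] for k in range(3))

def path_collides(half_extents, path, obstacle, steps):
    """Sweep a box along a straight path, performing a static
    interference test at each sampled configuration."""
    (x0, y0, z0), (x1, y1, z1) = path
    for i in range(steps + 1):
        t = i / steps
        c = (x0 + t * (x1 - x0), y0 + t * (y1 - y0), z0 + t * (z1 - z0))
        moving = (tuple(c[k] - half_extents[k] for k in range(3)),
                  tuple(c[k] + half_extents[k] for k in range(3)))
        if aabb_overlap(moving, obstacle):
            return True
    return False

# A small box passing straight through a thin obstacle is detected
# only if the sampling resolution is fine enough.
obstacle = ((4.9, -1.0, -1.0), (5.1, 1.0, 1.0))
hit = path_collides((0.05, 0.05, 0.05), ((0, 0, 0), (10, 0, 0)),
                    obstacle, steps=200)
miss = path_collides((0.05, 0.05, 0.05), ((0, 0, 0), (10, 0, 0)),
                     obstacle, steps=3)
```

The coarse sweep steps straight over the obstacle, which is exactly the undetected-collision risk that motivates GRPVSS computing the lowest resolution guaranteeing full path verification.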
Master of Science
19

Väyrynen, Mikael. "Fault-Tolerant Average Execution Time Optimization for General-Purpose Multi-Processor System-On-Chips." Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-17705.

Full text
Abstract:

Owing to the development of semiconductor technology, fault tolerance is important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, instead of guaranteeing that deadlines are always met, for general-purpose systems it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) no-error probability, we define mathematical formulas for AET using voting (active replication), rollback-recovery with checkpointing (RRC), and a combination of these (CRV), with bus communication overhead included. And, for a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize the AET including bus communication overhead when: (1) selecting the number of checkpoints when using RRC or a combination where RRC is included, (2) finding the number of processors and the job-to-processor assignment when using voting or a combination where voting is used, and (3) selecting the fault-tolerance scheme (voting, RRC or CRV) for each job. Experiments demonstrate significant savings in AET.
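As a rough illustration of why an optimal checkpoint count exists for RRC: the model below, with a per-time-unit error rate and a fixed checkpoint overhead, is an assumption of this sketch and not the thesis's exact AET formulas (which also include bus communication overhead).

```python
def aet_rrc(T, c, p, n):
    """Expected execution time under roll-back recovery with n
    checkpoints: n segments of length T/n, checkpoint overhead c per
    segment, and a per-segment error probability derived from a
    per-time-unit error rate p. A failed segment is retried from its
    last checkpoint until it succeeds."""
    seg = T / n
    q = 1 - (1 - p) ** seg          # probability one attempt fails
    attempts = 1 / (1 - q)          # mean of a geometric distribution
    return n * (seg + c) * attempts

# Few checkpoints -> long, failure-prone re-executions; many
# checkpoints -> overhead dominates. The AET minimum lies between.
T, c, p = 100.0, 0.5, 0.01
best_n = min(range(1, 51), key=lambda n: aet_rrc(T, c, p, n))
```

The thesis's ILP models perform this kind of trade-off exactly, and jointly with processor assignment and scheme selection, rather than by brute-force search over a toy formula.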

20

Sousa, Ricardo José Alves de. "Development of a general purpose nonlinear solid-shell element and its application to anisotropic sheet forming simulation." Doctoral thesis, Universidade de Aveiro, 2006. http://hdl.handle.net/10773/4700.

Full text
Abstract:
Doctoral degree in Mechanical Engineering
The use of computational methods in Mechanical Engineering has become increasingly relevant, contributing to a better understanding of sheet metal forming processes, especially those dealing with anisotropic materials such as aluminium alloys. Among these, the finite element method (FEM) has progressed substantially over the last two decades, partly owing to the rapid development of computer architecture. Correct modelling of sheet metal forming processes requires: the development of an accurate and efficient finite element suited to modelling thin-walled structures such as sheet metal; and the study and implementation of constitutive models accounting for three-dimensional material anisotropy. Accordingly, a new solid-shell finite element is proposed, supporting an arbitrary number of numerical integration points through its thickness. Owing to its solid topology with eight physical nodes, this formulation naturally captures thickness variations, simultaneous contact on two faces, and three-dimensional constitutive models, all crucial aspects in this type of application. On the constitutive side, the characterisation of anisotropic materials can be achieved through non-quadratic yield functions or through polycrystalline models. The mathematical description of plastic anisotropy is convenient and computationally efficient because it uses macroscopic mechanical parameters as input data. The polycrystalline description, on the other hand, is based on physical, microstructural aspects of plastic deformation, with crystallographic texture as the main input to these models. The rotation of each grain is thus tracked individually, and the material anisotropy consequently evolves.
However, compared with phenomenological models, polycrystalline models are computationally intensive and not suitable for use at industrial scale, in particular in sheet forming analysis. In this work both alternatives are analysed but, given its innovative character, emphasis is placed on an optimised multi-scale model that uses the concept of slip-system interaction at the grain level and a micro-macro transition based on the hypothesis that all grains undergo the same macroscopic deformation. Finally, the two topics referred to above (finite element and constitutive law) are consolidated in a finite element code and then validated and compared with experimental or numerical results previously published by other authors.
The use of computational methods in Mechanical Engineering has gained increasing relevance, contributing to a better understanding of sheet metal forming processes, especially when dealing with anisotropic materials such as aluminium alloys. Among them, the finite element method (FEM) has made significant progress during the last two decades, partly because of the rapid progress of computing environments. For proper modeling of anisotropic forming processes, it is necessary to use accurate and efficient finite elements. The class of solid-shell finite elements has emerged in recent years as an excellent alternative to shell elements for modeling thin-walled structures, presenting at the same time a number of advantages, namely the use of full constitutive laws and automatic consideration of double-sided contact. At the same time, it is important to use constitutive laws that describe the material anisotropy properly. In this work, the main focus is given to the formulation of a new one-point quadrature solid-shell finite element. As a distinctive feature, the formulation accounts for an arbitrary number of integration points through its thickness direction. Since it contains eight physical nodes, it naturally evaluates thickness strain, double-sided contact and full three-dimensional constitutive models, which are crucial aspects in this type of application. Additionally, simulation of spring-back phenomena of a metal sheet can be performed using only a single layer of solid-shell finite elements containing several integration points through the thickness direction. On the constitutive side, anisotropic material behaviour can be described using non-quadratic mathematical yield functions or polycrystal models. The phenomenological description of plastic anisotropy is convenient and time-efficient since it is based on macroscopic mechanical properties of the material as input.
On the other side, the polycrystal description is based on the physical microstructural aspects of plastic deformation, with the crystallographic texture being the main input to these models. However, compared to phenomenological approaches, and despite having a sounder theoretical basis, polycrystal models are computationally time-intensive and difficult to employ for large-scale industrial applications, particularly sheet forming analysis and design. Therefore, it is required to select an appropriate approach based on the problem characteristics. In this work, well-chosen anisotropic yield functions are reviewed. Additionally, the description of a time-efficient grain-level single crystal model is carried out. In the numerical tests, finite element development and constitutive modelling topics are consolidated in an in-house FEM code, being validated and compared with experiments or numerical results previously reported in the literature.
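The non-quadratic yield functions mentioned can be illustrated with Hosford's isotropic criterion, which the anisotropic Barlat-type functions commonly used for aluminium sheet generalize; the exponent and stress values below are illustrative.

```python
def hosford_equivalent_stress(s1, s2, s3, a=8.0):
    """Hosford's non-quadratic yield criterion in principal stresses:
    sigma_eq = ((|s1-s2|^a + |s2-s3|^a + |s3-s1|^a) / 2)^(1/a).
    Exponent a = 8 is commonly used for FCC metals such as aluminium;
    a = 2 recovers the von Mises criterion."""
    return (0.5 * (abs(s1 - s2) ** a
                   + abs(s2 - s3) ** a
                   + abs(s3 - s1) ** a)) ** (1.0 / a)

# Uniaxial tension at 200 MPa: the equivalent stress equals the
# applied stress, for any exponent a.
uniaxial = hosford_equivalent_stress(200.0, 0.0, 0.0)
```

Yielding is predicted when this equivalent stress reaches the material's flow stress; raising the exponent flattens the yield surface between the uniaxial and balanced-biaxial points, which matters for aluminium alloys.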
FCT
POSI BD/12864/2003
21

Karlsson, Anders. "Cooling methods for electrical machines : Simulation based evaluation of cooling fins found on low voltage general purpose machines." Thesis, Uppsala universitet, Elektricitetslära, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-217171.

Full text
Abstract:
The main goal of this thesis project is to identify interesting concepts related to cooling of electrical motors and generators which could be evaluated using suitable computer simulation tools. As the project proceeded it was decided to focus on investigating how the air from a fan flows along the finned frame of a general purpose low voltage electrical machine, how the heat is transferred between the frame and the cooling air, and what the temperature distribution looks like. It was also investigated whether it is possible to make improvements in the effectiveness of the cooling without adding additional coolers. This investigation focused on varying the fin design and evaluating the resulting temperature distribution. Due to the complex nature of the simulations a segment, and not the full frame, was considered. Simulation model validation was performed by comparing air speed measurements that were performed on two different machines with the corresponding simulated air speed. The validation showed that good agreement between simulated and measured air speeds is obtained. The conclusion from the simulations is that slight modifications to the current fin design could increase the cooling effect of the finned surface. The air velocity measurements also indicate that the cooling of the machine's surface could potentially be improved by small changes in the exterior of the frame.
The goal of this thesis project was to identify interesting concepts related to the cooling of electrical machines and generators that could be evaluated with suitable computer simulation software. During the course of the project it was decided to focus on how the air from a fan flows along a general-purpose low-voltage machine, how heat is transferred from the frame to the surrounding air, and what the temperature distribution looks like. It was also investigated whether the effectiveness of the cooling could be improved without attaching extra cooling devices. The investigations focused on different fin designs and their effect on the heat distribution. Because of the complexity of the simulations, they were performed only on a segment rather than the whole machine. The simulations were validated by comparing the simulated air speeds with actual air speeds measured on two machines in a test environment. The validation showed that the simulations agree well with the measurements performed. The conclusion from the simulations is that minor changes to the fins' current design can improve their cooling capability. The air speed measurements also indicate that the cooling of the machine's outside could potentially be improved through small changes to the frame's exterior.
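Although the thesis relies on CFD simulation of a finned frame segment, the basic physics of a cooling fin can be sketched with the classic one-dimensional adiabatic-tip fin solution; all dimensions and coefficients below are assumed values, not those of the machines studied.

```python
import math

def fin_temperature(x, L, h, P, k, A, T_base, T_air):
    """Steady 1-D temperature along a straight fin with an adiabatic
    tip: theta(x)/theta_b = cosh(m (L - x)) / cosh(m L),
    where m = sqrt(h P / (k A)), h is the convection coefficient,
    P the fin perimeter, k the conductivity, A the cross-section."""
    m = math.sqrt(h * P / (k * A))
    theta_ratio = math.cosh(m * (L - x)) / math.cosh(m * L)
    return T_air + (T_base - T_air) * theta_ratio

# Illustrative aluminium fin: 40 mm long, 50 mm x 1 mm cross-section.
L, h, P, k, A = 0.04, 25.0, 0.102, 200.0, 5e-5
T_base, T_air = 90.0, 20.0
tip = fin_temperature(L, L, h, P, k, A, T_base, T_air)
```

The temperature drop from base to tip shows why fin geometry matters: changing length or thickness changes m, and with it how much of the fin surface stays hot enough to reject heat effectively.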
22

Jeong, Lena N. "Development of General Purpose Liquid Chromatography Simulator for the Exploration of Novel Liquid Chromatographic Strategies." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/5079.

Full text
Abstract:
The method development process in liquid chromatography (LC) involves optimization of a variety of method parameters including stationary phase chemistry, column temperature, initial and final mobile phase compositions, and gradient time when gradient mobile phases are used. Here, a general simulation program to predict the results (i.e., retention time, peak width and peak shape) of LC separations, with the ability to study various complex chromatographic conditions, is described. The simulation program is based on the Craig distribution model, in which the column is divided into discrete distance (Δz) and time (Δt) segments in a grid, and is parameterized with either the linear solvent strength or Neue-Kuss models for chromatographic retention. This algorithm is relatively simple to understand and produces results that agree well with closed form theory when available. The set of simulation programs allows for the use of any eluent composition profile (linear and nonlinear), any column temperature, any stationary phase composition (constant or non-constant), and any composition and shape of the injected sample profile. The latter addition to our program is particularly useful in characterizing the solvent mismatch effect in comprehensive two-dimensional liquid chromatography (2D-LC), in which there is a mismatch between the first dimension (1D) effluent and the second dimension (2D) initial mobile phase composition. This solvent mismatch causes peak distortion and broadening. The use of simulations can provide a better understanding of this phenomenon and a guide for method development for 2D-LC. Another development that is proposed to have a great impact on the enhancement of 2D-LC methods is the use of continuous stationary phase gradients.
When using rapid mobile phase gradients in the second dimension separation with diode array detection (DAD), refractive index changes cause large backgrounds such as an injection ridge (from solvent mismatch) and sloping baselines, which can be problematic for achieving accurate quantitation. Use of a stationary phase gradient may enable the use of an isocratic mobile phase in the 2D, thus minimizing these background signals. Finally, our simulator can be used as an educational tool. Unlike commercially available simulators, our program can capture the evolution of the chromatogram in the form of movies and/or snapshots of the analyte distribution over time and/or distance to facilitate a better understanding of the separation process under complicated circumstances. We plan to make this simulation program publicly available to all chromatographers and educators to aid in more efficient method development and chromatographic training.
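The Craig-model mechanics described in this abstract can be sketched in a few lines. The following is a minimal, illustrative implementation for isocratic elution only (the retention parameters ln k_w and S below are invented placeholders, not values from the dissertation): at each time step, the mobile-phase fraction 1/(1+k) of each segment's analyte advances one segment while the retained fraction k/(1+k) stays behind.

```python
import math

def lss_k(phi, ln_kw=5.0, S=10.0):
    """Linear solvent strength (LSS) model: ln k = ln k_w - S * phi."""
    return math.exp(ln_kw - S * phi)

def craig_simulate(n_segments=200, n_steps=2000, phi=0.4):
    """Craig distribution: each step, the mobile fraction 1/(1+k) of the
    analyte in every segment advances one segment; the rest is retained."""
    k = lss_k(phi)
    p_move = 1.0 / (1.0 + k)
    col = [0.0] * n_segments
    col[0] = 1.0                      # unit mass injected at the inlet
    trace = []                        # mass exiting the column each step
    for _ in range(n_steps):
        mobile = [c * p_move for c in col]
        stay = [c - m for c, m in zip(col, mobile)]
        trace.append(mobile[-1])      # last segment's mobile fraction elutes
        col = [s + m for s, m in zip(stay, [0.0] + mobile[:-1])]
    return trace
```

The peak apex emerges near step n_segments·(1+k), and the binomial transfer statistics reproduce the band broadening that the closed-form theory predicts.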
APA, Harvard, Vancouver, ISO, and other styles
23

Tako, Antuela Anthi. "Development and use of simulation models in Operational Research : a comparison of discrete-event simulation and system dynamics." Thesis, University of Warwick, 2008. http://wrap.warwick.ac.uk/2984/.

Full text
Abstract:
The thesis presents a comparison study of the two most established simulation approaches in Operational Research, Discrete-Event Simulation (DES) and System Dynamics (SD). The aim of the research implemented is to provide an empirical view of the differences and similarities between DES and SD, in terms of model building and model use. More specifically, the main objectives of this work are: 1. To determine how different the modelling process followed by DES and SD modellers is. 2. To establish the differences and similarities in the modelling approach taken by DES and SD modellers in each stage of simulation modelling. 3. To assess how different DES and SD models of an equivalent problem are from the users’ point of view. In line with the 3 research objectives, two separate studies are implemented: a model building study based on the first and second research objectives and a model use study, dealing with the third research objective. In the former study, Verbal Protocol Analysis is used, where expert DES and SD modellers are asked to ‘think aloud’ while developing simulation models. In the model use study a questionnaire survey with managers (executive MBA students) is implemented, where participants are requested to provide opinions about two equivalent DES and SD models. The model building study suggests that DES and SD modelling are different regarding the model building process and the stages followed. Considering the approach taken to modelling, some similarities are found in DES and SD modellers’ approach to problem structuring, data inputs, validation & verification. Meanwhile, the modellers’ approach to conceptual modelling, model coding, data inputs and model results is considered different. The model use study does not identify many significant differences in the users’ opinions regarding the specific DES and SD models used, implying that from the user’s point of view the type of simulation approach used makes little difference if any. 
The work described in this thesis is the first of its kind. It provides an understanding of the DES and SD simulation approaches in terms of the differences and similarities involved. The key contribution of this study is that it provides empirical evidence on the differences and similarities between DES and SD from the model building and model use point of view. Although the study does not provide a comprehensive comparison of the two simulation approaches, its findings offer new insights into the comparison of the two approaches and contribute to the limited existing comparison literature.
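The contrast between the two approaches can be made concrete with a toy example (my own illustration, not a model from the thesis): the same single-server queue rendered first as a DES with individual stochastic customers, then as an SD-style continuous stock. The deterministic fluid view reports an empty queue whenever capacity exceeds the arrival rate, while the DES reveals the stochastic waiting that aggregation averages away.

```python
import random

def des_mm1(lam=0.8, mu=1.0, n_customers=20000, seed=1):
    """Discrete-event view: individual stochastic customers; returns the
    average time a customer waits before service starts."""
    rng = random.Random(seed)
    t_arr = t_free = total_wait = 0.0
    for _ in range(n_customers):
        t_arr += rng.expovariate(lam)          # next arrival event
        start = max(t_arr, t_free)             # wait if server is busy
        total_wait += start - t_arr
        t_free = start + rng.expovariate(mu)   # departure event
    return total_wait / n_customers

def sd_queue(lam=0.8, mu=1.0, horizon=50.0, dt=0.01):
    """System-dynamics view: the queue is a continuous stock integrated
    by Euler's method; returns the final queue level."""
    q = 0.0
    for _ in range(int(horizon / dt)):
        outflow = mu if q > 0 else min(lam, mu)
        q = max(q + (lam - outflow) * dt, 0.0)
    return q
```

With an arrival rate of 0.8 and service rate of 1.0, the DES estimates a mean queueing delay of several time units, while the fluid stock never accumulates at all.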
APA, Harvard, Vancouver, ISO, and other styles
24

Hershberger, John 1980. "Exchanges for complex commodities [electronic resource] : toward a general-purpose system for on-line trading / by John Hershberger." University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000127.

Full text
Abstract:
Title from PDF of title page.
Document formatted into pages; contains 117 pages.
Thesis (M.S.C.S.)--University of South Florida, 2003.
Includes bibliographical references.
Text (Electronic thesis) in PDF format.
ABSTRACT: The modern economy includes a variety of markets, and the Internet has opened opportunities for efficient on-line trading. Researchers have developed algorithms for various auctions, which have become a popular means for on-line sales. They have also designed algorithms for exchange-based markets, similar to the traditional stock exchange, which support fast-paced trading of rigidly standardized securities. In contrast, there has been little work on exchanges for complex nonstandard commodities, such as used cars or collectible stamps. We propose a formal model for trading of complex goods, and present an automated exchange for a limited version of this model. The exchange allows the traders to describe commodities by multiple attributes; for example, a car buyer may specify a model, options, color, and other desirable properties.
ABSTRACT: Furthermore, a trader may enter constraints on the acceptable items rather than a specific item; for example, a buyer may look for any car that satisfies certain constraints, rather than for one particular vehicle. We present an extensive empirical evaluation of the implemented exchange, using artificial data, and then give results for two real-world markets, used cars and commercial paper. The experiments show that the system supports markets with up to 260,000 orders, and generates one hundred to one thousand trades per second.
System requirements: World Wide Web browser and PDF reader.
Mode of access: World Wide Web.
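The constraint-based matching described in this abstract can be sketched as a naive O(n·m) matcher (field names and the example orders are invented for illustration; the actual exchange uses optimized indexing structures to reach its reported throughput):

```python
def satisfies(item, constraints):
    """True if the concrete item meets every attribute constraint
    (constraints map attribute name -> set of acceptable values)."""
    return all(item.get(attr) in allowed for attr, allowed in constraints.items())

def match(buy_orders, sell_orders):
    """Naive matcher: pair each buy with the first sell whose item
    satisfies the buyer's constraints at or under the price limit."""
    trades = []
    for buy in buy_orders:
        for sell in list(sell_orders):
            if (satisfies(sell["item"], buy["constraints"])
                    and sell["price"] <= buy["max_price"]):
                trades.append((buy["id"], sell["id"], sell["price"]))
                sell_orders.remove(sell)
                break
    return trades
```

A buyer can thus ask for "any red or black Mustang under $9000" rather than one specific vehicle, which is exactly the constraint-over-attributes idea the abstract describes.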
APA, Harvard, Vancouver, ISO, and other styles
25

Sidhu, Gursharan. "A simulation approach for estimating the performance of a multiprocessor digital switching system." Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Delobe, Timothy Charles. "Project dynamics : an analysis of the purpose and value of system dynamics applied to information technology project management." Full-text of dissertation on the Internet (582.22 KB), 2010. http://www.lib.jmu.edu/general/etd/2010/masters/delobetc/delobetc_masters_04-20-2010_02.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Sun, Fang. "Simulation based A-posteriori search for an ICE microwave ignition system." Thesis, University of Glasgow, 2010. http://theses.gla.ac.uk/2237/.

Full text
Abstract:
Petrol internal combustion engines (ICEs) in automobiles use a high-voltage spark ignition system, which currently offers an energy efficiency of only 25%-35% and also produces excessive exhaust emissions. Recent political, economic, social, technical, legal and environmental drivers have accelerated the worldwide research in 'greener' engines, such as homogeneous charge compression ignition (HCCI) engines, which focus on total resource conservation and emission reduction per mile. However, HCCI ignition timing needs real-time control of cylinder pressure and temperature in a closed loop, which is practically intractable to date. Leapfrogging HCCI and requiring no closed-loop control or modification to the engine, this thesis develops homogeneous charge microwave ignition (HCMI) directly to replace point-based spark ignition. Like HCCI, HCMI is volume based and is also applicable to diesel fuel. Through computer simulations, the thesis verifies the feasibility of the ICE radio frequency ignition concept first proposed by Ward in 1974. Building on the simulation-based design methodology of the Boeing 777 aircraft, which required no hardware casting or prototyping at the design stage, this thesis employs intelligent search to evolve 'designs of experiments' by simulation means for vehicle-borne HCMI with the potential to offer a step change in fuel efficiency and emission reduction. The investigation in this thesis into the effect of piston position confirms with graphical visualisation that the resonant frequency of the engine cylinder is very sensitive to the piston motion, because it can easily cause off-resonance and hence degraded field strength. It is revealed that this is the major factor that encumbers practical realisation of an HCMI system. This thesis shows that the natural frequency changes 0.015 GHz per 0.5 mm on average when the piston moves from 5 mm to 0.5 mm from TDC and 0.0021 GHz per 0.05 mm when the piston moves from 0.5 mm to 0.05 mm from TDC.
For the geometry of the given ICE cylinder, if the input microwave frequency is fixed, the resonance lasts for 7 s. Investigation of various cylinder diameters reveals that the results on the effects of piston motion for a given cylinder can be extended to cylinders of other diameters. It is also shown that for different types of cylinders the frequency of the input microwave can be very different. Therefore, the microwave source of an HCMI system has to be designed for each type of vehicle. Simulations reported in the thesis also reveal that a microwave based ignition takes 30 ns to 100 ns to break down a medium whose permittivity and permeability are the same as those of the chemically optimal 14.7:1 air-fuel mixture. This is much shorter than the duration of the microwave resonance and hence makes HCMI feasible in terms of duration. For a running engine, variations of the AFR can also cause off-resonance. It is found that the AFR does not affect the resonant frequency as much as piston motion does. The frequency only changes 38 MHz when the AFR varies from 10:1 to 16:1. Properties and effects of microwave emitters and couplers are also studied and the results confirm with graphical visualisation that, for an emitter in the form of a probe antenna, the electric field intensity is dependent on the antenna length. For the given geometry of the Chrysler-Dodge ICE studied, a probe antenna of a length around 30% of wavelength shorter than the end of the transmission line offers the best coupling efficiency in an HCMI system. To search for globally optimal designs, the Nelder-Mead simplex method and the 'intelligent' evolutionary algorithm (EA) are coupled with CAD simulations. These machine learning methods are shown to be efficient and reliable in dealing with multiple parameters.
Under practical constraints, the best ignition timing and AFR combination is found, which for a 100 W input offers an electric field intensity of up to 9.8×10⁶ V m⁻¹, almost doubling the minimum requirement of 5.5×10⁶ V m⁻¹ for a plasma breakdown of the air-fuel mixture. In this work, six different geometric shapes of antennae are studied. Through the EA based global search, it is confirmed that the length and screen radius of the probe antenna do not affect the resonant frequency significantly. For the given ICE geometry, an antenna length of 14.3 mm offers the best efficiency and the least reflection regardless of the screen radius. The radius affects resonance the least among all the parameters searched, although it can contribute to enhancing electric field and reducing reflection of the coupling. For maximal electric field strength in the cylinder, the best combination of the antenna length and the screen radius is also searched and results are fully tabulated in this thesis.
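The sensitivity of the cavity resonance to piston position can be illustrated with the textbook formula for an ideal closed cylindrical cavity (the thesis solves the real chamber geometry numerically by CAD simulation; the dimensions below are invented): for TM01p modes, f = (c/2π)·sqrt((x01/a)² + (pπ/d)²), where a is the radius, d is the gap height left by the piston, and x01 ≈ 2.405 is the first zero of the Bessel function J0. Modes with p ≥ 1 shift strongly as d shrinks, which is the off-resonance effect the abstract describes.

```python
import math

C = 299_792_458.0           # speed of light, m/s
X01 = 2.404825557695773     # first zero of the Bessel function J0

def tm01p_freq(radius_m, gap_m, p=1):
    """Resonant frequency (Hz) of the TM01p mode of an ideal closed
    cylindrical cavity of radius a and height d (the piston gap)."""
    return (C / (2.0 * math.pi)) * math.sqrt(
        (X01 / radius_m) ** 2 + (p * math.pi / gap_m) ** 2)
```

For p = 0 the frequency depends only on the radius, so it is the height-dependent modes that make a fixed-frequency source fall out of resonance as the piston moves.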
APA, Harvard, Vancouver, ISO, and other styles
28

Lundqvist, Viktor. "A smoothed particle hydrodynamic simulation utilizing the parallel processing capabilites of the GPUs." Thesis, Linköping University, Department of Science and Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-21761.

Full text
Abstract:

Simulating fluid behavior has proven to be a demanding challenge that requires complex computational models and highly efficient data structures. Smoothed Particle Hydrodynamics (SPH) is a particle-based computational model used to simulate fluid behavior that has been found capable of producing convincing results. However, the SPH algorithm is computationally heavy, which makes it cumbersome to work with.

This master thesis describes how the SPH algorithm can be accelerated by utilizing the GPU’s computational resources. It describes a model for how to distribute the work load on the GPU and presents a suitable data structure. In addition, it proposes a method to represent and handle moving objects in the fluids surroundings. Finally, the performance gain due to the GPU is evaluated by comparing processing times with an identical implementation running solely on the CPU.
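The core SPH density step that the thesis parallelizes can be sketched on the CPU (this is the generic textbook formulation with the common poly6 kernel, not the thesis's GPU data structure): each particle's density is a kernel-weighted sum over its neighbors, ρ_i = Σ_j m_j·W(|r_i − r_j|, h).

```python
import math

def poly6(r, h):
    """Poly6 smoothing kernel (3-D), zero outside the support radius h."""
    if r >= h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h ** 9) * (h * h - r * r) ** 3

def densities(positions, mass, h):
    """SPH density estimate rho_i = sum_j m * W(|r_i - r_j|, h).
    O(n^2) for clarity; GPU versions bin particles into a spatial grid."""
    out = []
    for pi in positions:
        out.append(sum(mass * poly6(math.dist(pi, pj), h) for pj in positions))
    return out
```

The O(n²) neighbor loop is exactly what makes a spatial-grid data structure and GPU work distribution worthwhile: only particles within h of each other contribute.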

APA, Harvard, Vancouver, ISO, and other styles
29

Li, Min. "Acceleration of Hardware Testing and Validation Algorithms using Graphics Processing Units." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/29129.

Full text
Abstract:
With the advances of very large scale integration (VLSI) technology, the feature size has been shrinking steadily together with the increase in the design complexity of logic circuits. As a result, the efforts taken for designing, testing, and debugging digital systems have increased tremendously. Although the electronic design automation (EDA) algorithms have been studied extensively to accelerate such processes, some computationally intensive applications still take long execution times. This is especially the case for testing and validation. In order to meet the time-to-market constraints and also to come up with a bug-free design or product, the work presented in this dissertation studies the acceleration of EDA algorithms on Graphics Processing Units (GPUs). This dissertation concentrates on a subset of EDA algorithms related to testing and validation. In particular, within the area of testing, fault simulation, diagnostic simulation and reliability analysis are explored. We also investigated the approaches to parallelize state justification on GPUs, which is one of the most difficult problems in the validation area. Firstly, we present an efficient parallel fault simulator, FSimGP2, which exploits the high degree of parallelism supported by a state-of-the-art graphics processing unit (GPU) with the NVIDIA Compute Unified Device Architecture (CUDA). A novel three-dimensional parallel fault simulation technique is proposed to achieve extremely high computation efficiency on the GPU. The experimental results demonstrate a speedup of up to 4× compared to another GPU-based fault simulator. Then, another GPU-based simulator is used to tackle an even more computation-intensive task, diagnostic fault simulation. The simulator is based on a two-stage framework which exploits high computation efficiency on the GPU. We introduce a fault pair based approach to alleviate the limited memory capacity on GPUs.
Also, multi-fault-signature and dynamic load balancing techniques are introduced for the best usage of on-board computing resources. With continuous feature-size scaling and the advent of innovative nano-scale devices, reliability analysis of digital systems is becoming more important. However, the computational cost to accurately analyze a large digital system is very high. We propose a high-performance reliability analysis tool for GPUs. To achieve high memory bandwidth on GPUs, two algorithms for simulation scheduling and memory arrangement are proposed. Experimental results demonstrate that the parallel analysis tool is efficient, reliable and scalable. In the area of design validation, we investigate state justification. By employing swarm intelligence and the power of parallelism on GPUs, we are able to efficiently find a trace that could help us reach the corner cases during the validation of a digital system. In summary, the work presented in this dissertation demonstrates that several applications in the area of digital design testing and validation can be successfully rearchitected to achieve maximal performance on GPUs and obtain significant speedups. The proposed algorithms based on GPU parallelism collectively aim to contribute to improving the performance of EDA tools in the computer-aided design (CAD) community on GPUs and other many-core platforms.
Ph. D.
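The pattern-parallelism that makes fault simulation attractive on GPUs can be shown in miniature on a CPU word (a toy two-gate circuit of my own, not FSimGP2's three-dimensional scheme): each bit of a machine word carries one test pattern, so a single pass of bitwise operations simulates 32 patterns through the fault-free and faulty circuits, and an XOR exposes the patterns that detect the fault.

```python
MASK = (1 << 32) - 1   # one machine word = 32 test patterns in parallel

def good_circuit(a, b, c):
    """Fault-free circuit y = (a AND b) OR c, evaluated bitwise so every
    bit position carries an independent test pattern."""
    return ((a & b) | c) & MASK

def faulty_circuit(a, b, c, stuck_net, stuck_val):
    """Same circuit with one input net forced to 0 or 1 (stuck-at fault)."""
    nets = {"a": a, "b": b, "c": c}
    nets[stuck_net] = MASK if stuck_val else 0
    return ((nets["a"] & nets["b"]) | nets["c"]) & MASK

def detected(a, b, c, stuck_net, stuck_val):
    """Bit positions where the fault is observable at the output."""
    return good_circuit(a, b, c) ^ faulty_circuit(a, b, c, stuck_net, stuck_val)
```

On a GPU the same idea extends across threads and fault lists, which is where the dissertation's additional dimensions of parallelism come in.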
APA, Harvard, Vancouver, ISO, and other styles
30

Serdar, Usenmez. "Design Of An Integrated Hardware-in-the-loop Simulation System." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/2/12612051/index.pdf.

Full text
Abstract:
This thesis aims to propose multiple methods for performing a hardware-in-the-loop simulation, providing the hardware and software tools necessary for design and execution. For this purpose, methods of modeling commonly encountered dynamical system components are explored and techniques suitable for calculating the states of the modeled system are presented. Modules and subsystems that enable the realization of a hardware-in-the-loop simulation application and its interfacing with external controller hardware are explained. The thesis also presents three different simulation scenarios. Solutions suitable for these scenarios are provided along with their implementations. The details and specifications of the developed software packages and hardware platforms are given. The provided results illustrate the advantages and disadvantages of the approaches used in these solutions.
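For computing the states of a modeled system in real time, a fixed-step integrator is the usual workhorse, since its per-step cost is deterministic and fits a hard real-time schedule; a generic fixed-step RK4 update (a standard method, not necessarily the one chosen in the thesis) looks like:

```python
def rk4_step(f, x, u, dt):
    """One fixed-step 4th-order Runge-Kutta update of state list x under
    input u; fixed cost per step suits hard real-time HIL schedules."""
    k1 = f(x, u)
    k2 = f([xi + dt / 2 * ki for xi, ki in zip(x, k1)], u)
    k3 = f([xi + dt / 2 * ki for xi, ki in zip(x, k2)], u)
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)], u)
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
```

In a HIL loop, `u` would come from the external controller hardware each sample period and the returned state would drive the simulated sensor outputs.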
APA, Harvard, Vancouver, ISO, and other styles
31

Li, Tao. "General Aviation Demand Forecasting Models and a Microscopic North Atlantic Air Traffic Simulation Model." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/71663.

Full text
Abstract:
This thesis is focused on two topics. The first topic is the General Aviation (GA) demand forecasting models. The contributions to this topic are threefold: 1) we calibrated an econometric model to investigate the impact of fuel price on the utilization rate of GA piston engine aircraft, 2) we adopted a logistic model to identify the relationship between fuel price and an aircraft's probability of staying active, and 3) we developed an econometric model to forecast the airport-level itinerant and local GA operations. Our calibration results are compared with those reported in literature. Demand forecasts are made with these models and compared with those prepared by the Federal Aviation Administration. The second topic is to model the air traffic in the Organized Track System (OTS) over the North Atlantic. We developed a discrete-time event model to simulate the air traffic that uses the OTS. We proposed four new operational procedures to improve the flight operations for the OTS. Two procedures aim to improve the OTS assignments in the OTS entry area, and the other two aim to benefit flights once they are inside the OTS. The four procedures are implemented with the simulation model and their benefits are analyzed. Several implementation issues are discussed and recommendations are given.
Ph. D.
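The second modeling ingredient above, a logistic model for an aircraft's probability of remaining active, has the following generic shape (the coefficients and the toy fitting routine are illustrative placeholders, not the dissertation's calibrated estimates):

```python
import math

def p_active(fuel_price, b0=3.0, b1=-0.6):
    """Logistic model: P(aircraft stays active) as a function of price."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * fuel_price)))

def fit_logistic(prices, active, lr=0.05, iters=5000):
    """Fit (b0, b1) by gradient ascent on the Bernoulli log-likelihood."""
    b0 = b1 = 0.0
    n = len(prices)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(prices, active):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1
```

A negative fitted slope on price recovers the qualitative relationship the abstract describes: higher fuel prices lower the probability that an aircraft stays active.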
APA, Harvard, Vancouver, ISO, and other styles
32

Kayasal, Ugur. "Modeling And Simulation Of A Navigation System With An Imu And A Magnetometer." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608786/index.pdf.

Full text
Abstract:
In this thesis, the integration of a MEMS based inertial measurement unit and a three axis solid state magnetometer are studied. It is a fact that unaided inertial navigation systems, especially low cost MEMS based navigation systems, have a divergent behavior. Nowadays, many navigation systems use GPS aiding to improve the performance, but GPS may not be applicable in some cases. Also, GPS provides the position and velocity reference whereas the attitude information is extracted through estimation filters. An alternative reference source is a three axis magnetometer, which provides direct attitude measurements. In this study, error propagation equations of an inertial navigation system are derived; measurement equations of the magnetometer for Kalman filtering are developed; and a unique method to self-align the MEMS navigation system is developed. In the motion estimation, the performance of the developed algorithms is compared using a GPS aided system and a magnetometer aided system. Some experiments are conducted for the self alignment algorithms.
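The direct attitude information a magnetometer contributes is heading; the standard tilt-compensation step (written here in one common NED sign convention; the thesis's filter measurement equations are more involved) projects the body-frame field into the horizontal plane before taking the arctangent:

```python
import math

def heading_from_mag(mx, my, mz, roll, pitch):
    """Tilt-compensated magnetic heading (radians): rotate the body-frame
    field components into the horizontal plane, then take atan2."""
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-yh, xh)
```

Because this measurement is absolute rather than integrated, feeding it to a Kalman filter bounds the attitude drift that an unaided MEMS inertial solution accumulates.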
APA, Harvard, Vancouver, ISO, and other styles
33

Oktay, Gorkem. "Design And Simulation Of A Traction Control System For An Integrated Active Safety System For Road Vehicles." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610204/index.pdf.

Full text
Abstract:
Active safety systems for road vehicles make a crucial preventive contribution to road safety. In recent years, technological developments and the increasing demand for road safety have resulted in the integration and cooperation of these individual active safety systems. The traction control system (TCS) is one of these individual systems, capable of inhibiting wheel-spin during acceleration of the vehicle on slippery surfaces. In this thesis, the design methodology and simulation results of a traction control system for four-wheeled road vehicles are presented. The objective of the TCS controller is basically to improve the directional stability, steerability and acceleration performance of the vehicle by controlling the wheel slip during acceleration. In this study, the designed traction control system based on fuzzy logic is composed of an engine torque controller and a slip controller. Reference wheel slip values were estimated from the longitudinal acceleration data of the vehicle. The engine torque controller determines the throttle opening angle corresponding to the desired wheel torque, which is determined by the slip controller to track the reference slip signals. The wheel torques delivered by the engine are compensated by brake torques according to the desired wheel torque determined by the slip controller. Performance of the TCS controller was analyzed through several simulations performed in MATLAB/Simulink for different road conditions during straight line acceleration and combined acceleration and steering. For the simulations, an 8 DOF nonlinear vehicle model with nonlinear tires and a 2 DOF nonlinear engine model were built.
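The slip signal at the heart of such a controller is simple to compute; the sketch below pairs it with a plain proportional correction as a stand-in for the thesis's fuzzy slip controller (the gain and the 0.1 reference slip are invented placeholders):

```python
def drive_slip(wheel_speed, vehicle_speed, radius):
    """Longitudinal drive slip: (w*R - v) / (w*R), clamped to [0, 1].
    wheel_speed in rad/s, vehicle_speed in m/s, radius in m."""
    wr = wheel_speed * radius
    if wr <= 0.0:
        return 0.0
    return max(0.0, min(1.0, (wr - vehicle_speed) / wr))

def torque_correction(slip, slip_ref=0.1, kp=800.0):
    """Proportional stand-in for the fuzzy slip controller: request a
    torque reduction (N*m) when slip exceeds the reference."""
    return -kp * max(0.0, slip - slip_ref)
```

In the thesis the correction is split between throttle closing and brake intervention; here the single signed torque request stands in for both actuation paths.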
APA, Harvard, Vancouver, ISO, and other styles
34

Fakharian, Qom Somaye. "Multi-Resolution Modeling of Managed Lanes with Consideration of Autonomous/Connected Vehicles." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2559.

Full text
Abstract:
Advanced modeling tools and methods are essential components for the analyses of congested conditions and advanced Intelligent Transportation Systems (ITS) strategies such as Managed Lanes (ML). A number of tools with different analysis resolution levels have been used to assess these strategies. These tools can be classified as sketch planning, macroscopic simulation, mesoscopic simulation, microscopic simulation, static traffic assignment, and dynamic traffic assignment tools. Due to the complexity of the managed lane modeling process, this dissertation investigated a Multi-Resolution Modeling (MRM) approach that combines a number of these tools for more efficient and accurate assessment of ML deployments. This study clearly demonstrated the differences in the accuracy of the results produced by the traffic flow models incorporated into different tools when compared with real-world measurements. This difference in the accuracy highlighted the importance of the selection of the appropriate analysis levels and tools that can better estimate ML and General Purpose Lanes (GPL) performance. The results also showed the importance of calibrating traffic flow model parameters, demand matrices, and assignment parameters based on real-world measurements to ensure accurate forecasts of real-world traffic conditions. In addition, the results indicated that the real-world utilization of ML by travelers can best be predicted with the use of dynamic traffic assignment modeling that incorporates travel time, toll, and travel time reliability of alternative paths in the assignment objective function. The replication of the specific dynamic pricing algorithm used in the real world in the modeling process was also found to provide a better forecast of ML utilization. With regards to Connected Vehicle (CV) operations on ML, this study demonstrated the benefits of using results from tools with different modeling resolution to support each other's analyses.
In general, the results showed that providing toll incentives for Cooperative Adaptive Cruise Control (CACC)-equipped vehicles to use ML is not beneficial at lower market penetrations of CACC due to the small increase in capacity with these market penetrations. However, such incentives were found to be beneficial at higher market penetrations, particularly with higher demand levels.
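The assignment objective described above, travel time plus toll plus a reliability term, can be sketched as a generalized cost feeding a binary logit lane-choice model (the coefficients below are invented placeholders, not the calibrated values from the study):

```python
import math

def generalized_cost(time_min, toll, reliability_min, vot=0.5, vor=0.4):
    """Generalized cost in equivalent minutes: time + toll converted at a
    value of time ($ per minute) + weighted travel-time reliability."""
    return time_min + toll / vot + vor * reliability_min

def ml_share(gp_cost, ml_cost, theta=0.1):
    """Binary logit split: share of travelers choosing the managed lane."""
    u_ml = math.exp(-theta * ml_cost)
    u_gp = math.exp(-theta * gp_cost)
    return u_ml / (u_ml + u_gp)
```

When a dynamic toll raises the managed lane's generalized cost toward the general purpose lane's, the predicted ML share falls back toward an even split, which is the feedback a pricing algorithm exploits.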
APA, Harvard, Vancouver, ISO, and other styles
35

Cihangir, Cigdem. "A Hierarchical Decision Support System For Workforce Planning In Medical Equipment Maintenance Services." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612778/index.pdf.

Full text
Abstract:
In this thesis, we propose a hierarchical decision support system for workforce planning in medical equipment maintenance services. At the strategic level, customer clusters and the total number of field engineers are determined via mixed integer programming and simulation. In the MIP, we aim to find the minimum number of field engineers. Afterwards, we analyze service measures such as response time via simulation. At the tactical level, a quarterly training program for the field engineers is determined via mixed integer programming and the results are interpreted in terms of service level via simulation.
APA, Harvard, Vancouver, ISO, and other styles
36

Leseeto, Saidimu. "The role of risk management in pastoral policy development and poverty measurement : system dynamics simulation approach." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/344349/.

Full text
Abstract:
Livestock-based agriculture plays an important role in the development of sub-Saharan Africa, especially those countries whose livestock industry contributes significantly to the Gross Domestic Product (GDP). In Kenya, agriculture alone accounts for 21% of the GDP and provides employment directly or indirectly to over 75% of the total labour force. The livestock industry, mainly arid rangelands, contributes 50% of the agricultural productivity. However, these Arid and Semi-Arid Lands (ASALs) are exposed to a myriad of risks affecting the environment which is the pastoral core asset. These risks arise from climatic change and variability, growth in human population and expanding settlements, changes in the land use systems, poor infrastructure, diseases, wildlife predation, and inter-ethnic conflicts. The consequences of these pastoral risks include: (1) declining per capita asset value, (2) increased health problems, (3) increased poverty, and (4) declining GDP generated from pastoralism. While a lot of resources have been invested in responding to the pastoral crisis associated with droughts, there is still inadequate understanding of the policy measures to put in place as mitigation strategies. The aims of this research are to (1) identify the main pastoral risks and community response strategies, (2) assess the impact of the identified risks on the wellbeing of pastoralists based on financial, human, physical, natural and social capital measurements (5 C's), and (3) develop a System Dynamics (SD) model to assess the holistic impact of community and government response strategies on pastoral wellbeing. Samburu district, in northern Kenya, was chosen as a study area because it is classified as 100% ASAL and experiences frequent droughts and changing land use systems. The research process involved literature synthesis, analysis of both cross-sectional and a 5-year panel data, and the development of a System Dynamics model.
Cross-section data was primarily collected for the purposes of identifying the extent to which risks affect households, while the 5-year panel data was sourced from the Arid Lands Resource Management Project (ALRMP). Descriptive and empirical analysis showed that droughts, the land use system and human population were considered the main causes of shrinking rangeland productivity and, as a result, declining per capita livestock. This was further confirmed by the panel data analysis, indicating climate variability as the main driver of pastoral wellbeing. Droughts affect rangeland pasture productivity, market prices, livestock assets, and households' nutritional status and poverty levels. These results imply a multifaceted nature of the pastoral system with compound effects. The SD simulation, which was run over the period January 2006 to December 2030, provided insights on policy evaluation and the state of pastoral wellbeing. The baseline scenario indicated reducing livestock ownership, causing high malnutrition and poverty rates. Strategies which incorporated rangeland rehabilitation, planned settlements, livestock disease control, insurance against droughts, reducing inter-ethnic conflicts, and timely destocking offered better policy options. These strategies resulted in reduced malnutrition, increased pasture productivity, reduced livestock losses and ultimately reduced poverty rates among the pastoral communities.
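The stock-flow structure underlying such an SD model can be sketched in miniature (a generic logistic herd stock with one-off drought shocks; the parameters are illustrative, not calibrated to the Samburu data):

```python
def simulate_herd(years=25.0, dt=0.1, herd0=100.0, r=0.08, carrying=150.0,
                  drought_years=(5, 12, 19), drought_loss=0.3):
    """Stock-flow sketch: the livestock stock grows logistically toward
    the pasture carrying capacity; each drought is a one-off 30% loss."""
    herd = herd0
    trajectory = [herd]
    applied = set()
    for i in range(1, int(years / dt) + 1):
        t = i * dt
        herd += r * herd * (1.0 - herd / carrying) * dt   # net growth flow
        for dy in drought_years:
            if t >= dy and dy not in applied:
                herd *= 1.0 - drought_loss                # drought shock
                applied.add(dy)
        trajectory.append(herd)
    return trajectory
```

Policy levers such as destocking, insurance or rangeland rehabilitation enter a full model as additional flows or as changes to the loss fraction and carrying capacity.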
APA, Harvard, Vancouver, ISO, and other styles
37

Tekin, Gokhan. "Design And Simulation Of An Integrated Active Yaw Control System For Road Vehicles." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609243/index.pdf.

Full text
Abstract:
Active vehicle safety systems for road vehicles play an important role in accident prevention. In recent years, rapid developments have been observed in this area with advancing technology and electronic control systems. Active yaw control is one of these subjects, which aims to control the vehicle in case of any impending spinning or plowing during rapid and/or sharp maneuvers. In addition to the development of these systems, integration and cooperation of these independent control mechanisms constitutes the current trend in active vehicle safety systems design. In this thesis, the design methodology and simulation results of an active yaw control system for two-axle road vehicles have been presented. The main objective of the yaw control system is to estimate the desired yaw behavior of the vehicle according to the demand of the driver and track this desired behavior accurately. The design procedure follows a progressive method, which first aims to design the yaw control scheme without regarding any other stability parameters, followed by the development of the designed control scheme via taking other stability parameters such as vehicle sideslip angle into consideration. A two degree of freedom vehicle model (commonly known as the "Bicycle Model") is employed to model the desired vehicle behavior. The design of the controller is based on Fuzzy Logic Control, which has proved itself useful for complex nonlinear design problems. Afterwards, the proposed yaw controller has been modified in order to limit the vehicle sideslip angle as well. Integration of the designed active yaw control system with other safety systems such as the Anti-Lock Braking System (ABS) and Traction Control System (TCS) is another subject of this study. A fuzzy logic based wheel slip controller has also been included in the study in order to integrate two different independent active systems with each other, which, in fact, is a general design approach for real life applications. This integration actually aims to initiate and develop the integration procedure of the active yaw control system with the ABS. An eight degree of freedom detailed vehicle model with a nonlinear tire model is utilized to represent the real vehicle in order to ensure the validity of the results. The simulation is held in the MATLAB/Simulink environment, which has provided versatile design and simulation capabilities for this study. Wide-ranging simulations, including various maneuvers under different road conditions, have been performed in order to demonstrate the performance of the proposed controller.
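The 2-DOF bicycle model used as the reference can be written down compactly (the parameter values here are generic mid-size-car placeholders, not the thesis vehicle): the states are lateral velocity and yaw rate, driven by linear front and rear tire forces.

```python
def bicycle_step(v, r, delta, dt, u=20.0, m=1500.0, iz=2500.0,
                 a=1.2, b=1.4, cf=80000.0, cr=80000.0):
    """One Euler step of the linear 2-DOF bicycle model. States: lateral
    velocity v (m/s) and yaw rate r (rad/s); input: steer angle delta."""
    alpha_f = (v + a * r) / u - delta      # front tire slip angle
    alpha_r = (v - b * r) / u              # rear tire slip angle
    fyf = -cf * alpha_f                    # linear tire forces
    fyr = -cr * alpha_r
    v_dot = (fyf + fyr) / m - u * r
    r_dot = (a * fyf - b * fyr) / iz
    return v + v_dot * dt, r + r_dot * dt
```

In the control scheme, the steady-state yaw rate of this model for a given steer input serves as the reference that the fuzzy controller tracks on the full nonlinear vehicle.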
APA, Harvard, Vancouver, ISO, and other styles
38

Sahin, Hakan. "Design Of A Secondary Packaging Robotic System." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606922/index.pdf.

Full text
Abstract:
The use of robotic systems in the consumer goods industry has increased over recent years. However, the food industry has not taken to robotics technology with the same enthusiasm as other industries, for technical and commercial reasons. Difficulties in matching human speed and flexibility, the variable nature of food products, high production volume rates, the lack of appropriate end-effectors, the high initial investment required by such systems, and low margins on food products still limit the use of robotics in the food industry. In this thesis study, as a contribution to the use of robotic systems in the food industry, a secondary packaging robotic system is designed. The system is composed of two basic subsystems: a dual-axis controlled robotic arm and a special-purpose gripper. The mechanical and control system design of both subsystems is performed within the scope of the study. Throughout the design process, modern computer-aided design and engineering tools are utilized instead of classical design methods.
APA, Harvard, Vancouver, ISO, and other styles
39

Pektas, Seda. "On-line Controller Tuning By Matlab Using Real System Responses." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605596/index.pdf.

Full text
Abstract:
This thesis attempts to tune a controller without knowledge of the mathematical model of the system it is controlling. For that purpose, the optimization algorithm of the MATLAB® 6.5 Nonlinear Control Design Blockset (NCD) is adapted for real-time execution and combined with a hardware-in-the-loop simulation provided by MATLAB® 6.5 Real-Time Windows Target (RTWT). A noise-included model of a DC motor position control system is first obtained in MATLAB®/Simulink and simulated to test the modified algorithm in several respects. The presented methodology is then verified on the physical plant (the DC motor position control system), where the tuning algorithm is driven mainly by real system data, and the required performance parameters, specified by a user-defined constraint window, are successfully satisfied. The resulting improvements in the step response behavior of the DC motor position control system are shown for two case studies.
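The constraint-window idea can be illustrated with a toy sketch: a first-order plant under proportional control stands in for the DC motor, and a crude derivative-free grid search stands in for the NCD optimization (the real algorithm is gradient-based; all names and values here are assumptions).

```python
def step_response(kp, n=200, dt=0.01):
    """Closed-loop unit-step response of a toy first-order plant
    dy/dt = -y + u under proportional control u = kp * (1 - y),
    integrated with explicit Euler (a stand-in for the real plant)."""
    y, out = 0.0, []
    for _ in range(n):
        u = kp * (1.0 - y)          # error = setpoint (1.0) minus output
        y += dt * (-y + u)
        out.append(y)
    return out

def window_violation(resp, lower=0.9, upper=1.1, settle_from=100):
    """Constraint-window cost: how far the tail of the response falls
    outside the [lower, upper] band after the settling deadline."""
    return sum(max(0.0, lower - y) + max(0.0, y - upper)
               for y in resp[settle_from:])

# Crude derivative-free grid search over kp, mimicking iterative tuning
best_kp, best_cost = None, float("inf")
for kp in [1.0 + 0.5 * i for i in range(40)]:
    cost = window_violation(step_response(kp))
    if cost < best_cost:
        best_kp, best_cost = kp, cost
```

The search stops shrinking the cost once the whole tail of the response sits inside the user-defined window, which is the same acceptance criterion the constraint window expresses.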
APA, Harvard, Vancouver, ISO, and other styles
40

Nigania, Nimit. "FPGA prototyping of custom GPGPUs." Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51966.

Full text
Abstract:
Prototyping new systems on hardware is a time-consuming task with limited scope for architectural exploration. The aim of this work was to perform fast prototyping of general-purpose graphics processing units (GPGPUs) on field programmable gate arrays (FPGAs) using a novel tool chain. This hardware flow, combined with a higher-level simulation flow using the same source code, gave us a complete tool chain to study and build future architectures using new technologies, as well as enough flexibility at different granularities to make architectural decisions. We also discuss some example systems that were built using this tool chain, along with some results.
APA, Harvard, Vancouver, ISO, and other styles
41

Rocha, João Miguel Lopes de Almeida. "Aceleração GPU da animação de superfícies deformáveis." Master's thesis, FCT - UNL, 2008. http://hdl.handle.net/10362/1880.

Full text
Abstract:
Master's dissertation in Informatics Engineering
The simulation of virtual cloth plays an important role in several areas, such as the computer games and film industries, and is a very active research topic. The simulation is usually performed using particle systems. A set of interactions is generally defined over the particles, based on a physical surface model that characterises the properties of the cloth, particularly with respect to its internal deformations. The simulation is an extremely compute-intensive task due to factors such as the evaluation of the surface model and the use of numerical integration methods to solve the system of differential equations that governs the dynamics of the cloth. Each of these factors depends directly on the number of particles used to discretise the surface. In computer graphics, some work has already been done on accelerating cloth simulation through GPU programming, as in [Zel05], [Zel07] and [Den06]. The modern GPU contains several processors specialised in processing large amounts of data in parallel, offering a computational capacity, in terms of floating-point operations per unit of time, far higher than that of the CPU, and is particularly well suited to problems that can be expressed as parallel computations with high arithmetic intensity. This work aims to accelerate a cloth simulator with enhanced realism, developed in [Birr07], using an innovative GPU hardware and programming model that presents the GPU as a true general-purpose co-processor to the CPU: NVIDIA CUDA [Cud07].
The intended contributions also include a study of the advantages and disadvantages of this model compared with others, such as [Zel05], [Zel07] or [Den06], through a careful analysis of the results obtained, as well as of the best solutions achieved in practice.
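The particle-system idea described above can be sketched with a 1-D chain of unit-mass particles linked by springs, a toy stand-in for the full cloth mesh (the thesis itself targets CUDA; all constants here are illustrative assumptions).

```python
def simulate_chain(n=10, steps=2000, dt=0.005, k=100.0, rest=1.0, g=9.81):
    """Toy 1-D chain of unit-mass cloth particles linked by springs,
    hanging under gravity (positions measured downwards), integrated
    with damped symplectic Euler. Each particle's force reads only its
    neighbours' state: the access pattern that lets the full simulator
    assign one CUDA thread per particle."""
    pos = [i * rest for i in range(n)]       # start at rest length
    vel = [0.0] * n
    for _ in range(steps):
        forces = []
        for i in range(n):
            f = g                             # gravity on unit mass
            if i > 0:                         # spring to the particle above
                f -= k * ((pos[i] - pos[i - 1]) - rest)
            if i < n - 1:                     # spring to the particle below
                f += k * ((pos[i + 1] - pos[i]) - rest)
            forces.append(f)
        for i in range(1, n):                 # particle 0 stays pinned
            vel[i] = (vel[i] + dt * forces[i]) * 0.98   # light damping
            pos[i] += dt * vel[i]
    return pos

positions = simulate_chain()
```

At equilibrium each spring stretches in proportion to the weight hanging below it, so the chain sags beyond its rest length, which gives a simple correctness check for the integrator.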
APA, Harvard, Vancouver, ISO, and other styles
42

Korkmaz, Ozgur. "Development Of A Miniaturized Automated Production Control System." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614013/index.pdf.

Full text
Abstract:
In this thesis, a custom embedded system and control software are developed for an Automated Storage and Retrieval System (AS/RS) based in the Computer Integrated Manufacturing Laboratory (CIMLAB) of the Department of Mechanical Engineering, Middle East Technical University. The primary objective of this study is to make AS/RS-related control rules applicable to the existing physical system. The secondary objective is to develop the control system in a flexible way that allows new equipment to be added and parts of the system to be configured. Two types of control board are manufactured, and the boards' firmware and the computer software are developed. The two boards communicate with the computer one at a time. Several AS/RS-related control rules are implemented in the control software; according to these rules, the software assigns tasks to the relevant board. The control software also records the information needed to measure the performance of the AS/RS. Several control rules, such as storage assignment, dwell point, and sequencing of storage and retrieval orders, are applicable to the AS/RS without any need for low-level programming. Because of physical limitations, batching rules cannot be applied to the current system. A graphical user interface is also developed for operating the system easily and observing the real-time status of the system equipment. Two experiments are designed and run to demonstrate the flexibility of the control system, with different control rules applied in each. The experimental results show that the control system was quite successful in meeting the objectives.
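One of the storage-assignment rules mentioned above, closest open location, can be sketched as follows; the slot names and travel times are hypothetical examples, not data from the laboratory system.

```python
def closest_open_location(rack, travel_time):
    """Closest-open-location storage assignment: pick the free slot the
    crane can reach fastest. rack maps slot -> occupied flag;
    travel_time maps slot -> travel seconds from the input station."""
    free = [slot for slot, occupied in rack.items() if not occupied]
    return min(free, key=lambda s: travel_time[s]) if free else None

# Hypothetical rack state: A1 occupied, A2 and B1 free
rack = {"A1": True, "A2": False, "B1": False}
travel = {"A1": 2.0, "A2": 5.0, "B1": 3.0}
chosen = closest_open_location(rack, travel)
```

Rules like dwell point and retrieval sequencing fit the same pattern: a pure function of the recorded system state, which is what makes them swappable without low-level reprogramming.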
APA, Harvard, Vancouver, ISO, and other styles
43

Ryd, Jonatan, and Jeffrey Persson. "Development of a pipeline to allow continuous development of software onto hardware : Implementation on a Raspberry Pi to simulate a physical pedal using the Hardware In the Loop method." Thesis, KTH, Hälsoinformatik och logistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296952.

Full text
Abstract:
Saab wants to examine the Hardware In the Loop method as a concept, and what a Hardware In the Loop infrastructure would look like. Hardware In the Loop is based on continuously testing hardware, which is simulated. The software Saab wants to use for the Hardware In the Loop method is Jenkins, a Continuous Integration and Continuous Delivery tool. To simulate the hardware, they want to examine the use of an Application Programming Interface between a Raspberry Pi and the programming language Robot Framework. The reason Saab wants this examined is that they believe this method can improve the rate of testing, the quality of the tests, and thereby the quality of their products. The theory behind Hardware In the Loop, Continuous Integration, and Continuous Delivery is explained in this thesis. The Hardware In the Loop method was implemented on top of the Continuous Integration and Continuous Delivery tool Jenkins, and an Application Programming Interface between the General Purpose Input/Output pins on a Raspberry Pi and Robot Framework was developed. With these implementations done, the Hardware In the Loop method was successfully integrated, with a Raspberry Pi used to simulate the hardware.
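An interface between Robot Framework and the GPIO pins can be sketched as a keyword library, since Robot Framework exposes a library class's public methods as test keywords. The class and method names below are hypothetical, and a simulated backend replaces the real GPIO calls so the sketch runs without hardware.

```python
class SimulatedGpio:
    """Stand-in for the Raspberry Pi GPIO backend so the keyword
    library can be exercised without hardware (hypothetical API)."""
    def __init__(self):
        self.levels = {}
    def write(self, pin, level):
        self.levels[pin] = level
    def read(self, pin):
        return self.levels.get(pin, 0)

class PedalSimulatorLibrary:
    """Robot Framework exposes public methods of a library class as
    keywords, so a test case could call `Press Pedal    17`. The
    keyword names here are hypothetical, not Saab's actual keywords."""
    def __init__(self, gpio):
        self.gpio = gpio
    def press_pedal(self, pin):
        self.gpio.write(pin, 1)
    def release_pedal(self, pin):
        self.gpio.write(pin, 0)
    def pedal_should_be_pressed(self, pin):
        if self.gpio.read(pin) != 1:
            raise AssertionError("pedal on pin %s is not pressed" % pin)

gpio = SimulatedGpio()
lib = PedalSimulatorLibrary(gpio)
lib.press_pedal(17)
lib.pedal_should_be_pressed(17)
```

Swapping `SimulatedGpio` for a class that drives the real GPIO pins is the only change needed to run the same Robot Framework suites against the physical Raspberry Pi.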
APA, Harvard, Vancouver, ISO, and other styles
44

Assaad, Mohamad Ali. "An overview on systems of systems control : general discussions and application to multiple autonomous vehicles." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2466/document.

Full text
Abstract:
This thesis focuses on System of Systems (SoS) control and on how to build adaptable and reliable SoS. This work is part of the Labex MS2T laboratory of excellence on technological SoS development. SoS are complex systems that consist of multiple independent systems working together to achieve a common goal. SoS engineering is an approach that focuses on how to build and design reliable SoS that can adapt to the dynamic environment in which they operate. Given the importance of controlling constituent systems (CS) in order to achieve SoS objectives, the first part of this thesis involved a literature study on SoS control. Some control methods that exist for large-scale systems and multi-agent systems, namely hierarchical, distributed, and decentralized control, may be useful and are used to control SoS. However, these methods are not suitable for controlling an SoS as a whole, because of the independence of its CS; multi-view frameworks are more suitable for this objective. A general framework approach is proposed to model and manage the interactions between CS in an SoS. The second part of our work consisted of contributing to Intelligent Transportation Systems. For this purpose, we proposed the Cooperative Maneuvers Manager for Autonomous Vehicles (CMMAV), a framework that guides the development of cooperative applications in autonomous vehicles. To validate the CMMAV, we developed the Cooperative Lateral Maneuvers Manager (CLMM), an application that enables equipped autonomous vehicles to exchange requests in order to cooperate during overtaking maneuvers on highways. It was validated by formal scenarios and computer simulations, and tested on the autonomous vehicles of the Equipex Robotex at the Heudiasyc laboratory.
APA, Harvard, Vancouver, ISO, and other styles
45

Håkansson, David. "Aerothermal and Kinetic Modelling of a Gas Turbine Dry Low Emission Combustion System." Thesis, KTH, Strömningsmekanik och Teknisk Akustik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298477.

Full text
Abstract:
Growing environmental concerns are driving a large transformation within the energy industry. Within the gas turbine industry, there is a strong push to develop improved modern dry low emission combustion systems. The aim is to enable gas turbines to run on green fuels like hydrogen while still keeping emissions such as NOx down. To design these systems, a thorough understanding of the aerothermal and kinetic processes within the combustion system of a gas turbine is essential. The goal of the thesis was to develop a one-dimensional general network model of the combustion system of the Siemens Energy SGT-700 that could accurately predict pressure losses, mass flows, key temperatures, and emissions. Three models were evaluated, and a code that emulated some aspects of the control system was developed. The models and the code were evaluated and compared to each other and to test data from earlier test campaigns performed on the SGT-700 and SGT-600. Simulations were also carried out with hydrogen as the fuel. In the end, a model of the SGT-700 combustion chamber was developed and delivered to Siemens Energy. The model was verified against test data and against predictions made by other Siemens Energy thermodynamic calculation software for a range of load conditions. The performance of the model when hydrogen was introduced into the fuel mixture was also tested and compared to test data.
APA, Harvard, Vancouver, ISO, and other styles
46

Zabel, Martin, Thomas B. Preußer, Peter Reichel, and Rainer G. Spallek. "SHAP-Secure Hardware Agent Platform." Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200701011.

Full text
Abstract:
This paper presents a novel implementation of an embedded Java microarchitecture for secure, real-time, and multi-threaded applications. Together with support for modern features of object-oriented languages, such as exception handling, automatic garbage collection, and interface types, a general-purpose platform is established which also fits the agent concept. Especially with regard to real-time issues, new techniques have been implemented in our Java microarchitecture, such as integrated stack and thread management for fast context switching, concurrent garbage collection for real-time threads, and autonomous control flows through preemptive round-robin scheduling.
APA, Harvard, Vancouver, ISO, and other styles
47

Johansson, Gustav. "Real-Time Linux Testbench on Raspberry Pi 3 using Xenomai." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235484.

Full text
Abstract:
Test benches are commonly used to simulate events to an embedded system for validation purposes. Microcontrollers can be used to build test benches and, for simple cases, can be programmed bare-metal, i.e. without an Operating System (OS). If the test bench is too complex for a microcontroller, a Real-Time Operating System (RTOS) can be used on more complex hardware. An RTOS has limited functionality in order to guarantee high predictability, whereas a General-Purpose Operating System (GPOS) has a vast number of functionalities but low predictability. The literature study therefore looks into approaches for improving the real-time predictability of Linux, and finds an approach called Xenomai Cobalt to be the optimal solution, considering the target use case and project resources. The Xenomai Cobalt approach was evaluated on a Raspberry Pi (RPi) 3 using its General-Purpose Input/Output (GPIO) pins and a latency test. An application was written using Xenomai's Application Programming Interface (API). The application used the GPIO pins to read from a function generator and to write to an oscilloscope. The measurements from the oscilloscope were then compared to the measurements made by the application. The results showed the measured differences between the RPi 3 and the oscilloscope: reading varied by 66.20 µs and writing by 56.20 µs. The latency test was executed under a stress test, and the worst measured latency was 82 µs. The resulting measured differences were too high for the project requirements. However, the majority of the measurements were much smaller than the worst cases, at 23.52 µs for reading and 34.05 µs for writing. This means the system is better suited for use as a firm real-time system than as a hard real-time system.
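The idea behind a latency test, measuring how late a periodic wakeup fires relative to its deadline, can be sketched in plain Python on a GPOS (the function name is an assumption, and the numbers will be far worse than on a Xenomai kernel).

```python
import time

def sleep_jitter(period_s=0.001, iterations=200):
    """Measure how late each periodic wakeup fires relative to its
    deadline: the quantity a latency test reports. On a GPOS this
    lateness is unbounded in principle; a real-time kernel bounds it."""
    lateness = []
    deadline = time.perf_counter() + period_s
    for _ in range(iterations):
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)      # wait for the next deadline
        lateness.append(time.perf_counter() - deadline)
        deadline += period_s
    return min(lateness), sum(lateness) / len(lateness), max(lateness)

best, avg, worst = sleep_jitter()
```

Running this under load (the stress test above) widens the gap between the average and worst-case lateness, which is exactly the firm-versus-hard real-time distinction the thesis draws.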
APA, Harvard, Vancouver, ISO, and other styles
48

Batista, Nathan Eduardo Ribeiro. "Previsão e simulação em modelos de equilíbrio geral dinâmico." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-18072016-103615/.

Full text
Abstract:
The objective of this work is to study the main characteristics of a family of models that seeks to portray the general-equilibrium dynamics of an economy. Equilibrium in economics is understood as a situation in which no economic agent has any incentive to change its chosen strategy. This work is not concerned with demonstrating the existence, uniqueness, and stability of equilibrium, but rather with understanding how exogenous changes affect it. Briefly, general equilibrium can be understood as the simultaneous-equilibrium analysis of all markets in an economy: all markets must clear, so that no change in any specific market would reward a given agent. This contrasts with the partial-equilibrium perspective, in which only part of the economy is considered, taking what happens in the other markets as given. General-equilibrium models involving dynamics and stochastic shocks are known by the acronym DSGE (Dynamic Stochastic General Equilibrium). In reality this is a family of models, comprising a wide range of models with different levels of sophistication. Given the general character of the modelling, this exercise studies the DSGE model proposed by Smets-Wouters (2003). The study presents a description of the model, its theoretical foundations, and a simulation, in order to see which economic-policy directions the model allows us to understand. The results obtained using data from the US economy allow us to infer the directions of output and consumption in response to an innovation shock, and the adopted model makes it possible to estimate a direction for the trajectory of GDP over the coming years.
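The innovation shock discussed above can be illustrated by the impulse response of a log-linearised AR(1) technology process, the standard exogenous driver in DSGE models (a toy sketch, not the full Smets-Wouters system; the persistence parameter is an assumed value).

```python
def impulse_response(rho=0.9, horizon=20, shock=1.0):
    """Impulse response of a log-linearised AR(1) technology process
    a_t = rho * a_(t-1) + eps_t after a one-time unit innovation:
    the exogenous driver behind output and consumption responses."""
    path, level = [], 0.0
    for t in range(horizon):
        level = rho * level + (shock if t == 0 else 0.0)
        path.append(level)
    return path

irf = impulse_response()
```

In the full model, the responses of output and consumption are obtained by feeding this process through the solved linear state-space system; the geometric decay shown here is why the effect of a single shock fades over the forecast horizon.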
APA, Harvard, Vancouver, ISO, and other styles
49

Lambert, Jason. "Parallélisation de simulations interactives de champs ultrasonores pour le contrôle non destructif." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112125/document.

Full text
Abstract:
The Non-Destructive Testing field increasingly uses simulation. It is used at every step of the control process of an industrial part, from speeding up control development to helping experts understand results. During this thesis, a simulation tool dedicated to the fast computation of the ultrasonic field radiated by a phased-array probe into an isotropic specimen was developed; its performance enables interactive use. To benefit from commonly available parallel architectures, a regular model (aimed at removing divergent branching) derived from the generic CIVA model was developed. First, a reference implementation was written to validate this model against CIVA results and to analyse its performance behaviour before optimization. The resulting code was then ported and optimized for three kinds of parallel architectures commonly available in workstations: general-purpose processors (GPP), manycore coprocessors (Intel MIC), and graphics processing units (NVIDIA GPU). On the GPP and the MIC, the algorithm was reorganized and implemented to benefit from both available levels of parallelism, multithreading and vector instructions. On the GPU, the successive steps of the field computation were divided into a series of CUDA kernels. Moreover, libraries dedicated to each architecture, Intel MKL on the GPP and MIC and NVIDIA cuFFT on the GPU, were used to speed up the Fast Fourier Transforms. The performance and hardware fit of the resulting codes were thoroughly studied for each architecture. On several realistic inspection configurations, interactive performance was reached, and perspectives for addressing more complex configurations are drawn. Finally, the integration and industrialization of this code in the commercial NDT platform CIVA are discussed.
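The regular, branch-free per-element arithmetic that makes field computation map well onto vector units and GPU threads can be illustrated with a classic phased-array focusing delay law; the array geometry and sound speed below are illustrative assumptions, not values from the thesis.

```python
import math

def focusing_delays(n=16, pitch=0.0006, focus_depth=0.03, c=5900.0):
    """Firing delays that focus a linear phased array at a point on
    its axis (element count, pitch, depth, and longitudinal sound
    speed are illustrative). Every element runs the same branch-free
    computation: the regular structure that vectorizes well."""
    xs = [(i - (n - 1) / 2.0) * pitch for i in range(n)]   # element centres
    tof = [math.hypot(x, focus_depth) / c for x in xs]     # time of flight
    t_max = max(tof)
    return [t_max - t for t in tof]   # farthest elements fire first

delays = focusing_delays()
```

The full field computation evaluates contributions like these over every point of a sampling grid, which is why it decomposes naturally into CUDA kernels and vectorized loops.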
APA, Harvard, Vancouver, ISO, and other styles
50

Richard, Edouard. "Étude et réalisation d’un nouveau système de référence spatio-temporel basé sur des liens inter-satellites dans une constellation GNSS." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066545/document.

Full text
Abstract:
The accuracy reached by Global Navigation Satellite Systems (GNSS) is critically important for many scientific applications such as geodetic-point or satellite positioning, space-time reference frame realization, clock synchronization, or the study of the links themselves to probe the atmosphere. One option for improving system accuracy is the use of inter-satellite pseudo-range measurements, so-called inter-satellite links (ISL). Several studies have shown the qualitative interest of ISL but do not make it possible to efficiently measure the quantitative impact of this new technology on space-time positioning. In this thesis, we present a differential study between a standard system (with standard satellite-to-ground links only) and a system augmented by ISL. The two systems are compared under the same hypotheses and simulated within the same software. The software is made of two distinct and independent parts: a simulation that generates the noisy pseudo-ranges, and an analysis that uses a nonlinear adjustment procedure to recover the initial parameters of the simulation and compute quantitative error budgets. For a given application, the quantitative comparison between the error budgets of the two systems allows us to highlight their relative merits. Our results are a further step in characterizing the interest of ISL and should prove useful for the design of future satellite navigation systems.
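The simulate-then-adjust pipeline can be illustrated in miniature: generate noisy ranges to a known truth, then recover the parameters with a nonlinear (Gauss-Newton) adjustment. The 2-D geometry and noise level below are illustrative assumptions, far simpler than the orbital setting of the thesis.

```python
import math
import random

def adjust_position(anchors, ranges, guess=(0.0, 0.0), iters=20):
    """Gauss-Newton adjustment recovering a 2-D position from noisy
    ranges: a miniature analogue of the simulate-then-adjust pipeline
    (pseudo-ranges in, estimated parameters out)."""
    x, y = guess
    for _ in range(iters):
        # Accumulate the 2x2 normal equations (J^T J) d = J^T r by hand
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (ax, ay), rho in zip(anchors, ranges):
            d = math.hypot(x - ax, y - ay)
            jx, jy = (x - ax) / d, (y - ay) / d   # Jacobian row of d(x, y)
            res = rho - d                          # measurement residual
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * res; b2 += jy * res
        det = a11 * a22 - a12 * a12
        x += (a22 * b1 - a12 * b2) / det
        y += (a11 * b2 - a12 * b1) / det
    return x, y

random.seed(1)
anchors = [(0.0, 10.0), (10.0, 0.0), (10.0, 10.0), (-10.0, 5.0)]
truth = (3.0, 4.0)
ranges = [math.hypot(truth[0] - ax, truth[1] - ay) + random.gauss(0.0, 0.01)
          for ax, ay in anchors]
estimate = adjust_position(anchors, ranges, guess=(1.0, 1.0))
```

Comparing the spread of such estimates with and without extra measurements (the role the inter-satellite links play) is the differential error-budget idea in its simplest form.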
APA, Harvard, Vancouver, ISO, and other styles