Dissertations / Theses on the topic 'Computation speedup'

Consult the top 50 dissertations / theses for your research on the topic 'Computation speedup.'

1

Terner, Olof, and Hedbjörk Villhelm Urpi. "Quantum Computational Speedup For The Minesweeper Problem." Thesis, Uppsala universitet, Teoretisk fysik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-325945.

Abstract:
Quantum computing is a young but intriguing field of science. It combines quantum mechanics with information theory and computer science to potentially solve certain formerly computationally expensive tasks more efficiently. Classical computers are based on bits that can take on the value zero or one. The values are distinguished by voltage differences in transistors. Quantum computers are instead based on quantum bits, or qubits, that are represented physically by something that exhibits quantum properties, such as electrons. Qubits also take on the value zero or one, which could correspond to spin up and spin down of an electron. However, qubits can also be in a superposition state between the quantum states corresponding to the values zero and one. This property is what enables quantum computers to outperform classical computers at certain tasks. One of these tasks is searching through an unstructured database. Whereas a classical computer in the worst case has to search through the whole database in order to find the sought element, i.e. the computation time is proportional to the size of the problem, it can be shown that a quantum computer can find the solution in a time proportional to the square root of the size of the problem. This report aims to illustrate the advantages of quantum computing by explicitly solving the classic Windows game Minesweeper, which can be reduced to a problem resembling unstructured database search. It is shown that solving Minesweeper with a quantum algorithm gives a quadratic speedup compared to solving it with a classical algorithm. The report also covers introductory material on quantum mechanics, quantum gates, Grover's algorithm and complexity classes, which is necessary background for understanding how Minesweeper can be solved on a quantum computer.
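
The quadratic speedup at the heart of this thesis can be made concrete as a query count: classical unstructured search needs on the order of N oracle queries in the worst case, while Grover's algorithm needs roughly (pi/4)*sqrt(N) iterations. A minimal sketch of that comparison (the problem size and oracle are illustrative assumptions, not taken from the thesis):

```python
import math
import random

def classical_search(oracle, n_items):
    """Worst case: every item must be queried once."""
    queries = 0
    for i in range(n_items):
        queries += 1
        if oracle(i):
            return i, queries
    return None, queries

def grover_query_count(n_items):
    """Grover's algorithm finds a marked item with high probability
    after roughly (pi/4) * sqrt(N) oracle calls."""
    return math.ceil(math.pi / 4 * math.sqrt(n_items))

n = 1_000_000
target = random.randrange(n)
oracle = lambda i: i == target

_, classical_q = classical_search(oracle, n)
print(f"classical worst-case queries: ~{n}")
print(f"queries used in this run:     {classical_q}")
print(f"Grover iterations needed:     ~{grover_query_count(n)}")  # ~786
```
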
2

Mezher, Rawad. "Randomness for quantum information processing." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS244.pdf.

Abstract:
This thesis is based on the generation and understanding of particular kinds of random unitary ensembles. These ensembles are useful for many applications in physics and quantum information, such as randomized benchmarking and black hole physics, as well as for demonstrating a so-called "quantum speedup". On the one hand, we explore how to generate a particular form of random evolution called an epsilon-approximate unitary t-design. On the other hand, we show how this can also yield instances of quantum speedup, where classical computers cannot simulate the randomness in polynomial time. We also show that this remains possible in noisy, realistic settings.
This thesis is focused on the generation and understanding of particular kinds of quantum randomness. Randomness is useful for many tasks in physics and information processing, from randomized benchmarking to black hole physics, as well as demonstrating a so-called quantum speedup, and many other applications. On the one hand we explore how to generate a particular form of random evolution known as a t-design. On the other we show how this can also give instances of quantum speedup, where classical computers cannot simulate the randomness efficiently. We also show that this is still possible in noisy, realistic settings. More specifically, this thesis is centered around three main topics. The first of these is the generation of epsilon-approximate unitary t-designs. In this direction, we first show that non-adaptive, fixed measurements on a graph state composed of poly(n, t, log(1/epsilon)) qubits, and with a regular structure (that of a brickwork state), effectively give rise to a random unitary ensemble which is an epsilon-approximate t-design. This work is presented in Chapter 3. Before this work, it was known that non-adaptive fixed XY measurements on a graph state give rise to unitary t-designs; however, the graph states used there were of complicated structure and were therefore not natural candidates for measurement-based quantum computing (MBQC), and the circuits to make them were complicated. The novelty in our work is showing that t-designs can be generated by fixed, non-adaptive measurements on graph states whose underlying graphs are regular 2D lattices. These graph states are universal resources for MBQC. Therefore, our result allows the natural integration of unitary t-designs, which provide a notion of quantum pseudorandomness that is very useful in quantum algorithms, into quantum algorithms running in MBQC. Moreover, in the circuit picture this construction for t-designs may be viewed as a constant-depth quantum circuit, albeit with a polynomial number of ancillas. We then provide new constructions of epsilon-approximate unitary t-designs, both in the circuit model and in MBQC, which are based on a relaxation of technical requirements in previous constructions. These constructions are found in Chapters 4 and 5.
3

Pandya, Ajay Kirit. "Performance of multithreaded computations on high-speed networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ32212.pdf.

4

Chitty, Darren M. "Improving the computational speed of genetic programming." Thesis, University of Bristol, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.686812.

Abstract:
Genetic Programming (GP) is well known as a computationally intensive technique, especially when considering regression or classification tasks with large datasets. Consequently, there has been considerable work conducted into improving the computational speed of GP. Recently, this has concentrated on exploiting highly parallel architectures in the form of Graphics Processing Units (GPUs). However, the reported speeds fall considerably short of the computational capabilities of these GPUs. This thesis investigates this issue, seeking to considerably improve the computational speed of GP. Indeed, this thesis will demonstrate that considerable improvements in the speed of GP can be achieved when fully exploiting a parallel Central Processing Unit (CPU), exceeding the performance of the latest GPU implementations. This is achieved by recognising that GP is as much a memory-bound technique as a compute-bound technique. By adopting a two-dimensional stack approach, better exploitation of memory resources is achieved in addition to reducing interpreter overheads. This approach is applied to CPU and GPU implementations and compares favorably with compiled versions of GP. The second aspect of this thesis demonstrates that although considerable performance gains can be achieved using parallel hardware, the role of efficiency within GP should not be forgotten. Efficiency savings can boost the computational speed of parallel GP significantly. Two methods are considered: parsimony pressure measures and efficient tournament selection. The second efficiency technique enables a CPU implementation of GP to outperform a GPU implementation for classification-type tasks even though the CPU has only a tenth of the computational power. Finally, both CPU and GPU are combined for ultimate performance. Speedups of more than a thousandfold over a basic sequential version of GP are achieved, and threefold over the best GPU implementation from the literature. Consequently, this speedup increases the usefulness of GP as a machine learning technique.
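
The two-dimensional stack idea can be illustrated in miniature: one interpreter pass evaluates a postfix GP program against every dataset row at once, so each stack slot holds a vector and the per-token dispatch overhead is amortised over the whole dataset. A rough sketch of the general approach, not Chitty's actual implementation:

```python
import numpy as np

def eval_postfix_gp(program, X):
    """Evaluate a postfix GP program over all dataset rows at once.
    The stack is two-dimensional: each slot holds one value per row,
    so a single interpreter dispatch covers the whole dataset.
    program: terminals ('x0', 'x1', ..., floats) and ops ('+','-','*')."""
    stack = []
    for tok in program:
        if tok == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif tok == "-":
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif tok == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif isinstance(tok, str) and tok.startswith("x"):
            stack.append(X[:, int(tok[1:])])         # feature column
        else:
            stack.append(np.full(X.shape[0], tok))   # constant
    return stack.pop()

X = np.random.rand(10_000, 2)
# (x0 + x1) * 0.5 in postfix notation
y = eval_postfix_gp(["x0", "x1", "+", 0.5, "*"], X)
print(y.shape)  # (10000,)
```
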
5

Yousefi, Mojir Kayran. "A Computational Model for Optimal Dimensional Speed on New High-Speed Lines." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37230.

Abstract:
High-Speed Lines (HSL) in rail passenger services are regarded as among the most significant projects in many countries compared to other projects in the transportation area. According to the EU (European Council Directive 96/48/EC, 2004), high-speed lines are either newly built lines for speeds of 250 km/h or greater, or in some cases upgraded traditional lines. At the beginning of 2008, there were 10,000 km of new high-speed line in operation and, taking into account the upgraded conventional lines, 20,000 km of line in total worldwide. The network is growing fast because the demand for short travelling times and comfort is increasing rapidly. Since HSL projects require a lot of capital, it is becoming more important for governments and companies to estimate and calculate the total costs and benefits of building, maintaining, and operating an HSL so that they can make better and more reliable decisions when choosing between projects. There are many parameters which affect the total costs and benefits of an HSL. The most important parameter is the dimensional speed, which has a great influence on other parameters. For example, tunnels need a larger cross section for higher speeds, which increases construction costs. More importantly, higher speed also influences the number of passengers attracted from other modes of transport. Due to the large number of speed-dependent parameters, it is not a simple task to estimate an optimal dimensional speed by calculating the costs and benefits of an HSL manually. It is also difficult to do the analysis for different speeds, as speed changes many other relevant parameters. As a matter of fact, there is a need for a computational model to calculate the costs and benefits for different speeds. Based on the computational model, it is possible to define different scenarios and compare them to each other to see what the potentially optimal speed would be for a new HSL project. Besides the optimal speed, it is also possible to analyze and find the effects of two other important parameters, fare and frequency, by cost-benefit analysis (CBA). The probability model used in the calculation is based on an elasticity model, and input parameters are subject to flexibility to calibrate the model appropriately. The Optimal High-Speed Line (OHSL) tool is developed to make the model accessible for users.
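
At its core, such a model is a cost-benefit sweep over candidate design speeds. The sketch below shows only the structure of that search; the cost and demand relations are placeholder assumptions for illustration, not the OHSL tool's actual functions:

```python
import numpy as np

speeds = np.arange(250, 401, 10)            # candidate dimensional speeds [km/h]

def construction_cost(v):
    # Placeholder: cost grows with speed (larger tunnel cross sections, etc.)
    return 5.0e9 + 2.0e7 * (v - 250) ** 1.3

def annual_benefit(v):
    # Placeholder: shorter travel time attracts passengers from other modes
    ridership = 1.0e6 * (1 + 0.01 * (v - 250))
    return 400.0 * ridership                # assumed value per trip

def npv(v, years=40, rate=0.035):
    discount = sum(1 / (1 + rate) ** t for t in range(1, years + 1))
    return annual_benefit(v) * discount - construction_cost(v)

best = max(speeds, key=npv)
print(f"optimal dimensional speed under these assumptions: {best} km/h")
```
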
6

Brown, Kieron David. "Computational analysis of low speed axial flow rotors." Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389158.

7

Zhu, Yu Ping. "Computational study of shock control at transonic speed." Thesis, Cranfield University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323930.

8

Yildirim, Erkan. "Computational study of high speed blade-vortex interaction." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/10994.

Abstract:
This thesis presents inviscid compressible simulations of the orthogonal blade-vortex interaction. A numerical model of the interaction between the tail rotor of a helicopter and the trailing vortex system formed by the main rotor blades is assumed. The study takes a 'building-block' approach to investigating this problem. Firstly, the impulsive instantaneous blocking of the axial core flow by a flat plate is considered. In the second step, the three-dimensional gradual cutting of the vortex by a sharp flat plate that moves at a finite speed through the vortex is performed. Finally, the chopping of the vortex by a blunt leading-edge aerofoil, which incorporates both the blocking effect and also the stretching and distortion of the vortex lines, is studied. The solutions reveal that the compressibility effects are strong when the axial core flow of the vortex is impulsively blocked. This generates a weak shock-expansion structure propagating along the vortex core on opposite sides of the cutting surface. The shock and expansion waves are identified as the prominent acoustic signatures in the interaction. In a simplified, two-dimensional axisymmetric model, the physical evolution of the vortex, including the evolution of the complex vortical structures that control the vortex core size near the cutting surface, is studied. Furthermore, the three-dimensional simulations revealed that there are secondary and tertiary noise sources due to compressibility effects at the blade leading edge and due to the shock-vortex interaction taking place on the blade, which is exposed to a transonic free-stream flow.
9

Rohrseitz, Nicola. "The computation of linear speed for visual flight control in Drosophila melanogaster /." Zürich : ETH, 2009. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=18165.

10

Lord, Steven John. "Computational and experimental study of hydraulic shock." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265850.

11

Kavuri, Kranthi. "Investigation of the validity of the ASTM standard for computation of International Friction Index." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002780.

12

Luo, Jun 1976. "Computational study of rotating stall in high-speed compressor." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/82773.

13

Colonia, Simone. "Multiphysics computational modelling of high-speed partially-rarefied flows." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/3000640/.

Abstract:
Hypersonic flows of practical importance often involve flow fields having continuum and rarefied regions. It is well known that Boltzmann equation based methods can provide more physically accurate results in flows having rarefied and non-equilibrium regions than continuum flow models. However, these methods are extremely computationally expensive in near-equilibrium regions, which prohibits their application to practical problems with complex geometries and large domains where continuum and rarefied regions coexist. On the other hand, Navier-Stokes (or Euler) based methods are computationally efficient in simulating a wide variety of flow problems, but the use of continuum theories for flow problems involving rarefied gas or very small length scales produces inaccurate results due to the breakdown of the continuum assumption and the occurrence of strong thermal non-equilibrium. A practical approach for solving flow fields spanning continuum to rarefied regions is to develop numerical methods combining approaches able to compute the continuum regime and/or the rarefied (or thermal non-equilibrium) regime. The aim of this thesis is to investigate and develop new methods for the calculation of hypersonic flow fields that contain both continuum and rarefied flow regions. The first part of the work is dedicated to the continuum regime. Among the different numerical inviscid flux functions available in the literature, the AUSM family has been shown to be capable of solving, to a good accuracy, flow fields across a wide range of Mach regimes, including high-speed flows. For this reason an implicit formulation of the AUSM+ and AUSM+up schemes, with a Jacobian defined fully analytically, has been implemented in the Helicopter Multi-Block CFD code (HMB2), developed at the University of Liverpool, to predict continuum high-speed flows. The original forms of the schemes lead to different branches in the computational algorithm for the Jacobian, since they do not guarantee that the fluxes are continuously differentiable functions of the primitive variables. Thus, a novel formulation of the AUSM+ and AUSM+up schemes is proposed in Chapter 2. Here, a blending is introduced by means of parametric sigmoidal functions at the points of discontinuity in the schemes' formulations. Predictions for a wide range of test cases obtained employing the proposed formulation are compared with results available in the literature in Chapter 3 to show that the reliability of the schemes has been preserved. The work then focuses on partially-rarefied high-speed flows. At the University of Liverpool, these flows are simulated using the hybrid approach available in the Multi-Physics Code (MΦC), where a discrete velocity method for kinetic Boltzmann equations is coupled with a traditional Navier-Stokes solver. Firstly, the discrete velocity method has been improved with the implementation of kinetic models for diatomic gases in the framework. A validation of the correctness of the implemented models is discussed in Chapter 6. However, employing a discrete velocity method in a hybrid simulation leads to high computational and memory costs. In this context, gas-kinetic schemes have been identified by the author, in the related literature, as efficient approaches, relative to discrete velocity methods, capable of modelling complex gas flows with moderate rarefaction effects but with significant thermal non-equilibrium. Thus, two gas-kinetic schemes, analytically defined on the basis of the Chapman-Enskog expansion of the non-dimensional Shakhov and Rykov models, are proposed in Chapter 5. Compared with similar gas-kinetic schemes available in the literature, the presented schemes differ in the approach employed to evaluate the terms of the Chapman-Enskog solutions and in the kinetic models used as mathematical foundations of the schemes. In Chapters 6 and 7 the scheme is tested for various cases and Mach numbers, including complex 3D flows, proving to be a viable way to improve the performance of hybrid simulations, maintaining an acceptable level of reliability, if used in place of more complex methods for weakly rarefied flows. Finally, Chapter 8 includes a summary of the findings as well as suggestions for future work.
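
The sigmoidal blending described for Chapter 2 can be sketched generically: two flux branches that meet with a slope discontinuity are replaced by a smooth convex combination, so the flux (and hence the analytic Jacobian) becomes a single continuously differentiable expression. The branch functions below mirror the textbook first-order split-Mach polynomials, but the steepness parameter is an illustrative assumption:

```python
import numpy as np

def sigmoid(x, eps=0.05):
    """Smooth switch: ~0 for x << -eps, ~1 for x >> eps."""
    return 1.0 / (1.0 + np.exp(-x / eps))

def branch_supersonic(mach):   # M_plus = M for M > 1
    return mach

def branch_subsonic(mach):     # M_plus = (M + 1)^2 / 4 for |M| <= 1
    return 0.25 * (mach + 1.0) ** 2

def blended_split_mach(mach):
    """C1-smooth blend of the two branches around M = 1, removing the
    if/else switch that breaks differentiability of the flux."""
    w = sigmoid(mach - 1.0)
    return w * branch_supersonic(mach) + (1.0 - w) * branch_subsonic(mach)

m = np.linspace(0.0, 2.0, 9)
print(blended_split_mach(m))   # smooth transition through M = 1
```
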
14

Hale, Nicholas. "On the use of conformal maps to speed up numerical computations." Thesis, University of Oxford, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543483.

15

He, Jincan, and Sundhanva Bhatt. "Mission Optimized Speed Control." Thesis, KTH, Fordonsdynamik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-223334.

Abstract:
Transportation underlines the vehicle industry's critical role in a country's economic future. The amount of goods moved, specifically by trucks, is only expected to increase in the near future. This work attempts to tackle the problem of optimizing fuel consumption in Volvo trucks when there are hard constraints on the delivery time and speed limits. Knowledge of the truck, such as position, state, configuration etc., along with the complete route information of the transport mission, is used for fuel optimization. Advancements in computation, storage, and communication on cloud-based systems have made it possible to easily incorporate such systems in assisting a modern fleet. In this work, an algorithm is developed in a cloud-based system to compute a speed plan for the complete mission for achieving fuel minimization. This computation is decoupled from the local control operations on the truck, such as prediction control, safety and cruise control, and serves as a guide for the truck driver to reach the destination on time while consuming minimum fuel. To achieve fuel minimization under hard constraints on delivery (or arrival) time and speed limits, a non-linear optimization problem is formulated for the high-fidelity model estimated from real-time drive cycles. This optimization problem is solved using a nonlinear programming solver in Matlab. The optimal policy was tested on two drive cycles provided by Volvo, in a setting where the mission demands hard constraints on travel time and speed limits and there are no traffic uncertainties (deterministic). The policy was compared with two different scenarios: against a cruise controller running at a constant set speed throughout the mission, where no significant fuel savings were observed; and against the maximum possible fuel consumption achieved without the help of an optimal speed plan (worst case), where a notable improvement in fuel saving was seen. In a real-world scenario, a transport mission is interrupted by uncertainties such as traffic flow, road blocks, re-routing, etc. To this end, a stochastic optimization algorithm is proposed to deal with the uncertainties, modeled using historical traffic flow data. Possible solution methodologies are suggested to tackle this stochastic optimization problem.
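
The deterministic problem has the structure "minimise fuel over segment speeds, subject to an arrival-time constraint and speed limits". A minimal sketch with a nonlinear programming solver (SciPy here in place of the thesis's Matlab solver, and with an assumed placeholder fuel model rather than the estimated high-fidelity one):

```python
import numpy as np
from scipy.optimize import minimize

seg_len = np.array([20.0, 35.0, 25.0])      # km per route segment
v_min, v_max = 60.0, 90.0                   # speed limits [km/h]
t_max = 1.05                                # required arrival time [h]

def fuel(v):
    # Placeholder model: consumption per km rises quadratically with speed
    return np.sum(seg_len * (0.05 + 1.5e-5 * v ** 2))

def time_slack(v):
    return t_max - np.sum(seg_len / v)      # >= 0 when the truck is on time

res = minimize(
    fuel,
    x0=np.full(3, 80.0),
    bounds=[(v_min, v_max)] * 3,
    constraints=[{"type": "ineq", "fun": time_slack}],
)
print(res.x, fuel(res.x))                   # optimal per-segment speed plan
```
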
16

Arasanipalai, Sriram Sharan. "Two-equation model computations of high-speed (Ma=2.25, 7.2), turbulent boundary layers." Thesis, [College Station, Tex. : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3186.

17

Costen, Fumie. "High speed computational modeling in the application of UWB signals." 京都大学 (Kyoto University), 2005. http://hdl.handle.net/2433/144793.

18

Robbins, David James. "Development of computational fluid dynamics methods for low-speed flows." Thesis, University of Cambridge, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708407.

19

Feszty, Daniel. "Numerical simulation and analysis of high-speed unsteady spiked body flows." Thesis, University of Glasgow, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368552.

20

Faiz, J. "Computational methods for the design of multi-tooth-per-pole switched reluctance motors." Thesis, University of Newcastle Upon Tyne, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383963.

21

Lim, Teck-Bin. "A unified computational fluid dynamics-aeroacoustics analysis of high speed propeller." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/12064.

22

Byrne, Michael Dwyer. "A computational theory of working memory : speed, parallelism, activation, and noise." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/29797.

23

Motaman, Shahed. "High-speed imaging and computational modelling of close-coupled gas atomization." Thesis, University of Leeds, 2013. http://etheses.whiterose.ac.uk/6430/.

Abstract:
The gas atomization process, especially Close-Coupled Gas Atomization (CCGA), is a very efficient processing method to produce ultrafine, spheroidised metal powders. In this process, a high-pressure gas jet is used to disintegrate the molten metal stream into spherical powders. Due to the hydrodynamic and thermal interaction between the high-pressure gas jet and the molten metal stream, especially near the melt delivery nozzle, this technique is very complex and challenging for atomization industries. Melt delivery nozzle design is one of the key factors controlling powder properties. The optical Schlieren technique and an analogue water atomizer, along with Computational Fluid Dynamics (CFD) numerical methods, are practical tools for observing the single- or two-phase flow of gas-metal interaction during CCGA. This research is focused on optical Schlieren and numerical CFD techniques to observe single-phase gas flow behaviour for different melt nozzle tip designs and gas die profiles. The CFD numerical results are validated against the experimental Schlieren test results. The effect of melt tip design on the open-to-closed-wake condition near the melt nozzle was investigated. Comparing the CFD velocity fields and velocity streamlines of different nozzle designs at different atomization gas pressures helped to propose a new hypothesis of how the open-to-closed-wake condition occurs for different nozzle tip designs. In addition, the flow separation problem around the melt nozzle for two different gas die systems was investigated. The results showed there are two major mechanisms for this phenomenon, which depend on the gas die system set-up, the melt nozzle tip protrusion length and the mismatch angle of the external nozzle wall to the gas jet direction.
24

Dawes, A. S. "Natural co-ordinates and high speed flows : a numerical method for reactive gases." Thesis, Cranfield University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333184.

25

Li, Qiang. "Effects of Adaptive Discretization on Numerical Computation using Meshless Method with Live-object Handling Applications." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14480.

Abstract:
The finite element method (FEM) has difficulty solving certain problems where an adaptive mesh is needed. Motivated by two engineering problems in a live-object handling project, this research focuses on a new computational method called the meshless method (MLM). This method is built upon the same theoretical framework as FEM but needs no mesh. Consequently, the computation becomes more stable and an adaptive computational scheme becomes easier to develop. In this research, we investigate practical issues related to the MLM and develop an adaptive algorithm to automatically insert additional nodes and improve computational accuracy. The study has been conducted in the context of two engineering problems: magnetic field computation and large deformation contact. First, we investigate the effect of two discretization methods (strong-form and weak-form) in MLM for solving linear magnetic field problems. Special techniques for handling the discontinuity boundary condition at material interfaces are proposed in both discretization methods to improve the computational accuracy. Next, we develop an adaptive computational scheme in MLM that is comprised of an error estimation algorithm, a nodal insertion scheme and a numerical integration scheme. As a more general approach, this method can automatically locate the large-error region around the material interface and insert nodes accordingly to reduce the error. We further extend the adaptive method to solve nonlinear large deformation contact problems. With the ability to adaptively insert nodes during the computation, the developed method is capable of using fewer nodes for the initial computation and thus effectively improves the computational efficiency. Engineering applications of the developed methods have been demonstrated on two practical engineering problems. In the first problem, the MLM has been utilized to simulate the dynamic response of a non-contact mechanical-magnetic actuator for optimizing the design of the actuator. In the second problem, the contact between a flexible finger and a live poultry product has been analyzed using MLM. These applications show the developed method can be applied to a broad spectrum of engineering applications where an adaptive mesh is needed.
26

Carrier, Alain. "Computational investigation of low speed flow over low aspect ratio aircraft configurations." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA306662.

27

Manga, Kranti Kiran. "Computational method for solving spoke dynamics on high speed rolling Tweel®." Connect to this title online, 2008. http://etd.lib.clemson.edu/documents/1211389887/.

28

Ashcroft, Graham Ben. "A computational and experimental investigation into the aeroacoustics of low speed flows." Thesis, University of Southampton, 2004. https://eprints.soton.ac.uk/47065/.

Abstract:
The noise produced by low Mach number (M ≤ 0.4) laminar and turbulent flows is studied using computational and experimental techniques. The emphasis is on the development and application of numerical methods to further the understanding of noise generation and far field radiation. Numerical simulations are performed to investigate the tonal noise radiated by two- and three-dimensional cavities submerged in low-speed turbulent and laminar flows. A numerical approach is developed that combines near field flow computations with an integral radiation model to enable the far field signal to be evaluated without the need to directly resolve the propagation of the acoustic waves to the far field. Two basic geometries are employed in these investigations: a plane rectangular cavity and a rectangular cavity with a lip. Results for the two geometries show good agreement with available experimental data, and highlight the sensitivity of the amplitude and directivity of the radiated sound to geometry, flow speed and the properties of the incoming boundary layer. The cavity with a lip is shown to behave as a Helmholtz resonator. The plane cavities are characterized by the more familiar Rossiter modes. Both geometries are characterized by intense near field oscillations and strong noise radiation. To quantify the effects of three-dimensional phenomena on the generation and radiation of sound, a fully three-dimensional simulation is performed. The Navier-Stokes equations are solved directly using an optimized prefactored compact scheme for spatial discretization. Results are compared with those from a two-dimensional simulation and the effects of the three-dimensional phenomena are discussed. Finally, wind tunnel tests are performed to quantify the effects of geometry and flow speed on the velocity and pressure fields within a plane rectangular cavity. Velocity measurements are made using the Laser Doppler Anemometry and Particle Image Velocimetry techniques. Instantaneous and statistical data are employed to probe the flows. Although coherent vortical structures are found to characterize the shear layer, their intermittent nature prevents self-sustaining oscillations developing and consequently the pressure field is broadband in nature.
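
For the plane cavities, the Rossiter modes mentioned above follow a well-known semi-empirical frequency formula, f_m = (U/L)(m - alpha)/(M + 1/kappa). A quick calculation with the commonly quoted constants alpha = 0.25 and kappa = 0.57 (illustrative values, not taken from the thesis):

```python
def rossiter_frequency(m, U, L, M, alpha=0.25, kappa=0.57):
    """Semi-empirical Rossiter formula for cavity tone mode m:
    f_m = (U / L) * (m - alpha) / (M + 1/kappa)."""
    return (U / L) * (m - alpha) / (M + 1.0 / kappa)

U = 100.0   # freestream velocity [m/s], roughly M = 0.3 at sea level
L = 0.05    # cavity length [m]
M = 0.3
for m in (1, 2, 3):
    print(f"mode {m}: {rossiter_frequency(m, U, L, M):.0f} Hz")
```
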
29

Kövamees, Gustav, and Sanna Penton. "A Statistical Study of How Different Factors and the Availability of High-Speed Internet Affect Housing Prices." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209789.

Abstract:
This paper presents an empirical study of the impact of different factors on real estate sales prices in the countryside of Sweden. Further, this thesis analyzes the value of broadband to Swedish households, examining whether access to fiber is correlated with higher real estate sales prices. The research goal was to find which three factors affect the sales price the most, and further to determine whether consumers in Sweden are willing to pay more for real estate located in areas where fiber broadband access is available than for a property that does not offer this amenity. Using data on real estate sales during 2015 obtained from Valueguard, and information regarding broadband access from the Swedish Post and Telecom Authority, a hedonic pricing model from real estate economics was applied. In order to evaluate how different factors influence real estate prices, the sales statistics were analyzed using the mathematical method of multiple linear regression. One conclusion drawn from this thesis is that it is possible to explain approximately 66% of the price of a single-family residence located in the Swedish countryside. The value of a property in the countryside depends most on where it is located, both in terms of municipality and in terms of how central and close to water it is situated within a smaller community. Results from this thesis also suggest that fiber availability may indeed have a positive effect on real estate prices, increasing them by 2.8% to 12% as the presence of fiber-based broadband in a neighborhood increases.
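
The hedonic approach reduces to a multiple linear regression of (log) price on property attributes plus a fiber-availability indicator. A minimal sketch on synthetic data (the variables and the baked-in premium are illustrative assumptions, not the Valueguard dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
area  = rng.uniform(60, 200, n)         # living area [m^2]
dist  = rng.uniform(0, 30, n)           # distance to town centre [km]
fiber = rng.integers(0, 2, n)           # 1 if fiber broadband is available

# Synthetic log-price with a ~5% fiber premium baked in
log_price = (13.0 + 0.006 * area - 0.01 * dist + 0.05 * fiber
             + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), area, dist, fiber])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(f"estimated fiber premium: {100 * (np.exp(beta[3]) - 1):.1f}%")
```
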
30

Fuller, Eric James. "Experimental and computational investigation of helium injection into air at supersonic and hypersonic speeds." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/39977.

Abstract:
Experiments were performed with two different helium injector models at different injector transverse and yaw angles in order to determine the mixing rate and core penetration of the injectant and the flow field total pressure losses when gaseous injection occurs into a supersonic freestream. Tested in the Virginia Tech supersonic tunnel, with a freestream Mach number of 3.0 and conditions corresponding to a freestream Reynolds number of 5.0 x 10⁷/m, was a single, sonic, 5X underexpanded helium jet at a downstream angle of 30° relative to the freestream. This injector was rotated from 0° to -28° to test the effects of injector yaw. The second model was an array of three supersonic, 5X underexpanded helium injectors with an exit Mach number of 1.7 and a transverse angle of 15°. This model was tested in the NASA Langley Mach 6.0 High Reynolds Number tunnel, with freestream conditions corresponding to a Reynolds number of 5.4 x 10⁷/m. The injector array was tested at yaw angles of 0° and -15°. Surface flow visualization showed that significant flow asymmetries were produced by injector yaw. Nanosecond-exposure shadowgraph pictures were taken, showing the gaseous injection plume to be unsteady, and further studies demonstrated this unsteadiness was related to shock waves orthogonal to the injectant bow shock that were generated at a frequency of 30 kHz. The primary data technique used was a concentration probe, which measured the molar concentration of helium in the flow field. Concentration data and other mean-flow data were taken at several downstream axial stations and yielded contours of helium concentration, total pressure, Mach number, velocity, and mass flux, as well as the static properties. From these contour plots, the various mixing rates for each case were determined. The injectant mixing rates, expressed as the maximum concentration decay, and mixing distances were found to be unaffected by injector yaw in the Mach 3.0 experiments, but were adversely affected by injector yaw in the Mach 6.0 experiments. One promising aspect of injector yaw was that as the yaw angle was increased, lateral motion of the injectant plume became significant, and the turbulent mixing region area increased by approximately 34%. Comparisons of the 15° transverse angled injection into a Mach 6.0 flow with previous experiments with 15° injection into a Mach 3.0 freestream demonstrated that there is a significant decrease in initial mixing at Mach 6.0, resulting in a much longer mixing distance. From a parametric computational study of the Mach 6.0 experiments, the effect of adjacent injectors was found to decrease lateral spreading while increasing the vertical penetration of the injectant plume, and marginally increasing the injectant core decay rate. Matching of the computational results to the experimental results was best achieved when using the Baldwin-Lomax turbulence model without the Degani-Schiff modification.
Ph. D.
31

Reardon, Jonathan Paul. "Computational Analysis of Transient Unstart/Restart Characteristics in a Variable Geometry, High-Speed Inlet." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/95883.

Abstract:
This work seeks to analyze the transient characteristics of a high-speed inlet with a variable-geometry, rotating cowl. The inlet analyzed is a mixed compression inlet with a compression ramp, sidewalls and a rotating cowl. The analysis is conducted at nominally Mach 4.0 wind tunnel conditions. Advanced Computational Fluid Dynamics techniques, such as transient solutions of the Unsteady Reynolds-averaged Navier-Stokes equations and relative mesh motion, are used to predict and investigate the unstart and restart processes of the inlet as well as the associated hysteresis. Good agreement in the quasi-steady limit with a traditional analysis approach was obtained. However, the new model allows more detailed, time-accurate information regarding the fully transient features of the unstart, restart, and hysteresis to be obtained that could not be captured by the traditional, quasi-steady analysis. It is found that the development of separated flow regions at the shock impingement points, as well as in the corner regions, plays a principal role in the unstart process of the inlet. Also, the hysteresis that exists when the inlet progresses from the unstarted to the restarted condition is captured by the time-accurate computations. In this case, the hysteresis manifests itself as a requirement of a much smaller cowl angle to restart the inlet than was required to unstart it. This process is shown to be driven primarily by the viscous, separated flow that sets up ahead of the inlet when it is unstarted. In addition, the effect of cowl rotation rate is assessed and is generally found to be small; however, definite trends are observed. Finally, a rigorous assessment of the computational errors and uncertainties of the Variable-Cowl Model indicated that Computational Fluid Dynamics is a valid tool for analyzing the transient response of a high-speed inlet in the presence of unstart, restart and hysteresis phenomena. The current work thus extends the state of knowledge of inlet unstart and restart to include transient computations of contraction-ratio unstart/restart in a variable-geometry inlet.
Doctor of Philosophy
Flight at high speeds requires efficient engine operation and performance. As the vehicle traverses through its flight profile, the engine will undergo changes in operating conditions. At high speeds, these changes can lead to significant performance loss and can be detrimental to the vehicle. It is, therefore, important to develop tools for predicting characteristics of the engine and its response to disturbances. Computational Fluid Dynamics is a common method of computing the fluid flow through the engine. However, traditionally, CFD has been applied to predict the static performance of an engine. This work seeks to advance the state of the art by applying CFD to predict the transient response of the engine to changes in operating conditions brought about by a variable geometry inlet with rotating components.
32

Pasciak, Alexander Samuel. "The theoretical development of a new high speed solution for Monte Carlo radiation transport computations." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4875.

Abstract:
Advancements in parallel and cluster computing have made many complex Monte Carlo simulations possible in the past several years. Unfortunately, cluster computers are large, expensive, and still not fast enough to make the Monte Carlo technique useful for calculations requiring a near real-time evaluation period. For Monte Carlo simulations, a small computational unit called a Field Programmable Gate Array (FPGA) is capable of bringing the power of a large cluster computer into any personal computer (PC). Because an FPGA is capable of executing Monte Carlo simulations with a high degree of parallelism, a simulation run on a large FPGA can be executed at a much higher rate than an equivalent simulation on a modern single-processor desktop PC. In this thesis, a simple radiation transport problem involving moderate energy photons incident on a three-dimensional target is discussed. By comparing the theoretical evaluation speed of this transport problem on a large FPGA to the evaluation speed of the same transport problem using standard computing techniques, it is shown that it is possible to accelerate Monte Carlo computations significantly using FPGAs. In fact, we have found that our simple photon transport test case can be evaluated in excess of 650 times faster on a large FPGA than on a 3.2 GHz Pentium-4 desktop PC running MCNP5—an acceleration factor that we predict will be largely preserved for most Monte Carlo simulations.
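
The property FPGAs exploit is that photon histories are statistically independent, so the same small history routine can be replicated across many hardware pipelines. A toy serial version of one such routine (a simplified slab-attenuation model, not the thesis's test case or MCNP5 physics):

```python
import math
import random

def photon_history(mu_t, slab_cm, absorb_prob):
    """Track one photon through a homogeneous slab.
    Each history is independent: this whole function is the unit of
    work that an FPGA replicates across parallel hardware pipelines."""
    x = 0.0
    while True:
        # Sample a free-flight distance from the exponential distribution
        x += -math.log(1.0 - random.random()) / mu_t
        if x >= slab_cm:
            return "transmitted"
        if random.random() < absorb_prob:
            return "absorbed"
        # else: scattered, keep going (direction change omitted for brevity)

counts = {"transmitted": 0, "absorbed": 0}
for _ in range(100_000):
    counts[photon_history(mu_t=0.2, slab_cm=10.0, absorb_prob=0.3)] += 1
print(counts)
```
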
33

Claudet, Andre Aman. "Data reduction for high speed computational analysis of three dimensional coordinate measurement data." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/17617.

34

Crowell, Andrew R. "Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366204830.

35

Li, Dongli. "Computational and experimental study of shock wave interactions with cells." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:38beffe8-06c9-4b49-89f8-f5318c527800.

Abstract:
This thesis presents a combined numerical and experimental study on the response of kidney cells to shock waves. The motivation was to develop a mechanistic model of cell deformation in order to improve the clinical use of shock waves, by either enhancing their therapeutic action against target cells or minimising their impact on healthy cells. An ultra-high speed camera was used to visualise individual cells, embedded in tissue-mimicking gel, in order to measure their deformation when subject to a shock wave from a clinical shock wave source. Advanced image processing was employed to extract the contour of the cell from the images. The evolution of the observed cell contour revealed a relatively small deformation during the compressional phase and a much larger deformation during the tensile phases of a shock wave. The experimental observations were captured by a numerical model which describes the volumetric cell response with a bilinear Equation of State and the deviatoric cell response with a viscoelastic framework. Experiments using human kidney cancer cells (CAKI-2) and noncancerous kidney cells (HRE and HK-2) were compared to the model in order to determine their mechanical properties. The differences between cancerous and noncancerous cells were exploited to demonstrate a design process by which shock waves may be able to improve the specificity on targeted cancer cells while having minimal effect on normal cells. The cell response to shock waves was studied in a more biophysically realistic environment to include influence of cell size, shape and orientation, and the presence of neighbouring cells. The most significant difference was predicted when cells were in a cluster in which case the presence of neighbouring cells resulted in a four-fold increase on the von Mises stress and the membrane strain. Finally the numerical model was extended to capture the effect of cell damage using one of two paradigms. In the first paradigm the model captured microdamage during one shock wave but then assumed that the cell recovered by the time the next shock wave arrived. The second model allowed microdamage to accumulate with increasing number of shock waves. These models may be able to explain the strong effect that shock wave loading rate has on tissue damage. In conclusion a validated numerical model has been developed which provides a mechanistic understanding of how cells respond to shock waves. The model has application in suggesting improved strategies for current uses of shock waves, e.g., lithotripsy, as well as opening up new indications such as cancer treatment.
36

Piepgrass, Nathan. "Computational Efficiency of a Hybrid Mass Concentration and Spherical Harmonic Modeling." DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/876.

Abstract:
Through spherical harmonics, one can describe complex gravitational fields. However, as the order and degree of the spherical harmonics increase, the computational cost rises exponentially. In addition, for onboard applications of spherical harmonics, the processors are radiation-hardened in order to mitigate the negative effects of the space environment on electronics. But those processors have outdated processing speeds, resulting in a slower onboard spherical harmonic program. This thesis examines a partial solution to the slow computation speed of spherical harmonics programs. The partial solution was to supplant the gravity model in the flight software: the spherical harmonics gravity model can be replaced by a hybrid model, a mass concentrations model combined with a secondary (lesser degree or order) spherical harmonics model. That hybrid model can lead to greater processing speeds while maintaining the same level of accuracy. To compute the mass values for the mass concentration model, a potential estimation scheme was selected. In that scheme, mass values were computed by minimizing the integral of the difference between the correct and the estimated potential. The best hybrid model for the 8 degree and 8 order, 15 degree and 15 order, and 30 degree and 30 order lunar potential fields is developed following three different approaches: the potential zeros method, the gravitational anomalies method, and the iterative method. Afterwards, the accuracy and computation time of the models are measured and compared to the primary spherical harmonic lunar model. In the end, while the best hybrid model for all three cases was able to run faster than the primary spherical harmonic model, it was unable to be sufficiently accurate to replace it. The mass estimation scheme is severely hindered by condition number and convergence issues, resulting in inaccurate estimates of the mass values for a given distribution. It is recommended to alleviate the condition number error by eliminating the inverse in the mass estimation scheme. Other recommendations include fixing the convergence error, investing in software and hardware development, and focusing on other hybrid research objectives.
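
The speed argument comes down to term counts: a degree- and order-N spherical harmonic field evaluates on the order of N² associated-Legendre terms per call, whereas a mass concentration model is a plain sum of point-mass accelerations, linear in the number of masses. A sketch of the mascon side (the positions and masses are illustrative):

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def mascon_accel(r, positions, masses):
    """Acceleration at field point r from a set of point masses.
    Cost is linear in the number of mascons, with no Legendre
    recursions, unlike a degree/order-N harmonic model."""
    d = positions - r                       # (k, 3) separation vectors
    dist = np.linalg.norm(d, axis=1)
    return np.sum((G * masses / dist**3)[:, None] * d, axis=0)

k = 200                                     # number of mascons
rng = np.random.default_rng(1)
pos = rng.uniform(-1.7e6, 1.7e6, (k, 3))    # rough lunar-interior scatter [m]
m = np.full(k, 7.342e22 / k)                # total lunar mass split evenly [kg]

a = mascon_accel(np.array([2.0e6, 0.0, 0.0]), pos, m)
print(a, np.linalg.norm(a))                 # magnitude near GM/r^2 ~ 1.2 m/s^2
```
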
37

Konda, Pavan Chandra. "Multi-Aperture Fourier Ptychographic Microscopy : development of a high-speed gigapixel coherent computational microscope." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/9015/.

Abstract:
Medical research and clinical diagnostics require imaging of large sample areas with sub-cellular resolution. Conventional imaging techniques can provide either high resolution or a wide field-of-view (FoV), but not both. This compromise is conventionally overcome by using a high-NA objective with a small FoV and then mechanically scanning the sample in order to acquire separate images of its different regions. By stitching these images together, a larger effective FoV is then obtained. This procedure, however, requires precise and expensive scanning stages and prolongs the acquisition time, thus rendering the observation of fast processes/phenomena impossible. A novel imaging configuration termed Multi-Aperture Fourier Ptychographic Microscopy (MA-FPM) is proposed here based on Fourier ptychography (FP), a technique to achieve wide FoV and high resolution using time-sequential synthesis of a high-NA coherent illumination. The MA-FPM configuration utilises an array of objective lenses coupled with detectors to increase the bandwidth of the object spatial frequencies captured in a single snapshot. This provides high-speed data acquisition with wide FoV, high resolution, long working distance and extended depth-of-field. In this work, a new reconstruction method based on a Fresnel diffraction forward model was developed to extend FP reconstruction to the proposed MA-FPM technique. MA-FPM was validated experimentally by synthesis of a 3x3 lens-array system from a translating objective-detector system. Additionally, a calibration procedure was developed to register dissimilar images from multiple cameras and successfully implemented on the experimental data. A nine-fold improvement in captured data bandwidth was demonstrated. Another experimental configuration was proposed using the Scheimpflug condition to correct for the aberrations present in off-axis imaging systems. An experimental setup was built for this new configuration using 3D-printed parts to minimise the cost. The design of this setup is discussed along with a robustness analysis of the low-cost detectors used in it. A reconstruction model for the Scheimpflug-configuration FP was developed and applied to the experimental data. Preliminary experimental results were found to be in agreement with this reconstruction model. Some artefacts were observed in these results due to calibration errors in the experiment. These can be corrected by using the self-calibration algorithm proposed in the literature, which is left as future work. Extensions to this work can include implementing multiplexed illumination for further increasing the data acquisition speed, and diffraction tomography for imaging thick samples.
38

Rohweder, Matthew Flynn. "A numerical investigation of flowfield modification in high-speed airbreathing inlets using energy deposition." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2010. http://scholarsmine.mst.edu/thesis/pdf/Rohweder_09007dcc80722a47.pdf.

Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2010.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed Jan. 5, 2010). Includes bibliographical references (p. 52-53).
39

Krasteva, Denitza Tchavdarova Jr. "Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/37035.

Abstract:
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
Master of Science
40

Clavica, Francesco. "Computational and experimental time domain, one dimensional models of air wave propagation in human airways." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/9622.

Abstract:
The scientific literature on airflow in the respiratory system is usually associated with rigid ducts. Many studies have been conducted in the frequency domain to assess respiratory system mechanics. Time-domain analyses are more independent of the hypothesis of periodicity required by frequency analysis, providing data that are simpler to interpret since features can be easily associated with time. However, the complexity of the bronchial tree makes 3-D simulations too expensive computationally, limiting the analysis to a few generations. 1-D modelling in space-time variables has been extensively applied to simulate blood pressure and flow waveforms in arteries, providing a good compromise between accuracy and computational cost. This work represents the first attempt to apply this formulation to study pulse waveforms in the human bronchial tree. Experiments were carried out in this work to validate the model's ability to reproduce pressure and velocity waveforms when air pulses propagate in flexible tubes with different mechanical and geometrical properties. The experiments showed that the arrival of reflected air waves occurs at the theoretical timing once the wave speed is known. Reflected backward compression waves generated an increase in pressure (P) and a decrease in velocity (U), while backward expansion waves produced a decrease in P and an increase in U, in accordance with the linear analysis of wave reflections. The experiments also demonstrated the capability of wave intensity analysis (WIA), an analytical technique used to study wave propagation in the cardiovascular system, to separate the forward and backward components of pressure and velocity for air as well. After validating the 1-D modelling in space and time variables, several models of the human airways were considered, starting from simplified versions (trachea-main bronchi bifurcation, series of tubes) and moving to more complex systems of up to seven generations of bifurcations, according to both symmetrical and asymmetrical models. Calculated pressure waveforms in the trachea are shown to change according to both peripheral resistance and compliance variations, suggesting a possible non-invasive assessment of peripheral conditions. A favourable comparison with typical pressure and flow waveforms from the impulse oscillometry system, which has recently been introduced as a clinical diagnostic technique, is also shown. The results suggest that a deeper investigation of the mechanisms underlying air wave propagation in the lungs could provide a useful tool to better understand the differences between normal and pathological conditions and how pathologies may affect the pattern of pressure and velocity waveforms.
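
The WIA separation mentioned above rests on the water-hammer relations dP± = ½(dP ± ρc·dU). A minimal sketch of that split applied to a synthetic, purely forward-travelling pulse (the density and wave speed are illustrative values):

```python
import numpy as np

rho = 1.2     # air density [kg/m^3]
c   = 350.0   # assumed wave speed in the tube [m/s], illustrative

def separate_waves(P, U):
    """Split pressure/velocity waveforms into forward (+) and backward (-)
    travelling components via the water-hammer relations."""
    dP, dU = np.diff(P), np.diff(U)
    dP_fwd = 0.5 * (dP + rho * c * dU)
    dP_bwd = 0.5 * (dP - rho * c * dU)
    wi_fwd = dP_fwd ** 2 / (rho * c)      # forward wave intensity (>= 0)
    wi_bwd = -dP_bwd ** 2 / (rho * c)     # backward wave intensity (<= 0)
    return dP_fwd, dP_bwd, wi_fwd, wi_bwd

t = np.linspace(0, 0.05, 500)
P = 100.0 * np.exp(-((t - 0.01) / 0.002) ** 2)   # pressure pulse [Pa]
U = P / (rho * c)                                # purely forward pulse
dPf, dPb, _, _ = separate_waves(P, U)
print(abs(dPb).max() / abs(dPf).max())           # ~0: no backward wave
```
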
APA, Harvard, Vancouver, ISO, and other styles
41

Pérez, Heredia Jorge. "A computational view on natural evolution : on the rigorous analysis of the speed of adaptation." Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/19205/.

Full text
Abstract:
Inspired by Darwin’s ideas, Turing (1948) proposed evolutionary search as an automated problem solving approach. Mimicking natural evolution, evolutionary algorithms evolve a set of solutions through the repeated application of the evolutionary operators (mutation, recombination and selection). Evolutionary algorithms belong to the family of black box algorithms, which are general purpose optimisation tools. They are typically used when no good specific algorithm is known for the problem at hand, and they have been reported to be surprisingly effective (Eiben and Smith, 2015; Sarker et al., 2002). Interestingly, although evolutionary algorithms are heavily inspired by natural evolution, their study has deviated from the study of evolution by the population genetics community. We believe that this is a missed opportunity and that both fields can benefit from an interdisciplinary collaboration. The question of how long it takes for a natural population to evolve complex adaptations has fascinated researchers for decades. We will argue that this is a research question equivalent to the runtime analysis of algorithms. By making use of the methods and techniques used in both fields, we will derive a wealth of meaningful results for both communities, proving that this interdisciplinary approach is effective and relevant. We will apply the tools used in the theoretical analysis of evolutionary algorithms to quantify the complexity of adaptive walks on many landscapes, illustrating how the structure of the fitness landscape and the parameter conditions can impose limits to adaptation. Furthermore, as geneticists use diffusion theory to track the change in the allele frequencies of a population, we will develop a brand new model to analyse the dynamics of evolutionary algorithms. Our model, based on stochastic differential equations, will allow us to describe not only the expected behaviour, but also to measure how much the process might deviate from that expectation.
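A canonical instance of the runtime-analysis perspective invoked here is the (1+1) evolutionary algorithm on the OneMax landscape, whose expected optimisation time is Theta(n log n); the sketch below is illustrative only, and the thesis's own landscapes and stochastic-differential-equation models are considerably richer.

```python
# (1+1) EA on OneMax: the textbook object of evolutionary runtime analysis.
import random

def one_plus_one_ea(n=64, seed=0):
    """Return the number of generations until all n bits are 1.
    Expected runtime on OneMax is Theta(n log n)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    generations = 0
    while sum(x) < n:
        # standard bit-wise mutation with rate 1/n
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        if sum(y) >= sum(x):    # elitist (plus) selection
            x = y
        generations += 1
    return generations

if __name__ == "__main__":
    print(one_plus_one_ea())    # several hundred generations for n = 64
```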
APA, Harvard, Vancouver, ISO, and other styles
42

Chow, Andy Ho Fai. "Adaptive traffic control system: a study of strategies, computational speed and effect of prediction error." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?CIVL%202002%20CHOW.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 126-129). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
43

El-Ibrahim, Salah Jamil Saleh. "Prediction of the effects of aerofoil surface irregularities at high subsonic speeds using the Viscous Garabedian and Korn (VGK) method." Thesis, University of Hertfordshire, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Dyhr, Jonathan Peter. "Behavioral and Theoretical Evidence that Non-directional Motion Detectors Underlie the Visual Estimation of Speed in Insects." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/195704.

Full text
Abstract:
Insects use an estimate of the angular speed of the visual image across the eye (termed optic flow) for a wide variety of behaviors including flight speed control, visual navigation, depth estimation, grazing landings, and visual odometry. Despite the behavioral importance of visual speed estimation, the neuronal mechanisms by which the brain extracts optic flow information from the retinal image remain unknown. This dissertation investigates the underlying neuronal mechanisms of visual speed estimation via three complementary strategies: the development of neuronally-based computational models, testing of the models in a behavioral simulation framework, and behavioral experiments using bumblebees. Using these methods I demonstrate the sufficiency of two non-directional models of motion detection for reproducing real-world, speed dependent behaviors, propose potential neuronal circuits by which these models may be physiologically implemented, and predict the expected responses of these neurons to a range of visual stimuli.
APA, Harvard, Vancouver, ISO, and other styles
45

Railsback, Steven, Daniel Ayllón, Uta Berger, Volker Grimm, Steven Lytinen, Colin Sheppard, and Jan C. Thiele. "Improving Execution Speed of Models Implemented in NetLogo." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-221788.

Full text
Abstract:
NetLogo has become a standard platform for agent-based simulation, yet there appears to be widespread belief that it is not suitable for large and complex models due to slow execution. Our experience does not support that belief. NetLogo programs often do run very slowly when written to minimize code length and maximize clarity, but relatively simple and easily tested changes can almost always produce major increases in execution speed. We recommend a five-step process for quantifying execution speed, identifying slow parts of code, and writing faster code. Avoiding or improving agent filtering statements can often produce dramatic speed improvements. For models with extensive initialization methods, reorganizing the setup procedure can reduce the initialization effort in simulation experiments. Programming the same behavior in a different way can sometimes provide order-of-magnitude speed increases. For models in which most agents do nothing on most time steps, discrete event simulation—facilitated by the time extension to NetLogo—can dramatically increase speed. NetLogo’s BehaviorSpace tool makes it very easy to conduct multiple-model-run experiments in parallel on either desktop or high performance cluster computers, so even quite slow models can be executed thousands of times. NetLogo also is supported by efficient analysis tools, such as BehaviorSearch and RNetLogo, that can reduce the number of model runs and the effort to set them up for (e.g.) parameterization and sensitivity analysis.
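For readers outside NetLogo, the agent-filtering advice translates directly to other languages; the sketch below is a hedged Python analogy (the `Agent` class and all names are invented for the example), where the slow pattern corresponds to calling `ask turtles with [...]` inside `go` on every tick and the fast pattern to maintaining a named agentset.

```python
# Illustration of the agent-filtering principle, not NetLogo code.
class Agent:
    def __init__(self, active=False):
        self.active = active

agents = [Agent(active=(i % 100 == 0)) for i in range(100_000)]

def tick_slow():
    # Re-filters the whole population on every tick: O(N) per tick.
    for a in (x for x in agents if x.active):
        pass  # agent behaviour here

# Build the subset once; update it only when membership changes.
active_set = [a for a in agents if a.active]

def tick_fast():
    # Each tick now costs only as much as the agents that act.
    for a in active_set:
        pass  # agent behaviour here
```

Moving the filter out of the per-tick loop is exactly the kind of simple, easily tested change the authors report can produce major speed increases.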
APA, Harvard, Vancouver, ISO, and other styles
46

Kim, Mingoo. "Application of computational intelligence to power system vulnerability assessment and adaptive protection using high-speed communication." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/5855.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Li, Zhe. "Optimizing neural network structures: faster speed, smaller size, less tuning." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6460.

Full text
Abstract:
Deep neural networks have achieved tremendous success in many domains (e.g., computer vision, speech recognition, natural language processing, games); however, there are still many challenges in the deep learning community, such as how to speed up the training of large deep neural networks, how to compress large neural networks for mobile/embedded devices without performance loss, how to automatically design the optimal network structure for a given task, and how to further design optimal networks with improved performance and a given model size at reduced computation cost. To speed up the training of large neural networks, we propose to use multinomial sampling for dropout, i.e., sampling features or neurons according to a multinomial distribution with different probabilities for different features/neurons. To derive the optimal dropout probabilities, we analyze shallow learning with multinomial dropout and establish the risk bound for stochastic optimization. By minimizing a sampling-dependent factor in the risk bound, we obtain a distribution-dependent dropout with sampling probabilities dependent on the second-order statistics of the data distribution. To tackle the issue of the evolving distribution of neurons in deep learning, we propose an efficient adaptive dropout (named evolutional dropout) that computes the sampling probabilities on-the-fly from a mini-batch of examples. To compress large neural network structures, we propose a simple yet powerful method for compressing the size of deep Convolutional Neural Networks (CNNs) based on parameter binarization. The striking difference from most previous work on parameter binarization/quantization lies in the different treatment of 1×1 convolutions and k×k convolutions (k>1): we only binarize k×k convolutions into binary patterns. By doing this, we show that previous deep CNNs such as GoogLeNet and Inception-type nets can be compressed dramatically with a marginal drop in performance. Second, in light of the different functionalities of 1×1 convolutions (data projection/transformation) and k×k convolutions (pattern extraction), we propose a new block structure, codenamed the pattern residual block, that adds transformed feature maps generated by 1×1 convolutions to the pattern feature maps generated by k×k convolutions; based on this block we design a small network with about 1 million parameters. Combined with our parameter binarization, we achieve better performance on ImageNet than similarly sized networks, including the recently released Google MobileNets. To automatically design neural networks, we study how to design a genetic programming approach for optimizing the structure of a CNN for a given task under limited computational resources, yet without imposing strong restrictions on the search space. To reduce the computational costs, we propose two general strategies that we observe to be helpful: (i) aggressively selecting the strongest individuals for survival and reproduction, and killing weaker individuals at a very early age; (ii) increasing the mutation frequency to encourage diversity and faster evolution. The combined strategy, together with additional optimization techniques, allows us to explore a large search space at affordable computational cost.
To further design optimal networks with improved performance and a given model size at reduced computation cost, we propose an ecologically inspired genetic approach to neural network structure search that includes two types of succession, primary and secondary, as well as accelerated extinction. Specifically, we first use primary succession to rapidly evolve a community of poorly initialized neural network structures into a more diverse community, followed by a secondary succession stage for fine-grained searching based on the networks from the primary succession. Accelerated extinction is applied in both stages to reduce computational cost. In addition, we also introduce gene duplication to further utilize the novel blocks of layers that appear in the discovered network structures.
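To make the dropout contribution above concrete, here is a minimal sketch of multinomial (evolutional) dropout with sampling probabilities computed on-the-fly from the second-order statistics of a mini-batch; the specific choice of probabilities proportional to sqrt(E[x_i^2]) and the 1/(k p_i) rescaling are our assumptions about the formulation and should be checked against the dissertation itself.

```python
# Hedged sketch of evolutional dropout (illustrative, not the thesis code).
import numpy as np

def evolutional_dropout(batch, k, rng=None):
    """batch: (batch_size, num_features) layer activations; k: number of draws."""
    rng = rng or np.random.default_rng(0)
    second_moment = (batch ** 2).mean(axis=0)     # per-feature E[x_i^2]
    probs = np.sqrt(second_moment) + 1e-12        # avoid zero probabilities
    probs = probs / probs.sum()                   # multinomial probabilities p_i
    counts = rng.multinomial(k, probs)            # sample features k times
    mask = counts / (k * probs)                   # E[mask_i] = 1 keeps output unbiased
    return batch * mask

if __name__ == "__main__":
    x = np.random.default_rng(1).normal(size=(32, 10))
    print(evolutional_dropout(x, k=5).shape)      # (32, 10)
```

Because the probabilities are recomputed per mini-batch, the mask adapts as the distribution of the neurons evolves during training, which is the point of the "evolutional" variant.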
APA, Harvard, Vancouver, ISO, and other styles
48

Railsback, Steven, Daniel Ayllón, Uta Berger, Volker Grimm, Steven Lytinen, Colin Sheppard, and Jan C. Thiele. "Improving Execution Speed of Models Implemented in NetLogo." JASSS, 2016. https://tud.qucosa.de/id/qucosa%3A30227.

Full text
Abstract:
NetLogo has become a standard platform for agent-based simulation, yet there appears to be widespread belief that it is not suitable for large and complex models due to slow execution. Our experience does not support that belief. NetLogo programs often do run very slowly when written to minimize code length and maximize clarity, but relatively simple and easily tested changes can almost always produce major increases in execution speed. We recommend a five-step process for quantifying execution speed, identifying slow parts of code, and writing faster code. Avoiding or improving agent filtering statements can often produce dramatic speed improvements. For models with extensive initialization methods, reorganizing the setup procedure can reduce the initialization effort in simulation experiments. Programming the same behavior in a different way can sometimes provide order-of-magnitude speed increases. For models in which most agents do nothing on most time steps, discrete event simulation—facilitated by the time extension to NetLogo—can dramatically increase speed. NetLogo’s BehaviorSpace tool makes it very easy to conduct multiple-model-run experiments in parallel on either desktop or high performance cluster computers, so even quite slow models can be executed thousands of times. NetLogo also is supported by efficient analysis tools, such as BehaviorSearch and RNetLogo, that can reduce the number of model runs and the effort to set them up for (e.g.) parameterization and sensitivity analysis.
APA, Harvard, Vancouver, ISO, and other styles
49

Milligan, Ryan Timothy. "DUAL MODE SCRAMJET: A COMPUTATIONAL INVESTIGATION ON COMBUSTOR DESIGN AND OPERATION." Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1251725076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Benjanirat, Sarun. "Computational studies of the horizontal axis wind turbines in high wind speed condition using advanced turbulence models." Diss., Available online, Georgia Institute of Technology, 2006. http://etd.gatech.edu/theses/available/etd-08222006-145334/.

Full text
Abstract:
Thesis (Ph. D.)--Aerospace Engineering, Georgia Institute of Technology, 2007.
Samuel V. Shelton, Committee Member; P.K. Yeung, Committee Member; Lakshmi N. Sankar, Committee Chair; Stephen Ruffin, Committee Member; Marilyn Smith, Committee Member.
APA, Harvard, Vancouver, ISO, and other styles