
Dissertations / Theses on the topic 'Acceleration'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Acceleration.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Morrison, John T. "Selective Deuteron Acceleration using Target Normal Sheath Acceleration." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1365523293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Borgström, Fredrik. "Acceleration of FreeRTOS with Sierra RTOS accelerator : Implementation of a FreeRTOS software layer on Sierra RTOS accelerator." Thesis, KTH, Data- och elektroteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188518.

Full text
Abstract:
Today, the effect of the most common ways to improve the performance of embedded systems and real-time operating systems is stagnating. It is therefore interesting to examine new ways to push the performance boundaries of embedded systems and real-time operating systems even further. It has previously been demonstrated that the hardware-based real-time operating system, Sierra, has better performance than the software-based real-time operating system, FreeRTOS. These real-time operating systems have also been shown to be similar in many aspects, which means that it is possible for Sierra to accelerate FreeRTOS. In this thesis such an acceleration has been implemented. Because existing real-time operating systems are constantly in development, and because several years have passed since an earlier comparison between the two real-time operating systems was performed, FreeRTOS and Sierra were also compared in terms of functionality and architecture in this thesis. This comparison showed that FreeRTOS and Sierra share the most fundamental functions of a real-time operating system, which can thus be accelerated by Sierra, but that FreeRTOS also has a number of exclusive functions that facilitate the use of that real-time operating system. The information obtained from this comparison formed the basis of how the acceleration would be implemented. After a number of performance tests it could be concluded that all of the implemented functions, with the exception of a few, had shorter execution times than the corresponding functions in the original version of FreeRTOS.
APA, Harvard, Vancouver, ISO, and other styles
3

Laidler, Christopher. "GPU acceleration of the frequency domain acceleration search for binary pulsars." Doctoral thesis, Faculty of Science, 2021. http://hdl.handle.net/11427/33752.

Full text
Abstract:
Graphics processing units (GPUs) have been used to accelerate computation in a broad range of fields; this work presents a GPU-accelerated search for pulsars. Pulsars are highly magnetised neutron stars with extremely stable rotational periods. These periods can be accurately measured, which makes them exceptionally powerful reference tools in the field of astrophysics. Pulsars have very weak emissions, making them difficult to find. Most pulsars are found in large-scale surveys, which generate a large amount of data and require extensive data processing. This work describes a GPU-based solution, with implications for real-time processing of pulsar search data. Pulsar astronomy uses radio telescope observations with high spectral and temporal resolution, which produce very large data sets and require intensive digital signal processing. Large-scale pulsar surveys using next-generation radio telescopes such as the Square Kilometre Array (SKA) will have to be performed in real time, as the volumes of raw data produced will be too large to be stored for an extended period. These computational requirements are compounded when searching for binary pulsars, as their orbital motion makes them difficult to detect using classic periodicity searches. However, these rare pulsars are of great interest to physicists, as they allow us to test general relativity. Acceleration searches are the most common technique for detecting signals from binary pulsars that may be missed by standard search techniques. One of these, the frequency domain acceleration search (FDAS), mitigates the effect of orbital acceleration by correlating a matched template with the spectrum of a signal. This method has been shown to be more efficient than the alternative time domain acceleration search (TDAS). Even so, it is extremely computationally intensive to perform on a large scale. The existing implementation, Accelsearch, runs on a central processing unit (CPU), which limits its performance. We address this problem by creating a GPU port of the FDAS. An analysis of the fundamental calculations on which the FDAS is based informs the design of a fully asynchronous pipeline that exploits multiple levels of parallelism. This entails developing a novel technique for calculating Fresnel integrals, which increases the speed and numerical accuracy of the calculations in both single and double precision. Furthermore, we develop a new estimate which improves the numerical accuracy of filter coefficients for accelerations close to zero. The GPU-accelerated pipeline achieves speeds 30 to 70 times faster than the existing serial CPU implementation. Our results clearly show that GPU acceleration is effective at reducing the cost of processing the FDAS component, to the point at which the SKA1-mid survey data could be searched in real time using 340 to 675 desktop GPUs from the Pascal generation.
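At its core, an FDAS correlates the Fourier spectrum of the de-dispersed time series with a bank of matched templates, one per trial acceleration. As a rough illustration of that single step (not of Laidler's actual pipeline), here is a minimal NumPy sketch; the function name, template bank and array shapes are assumptions made for the example.

```python
import numpy as np

def correlate_templates(spectrum, templates):
    """Correlate a complex Fourier spectrum with a bank of matched
    templates (one per trial acceleration) via the convolution theorem.
    Returns the correlation power for each template. Shapes and names
    are illustrative, not the thesis's actual interface."""
    n = spectrum.shape[0]
    s = np.fft.fft(spectrum, n)
    powers = np.empty((templates.shape[0], n))
    for i, tmpl in enumerate(templates):
        t = np.fft.fft(tmpl, n)
        # Correlation = IFFT of (FFT(signal) * conj(FFT(template)))
        corr = np.fft.ifft(s * np.conj(t))
        powers[i] = np.abs(corr) ** 2
    return powers
```

A detection then amounts to finding cells of the returned power array that exceed a significance threshold.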
APA, Harvard, Vancouver, ISO, and other styles
4

Roskell, Melanie. "Head acceleration during balance." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/5750.

Full text
Abstract:
The overall purpose of this thesis was to study the angular and linear accelerations that occur at the head during quiet standing in healthy humans. To date, there have been few descriptions of linear head accelerations in quiet standing, and no focus on angular head accelerations. The contribution of the vestibular system to standing balance can be better understood by recognizing these stimuli, which the system experiences during the task. Head accelerations were measured under four manipulations of sensory condition, and RMS and median frequency values were reported for linear and angular head accelerations in Reid's planes. Coherence was also calculated between force plate forces and head accelerations, and between lower leg EMG and angular head accelerations in the directions of the semicircular canals. This study considered two factors in the manipulation of quiet standing sway: vision (eyes open/closed) and surface (hard/compliant foam). The results show that angular head accelerations are repeatable under full sensory conditions, and that angular head acceleration RMS is above known vestibular thresholds in all tested sensory conditions. Linear head acceleration absolute maximum and RMS values matched previous reports under similar conditions. Significant coherence was found below 7 Hz in both coherence analyses, likely due to the mechanical linkage. This coherence also showed defined troughs in varying regions, which were attributed to the interference of active systems (visual, somatosensory and vestibular) with the mechanical propagation of forces. The results also reinforced that the inverted pendulum model is valid in quiet standing on a hard surface in the sagittal and frontal planes. This study shows that the vestibular system is able to detect sway at the head during quiet standing under all four sensory conditions tested. Consequently, the vestibular system may play a range of roles in quiet standing, which may change as its relative importance in balance increases. The measurement of head accelerations is confirmed as a useful technique for studying balance in quietly standing humans.
APA, Harvard, Vancouver, ISO, and other styles
5

Neutze, Richard. "Acceleration and optical interferometry." Thesis, University of Canterbury. Physics, 1995. http://hdl.handle.net/10092/6569.

Full text
Abstract:
The influence of acceleration on a number of physical systems is examined. We present a full relativistic treatment of a simple harmonic oscillator with relativistic velocities. The line element for Schwarzschild geometry is expanded in a set of Cartesian coordinates and is shown to be locally equivalent (neglecting curvature) to the line element of a linearly accelerating frame of reference. We consider the rate of a linearly accelerating quantum mechanical clock and the measurement of frequency by non-inertial observers, requiring this measurement to be of finite duration. These analyses demonstrate that the standard measurement hypothesis for accelerating observers only approximates the physical behaviour of these systems. We derive the output of an optical ring interferometer in a variety of experimental contexts. A full relativistic reanalysis of the modified Laub drag experiment of Sanders and Ezekiel is performed, correcting a number of errors in their work and giving an overall discrepancy between experiment and theory of 1300 ppm. We examine the behaviour of a ring interferometer containing an accelerating glass sample. Our analysis predicts that sideband structure will arise when a glass sample is oscillated along one arm of a Mach-Zehnder interferometer and the resulting output is Fourier analysed. We also predict that a resonant cavity containing a linearly accelerating glass sample will display optical ringing. A rigorous analysis of a ring interferometer with angular acceleration is presented. This predicts that a resonant cavity with angular acceleration will also display optical ringing, and demonstrates that the beat frequency in a ring laser with angular acceleration is the instantaneous Sagnac beat frequency. Finally, we analyse the optical output of a rotating ring laser with one mirror oscillating, predicting sideband structure in spectra obtained from Fourier analysis of the beat between the opposite beams, and of the beat between adjacent modes when the laser has multimode operation.
APA, Harvard, Vancouver, ISO, and other styles
6

Sanbar, Rania. "AutoSampler : Life Acceleration Test." Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-25678.

Full text
Abstract:
FOSS wants to offer a reliable, robust new generation of the Infratec analytical instrument (called the Prototype model) to its customers, because reliability and robustness are very important to FOSS's reputation. A lifetime test is therefore necessary on the new sample handling system to avoid problems out in the field. The main objective of this thesis is to construct a robust and stable mechanical AutoSampler; the developed AutoSampler will subject the new generation of the Infratec analytical instrument to a Life Acceleration Test (LAT). The test is to highlight the weak parts of the Prototype model, its design margin, robustness and reliability level, and to find potential weaknesses in the design at an early stage in the development phase, where changes can be made at low cost. Failure causes will be inspected, modifications implemented and tested again. The LAT is to be performed on at least three Prototype models for 700,000 cycles, which corresponds to the instrument being used for 7 years. If the new Prototype model fulfils the requested time, that will be great; otherwise we will learn something new and FOSS will come out with a better developed construction. This work resulted in a robust mechanical construction performing the Life Acceleration Test on the Prototype model of a new generation of Infratec. Three concepts for the LAT mechanism were suggested: Elevator, Conveyor and Screw. A table comparing price, robustness, cons and pros was made, out of which the Screw concept was approved. The concept was then developed into a finished mechanism that was tested to assure its stability and reliability in performance. Technically, the AutoSampler fulfilled all the requirements. Results showed that the AutoSampler is a good, robust solution for performing the LAT on the Prototype model. The LAT started on the Prototype model, which resulted in various findings and further development. Even though many errors appeared at the beginning of the test, the overall functionality of the Prototype was satisfactory. The analysis results were stable within the normal designed range. This report covers the LAT done from the 17th of March to the 30th of April, 2014. As of the 1st of June, after the last modifications were implemented, the AutoSampler has done over 20,000 cycles with few interruptions, which shows satisfactory Prototype performance. The detected failures led to a better, more robust design.
APA, Harvard, Vancouver, ISO, and other styles
7

Gudmundson, Stephan. "TRANSPARENT SATELLITE BANDWIDTH ACCELERATION." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/606743.

Full text
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
While the transition to IP internetworking in space-based applications has a tremendous upside, there are significant challenges of communications efficiency and compatibility to overcome. This paper describes a very high efficiency, low-risk, incremental architecture for migrating to IP internetworking based on the use of proxies. In addition to impressive gains in communications bandwidth, the architecture provides encapsulation of potentially volatile decisions such as particular vendors and network technologies. The specific benchmarking architecture is a NetAcquire Corporation COTS telemetry system that includes built-in TCP-Tranquility (also known as SCPS-TP) and Reed-Solomon forward error correction capabilities, as well as a specialized proxy-capable network stack. Depending on network conditions, we show that the effective bandwidth for satellite transmissions can be increased by as much as a factor of one hundred with no external changes to existing internetworking equipment.
APA, Harvard, Vancouver, ISO, and other styles
8

Charalambous, Georgios. "TEACHERS IN THE ERA OF ACCELERATION : How the acceleration of ICT developments influences the ICT use by teachers at school." Thesis, Linköpings universitet, Pedagogik och vuxnas lärande, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-130804.

Full text
Abstract:
In the effort to examine the factors that impact teachers' use of ICT, research has until now neglected the acceleration of ICT developments as a factor that affects the successful integration of ICT in education. Technological acceleration in general has triggered significant changes at the social level, such as the acceleration of social change and of the pace of life. The study of ICT acceleration therefore provides a good theoretical framework for studying teachers and their interaction with ICT in a broader context, one that engages the environment in which a teacher functions as both a teacher and a learner. This study explores the role of ICT acceleration as a factor affecting the use of ICT by teachers in Cyprus secondary schools. Social Acceleration (SA) theory is used to interpret the situation. After examining how teachers perceive ICT acceleration and how it affects them at school and personally as lifelong learners, the results showed that ICT acceleration is not a significant factor in teachers' use of ICT at schools in Cyprus, but that it still affects teachers indirectly as lifelong learners. I argue that teachers have established a superficial relation to technology, tied to a short-sighted vision of ICT integration that the Ministry of Education also shares. I propose that serious decisions be made at the policy level in order to adopt technology consciously, not necessarily running behind accelerated ICT developments but exploiting the potential of ICT according to the needs of the educational system.
APA, Harvard, Vancouver, ISO, and other styles
9

Williams, Justin A. "Analytical and Experimental Investigation of Time-Variant Acceleration Fields." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1567258549794284.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Popp, Antonia. "Dynamics of electron-acceleration in laser-driven wakefields: Acceleration limits and asymmetric plasma waves." Diss., lmu, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-138159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Martinez, Arturo F. "Accelerating Developmental Math Students in California Community Colleges: A Comparative Assessment of Two Acceleration Models." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10824201.

Full text
Abstract:

Community colleges across the nation are under increasing pressure to find ways to improve the rate at which students placed in remediation complete college-level coursework. The attrition of students placed into the lowest levels of developmental mathematics has been a challenge for many colleges to overcome. Research has well documented the lack of progress of students placed three to four levels below a transfer-level course. Yet few studies have compared the outcomes of similar students in accelerated programs designed to shorten the pathways through remediation. This study focused on students placed in the lowest levels of remediation at two colleges offering consecutive sequences of course-redesign and compression models of acceleration. Using multivariate analyses, the comparative effect on completion rates of students accelerated through two different developmental math acceleration programs at two different colleges within a four-year period (2013–2017) was examined. Moreover, this study used student background characteristics, math placement and math acceleration model to predict developmental and college-level math course completion using logistic regression analysis.

The results of this study suggest that students placed in developmental mathematics who are in an accelerated pathway take less time to complete remediation and a transfer-level math course. Findings indicate the course-redesign acceleration model yielded more statistically significant improvements in transfer-level math and developmental math completion rates for first-generation students, as well as for students placed in both low-level and mid-level remediation. The compression model of acceleration showed significant improvement in completion rates for students placed in mid-level remediation, yet results were mixed for students placed in low-level remediation. Students in consecutive acceleration courses were the most likely to complete a transfer-level math course, and historically underrepresented minority students were more likely to complete remediation, under certain circumstances, in the compression acceleration model.

These findings inform college administrators about the potential of sequential accelerated programs. The implications of these results contribute to redesigning academic programs and support current developmental policy reforms. Community colleges are encouraged to consider the recommendations in this study, such as integrating course redesign into California Assembly Bill 705 and California Community College Guided Pathways, to help non-traditional students, who are most often placed into the lowest levels of remediation.

APA, Harvard, Vancouver, ISO, and other styles
12

Tripp, Lloyd D. "+Gz acceleration loss of consciousness /." Cincinnati, Ohio University of Cincinnati, 2002. http://www.ohiolink.edu/etd/view.cgi?acc%5Fnum=ucin1089841115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Lindegaard, Karl-Petter. "Acceleration Feedback in Dynamic Positioning." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2003. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-286.

Full text
Abstract:
This dissertation contains new results on the design of dynamic positioning (DP) systems for marine surface vessels. A positioned ship is continuously exposed to environmental disturbances, and the objective of the DP system is to maintain the desired position and heading by applying adequate propeller thrust. The disturbances can be categorized into three classes. First, there are stationary forces mainly due to wind, ocean currents, and static wave drift. Secondly, there are slowly-varying forces mainly due to wave drift, a phenomenon experienced in irregular seas. Finally, there are rapid, zero-mean linear wave loads causing oscillatory motion with the same frequency as the incoming wave train. The main contribution of this dissertation is a method for better compensation of the second type of disturbance, slowly-varying forces, by introducing feedback from measured acceleration. It is shown theoretically and through model experiments that positioning performance can be improved without compromising on thruster usage. The specific contributions are:

• Observer design: Two observers with wave filtering capabilities were developed, analyzed, and tested experimentally. Both of them incorporate position and, if available, velocity and acceleration measurements. Filtering out the rapid, zero-mean motion induced by linear wave loads is particularly important whenever measured acceleration is to be used by the DP controller, because in an acceleration signal, the high-frequency contributions from the linear wave loads dominate.

• Controller design: A low-speed tracking controller has been developed. The proposed control law can be regarded as an extension of any conventional PID-like design, and stability was guaranteed for bounded yaw rate. A method for numerically calculating this upper bound was proposed, and for most ships the resulting bound will be higher than the physical limitation. For completeness, the missing nonlinear term that, if included in the controller, would ensure global exponential stability was identified.

The second contribution of this dissertation is a new method for mapping controller action into thruster forces. A low-speed control allocation method for overactuated ships equipped with propellers and rudders was derived. Active use of rudders, together with propeller action, is advantageous in a DP operation, because the overall fuel consumption can be reduced. A new model ship, Cybership II, together with a low-cost position reference system, was developed with the aim of testing the proposed concepts. The acceleration experiments were carried out at the recently developed Marine Cybernetics Laboratory, while the control allocation experiment was carried out at the Guidance, Navigation and Control Laboratory. The main results of this dissertation have been published or are under review for publication in international journals and at international conferences.
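To make the idea of acceleration feedback concrete, the sketch below shows one step of a PID-like DP control law extended with a term driven by wave-filtered measured acceleration. This is an illustrative reading of the abstract, not Lindegaard's actual control law; all gain names and signal conventions are assumptions.

```python
import numpy as np

def dp_control_step(eta_err, nu, acc_meas, integral, dt, Kp, Ki, Kd, Ka):
    """One step of a PID-like DP control law with acceleration feedback.
    eta_err : position/heading error (surge, sway, yaw)
    nu      : measured body-frame velocities
    acc_meas: wave-filtered measured acceleration
    The Ka term acts like added virtual mass, helping to damp
    slowly-varying wave-drift disturbances. Illustrative sketch only."""
    integral = integral + eta_err * dt
    tau = -Kp @ eta_err - Ki @ integral - Kd @ nu - Ka @ acc_meas
    return tau, integral
```

Note the importance of the wave filtering mentioned in the abstract: feeding raw acceleration (dominated by first-order wave loads) into the Ka term would simply inject oscillatory thrust commands.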
APA, Harvard, Vancouver, ISO, and other styles
14

Brandén, Henrik. "Convergence Acceleration for Flow Problems." Doctoral thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-576.

Full text
Abstract:
Convergence acceleration techniques for the iterative solution of systems of equations arising in the discretisation of compressible flow problems governed by the steady-state Euler or Navier-Stokes equations are considered. The system of PDEs is discretised using a finite difference or finite volume method, yielding a large sparse system of equations. A solution is computed by integrating the corresponding time-dependent problem in time until steady state is reached. A convergence acceleration technique based on semicirculant approximations is applied. For scalar model problems, it is proved that the preconditioned coefficient matrix has a bounded spectrum well separated from the origin. A very simple time-marching scheme such as the forward Euler method can be used, and the time step is not limited by a CFL-type criterion. Instead, the time step can asymptotically be chosen as a constant, independent of the number of grid points and the Reynolds number. Numerical experiments show that grid- and parameter-independent convergence is achieved also in more complicated problem settings. A comparison with a multigrid method shows that the semicirculant convergence acceleration technique is more efficient in terms of arithmetic complexity. Another convergence acceleration technique, based on fundamental solutions, is also proposed. An algorithm based on Fourier techniques is provided for its fast application. Scalar model problems are considered and a theory, in which the preconditioner is represented as an integral operator, is derived. Theory and numerical experiments show that, for first-order partial differential equations, grid-independent convergence is achieved.
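The time-marching idea described above can be illustrated in a few lines: drive the steady-state residual to zero with forward Euler in pseudo-time, applying an approximate inverse (such as a semicirculant approximation) to each residual. This is a generic sketch under those assumptions, not the thesis's solver; M_inv stands in for whatever preconditioner is used.

```python
import numpy as np

def pseudo_time_march(A, b, M_inv, dt=1.0, tol=1e-8, max_iter=10_000):
    """March A @ u = b to steady state with preconditioned forward Euler.
    A     : discretised steady-state operator (matrix)
    M_inv : approximate inverse of A, e.g. from a semicirculant
            approximation (illustrative placeholder)."""
    u = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        r = b - A @ u               # steady-state residual
        u = u + dt * (M_inv @ r)    # preconditioned Euler step
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break                   # converged to steady state
    return u
```

With a good M_inv, the abstract's key point applies: dt can be held constant instead of shrinking with the grid spacing as a CFL condition would demand.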
APA, Harvard, Vancouver, ISO, and other styles
15

Cirovic, Srdjan. "Cerebral circulation during acceleration stress." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ58910.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Brandén, Henrik. "Convergence acceleration for flow problems /." Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2001. http://publications.uu.se/theses/91-554-4914-X/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Culhane, Leo. "Acceleration characteristics of forward skating." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114587.

Full text
Abstract:
The purpose of this study was to quantify kinetic and kinematic variables in ice hockey forward acceleration tasks, as well as to compare two skate models: a regular ice hockey skate (SKATE) and a skate with a modified flexible tendon guard (SKATE FTG). Twelve adult male subjects performed four acceleration trials with each skate model. Strain gauges on the blade permitted direct estimates of skate push-off force, while body accelerations (forward-backward) were estimated from a sensor placed on the player's back. The results demonstrated the feasibility of quantifying bilateral skating dynamics. In terms of single and double support time, the combined left and right stride estimates approximate 80% and 20% of the skating stride, respectively. Overall, there were no significant differences between skate models in terms of time to complete the skating task or average stride rates. Significant contact-time differences between the right SKATE and right SKATE FTG (0.41 vs 0.36 s), which contributed to greater impulse and power output, were observed; however, the opposing effect of air resistance did not permit substantial time improvements over the 54 m skating distance.
APA, Harvard, Vancouver, ISO, and other styles
18

Hoggins, Carl Andrew. "Hardware acceleration of photon mapping." Thesis, University of Newcastle Upon Tyne, 2011. http://hdl.handle.net/10443/1242.

Full text
Abstract:
The quest for realism in computer-generated graphics has yielded a range of algorithmic techniques, the most advanced of which are capable of rendering images at close to photorealistic quality. Due to the realism available, it is now commonplace that computer graphics are used in the creation of movie sequences, architectural renderings, medical imagery and product visualisations. This work concentrates on the photon mapping algorithm [1, 2], a physically based global illumination rendering algorithm. Photon mapping excels in producing highly realistic, physically accurate images. A drawback to photon mapping, however, is its rendering times, which can be significantly longer than those of other, albeit less realistic, algorithms. Not surprisingly, this increase in execution time is associated with a high computational cost. This computation is usually performed using the general-purpose central processing unit (CPU) of a personal computer (PC), with the algorithm implemented as a software routine. Other options available for processing these algorithms include desktop PC graphics processing units (GPUs) and custom-designed acceleration hardware devices. GPUs tend to be efficient when dealing with less realistic rendering solutions such as rasterisation; however, with their recent drive towards increased programmability, they can also be used to process more realistic algorithms. A drawback to the use of GPUs is that these algorithms often have to be reworked to make optimal use of the limited resources available. There are very few custom hardware devices available for acceleration of the photon mapping algorithm. Ray tracing is the predecessor to photon mapping, and although not capable of producing the same physical accuracy and therefore realism, there are similarities between the algorithms. There have been several hardware prototypes, and at least one commercial offering, created with the goal of accelerating ray-traced rendering [3]. However, properties making many of these proposals suitable for the acceleration of ray tracing are not shared by photon mapping. There are even fewer proposals for acceleration of the additional functions found only in photon mapping. All of these approaches to algorithm acceleration offer limited scalability. GPUs are inherently difficult to scale, while many of the custom hardware devices available thus far make use of large processing elements and complex acceleration data structures. In this work we make use of three novel approaches in the design of highly scalable specialised hardware structures for the acceleration of the photon mapping algorithm. Increased scalability is gained through:

• The use of a brute-force approach in place of the commonly used smart approach, thus eliminating much of the data pre-processing, complex data structures and large processing units often required.

• The use of Logarithmic Number System (LNS) arithmetic, which facilitates a reduction in processing area requirements.

• A novel redesign of the photon inclusion test, used within the photon search method of the photon mapping algorithm. This allows an intelligent memory structure to be used for the search.

The design uses two hardware structures, each of which accelerates one core rendering function. Renderings produced using field programmable gate array (FPGA) based prototypes are presented, along with details of 90 nm synthesised versions of the designs, which show that close to an order-of-magnitude speedup over a software implementation is possible. Due to the scalable nature of the design, it is likely that any advantage can be maintained in the face of improving processor speeds. Significantly, due to the brute-force approach adopted, it is possible to eliminate an often-used software acceleration method. This means that the device can interface almost directly to a front-end modelling package, minimising much of the pre-processing required by most other proposals.
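The brute-force photon search that the abstract contrasts with "smart" (kd-tree-based) approaches can be sketched directly: scan all photons and keep the k nearest. A minimal NumPy illustration, with hypothetical names, follows; the hardware version pipelines exactly this kind of exhaustive scan rather than traversing a tree.

```python
import numpy as np

def brute_force_photon_gather(photons, query, k):
    """Brute-force k-nearest-photon gather for radiance estimation.
    photons : (N, 3) array of photon positions
    query   : (3,) shading point
    Returns the indices of the k nearest photons and the gather radius.
    Names and layout are illustrative assumptions."""
    d2 = np.sum((photons - query) ** 2, axis=1)   # squared distances
    nearest = np.argpartition(d2, k)[:k]           # k smallest, unordered
    return nearest, float(np.sqrt(d2[nearest].max()))
```

The appeal for hardware is visible even in this sketch: the distance test is identical and independent for every photon, so it maps onto many small parallel units with no pre-built acceleration structure.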
APA, Harvard, Vancouver, ISO, and other styles
19

Smith, Graham. "Steric acceleration of intramolecular cyclisations." Thesis, University of Surrey, 2004. http://epubs.surrey.ac.uk/771937/.

Full text
Abstract:
The promotion of intramolecular cyclisations using various synthetic methodologies remains an area of considerable interest in organic chemistry. In this thesis, the potential for large, bulky groups ("steric buttresses") to promote intramolecular cyclisations by a combination of entropic and enthalpic factors is presented. For the purposes of this study, both Diels-Alder and ene cyclisations have been studied. The use of steric buttresses to promote the ene cyclisation of a 1,7-diene under relatively mild conditions is described, with different buttressing groups attached to enable comparison of their relative buttressing ability. In this series the thermal stability of the cyclic products was also investigated, thus allowing further conclusions to be drawn on the buttressing ability of the groups studied. The ene reaction of a range of 1,6-dienes to give substituted pyrrolidines was investigated. This has enabled comparisons to be made of the reactivity of the enophiles in question. More importantly, the relative buttressing ability of the buttresses studied has been assessed, allowing their ability both to promote cyclisation and to control selectivity to be classified. In addition, our efforts to develop a viable synthetic route to kainic acid are discussed. The thermolysis of 1,6-dienes incorporating a hetero-enophile component was also the subject of study. The enophiles in this case ranged from carbonyl compounds to their imino and nitrile counterparts. Once again, this has enabled conclusions to be drawn on the reactivity of the enophiles involved. In addition, these studies have allowed us to better understand the suitability of steric buttressing as an aid to intramolecular cyclisations. Finally, in an effort to develop a chiral steric buttressing methodology, the use of ß-cyclodextrin ("macrocyclic steric buttressing") to promote an intramolecular Diels-Alder cyclisation is discussed. This study also resulted in the discovery of a remarkable solvent effect for an intramolecular, pericyclic reaction.
APA, Harvard, Vancouver, ISO, and other styles
20

Malquarti, Michaël. "Scalar fields and cosmic acceleration." Thesis, University of Sussex, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.404774.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Samal, Kruttidipta. "FPGA acceleration of CNN training." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54467.

Full text
Abstract:
This thesis presents the results of an architectural study on the design of FPGA-based architectures for convolutional neural networks (CNNs). We have analyzed the memory access patterns of a convolutional neural network (one of the biggest networks in the family of deep learning algorithms) by creating a trace of a well-known CNN architecture and by developing a trace-driven DRAM simulator. The simulator uses the traces to analyze the effect that different storage patterns, and the dissonance in speed between memory and processing elements, can have on the CNN system. This insight is then used to create an initial design for a layer architecture for the CNN on an FPGA platform. The FPGA is designed to have multiple parallel-executing units. We design a data layout for the on-chip memory of the FPGA such that we can increase parallelism in the design, as the number of these parallel units (and hence the parallelism) depends on the memory layout of the input and output, in particular on whether parallel read and write accesses can be scheduled. The on-chip memory layout minimizes access contention during the operation of the parallel units. The result is an SoC (System on Chip) that acts as an accelerator and can have more parallel units than previous work. The improvement in the design was also observed by comparing post-synthesis loop-latency tables between our design and a single-unit design. This initial design can help in designing FPGAs targeted at deep learning algorithms that can compete with GPUs in terms of performance.
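A trace-driven DRAM simulator of the kind described can be reduced to a toy model: replay a list of addresses and count row-buffer hits and misses per bank. The sketch below is such a toy with an assumed address mapping; it is not the simulator built in the thesis, but it shows why the storage pattern of CNN feature maps matters to memory throughput.

```python
def replay_trace(trace, n_banks=8, row_bits=12):
    """Toy trace-driven DRAM model: open-row policy, one row buffer
    per bank. `trace` is an iterable of byte addresses; the bank/row
    mapping below is an illustrative assumption, not real hardware."""
    open_row = [None] * n_banks
    hits = misses = 0
    for addr in trace:
        bank = (addr >> row_bits) % n_banks
        row = addr >> (row_bits + 3)   # toy row index above bank bits
        if open_row[bank] == row:
            hits += 1                  # row-buffer hit: cheap access
        else:
            misses += 1                # precharge + activate penalty
            open_row[bank] = row
    return hits, misses
```

Feeding traces from different feature-map layouts into such a model makes the hit/miss ratio, and hence the effective bandwidth seen by the accelerator's parallel units, directly comparable.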
APA, Harvard, Vancouver, ISO, and other styles
22

Östgren, Magnus. "FPGA acceleration of superpixel segmentation." Thesis, Mälardalens högskola, Inbyggda system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48577.

Full text
Abstract:
Superpixel segmentation is a preprocessing step for computer vision applications, in which an image is split into segments referred to as superpixels. Running the main algorithm on these superpixels reduces the number of data points processed in comparison to running the algorithm on pixels directly, while still keeping much of the same information. In this thesis, the possibility of running superpixel segmentation on an FPGA is investigated. This has resulted in the development of a modified version of the algorithm SLIC, Simple Linear Iterative Clustering. An FPGA implementation of this algorithm has then been built in VHDL; it is designed as a pipeline, unrolling the iterations of SLIC. The designed algorithm shows a lot of potential and runs on real hardware, but more work is required to make the implementation more robust and to remove some visual artefacts.
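For reference, the assignment step that a SLIC pipeline unrolls looks like the following NumPy sketch, which labels each pixel with the nearest cluster centre under SLIC's combined colour-plus-spatial distance. It is brute-force over all centres for clarity (standard SLIC, and any FPGA pipeline, restricts the search to a 2S x 2S window around each centre); the array layouts are assumptions.

```python
import numpy as np

def slic_assign(lab, xy, centers, S, m=10.0):
    """One SLIC assignment step.
    lab     : (N, 3) CIELAB colours per pixel
    xy      : (N, 2) pixel coordinates
    centers : (K, 5) rows of [l, a, b, x, y] (illustrative layout)
    S       : grid interval between initial centres, m: compactness.
    Uses D = sqrt(d_lab^2 + (m/S)^2 * d_xy^2); returns a label per pixel."""
    d_lab = ((lab[:, None, :] - centers[None, :, :3]) ** 2).sum(-1)
    d_xy = ((xy[:, None, :] - centers[None, :, 3:]) ** 2).sum(-1)
    D = np.sqrt(d_lab + (m / S) ** 2 * d_xy)
    return D.argmin(axis=1)
```

The fixed window and the per-pixel independence of this step are what make the algorithm a natural fit for a hardware pipeline with the iterations unrolled.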
APA, Harvard, Vancouver, ISO, and other styles
23

McMahon, Matthew M. "Modeling Ion Acceleration Using LSP." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1440426562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Atahary, Tanvir. "Acceleration of Cognitive Domain Ontologies." University of Dayton / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1460734067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Barbot, Benoît. "Acceleration for statistical model checking." Thesis, Cachan, Ecole normale supérieure, 2014. http://www.theses.fr/2014DENS0041/document.

Full text
Abstract:
In the past decades, the analysis of complex critical systems subject to uncertainty has become more and more important. In particular, the quantitative analysis of these systems is necessary to guarantee that their probability of failure is very small. As their state space is extremely large and the probability of interest is very small, typically less than one in a billion, classical methods do not apply to such systems. Model checking algorithms are used for the analysis of probabilistic systems; they take as input the system and its expected behaviour, and compute the probability with which the system behaves as expected. These algorithms have been broadly studied. They can be divided into two main families: numerical model checking and statistical model checking. The former computes small probabilities accurately by solving linear equation systems, but does not scale to very large systems due to the state space explosion problem. The latter is based on Monte Carlo simulation and scales well to big systems, but cannot deal with small probabilities. The main contribution of this thesis is the design and implementation of a method combining the two approaches and returning a confidence interval for the probability of interest. This method applies to systems with both continuous and discrete time settings, for time-bounded and time-unbounded properties. All the variants of this method rely on an abstraction of the model; this abstraction is analysed by a numerical model checker and the result is used to steer Monte Carlo simulations on the initial model. The abstraction should be small enough to be analysed by numerical methods and precise enough to improve the simulation. It can be built by the modeller, or alternatively a class of systems can be identified in which an abstraction can be automatically computed. This approach has been implemented in the tool Cosmos, and the method was successfully applied to classical benchmarks and a case study.
APA, Harvard, Vancouver, ISO, and other styles
26

Bailey, Fred Washington. "Models for differential age acceleration." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-12052009-020133/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Cathebras, Joël. "Hardware Acceleration for Homomorphic Encryption." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS576/document.

Full text
Abstract:
In this thesis, we propose to contribute to the definition of encrypted-computing systems for the secure handling of private data. The particular objective of this work is to improve the performance of homomorphic encryption. The main problem lies in the definition of an acceleration approach that remains adaptable to the different application cases of these encryption schemes, and which is therefore consistent with the wide variety of parameters. To that end, this thesis presents the exploration of a hybrid computing architecture for accelerating Fan and Vercauteren's encryption scheme (FV). This proposal is the result of an analysis of the memory and computational complexity of crypto-calculation with FV. Some of the contributions make the combination of a non-positional number representation system (RNS) with polynomial multiplication via the Fourier transform over finite fields (NTT) more effective. RNS-specific operations, which inherently embed parallelism, are accelerated on a SIMD computing unit such as a GPU. NTT-based polynomial multiplications are implemented on dedicated hardware such as an FPGA. Specific contributions support this proposal by reducing the storage and communication costs of handling the NTTs' twiddle factors. This thesis opens up perspectives for the definition of micro-servers for the manipulation of private data based on homomorphic encryption.
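The "Fourier transform over finite fields" (NTT) at the heart of this proposal can be illustrated with a naive, textbook implementation of NTT-based cyclic polynomial multiplication. The sketch below uses toy parameters and an O(n^2) transform purely for clarity; it bears no relation to the GPU/FPGA split explored in the thesis.

```python
def ntt_poly_mul(a, b, q=17, g=3):
    """Multiply two length-n polynomials in Z_q[x]/(x^n - 1) via a
    naive number-theoretic transform. q=17, g=3 (a primitive root
    mod 17) are toy values; n must divide q - 1 so that a primitive
    n-th root of unity exists mod q."""
    n = len(a)
    w = pow(g, (q - 1) // n, q)          # primitive n-th root of unity

    def ntt(x, root):
        # O(n^2) transform, kept simple instead of a fast butterfly.
        return [sum(x[j] * pow(root, i * j, q) for j in range(n)) % q
                for i in range(n)]

    A, B = ntt(a, w), ntt(b, w)
    C = [(u * v) % q for u, v in zip(A, B)]   # point-wise product
    w_inv, n_inv = pow(w, q - 2, q), pow(n, q - 2, q)
    return [(c * n_inv) % q for c in ntt(C, w_inv)]

# Example: (1 + 2x)(3 + 4x) mod 17 -> [3, 10, 8, 0]
assert ntt_poly_mul([1, 2, 0, 0], [3, 4, 0, 0]) == [3, 10, 8, 0]
```

In an FV-style scheme the same point-wise structure appears at cryptographic sizes, which is why the thesis maps the transforms to an FPGA while the embarrassingly parallel RNS channel arithmetic goes to a GPU.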
APA, Harvard, Vancouver, ISO, and other styles
28

Conti, Francesco <1988>. "Heterogeneous Architectures For Parallel Acceleration." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7406/1/phd_thesis_AMS_wFRONTESPIZIO.pdf.

Full text
Abstract:
To enable a new generation of digital computing applications, the greatest challenge is to provide a better level of energy efficiency (understood as the performance a system can provide within a certain power budget) without giving up a system's flexibility. This constraint applies to digital systems across all scales, from ultra-low-power implanted devices up to datacenters for high-performance computing and for the "cloud". In this thesis, we show that architectural heterogeneity is the key to providing this efficiency and to responding to many of the challenges of tomorrow's computer architecture, and at the same time we show methodologies to introduce it with little or no loss of flexibility. In particular, we show that heterogeneity can be employed to tackle the "walls" that impede further development of new computing applications: the utilization wall, i.e. the impossibility of keeping all transistors on in deeply integrated chips, and the "data deluge", i.e. the amount of data to be processed, which is scaling up much faster than computing performance and efficiency. We introduce a methodology to improve heterogeneous design exploration of tightly coupled clusters; moreover, we propose a fractal heterogeneity architecture that is a parallel accelerator for low-power sensor nodes, and is itself internally heterogeneous thanks to a heterogeneous coprocessor for brain-inspired computing. This platform, which is silicon-proven, can lead to more than 100x improvement in energy efficiency with respect to typical computing nodes used within the same domain, enabling the application of complex algorithms vastly more performance-hungry than the current state of the art in the ULP computing domain.
APA, Harvard, Vancouver, ISO, and other styles
29

Conti, Francesco <1988>. "Heterogeneous Architectures For Parallel Acceleration." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7406/.

Full text
Abstract:
To enable a new generation of digital computing applications, the greatest challenge is to provide a better level of energy efficiency (understood as the performance a system can provide within a certain power budget) without giving up a system's flexibility. This constraint applies to digital systems across all scales, from ultra-low-power implanted devices up to datacenters for high-performance computing and for the "cloud". In this thesis, we show that architectural heterogeneity is the key to providing this efficiency and to responding to many of the challenges of tomorrow's computer architecture, and at the same time we show methodologies to introduce it with little or no loss of flexibility. In particular, we show that heterogeneity can be employed to tackle the "walls" that impede further development of new computing applications: the utilization wall, i.e. the impossibility of keeping all transistors on in deeply integrated chips, and the "data deluge", i.e. the amount of data to be processed, which is scaling up much faster than computing performance and efficiency. We introduce a methodology to improve heterogeneous design exploration of tightly coupled clusters; moreover, we propose a fractal heterogeneity architecture that is a parallel accelerator for low-power sensor nodes, and is itself internally heterogeneous thanks to a heterogeneous coprocessor for brain-inspired computing. This platform, which is silicon-proven, can lead to more than 100x improvement in energy efficiency with respect to typical computing nodes used within the same domain, enabling the application of complex algorithms vastly more performance-hungry than the current state of the art in the ULP computing domain.
APA, Harvard, Vancouver, ISO, and other styles
30

Acquaviva, Viviana. "Weak Lensing and Cosmic Acceleration." Doctoral thesis, SISSA, 2006. http://hdl.handle.net/20.500.11767/4188.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Nodes, Christoph. "Particle Acceleration in Pulsar Wind Nebulae." Diss., lmu, 2008. http://nbn-resolving.de/urn:nbn:de:bvb:19-80683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Choony, Nandeo. "Steric acceleration of some pericyclic reactions." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843609/.

Full text
Abstract:
Steric effects are important in synthesis. Whilst steric hindrance is well known to hinder reactions, steric effects can also be employed to accelerate reactions, in particular cycloaddition reactions, and even to promote reactions that otherwise do not occur. A survey of previous work on steric effects in chemical reactions, principally cycloadditions, is included. This includes a brief discussion of the importance of orientation and solvent effects on Diels-Alder cyclisations and ene reactions. The effect of substituents on the cyclisation of N-allyl furfurylamines has been studied. It was shown that bulky N-protecting groups enhance cyclisation, an effective buttress being the trityl (triphenylmethyl) group. The latter has the added advantage of being particularly easy to remove. A study of some ene reactions has also been carried out, and steric acceleration of these processes has also been observed. A novel reaction involving an intermolecular cycloaddition followed by a sterically accelerated ene reaction has also been uncovered. Some attempts have also been made at carrying out these sterically accelerated reactions on a solid support, as required in combinatorial chemistry. This involved the preparation of a new type of substituted support, with a view to utilising it in a combinatorial approach to synthesis. The structures have been supported by using a molecular modelling package, Alchemy 2000 from Tripos Associates Inc.
APA, Harvard, Vancouver, ISO, and other styles
33

Akkerman, V'yacheslav. "Turbulent burning, flame acceleration, explosion triggering." Doctoral thesis, Umeå : Department of Physics, Umeå Univ, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Marquez, Damian Jose Ignacio. "Multilevel acceleration of neutron transport calculations." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19731.

Full text
Abstract:
Thesis (M.S.)--Nuclear and Radiological Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Stacey, Weston M.; Committee Co-Chair: de Oliveira, Cassiano R.E.; Committee Member: Hertel, Nolan; Committee Member: van Rooijen, Wilfred F.G.
APA, Harvard, Vancouver, ISO, and other styles
35

Williams, Logan Todd. "Ion acceleration mechanisms of helicon thrusters." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47691.

Full text
Abstract:
A helicon plasma source is a device that can efficiently ionize a gas to create high-density, low-temperature plasma. There is growing interest in utilizing a helicon plasma source in propulsive applications, but it is not yet known whether the helicon plasma source is able to function as both an ion source and an ion accelerator, or whether an additional ion acceleration stage is required. In order to evaluate the capability of the helicon source to accelerate ions, the acceleration and ionization processes must be decoupled and examined individually. To accomplish this, a case study of two helicon thruster configurations is conducted. The first is an electrodeless design that consists of the helicon plasma source alone, and the second is a helicon ion engine that combines the helicon plasma source with the electrostatic grids used in ion engines. The gridded configuration separates the ionization and ion acceleration mechanisms and allows for individual evaluation not only of ion acceleration, but also of the components of total power expenditure and the ion production cost. In this study, both thruster configurations are fabricated and experimentally characterized. The metrics used to evaluate ion acceleration are ion energy, ion beam current, and the plume divergence half-angle, as these capture the magnitude of ion acceleration and the bulk trajectory of the accelerated ions. The electrodeless thruster is further studied by measuring the plasma potential, ion number density, and electron temperature inside the discharge chamber and in the plume up to 60 cm downstream and 45 cm radially outward. The two configurations are tested across several operating parameter ranges: 343-600 W RF power, 50-450 G magnetic field strength, 1.0-4.5 mg/s argon flow rate, and the gridded configuration is tested over a 100-600 V discharge voltage range. Both configurations have thrust and efficiency below those of contemporary thrusters of similar power, but are distinct in terms of ion acceleration capability. The gridded configuration produces a 65-120 mA ion beam with energies in the hundreds of volts that is relatively collimated. The operating conditions also demonstrate clear control over the performance metrics. In contrast, the electrodeless configuration generally produces a beam current of less than 20 mA at energies between 20-40 V in a very divergent plume. The ion energy is set by the change in plasma potential from inside the device to the plume. The divergent ion trajectories are caused by regions of high plasma potential that create radial electric fields. Furthermore, the operating conditions have limited control over the resulting performance metrics. The estimated ion production cost of the helicon ranged between 132-212 eV/ion for argon, the lower bound of which is comparable to the 157 eV/ion in contemporary DC discharges. The primary power expenditures are due to ion loss to the walls and high electron temperature leading to energy loss at the plasma sheaths. The conclusion from this work is that the helicon plasma source is unsuitable as a single-stage thruster system. However, it is an efficient ion source and, if paired with an additional ion acceleration stage, can be integrated into an effective propulsion system.
APA, Harvard, Vancouver, ISO, and other styles
36

Andresen, Ellen Wiig. "Novelty Detection in Knowledge Base Acceleration." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-22971.

Full text
Abstract:
Knowledge bases provide users of the World Wide Web with a vast amount of structured information. They are meant to represent what we know about the world as it is today, so every time something happens, knowledge bases need to be updated accordingly. A knowledge base is most often organized around entities and their relations. An entity represents an object in the real world, such as a religion, a person or a place, and a relation is a connection between two entities. Today, the process of updating knowledge bases is done entirely by humans, who unfortunately are not able to keep up with everything that happens in the world. To make this job easier, systems for Knowledge Base Acceleration (KBA) have been proposed. Given a stream of news, they are meant to pick out the relevant updates for the different entities in a knowledge base. To make the most of such a system, and to ensure that it only returns news that provides useful information to the content managers, it should only return news that contains new information; that is, it should perform novelty detection. This thesis explores the properties a KBA system needs to fulfil in order to solve its task as well as possible. It argues that a KBA system needs to include novelty detection to be useful, and presents a prototype for novelty detection in a KBA system. The prototype is implemented using different approaches to novelty detection, and these approaches are compared.
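A minimal sketch of one common novelty-detection approach is shown below (an illustrative assumption; the thesis prototype implements and compares several approaches, which may differ from this one): a document is forwarded only if it is sufficiently dissimilar from everything already accepted for the same entity.

```python
# Sketch: flag a document as novel only if its maximum cosine
# similarity to previously accepted documents stays below a threshold.
import numpy as np

def is_novel(doc_vec, history, threshold=0.8):
    """doc_vec: term-weight vector; history: vectors already accepted."""
    for past in history:
        sim = doc_vec @ past / (np.linalg.norm(doc_vec) * np.linalg.norm(past))
        if sim >= threshold:
            return False  # too similar to something already seen
    return True

# Toy stream of (already vectorised) documents for one entity.
stream = [np.array([1.0, 0.0, 1.0]),
          np.array([0.9, 0.1, 1.0]),   # near-duplicate of the first
          np.array([0.0, 1.0, 0.0])]   # genuinely new content
history = []
for vec in stream:
    if is_novel(vec, history):
        history.append(vec)            # would be forwarded to content managers
print(len(history))                    # 2: the near-duplicate was dropped
```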
APA, Harvard, Vancouver, ISO, and other styles
37

Pelletier, Stéphane. "Acceleration methods for image super-resolution." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86530.

Full text
Abstract:
Image super-resolution (SR) attempts to recover a high-resolution (HR) image or video sequence from a set of degraded and aliased low-resolution (LR) ones. The computational complexity associated with many SR algorithms may hinder their use in time-critical applications. This motivates our interest in techniques for accelerating computations associated with edge-preserving image SR problems. Edge-preserving formulations are preferable to quadratic ones since they yield perceptually improved images with sharper edges. First, we propose a simple preconditioning method for accelerating the solution of edge-preserving image restoration problems in which a linear shift-invariant (LSI) point spread function (PSF) is employed. This application is a special case of SR with a single LR image and a magnification factor of one. We demonstrate that the proposed approach offers significant advantages in simplicity and, in several cases, speed over traditional methods for accelerating such problems.
Secondly, we adapt the previous approach to edge-preserving SR problems from multiple translated LR images. Our technique involves reordering the HR pixels in a similar way to what is done in preconditioning methods for quadratic formulations. However, due to the edge-preserving requirements, the Hessian matrix of the cost function varies during the minimization process. We develop an efficient update scheme for the preconditioner in order to cope with this situation. Unlike some other acceleration strategies that round the displacement values between the LR images on the HR grid, the proposed method does not sacrifice the optimality of the observation model.
Thirdly, we describe a technique for preconditioning SR problems involving rational magnification factors. The use of such factors is motivated in part by the fact that, under certain circumstances, optimal SR zooms are non-integers. We show that by reordering the pixels of the LR images, the structure of the problem to solve is modified in such a way that preconditioners based on circulant operators can be used.
Finally, we apply our SR acceleration techniques to compressed color video sequences and to Bayer pattern images taken from a camera whose sensor is covered with a color filter array (CFA). Through experimental results, we demonstrate that the proposed techniques can provide significant speed improvement in many scenarios.
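The circulant-preconditioning idea in the third contribution admits a compact illustration. The sketch below shows the general technique under simplifying assumptions (it is not the thesis code): a circulant matrix is diagonalized by the FFT, so its inverse can be applied in O(n log n) inside preconditioned conjugate gradient (PCG).

```python
# Sketch of PCG with a circulant preconditioner applied via the FFT.
import numpy as np

def pcg(apply_A, b, apply_Minv, tol=1e-8, max_iter=200):
    """Solve A x = b for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

def circulant_inverse(first_col):
    """Eigenvalues of a circulant matrix are the FFT of its first column."""
    eig = np.fft.fft(first_col)
    return lambda v: np.real(np.fft.ifft(np.fft.fft(v) / eig))

# Demo: an SPD circulant system, so the preconditioner is exact here.
n = 64
c = np.zeros(n); c[0] = 2.1; c[1] = c[-1] = -1.0
A = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
b = np.ones(n)
x = pcg(lambda v: A @ v, b, circulant_inverse(c))
print(np.linalg.norm(A @ x - b))   # ~0: converged
```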
APA, Harvard, Vancouver, ISO, and other styles
38

Fernández, Becerra David. "Multicore acceleration of sparse electromagnetics computations." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104641.

Full text
Abstract:
Multicore processors have become the dominant industry approach to increasing computer system performance, driving electromagnetics (EM) practitioners to redesign their applications using parallel programming paradigms. This is especially true for computations involving complex data structures, such as the sparse matrix computations that often arise in EM simulations with the finite element method (FEM). These computations require pointer manipulation that defeats many compiler optimizations and parallel shared-memory frameworks (e.g. OpenMP). This work presents new sparse data structures and techniques to efficiently exploit multicore parallelism and short-vector units (the latter of which have not been exploited by state-of-the-art sparse matrix libraries) for recurrent computationally intensive kernels in EM simulations, such as the sparse matrix-vector multiplication (SMVM) and conjugate gradient (CG) algorithms. Speedups of up to 14x are demonstrated for the accelerated SMVM kernel and up to 5.8x for the CG kernel using the proposed methods over conventional approaches on two different multicore architectures. Finally, a new method to solve the FEM in parallel is presented, and an optimized implementation is realized on two different generations of NVIDIA GPU (manycore) accelerators with performance increases of up to 27.53x compared to compiler-optimized CPU results.
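For readers unfamiliar with the SMVM kernel named above, the sketch below shows the compressed sparse row (CSR) product in its simplest form (illustrative code, not the thesis implementation); each output row touches only its own slice of the arrays, which is what makes the kernel amenable to multicore and short-vector parallelism.

```python
# Sketch of CSR sparse matrix-vector multiplication: y = A @ x.
import numpy as np

def smvm_csr(values, col_idx, row_ptr, x):
    """values/col_idx: nonzeros and their columns; row_ptr: row offsets."""
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):                       # rows are independent
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 3x3 example: A = [[4, 0, 1], [0, 3, 0], [1, 0, 2]]
values  = np.array([4.0, 1.0, 3.0, 1.0, 2.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(smvm_csr(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))  # [5. 3. 3.]
```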
APA, Harvard, Vancouver, ISO, and other styles
39

Burge, Christina Alice. "Particle acceleration in noisy magnetised plasmas." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3588/.

Full text
Abstract:
Particle dynamics in the solar corona are of interest because the behaviour of the coronal plasma is important for understanding how the solar corona is heated to such high temperatures compared to the photosphere (≈ 1 million Kelvin, compared to a photospheric temperature of ≈ 6 thousand Kelvin). This thesis deals with particle behaviour in various forms of magnetic and electric fields. The method by which particles are accelerated at reconnection regions is of particular interest, as such acceleration is the basis for many solar flare models. Solar flares are releases of energy in the solar corona; the amounts of energy released range from the very small amounts released by nanoflares, which cannot be observed individually, to large events such as X-class flares and coronal mass ejections. Chapter 1 provides background information about the structure of the Sun and about various forms of solar activity, including solar flares, sunspots, and the generation of the solar magnetic field. Chapter 2 explores various theories of magnetic reconnection. Magnetic reconnection regions are usually characterised as containing a central 'null', a region where the magnetic field is zero; there, particles decouple from the magnetic field, move non-adiabatically, and can be freely accelerated in the presence of an electric field. Chapter 2 gives examples of how such reconnection regions could be formed. Chapter 3 deals with the construction of a 'noisy' reconnection region. For the purposes of this work, 'noisy' fields were created by perturbing the magnetic and electric fields with a superposition of eigenmode oscillations. The method for the calculation of such eigenmodes, and the creation of the electric and magnetic fields, is detailed here. Chapter 4 details the consequences for particle behaviour in a noisy reconnection region. The behaviour of electrons and protons in such fields was studied. It was found that adding perturbations to the magnetic field caused many smaller nulls to form, which increased the size of the non-adiabatic region. This increased non-adiabatic region led to greater energisation of particles. The X-ray spectra that could be produced by the accelerated electrons were also calculated. In this chapter I also investigate the consequences of altering the distribution of the spectrum of modes, and of altering the value of the inertial resistivity. In Chapter 5, the effects of collisional scattering on particles are investigated. Collisional scattering was introduced by integrating particle trajectories using a stochastic Runge-Kutta method (a form of numerical integration). It was found that adding collisional scattering at a reconnection region causes a significant change in particle dynamics in sufficiently small electric fields: particles which undergo collisional scattering in the presence of a small electric field gain more energy than those which do not, and this effect decreases as the size of the electric field is increased. The correct relativistic expressions for particle collisions were derived; collisions were found to have a negligible effect on relativistic particles. Collisional scattering was also used to simulate the drift of particles across magnetic fields. It was found that adding more scattering caused the trajectories of the particles to change from normal gyromotion around the magnetic field to transport across it. I also developed a diffusion coefficient to allow the calculation of a particle's drift across a magnetic field using only 1D equations. Chapter 6 discusses the findings made in this thesis, and explores how these findings could be built upon in the near future.
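As a rough illustration of the stochastic integration idea in Chapter 5 (a hedged sketch, not the thesis code; it uses a simple Euler-Maruyama step where the thesis uses a stochastic Runge-Kutta scheme), the particle velocity is advanced with deterministic Lorentz-force and friction terms plus a random collisional kick:

```python
# Sketch: dv = q/m (E + v x B) dt - nu v dt + sqrt(2 D dt) dW,
# with nu a collision frequency and D a velocity-space diffusion
# coefficient (both illustrative parameters, normalised units).
import numpy as np

def step(v, E, B, q_m, nu, D, dt, rng):
    """One Euler-Maruyama step of the Langevin equation above."""
    drift = q_m * (E + np.cross(v, B)) - nu * v
    kick = np.sqrt(2.0 * D * dt) * rng.standard_normal(3)
    return v + drift * dt + kick

rng = np.random.default_rng(0)
v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])   # uniform field along z
E = np.zeros(3)
for _ in range(1000):
    v = step(v, E, B, q_m=1.0, nu=0.01, D=1e-4, dt=0.01, rng=rng)
# With kicks, the trajectory departs from pure gyromotion and the
# particle drifts across the field lines, as described in the abstract.
```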
APA, Harvard, Vancouver, ISO, and other styles
40

Eliasson, Lars. "Satellite observations of auroral acceleration processes." Doctoral thesis, Umeå universitet, Rymdfysik, 1994. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-102339.

Full text
Abstract:
Measurements with satellite- and sounding-rocket-borne instruments contain important information on remote and local processes in regions containing matter in the plasma state. The characteristic features of the particle distributions can be used to explain the morphology and dynamics of the different plasma populations. Charged particles are lost from a region due to precipitation into the atmosphere, charge exchange processes, or convection to open magnetic field lines. The sources of the Earth's magnetospheric plasma are mainly ionization and extraction of upper-atmosphere constituents, and entry of solar wind plasma. The intensity and distribution of auroral precipitation is controlled in part by the conditions of the interplanetary magnetic field, causing different levels of auroral activity. Acceleration of electrons and positive ions along auroral field lines plays an important role in magnetospheric physics. Electric fields that are quasi-steady during particle transit times, as well as fluctuating fields, are important for our understanding of the behaviour of the plasma in the auroral region. High-resolution data from the Swedish Viking and the Swedish/German Freja satellites have increased our knowledge considerably about the interaction processes between different particle populations and between particles and wave fields. This thesis describes acceleration processes influencing both ions and electrons and is based on in-situ measurements in the auroral acceleration/heating region, with special emphasis on processes at very high latitudes, the role of fluctuating electric fields in producing so-called electron conics, and positive-ion heating transverse to the geomagnetic field lines.

Diss. (summary) Umeå: Umeå universitet, 1994; with 6 accompanying papers.


APA, Harvard, Vancouver, ISO, and other styles
41

Kluge, Thomas. "Enhanced Laser Ion Acceleration from Solids." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-102681.

Full text
Abstract:
This thesis presents results on the theoretical description of ion acceleration using ultra-short, ultra-intense laser pulses. It consists of two parts. One deals with the general underlying description and theoretical modeling of the laser interaction with the plasma; the other presents three approaches to optimizing the ion acceleration through target geometry improvements, using the results of the first part. In the first part, a novel approach to modeling the average electron energy of an over-critical plasma irradiated by a laser pulse of a few tens of femtoseconds at relativistic intensity is introduced. The first step is the derivation of a general expression for the distribution of accelerated electrons in the laboratory time frame. As is shown, the distribution is homogeneous in the proper time of the accelerated electrons, provided they are initially at rest and distributed uniformly. The average hot electron energy can then be derived in a second step from a weighted average of the single-electron energy evolution. This result is applied to two exemplary cases: the important case of infinite laser contrast with a square temporal profile, and the experimentally more realistic case of a laser pulse whose temporal profile produces a preplasma with a scale length of a few hundred nanometers prior to the pulse peak. The electron temperatures derived in this way are in excellent agreement with recent measurements and simulations, and in particular provide an analytic explanation for the reduced temperatures seen in both experiments and simulations compared to the widely used ponderomotive energy scaling. The implications of this new electron temperature scaling for ion acceleration, i.e. the maximum proton energy, are then briefly studied in the frame of an isothermal 1D expansion model. Based on this model, two distinct regions of laser pulse duration are identified with respect to the maximum energy scaling. For laser pulses short compared to a reference time, the maximum ion energy is found to scale linearly with the laser intensity for a simple flat foil, and the most important other parameter is the laser absorption efficiency; the electron temperature is of minor importance. For long laser pulse durations the maximum ion energy scales only with the square root of the laser peak intensity, and the electron temperature has a large impact. Consequently, improvements of the ion acceleration beyond the simple flat-foil maximum energies should focus on increasing the laser absorption in the first case and increasing the hot electron temperature in the latter case. In the second part, exemplary geometric designs are studied by means of simulations and analytic discussion with respect to their capability to improve the laser absorption efficiency and the electron temperature. First, a stack of several foils spaced by a few hundred nanometers is proposed, and it is shown that the laser energy absorption for short pulses, and therefore the maximum proton energy, can be significantly increased. Second, mass-limited targets, i.e. thin foils with a finite lateral extension, are studied with respect to the increase of the hot electron temperature. An analytical model predicting this temperature from the lateral foil width is provided. Finally, the important case of bent foils with an attached flat top is analyzed.
This target geometry resembles hollow cone targets with a flat top attached to the tip, as used in a recent experiment that produced world-record proton energies. The presented analysis explains the observed increase in proton energy with a new electron acceleration mechanism: the direct acceleration of surface-confined electrons by the laser light. This mechanism occurs when the laser is aligned tangentially to the curved cone wall and the laser phase co-moves with the energetic electrons. The resulting average electron energy can exceed that from normal or oblique laser incidence several times over. Proton energies are therefore also greatly increased and show a theoretical scaling proportional to the laser intensity, even for long laser pulses.
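For context, the widely used ponderomotive scaling referred to above is commonly written as follows (a standard result quoted for reference, not taken from the thesis text):

```latex
% Ponderomotive (Wilks) scaling of the hot-electron temperature;
% a_0 is the dimensionless laser amplitude, I the intensity and
% \lambda the laser wavelength.
k_B T_{\mathrm{hot}} \approx m_e c^2 \left( \sqrt{1 + a_0^2 / 2} - 1 \right),
\qquad
a_0 \approx 0.85 \sqrt{ I\,[10^{18}\,\mathrm{W/cm^2}] \, \lambda^2\,[\mu\mathrm{m}^2] }
```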
APA, Harvard, Vancouver, ISO, and other styles
42

Champion, Ronan. "Acceleration in construction and engineering contracts." Thesis, King's College London (University of London), 2005. https://kclpure.kcl.ac.uk/portal/en/theses/acceleration-in-construction-and-engineering-contracts(caf6065f-f99b-40b6-aa76-73dada69ad47).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ptychion, Panagiota Petkaki. "Particle acceleration in dynamical collisionless reconnection." Thesis, University of Glasgow, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Robert Edward. "Ion acceleration at reforming astrophysical shocks." Thesis, University of Warwick, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412849.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Bjorkhaug, M. "Flame acceleration in obstructed radial geometries." Thesis, City University London, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.374279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hanahoe, Kieran. "Simulation studies of plasma wakefield acceleration." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/simulation-studies-of-plasma-wakefield-acceleration(ac0c9742-2aed-493b-8356-e30f3db97e1e).html.

Full text
Abstract:
Plasma-based accelerators offer the potential to achieve accelerating gradients orders of magnitude higher than are typical in conventional accelerators. A Plasma Accelerator Research Station has been proposed using the CLARA accelerator at Daresbury Laboratory. In this thesis, theory and the results of particle-in-cell simulations are presented, investigating experiments that could be conducted using CLARA as well as the preceding VELA and CLARA Front End. Plasma wakefield acceleration was found to be viable with both CLARA and CLARA Front End, with accelerating gradients at the GV/m and 100 MV/m scale respectively. Drive-witness and tailored bunch structures based on the CLARA bunch were also investigated. Plasma focusing of the VELA and CLARA Front End bunches was studied in simulations, showing that a substantial focusing gradient could be achieved using a passive plasma lens. A plasma beam dump scheme using varying plasma density is also presented. This scheme allows the performance of a passive plasma beam dump to be maintained as the bunch is decelerated and has some advantages over a previously proposed method.
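For scale, the gradients quoted above can be compared with the cold wave-breaking field, the standard yardstick in plasma acceleration (quoted for reference, not taken from the thesis):

```latex
% Cold nonrelativistic wave-breaking field at plasma density n_0.
E_0 = \frac{m_e c \, \omega_p}{e},
\qquad
\omega_p = \sqrt{\frac{n_0 e^2}{\varepsilon_0 m_e}},
\qquad
E_0\,[\mathrm{V/m}] \approx 96 \sqrt{ n_0\,[\mathrm{cm^{-3}}] }
```

For example, a plasma density of 10^17 cm^-3 gives E_0 of roughly 30 GV/m.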
APA, Harvard, Vancouver, ISO, and other styles
47

Ramanathan, Jairam 1979. "Analysis and acceleration for target recognition." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86833.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 80-83).
by Jairam Ramanathan.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
48

Jilek, Jiri, Anil V. Khadilkar, and Nabih Alem. "Head-mounted Impact Acceleration Measurement System." International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614675.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California
The system measures impact accelerations imparted to a boxer's head during a boxing bout. It comprises three major subsystems: 1) the acceleration data transmitter located on the boxer's body; 2) the receiving and storage subsystem; and 3) the data processing subsystem.
APA, Harvard, Vancouver, ISO, and other styles
49

Kouropalatis, John. "Texture mapping acceleration using cache memories." Thesis, University of Sussex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341532.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Bleach, Gordon Phillip. "Acceleration waves in constrained thermoelastic materials." Doctoral thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/15850.

Full text
Abstract:
Bibliography: pages 242-249.
We study the propagation and growth of acceleration waves in isotropic thermoelastic media subject to a broad class of thermomechanical constraints. The work is based on an existing thermodynamic theory of constrained thermoelastic materials presented by Reddy (1984) for both definite and non-conductors, but we differ by adopting a new definition of a constrained non-conductor and by investigating the consequences of isotropy. The set of constraints considered is not arbitrary but is large enough to include most constraints commonly found in practice. We also extend Reddy's (1984) work by including consideration of sets of constraints for which a set of vectors associated with the constraints is linearly dependent. These vectors play a significant role in the propagation conditions and in the growth equations described below. Propagation conditions (of Fresnel-Hadamard type) are derived for both homothermal and homentropic waves, and solutions for longitudinal and transverse principal waves are discussed. The derivations involve the determination of jumps in the time derivative of constraint multipliers which are required in the solution of the corresponding growth equations, and it is found that these multipliers cannot be separately determined if the set of constraint vectors mentioned above is linearly dependent. This difficulty forces us to restrict the constraint set for which the growth equations for homothermal and homentropic waves can be derived. The growth of plane, cylindrical and spherical waves is considered and solutions are discussed, concentrating on the influence of the constraints on the results.
APA, Harvard, Vancouver, ISO, and other styles
