Dissertations / Theses on the topic 'Method of imaginary sources'

To see the other types of publications on this topic, follow the link: Method of imaginary sources.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Method of imaginary sources.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Дудко, Андрій Володимирович. "Модуль генерації гідроакустичного сигналу в плоско-паралельному хвилеводі." Bachelor's thesis, КПІ ім. Ігоря Сікорського, 2019. https://ela.kpi.ua/handle/123456789/28408.

Full text
Abstract:
The aim of this bachelor's thesis is to create a software product that generates a hydroacoustic signal in a plane-parallel waveguide using a ray-based method. The object of study is signal-modelling methods and algorithms. Existing software applications for signal simulation were reviewed, together with the problems specific to modelling hydroacoustic signals, and a software product for generating hydroacoustic signals was developed. It computes the pressure field in a plane-parallel waveguide by the method of imaginary (image) sources, which belongs to the family of ray models. The resulting software can be used as part of a system for modelling hydroacoustic objects and for scientific research. Total volume of the work: 67 pages, 19 figures, 17 bibliographic references, and 3 appendices.
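The pressure field summed over imaginary (image) sources admits a compact numerical sketch. The following is a minimal illustration, not the thesis's actual software: it assumes an iso-speed waveguide with a pressure-release surface and a bottom of given reflection coefficient, and sums the classical four image-source families per reflection order. All names and defaults (e.g. `c = 1500` m/s) are illustrative assumptions.

```python
import numpy as np

def image_source_field(r, z, zs, D, freq, c=1500.0, Rs=-1.0, Rb=1.0, n_orders=30):
    """Acoustic pressure at range r and depth z for a point source at depth zs
    in an iso-speed plane-parallel waveguide of depth D, summed over image
    sources. Rs, Rb: surface and bottom reflection coefficients
    (Rs = -1 models a pressure-release sea surface)."""
    k = 2.0 * np.pi * freq / c
    p = 0.0 + 0.0j
    for m in range(n_orders):
        refl = (Rs * Rb) ** m
        # Four image depths per reflection order m, with the amplitude signs
        # tracking the surface/bottom reflections each image represents.
        dz = [z - zs - 2 * m * D,          # direct family
              z + zs + 2 * m * D,          # surface-reflected family
              z + zs - 2 * (m + 1) * D,    # bottom-reflected family
              z - zs + 2 * (m + 1) * D]    # surface-then-bottom family
        amps = [1.0, Rs, Rb, Rs * Rb]
        for d, a in zip(dz, amps):
            R = np.hypot(r, d)
            p += refl * a * np.exp(1j * k * R) / R
    return p
```

A quick sanity check of the sketch: with a pressure-release surface the field must vanish at z = 0, because each image pairs with an opposite-sign twin at the mirrored depth.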
APA, Harvard, Vancouver, ISO, and other styles
2

Velde, Antoine van de. "A multidimensional boundary sources method." Doctoral thesis, Universite Libre de Bruxelles, 1994. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212674.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Falk, Sofia. "May the algorithm be with you : En mixed method studie om Instagrams personliga algoritmer." Thesis, Stockholms universitet, Institutionen för mediestudier, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-156939.

Full text
Abstract:
The social medium Instagram is an application in which people around the world can share trips, meals, and the arrival of a new family member. By commenting, liking, archiving, and exploring, users can stay updated around the clock. When Instagram announced in March 2016 that it would introduce algorithms, which profile and map users, the ranking of the posts in users' feeds changed. Who now gets to see what, when, and how is governed by these invisible mathematical formulas. The study aims to examine how Swedish Instagram users aged 15-40 experience these personalised algorithms and whether the algorithms affect how they use the application. Since Instagram is closely associated with self-presentation, it is also of interest to examine what role the algorithms play in individuals' views of themselves. Using mixed methods, both a survey and qualitative interviews were conducted to gain a thorough understanding of the phenomenon on several levels. The quantitative part aims to form a more general picture of how individuals experience the algorithms and how they use the application, while the qualitative part deepens the understanding of the relationship between individuals and algorithms. With the help of theories concerning visibility, algorithms, and identity, the goal is to reach a deeper understanding of this fairly new phenomenon. The analysis shows that awareness is moderate and knowledge of the algorithms relatively limited. There is a clear difference between those who have formed their own theories about how the algorithms work and those who are entirely unaware. It was also evident that the algorithms had an impact, both conscious and unconscious, on individuals' strategies for gaining visibility and likes. Finally, the personalised algorithms turned out to play a comparatively large role in individuals' views of themselves in terms of validation and reflection.
APA, Harvard, Vancouver, ISO, and other styles
4

Koligliatis, Thanos. "A scattering method for bone density measurements with polychromatic sources." Thesis, University College London (University of London), 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.283964.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Drira, Achraf. "Geoacoustic inversion : improvement and extension of the sources image method." Thesis, Brest, 2015. http://www.theses.fr/2015BRES0089/document.

Full text
Abstract:
This thesis analyses the signals emitted by a spherical omnidirectional source, reflected by a stratified sedimentary seabed, and recorded by a hydrophone array, in order to characterise marine sediments quantitatively at medium frequencies, i.e. between 1 and 10 kHz. The research provides a methodology that facilitates the estimation of the medium's geoacoustic parameters with the image source method, together with a set of technical solutions that improve this recently developed inversion method. The image source method rests on a physical model of the reflection, under the Born approximation, of the waves emitted by a source onto a stratified medium. The reflection off the layered medium can therefore be represented by a collection of image sources, symmetric to the real source with respect to the interfaces, whose spatial positions are related to the sound speeds and thicknesses of the layers. The study comprises two parts: signal processing and inversion of the geoacoustic parameters. The first part focuses on the development of the image source method itself. The original method relied on migration and semblance maps of the recorded signals to determine the inputs of the inversion algorithm, namely travel times and arrival angles. To avoid this step, we detect the travel times with the Teager-Kaiser energy operator (TKEO) and estimate the arrival angles by triangulation. The inversion model is then integrated while accounting for possible deformation of the antenna. This part concludes with a new approach combining TKEO with time-frequency representations to obtain reliable travel-time detection for heavily noise-corrupted signals.
On the modelling and geoacoustic inversion side, we first give a precise description of the forward model by introducing the concept of virtual image sources, which provides a deeper understanding of the approach. We then extend the image source method to the inversion of additional geoacoustic parameters: the density, the attenuation, and the shear-wave speed. This extension builds on the results of the original inversion (estimates of the number of layers, their thicknesses, and the compressional-wave speeds) and on the amplitudes of the reflected signals. These improvements and extensions of the image source method are illustrated on synthetic signals and on real signals from tank and at-sea experiments. The results are very satisfactory, both in computational performance and in the quality of the estimates provided.
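The Teager-Kaiser energy operator used for travel-time detection has a very short discrete form. A minimal sketch follows; the threshold detector and its 10% level are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def tkeo(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    For a pure tone A*cos(W*n + phi) this equals A^2 * sin(W)^2 exactly, so it
    reacts to both amplitude and instantaneous frequency."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]  # replicate edge values
    return psi

def detect_arrival(x, threshold_ratio=0.1):
    """First sample where the TKEO output exceeds a fraction of its maximum."""
    psi = tkeo(x)
    return int(np.argmax(psi > threshold_ratio * psi.max()))
```

Because the operator is local (three samples), it reacts to the onset of an arrival far more sharply than a sliding energy window, which is what makes it attractive for picking travel times in noisy records.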
APA, Harvard, Vancouver, ISO, and other styles
6

Phung, Huong Thi Hoai. "Imaging of seismic and hum sources by time reversal method." Paris 7, 2010. http://www.theses.fr/2010PA077169.

Full text
Abstract:
Studying and understanding earthquakes is the goal of many researchers. Today, with the development of the FDSN (Federation of Digital Seismograph Networks), seismograms, which carry information both on the seismic source and on propagation (i.e. Earth-structure) effects, are recorded continuously. An earthquake can easily be located by applying classical methods to the seismograms. However, obtaining more accurate results on the seismic source (moment tensor, source time function) requires solving an inverse problem. The time-reversal (TR) method has been successfully applied to acoustic waves in fields such as imaging, underwater acoustics, and non-destructive testing, and to seismic waves for earthquake location and glacial-earthquake imaging. In this thesis we present applications of the TR principle to synthetic seismograms from virtual earthquakes. We then show the focusing, in space and time, of the 12 January 2010 Haiti earthquake and the reconstruction of its focal mechanism, obtained by applying the TR method to complete and one-bit seismograms. The excitation level of the Earth's seismic hum is well observed, but its spatial and temporal origin remains unclear; the idea is to exploit the focusing power of the TR method to locate it, assuming that its source may be localised in space but not in time. We conclude that the hum source is not local but is distributed over at least a regional scale.
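In a homogeneous medium, the focusing step of time reversal reduces to back-propagating each record by its travel time and stacking. The toy 1-D example below is only a sketch of that idea (geometry, pulse, and all numbers are invented; real TR imaging re-propagates the reversed seismograms through an Earth model): the coherent stack peaks at the true source position.

```python
import numpy as np

def tr_image(records, stations, grid, c, dt):
    """For each candidate source position, advance every record by its travel
    time (the discrete analogue of time-reversed re-propagation in a
    homogeneous medium) and return the peak of the coherent stack."""
    n_t = records.shape[1]
    img = np.zeros(len(grid))
    for gi, x in enumerate(grid):
        stack = np.zeros(n_t)
        for rec, xs in zip(records, stations):
            d = int(round(abs(xs - x) / (c * dt)))  # travel time in samples
            if d < n_t:
                stack[:n_t - d] += rec[d:]          # undo the propagation delay
        img[gi] = stack.max()
    return img
```

Only at the true source do all records align at the emission time, so the stack there is the sum of all pulse amplitudes; everywhere else the arrivals interleave and the peak is lower.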
APA, Harvard, Vancouver, ISO, and other styles
7

Camargo, Hugo Elias. "A Frequency Domain Beamforming Method to Locate Moving Sound Sources." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/27765.

Full text
Abstract:
A new technique to de-Dopplerize microphone signals from moving sources of sound is derived. Currently available time domain de-Dopplerization techniques require oversampling and interpolation of the microphone time data. In contrast, the technique presented in this dissertation performs the de-Dopplerization entirely in the frequency domain eliminating the need for oversampling and interpolation of the microphone data. As a consequence, the new de-Dopplerization technique is computationally more efficient. The new de-Dopplerization technique is then implemented into a frequency domain beamforming algorithm to locate moving sources of sound. The mathematical formulation for the implementation of the new de-Dopplerization technique is presented for sources moving along a linear trajectory and for sources moving along a circular trajectory, i.e. rotating sources. The resulting frequency domain beamforming method to locate moving sound sources is then validated using numerical simulations for various source configurations (e.g. emission angle, emission frequency, and source velocity), and different processing parameters (e.g. time window length). Numerical datasets for sources with linear motion as well as for rotating sources were simulated. For comparison purposes, selected datasets were also processed using traditional time domain beamforming. The results from the numerical simulations show that the frequency domain beamforming method is at least 10 times faster than the traditional time domain beamforming method with the same performance. Furthermore, the results show that as the number of microphones and/or grid points increase, the processing time for the traditional time domain beamforming method increases at a rate 20 times larger than the rate of increase in processing time of the new frequency domain beamforming method.
Ph. D.
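For a stationary source, the core idea of frequency-domain beamforming is that a time delay becomes a per-frequency phase factor exp(-j*2*pi*f*tau), so no oversampling or interpolation of the time series is needed; the de-Dopplerization in this dissertation extends that idea to delays that vary with source motion. A hedged stationary-source sketch (array geometry and all parameters are invented):

```python
import numpy as np

def freq_beamform(signals, mic_pos, grid, fs, c=343.0):
    """Delay-and-sum beamforming done entirely in the frequency domain:
    steering to a grid point multiplies each microphone spectrum by
    exp(+j*2*pi*f*tau) to undo its propagation delay tau, then the steered
    spectra are summed and their energy accumulated."""
    n_t = signals.shape[1]
    X = np.fft.rfft(signals, axis=1)
    f = np.fft.rfftfreq(n_t, 1.0 / fs)
    power = np.zeros(len(grid))
    for gi, g in enumerate(grid):
        tau = np.linalg.norm(mic_pos - g, axis=1) / c     # delays to this point
        steer = np.exp(2j * np.pi * f[None, :] * tau[:, None])
        power[gi] = np.sum(np.abs((X * steer).sum(axis=0)) ** 2)
    return power
```

Note that the steering phases accept arbitrary fractional delays for free, which is exactly the property that time-domain beamformers must buy with oversampling and interpolation.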
APA, Harvard, Vancouver, ISO, and other styles
8

Meegahawatte, Danushka Hansitha. "A design method for specifying power sources for hybrid power systems." Thesis, University of Birmingham, 2010. http://etheses.bham.ac.uk//id/eprint/1215/.

Full text
Abstract:
Many efforts have been made in recent years to address issues surrounding the use of fossil fuels for energy. However, it must be conceded that the world's dependence on fossil fuels cannot cease overnight. In reality, the switch is expected to be a relatively slow migration of technologies over many decades, and during this transition period the world will need bridging technologies to aid the move to alternative energy sources. One such technology, which shows much promise in boosting energy efficiency while reducing emissions and costs, is the hybrid power system. This thesis investigates the motives behind seeking alternative energy sources and discusses the future need to move away from fossil fuels and the likely role hybrid power systems will play. A general outline of a hybrid power system is presented, and its key subsystems are identified and discussed, with attention to power generation, energy storage technologies and the performance of these systems. A novel method of specifying the power sources in bespoke hybrid power systems is presented. A custom software tool for evaluating how different hardware configurations and output duty cycles affect the performance of a hybrid power system is then presented and used in several case studies to investigate the effectiveness of the method in specifying power sources for a given application. It was found that the hardware, output application and control strategy of a hybrid power system affect its overall performance. Furthermore, if the output duty cycle of a hybrid power system is repetitive and predictable in nature, the hardware and control strategy can be fine-tuned using simple techniques to optimise the overall system configuration and performance.
APA, Harvard, Vancouver, ISO, and other styles
9

Bécot, François-Xavier. "Tyre noise over impedance surfaces : efficient application of the equivalent sources method." Lyon, INSA, 2003. http://theses.insa-lyon.fr/publication/2003ISAL0036/these.pdf.

Full text
Abstract:
The aim of this work is to understand and control the mechanisms of tyre radiation by designing efficient prediction tools for the propagation of tyre/road noise over arbitrary impedance surfaces. Tyre radiation is modelled using the Equivalent Sources method. A model of the ground effects induced by a plane of given impedance is developed for sources of arbitrary directivity, and the exact solution to the two-dimensional problem is derived. Based on these two prediction tools, an iterative model is developed for tyre radiation over an arbitrary impedance surface. Using this model, a parametric study examines the trends of tyre radiation over absorbing road surfaces. The present work thus contributes to the study of traffic-noise reduction, in particular through the use of so-called silent road surfaces.
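The Equivalent Sources method replaces a radiating body by simple sources (here free-field monopoles) placed inside it, with complex amplitudes fitted so that their summed radiation matches the pressure known on a control surface. A minimal least-squares sketch, with invented geometry and wavenumber (the thesis's tyre model and impedance-plane handling are far richer):

```python
import numpy as np

def monopole_matrix(k, src_pts, obs_pts):
    """Free-field Green's function matrix G[i, j] between equivalent source j
    and observation point i (3-D coordinates, wavenumber k)."""
    R = np.linalg.norm(obs_pts[:, None, :] - src_pts[None, :, :], axis=2)
    return np.exp(1j * k * R) / (4.0 * np.pi * R)

def fit_equivalent_sources(k, src_pts, ctrl_pts, p_ctrl):
    """Least-squares complex amplitudes reproducing the control-point pressure."""
    G = monopole_matrix(k, src_pts, ctrl_pts)
    q, *_ = np.linalg.lstsq(G, p_ctrl, rcond=None)
    return q
```

Once the amplitudes are known, the field anywhere outside the body is just another matrix-vector product with the same Green's function, which is what makes the method attractive for repeated propagation studies.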
APA, Harvard, Vancouver, ISO, and other styles
10

Rocha, Ryan D. "A Frequency-Domain Method for Active Acoustic Cancellation of Known Audio Sources." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1240.

Full text
Abstract:
Active noise control (ANC) is a real-time process in which a system measures an external, unwanted sound source and produces a canceling waveform. The cancellation is due to destructive interference by a perfect copy of the received signal phase-shifted by 180 degrees. Existing active noise control systems process the incoming and outgoing audio on a sample-by-sample basis, requiring a high-speed digital signal processor (DSP) and analog-to-digital converters (ADCs) with strict timing requirements on the order of tens of microseconds. These timing requirements determine the maximum sample rate and bit size as well as the maximum attenuation that the system can achieve. In traditional noise cancellation systems, the general assumption is that all unwanted sound is indeterminate. However, there are many instances in which an unwanted sound source is predictable, such as in the case of a song. This thesis presents a method for active acoustic cancellation of a known audio signal using the frequency characteristics of the known audio signal compared to that of a sampled, filtered excerpt of the same known audio signal. In this procedure, we must first correctly locate the sample index for which a measured audio excerpt begins via the cross-correlation function. Next, we obtain the frequency characteristics of both the known source (WAVE file of the song) and the measured unwanted audio by taking the Fast Fourier Transform (FFT) of each signal, and calculate the effective environmental transfer function (degradation function) by taking the ratio of the two complex frequency-domain results. Finally, we attempt to recreate the environmental audio from the known data and produce an inverted, synchronized, and amplitude-matched signal to cancel the audio via destructive interference. Throughout the process, we employ many signal conditioning methods such as FIR filtering, median filtering, windowing, and deconvolution. 
We illustrate this frequency-domain method in National Instruments' LabVIEW running on the Windows operating system, and discuss its reliability, areas for improvement, and potential future applications in mobile technologies. We show that under ideal conditions (the unwanted sound is a known white noise source, and the microphone, loudspeaker, and environmental filter frequency responses are all perfectly flat), we can achieve a theoretical maximum attenuation of approximately 300 dB. If we replace the white noise source with an actual song and the environmental filter with a low-order linear filter, then we can achieve maximum attenuation in the range of 50-70 dB. However, in a real-world environment, with additional noise and imperfect microphones, speakers, synchronization, and amplitude-matching, we can expect to see attenuation values in the range of 10-20 dB.
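The pipeline described above (locate the excerpt by cross-correlation, estimate the degradation function as a spectral ratio, then invert) can be sketched in a few lines. This is an idealised toy under the assumption of perfect synchronisation and no measurement noise, not the thesis's LabVIEW implementation; the regularisation constant is an invented placeholder.

```python
import numpy as np

def find_offset(stream, excerpt):
    """Sample index in `stream` where `excerpt` best aligns (cross-correlation)."""
    corr = np.correlate(stream, excerpt, mode="valid")
    return int(np.argmax(corr))

def cancellation_signal(reference, measured, eps=1e-12):
    """Estimate the degradation function H = Y / X in the frequency domain,
    re-filter the known reference through it, and invert the result
    (the 180-degree phase shift needed for destructive interference)."""
    X = np.fft.rfft(reference)
    Y = np.fft.rfft(measured)
    H = Y / (X + eps)          # crude regularised deconvolution
    return -np.fft.irfft(H * X, len(reference))
```

In this closed loop the residual `measured + cancellation_signal(...)` is essentially zero, which corresponds to the idealised attenuation figures quoted above; real microphones, speakers, and timing errors are what pull the practical figure down to tens of dB.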
APA, Harvard, Vancouver, ISO, and other styles
11

McNabb, Patrick James. "Statistical method for identification of sources of electromechanical oscillations in power systems." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/9500.

Full text
Abstract:
The use of real-time continuous dynamics monitoring often indicates dynamic behaviour that was not anticipated by model-based studies. In such cases it can be difficult to locate the sources of problems using conventional tools. This thesis details the possibility of diagnosing the causes of problems related to oscillatory stability using measurement-based data such as active power and mode decay time constant, derived from system models. The aim of this work was to identify dynamics problems independently of an analytical dynamic model, which should prove useful in diagnosing and correcting dynamics problems. New statistical techniques were applied to both dynamic models and real systems which yielded information about the causes of the long decay time constants observed in these systems. Wavelet transforms in conjunction with General Linear Models (GLMs) were used to improve the statistical prediction of decay time constants derived from the system. Logic regression was introduced as a method of establishing important interactions of loadflow variables that contribute to poor damping. The methodology was used in a number of case studies including the 0.62 Hz Icelandic model mode and a 0.48 Hz mode from the real Australian system. The results presented herein confirm the feasibility of this approach to the oscillation source location problem, as combinations of loadflow variables can be identified and used to control mode damping. These ranked combinations could be used by a system operator to provide more comprehensive control of oscillations in comparison to current techniques.
APA, Harvard, Vancouver, ISO, and other styles
12

Aljaism, Wadah A., University of Western Sydney, and School of Engineering and Industrial Design. "Control method for renewable energy generators." THESIS_XXX_EID_Aljaism_W.xml, 2002. http://handle.uws.edu.au:8081/1959.7/796.

Full text
Abstract:
This thesis presents a study of a design method to optimise the performance of green-power production from multiple renewable energy generators. The design method is implemented with a PLC (Programmable Logic Controller): all digital and analogue inputs are connected to the input cards, the PLC images all inputs and outputs, and from these images a software program builds a control method for the multiple renewable energy generators, according to the different operating conditions of each generator, to optimise the production of green power. A control voltage supplies the output contactor of each generator via an interface relay. Three renewable generators (wind, solar, battery bank) are used in the model system, with a backup diesel generator as the fourth. Priority goes to the wind generator, owing to the availability of wind 24 hours a day, then to the solar, battery bank, and LPG or diesel generators. Interlocking between the operations of the four contactors prevents interference between them. Changeover between contactors, following changeover between generators, is delayed before supplying the main busbar to prevent a sudden supply to the load. Further study of controlling multiple renewable energy generators under different conditions, such as controlling them remotely or supplying weather-forecast data from the bureau of meteorology directly to the PLC, is recommended.
Master of Electrical Engineering (Hons)
APA, Harvard, Vancouver, ISO, and other styles
13

Mansour, Ali. "Contribution à la séparation aveugle de sources." Grenoble INPG, 1997. http://www.theses.fr/1997INPG0012.

Full text
Abstract:
Blind source separation is a relatively recent problem in signal processing: it consists of separating statistically independent sources observed through an array of sensors. In this thesis several approaches are studied. Two direct approaches, valid only for the instantaneous linear mixture, are proposed: the first, analytical, is based on the statistics of the observed signals; the other, geometric, is based on the distributions of those signals, whose probability density is assumed to have bounded support. For sources whose kurtoses share the same sign, we propose an adaptive algorithm based solely on the (2x2) cross-cumulants; this criterion is valid for instantaneous as well as convolutive mixtures. The assumption on the sign of the kurtosis is fairly common in the source separation literature, and this thesis presents studies of that assumption and of its relation to the nature of the sources. Finally, drawing on blind identification methods and on two different parameterisations of the Sylvester matrix, we show that a convolutive mixture can be separated, or transformed into an instantaneous one, using only second-order statistics; in this framework, three subspace algorithms are proposed.
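A toy version of instantaneous-mixture separation driven by a fourth-order (kurtosis) contrast, in the spirit of the cumulant criteria discussed, can be written as whitening followed by a rotation search. The brute-force one-degree grid is purely illustrative; practical algorithms solve for the rotation adaptively.

```python
import numpy as np

def kurtosis(v):
    """Fourth-order cumulant of a zero-mean signal."""
    return (v ** 4).mean() - 3.0 * (v ** 2).mean() ** 2

def separate_two(x):
    """Separate a 2x2 instantaneous linear mixture: whiten the observations,
    then pick the rotation angle maximising the sum of squared kurtoses of
    the two outputs (a classical fourth-order contrast)."""
    x = x - x.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(x @ x.T / x.shape[1])
    z = (E @ np.diag(d ** -0.5) @ E.T) @ x        # whitened observations

    def contrast(t):
        y0 = np.cos(t) * z[0] + np.sin(t) * z[1]
        y1 = -np.sin(t) * z[0] + np.cos(t) * z[1]
        return kurtosis(y0) ** 2 + kurtosis(y1) ** 2

    best = max(np.deg2rad(np.arange(0.0, 90.0)), key=contrast)
    R = np.array([[np.cos(best), np.sin(best)],
                  [-np.sin(best), np.cos(best)]])
    return R @ z
```

After whitening, the remaining ambiguity is an orthogonal transform, so a single rotation angle (up to permutation and sign) is all the fourth-order contrast needs to resolve for two sources.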
APA, Harvard, Vancouver, ISO, and other styles
14

Alotaibi, Lafi. "Commande et optimisation d'une installation multi-sources." Thesis, Reims, 2012. http://www.theses.fr/2012REIMS039.

Full text
Abstract:
This thesis addresses the control and optimisation of a photovoltaic installation for a stand-alone site. We first propose a fuzzy-logic algorithm for tracking the maximum power point that overcomes the drawbacks of classical methods. We then focus on optimising the structure of the installation: in conventional installations, if one panel fails, the entire series block becomes unusable, which greatly reduces the production capacity. To solve this problem, we propose a supervisor that automatically reconfigures the installation so that only the failed panel is taken offline. Furthermore, to manage the power flow and meet user demand, we developed a fuzzy-logic supervisor: surplus production is systematically stored in the battery for later use when demand exceeds production. In addition, the proposed structure draws on the battery only when needed, which considerably extends its lifetime.
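For context on the maximum-power-point-tracking problem that the fuzzy controller addresses, here is the classical perturb-and-observe baseline that fuzzy MPPT schemes aim to improve on. The PV curve below is a toy model with invented constants, not data or code from the thesis.

```python
import numpy as np

def pv_power(v, v_oc=40.0, i_sc=8.0, vt=3.0):
    """Toy PV characteristic: the current collapses exponentially as the
    voltage approaches the open-circuit value v_oc; returns power v * i(v)."""
    i = i_sc * (1.0 - np.exp((v - v_oc) / vt))
    return v * max(i, 0.0)

def perturb_and_observe(v0=20.0, step=0.2, n_iter=200):
    """Classical P&O hill climbing: keep perturbing the operating voltage in
    the direction that last increased power, reverse when power drops."""
    v, p, direction = v0, pv_power(v0), 1.0
    for _ in range(n_iter):
        v_next = v + direction * step
        p_next = pv_power(v_next)
        if p_next < p:
            direction = -direction   # overshot the peak: turn around
        v, p = v_next, p_next
    return v, p
```

The drawbacks visible even in this sketch, namely a fixed step size that trades convergence speed against steady-state oscillation around the peak, are exactly what fuzzy-logic trackers address by adapting the step to the operating conditions.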
APA, Harvard, Vancouver, ISO, and other styles
15

Josyula, Jitendra Rama Aswadh, and Soma Sekhara Sarat Chandra Panamgipalli. "Identifying the information needs and sources of software practitioners. : A mixed method approach." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12832.

Full text
Abstract:
Context. Every day, software practitioners need information for resolving a number of questions. These information needs should be identified and addressed in order to successfully develop and deliver a software system. One of the ways to address them is to make use of information sources like blogs, websites, documentation, etc. Identifying the needs and sources of software practitioners can improve and benefit the practitioners as well as the software development process. However, the information needs and sources of software practitioners have been only partially studied; prior work has mostly focused on knowledge management in software engineering. The current study therefore aims to identify the information needs and information sources of software practitioners and to investigate the practitioners' perception of different information sources. Objectives. In this study we primarily investigated some of the information needs of software practitioners and the information sources that they use to fulfill those needs. Secondly, we investigated the practitioners' perception of available information sources by identifying the aspects that they consider while using different sources. Methods. To achieve the research objectives, this study conducted an empirical investigation by performing a survey with two data collection techniques. A simple literature review was also performed initially to identify some of the information needs and sources of software practitioners. We then began the survey by conducting semi-structured interviews based on the data obtained from the literature. Moreover, an online questionnaire was designed after conducting a preliminary analysis of the data obtained from both the interviews and the literature review.
The coding process of grounded theory was used for analyzing the data obtained from the interviews, and descriptive statistics were used for analyzing the data obtained from the online questionnaire. The data obtained from the qualitative and quantitative methods were triangulated by comparing the identified information needs and sources with those presented in the literature. Results. From the preliminary literature review, we identified seven information needs and six information sources. Based on the results of the literature review, we then conducted interviews with software practitioners and identified nine information needs and thirteen information sources. From the interviews we also investigated the aspects that software practitioners look into while using different information sources, and thus identified four major aspects. We then validated the results from the literature review and interviews with the help of an online questionnaire, from which we finally identified the frequency of occurrence of the identified information needs and the frequency of use of different information sources. Conclusions. We identified that software practitioners currently face nine types of information needs, of which information on clarifying requirements and information on product design and architecture are the most frequently faced. To address these needs, most practitioners use information sources such as blogs and community forums, product documentation and discussion with colleagues, while research articles are moderately used and IT magazines and social networking sites are used least. We also identified that most practitioners consider the reliability/accuracy of an information source an extremely important factor. The identified information needs and sources, along with the practitioners' perceptions, are clearly elucidated in the document.
A future direction of this work could be testing the applicability of the identified information needs by extending the sample population. There is also scope for research on how the identified information needs can be minimized to make information acquisition easier for practitioners.
APA, Harvard, Vancouver, ISO, and other styles
16

CASTRO, JOSÉ FILHO DA COSTA. "OPERATING RESERVE ASSESSMENT IN MULTI-AREA SYSTEMS WITH RENEWABLE SOURCES VIA CROSS ENTROPY METHOD." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=36076@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
PROGRAMA DE EXCELENCIA ACADEMICA
A reserva girante é a parcela da reserva operativa provida por geradores sincronizados, e interligados à rede de transmissão, aptos a suprir a demanda na ocorrência de falhas de unidades de geração, erros na previsão da demanda, variações de capacidade de fontes renováveis ou qualquer outro fator inesperado. Dada sua característica estocástica, essa parcela da reserva operativa é mais adequadamente avaliada por meio de métodos capazes de representar as incertezas inerentes ao seu dimensionamento e planejamento. Por meio do risco de corte de carga é possível comparar e classificar distintas configurações do sistema elétrico, garantindo a não violação dos requisitos de confiabilidade. Sistemas com elevada penetração de fontes renováveis apresentam comportamento mais complexo devido ao aumento das incertezas envolvidas, à forte dependência de fatores energético-climáticos e às variações de capacidade destas fontes. Para avaliar as correlações temporais e representar a cronologia de ocorrência dos eventos no curto-prazo, um estimador baseado na Simulação Monte Carlo Quase Sequencial é apresentado. Nos estudos de planejamento da operação de curto-prazo o horizonte em análise é de minutos a algumas horas. Nestes casos, a ocorrência de falhas em equipamentos pode apresentar baixa probabilidade e contingências que causam corte de carga podem ser raras. Considerando a raridade destes eventos, as avaliações de risco são baseadas em técnicas de amostragem por importância. Os parâmetros de simulação são obtidos por um processo numérico adaptativo de otimização estocástica, utilizando os conceitos de Entropia Cruzada. Este trabalho apresenta uma metodologia de avaliação dos montantes de reserva girante em sistemas com participação de fontes renováveis, em uma abordagem multiárea. 
O risco de perda de carga é estimado considerando falhas nos sistemas de geração e transmissão, observando as restrições de transporte e os limites de intercâmbio de potência entre as diversas áreas elétricas.
The spinning reserve is the portion of the operating reserve provided by generators that are synchronized and connected to the transmission network, capable of supplying the demand in the event of generating unit failures, errors in load forecasting, capacity intermittency of renewable sources or any other unexpected factor. Given its stochastic characteristic, this portion of the operating reserve is more adequately evaluated through methods capable of modeling the uncertainties inherent in its design and planning. Based on the loss of load risk, it is possible to compare different configurations of the electrical system, ensuring the non-violation of reliability requirements. Systems with high penetration of renewable sources present a more complex behavior due to the number of uncertainties involved, the strong dependence on energy-climatic factors and the variations in the capacity of these sources. In order to evaluate the temporal correlations and to represent the chronology of occurrence of events in the short term, an estimator based on quasi-sequential Monte Carlo simulation is presented. In short-term operation planning studies, the horizon under analysis is from minutes to a few hours. In these cases, the occurrence of equipment failures may present low probability and contingencies that cause load shedding may be rare. Considering the rarity of these events, risk assessments are based on importance sampling techniques. The simulation parameters are obtained by an adaptive numerical process of stochastic optimization, using the concept of Cross Entropy. This thesis presents a methodology for evaluating the amounts of spinning reserve in systems with high penetration of renewable sources, in a multi-area approach. The risk of loss of load is estimated considering failures in the generation and transmission systems, observing the network restrictions and the power exchange limits between the different electric areas.
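The cross-entropy importance-sampling idea can be sketched on a toy generation system (invented for illustration, not the thesis's multi-area model): ten identical units, each failed with probability 0.01, with load lost when three or more are down. The multilevel phase adaptively tilts the failure probability toward the "elite" (load-shedding) samples; the final estimator is then an unbiased importance-sampling average.

```python
import random

def ce_rare_event(n=10, q=0.01, k_crit=3, rho=0.05,
                  n_pilot=2000, n_final=20000, seed=1):
    """Cross-entropy importance sampling for a rare loss-of-load probability.

    Toy system: n identical units, each down with probability q; load is
    lost when at least k_crit units are down, i.e. we estimate
    P(K >= k_crit) for K ~ Binomial(n, q).
    """
    rng = random.Random(seed)

    def draw(v):                 # number of failed units at failure prob v
        return sum(1 for _ in range(n) if rng.random() < v)

    def lik_ratio(k, v):         # f_q(x) / f_v(x) for a sample with k failures
        return (q / v) ** k * ((1 - q) / (1 - v)) ** (n - k)

    # Multilevel CE phase: raise the threshold gamma toward k_crit while
    # re-tilting v to the weighted mean failure fraction of the elites.
    v = q
    for _ in range(20):
        ks = sorted(draw(v) for _ in range(n_pilot))
        gamma = min(ks[int((1 - rho) * n_pilot)], k_crit)
        elite = [k for k in ks if k >= gamma]
        den = sum(lik_ratio(k, v) for k in elite)
        num = sum(lik_ratio(k, v) * k for k in elite)
        done = gamma >= k_crit
        v = max(q, min(0.5, num / (den * n)))
        if done:
            break

    # Final importance-sampling estimate under the tilted density v.
    total = sum(lik_ratio(k, v)
                for k in (draw(v) for _ in range(n_final))
                if k >= k_crit)
    return total / n_final

est = ce_rare_event()
```

The tilted density only has to be good, not perfect: whatever v the multilevel phase finds, the likelihood ratios keep the final estimate unbiased, which is what makes the scheme robust for rare contingencies.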
APA, Harvard, Vancouver, ISO, and other styles
17

Gunow, Geoffrey Alexander. "Full core 3D neutron transport simulation using the method of characteristics with linear sources." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119030.

Full text
Abstract:
Thesis: Ph. D. in Computational Nuclear Science and Engineering, Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 269-274).
The development of high fidelity multi-group neutron transport-based simulation tools for full core Light Water Reactor (LWR) analysis has been a long-standing goal of the reactor physics community. While direct transport simulations have previously been far too computationally expensive, advances in computer hardware have allowed large scale simulations to become feasible. Therefore, many have focused on developing full core neutron transport solvers that do not incorporate the approximations and assumptions of traditional nodal diffusion solvers. Due to the computational expense of direct full core 3D deterministic neutron transport methods, many have focused on 2D/1D methods which solve 3D problems as a coupled system of radial and axial transport problems. However, the coupling of radial and axial problems also introduces approximations. Instead, the work in this thesis focuses on explicitly solving the 3D deterministic neutron transport equations with the Method of Characteristics (MOC). MOC has been widely used for 2D lattice physics calculations due to its ability to accurately and efficiently simulate reactor physics problems with explicit geometric detail. The work in this thesis strives to overcome the significant computational cost of solving the 3D MOC equations by implementing efficient track generation, axially extruded ray tracing, Coarse Mesh Finite Difference (CMFD) acceleration, linear track-based source approximations, and scalable domain decomposition. Transport-corrected cross-sections are used to account for anisotropic scattering without needing to store angular-dependent sources. Additionally, significant attention has been given to complications that arise in full core simulations with transport-corrected cross-sections. The convergence behavior of transport methods is analyzed, leading to a new strategy for stabilizing the source iteration scheme for neutron transport simulations.
The methods are incorporated into the OpenMOC reactor physics code and simulation results are presented for the full core BEAVRS LWR benchmark. Parameter refinement studies and comparisons with reference OpenMC Monte Carlo solutions show that converged full core 3D MOC simulations are feasible on modern supercomputers for the first time.
by Geoffrey Alexander Gunow.
Ph. D. in Computational Nuclear Science and Engineering
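The characteristic sweep at the heart of MOC can be shown in miniature. The sketch below is illustrative and is not OpenMOC: a one-group slab with two directions (mu = +/-1), flat sources, reflective boundaries, and invented cross-sections, solved by source iteration. Each cell applies the standard characteristic update psi_out = psi_in*exp(-tau) + (q/sig_t)*(1 - exp(-tau)); the thesis's 3D, linear-source, CMFD-accelerated solver generalizes exactly this step.

```python
import math

def moc_slab(n_cells=20, h=0.5, sig_t=1.0, sig_s=0.5, q_ext=1.0, n_iter=200):
    """Flat-source characteristic sweeps with source iteration on scattering."""
    phi = [0.0] * n_cells
    psi_left = psi_right = 0.0          # reflected boundary angular fluxes
    tau = sig_t * h                     # optical thickness of one cell
    att = math.exp(-tau)
    for _ in range(n_iter):
        q = [(sig_s * f + q_ext) / 2.0 for f in phi]  # isotropic angular source
        new_phi = [0.0] * n_cells
        psi = psi_left                  # forward sweep, mu = +1
        for i in range(n_cells):
            psi_out = psi * att + (q[i] / sig_t) * (1.0 - att)
            # cell-average angular flux from the balance over the segment
            new_phi[i] += q[i] / sig_t + (psi - psi_out) / tau
            psi = psi_out
        psi_right = psi                 # reflected into the mu = -1 sweep
        psi = psi_right
        for i in reversed(range(n_cells)):
            psi_out = psi * att + (q[i] / sig_t) * (1.0 - att)
            new_phi[i] += q[i] / sig_t + (psi - psi_out) / tau
            psi = psi_out
        psi_left = psi
        phi = new_phi
    return phi

# Uniform reflective problem mimics an infinite medium, where the exact
# scalar flux is q_ext / (sig_t - sig_s) = 2 for these invented data.
flux = moc_slab()
```

The slow convergence that source iteration exhibits when the scattering ratio approaches one is precisely what the CMFD acceleration and the stabilization strategy discussed in the abstract address.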
APA, Harvard, Vancouver, ISO, and other styles
18

Placko, Dominique, Thierry Bore, and Tribikram Kundu. "Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method." MDPI AG, 2016. http://hdl.handle.net/10150/621954.

Full text
Abstract:
The distributed point source method, or DPSM, developed in the last decade, has been used for solving various engineering problems, such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing the point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having some source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near field computation.
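The superposition principle behind DPSM can be reproduced in a few lines. The sketch below uses classical single point sources (not the paper's quantum-source families): it distributes sources over a circular aperture, sums their Green's functions exp(ikr)/r, and checks the result against the closed-form on-axis field of a baffled circular piston, a standard benchmark. All dimensions are normalized and invented (radius 1, wavelength 1, pressure in units of rho*c*u0).

```python
import cmath
import math

def disc_sources(radius=1.0, spacing=0.025):
    """Equal-area point sources distributed over a circular aperture."""
    n = int(radius / spacing)
    return [(i * spacing, j * spacing, spacing ** 2)
            for i in range(-n, n + 1) for j in range(-n, n + 1)
            if (i * spacing) ** 2 + (j * spacing) ** 2 <= radius ** 2]

def pressure_on_axis(z, sources, k):
    """Superpose point-source Green's functions exp(ikr)/r (Rayleigh integral)."""
    total = 0.0 + 0.0j
    for x, y, area in sources:
        r = math.sqrt(x * x + y * y + z * z)
        total += area * cmath.exp(1j * k * r) / r
    return k / (2 * math.pi) * abs(total)    # |p| / (rho * c * u0)

def analytic_on_axis(z, radius, k):
    """Closed-form on-axis magnitude for a baffled circular piston."""
    return 2.0 * abs(math.sin(0.5 * k * (math.sqrt(z * z + radius ** 2) - z)))

k = 2 * math.pi                  # wavelength 1, piston radius 1, so ka = 2*pi
src = disc_sources()
num = pressure_on_axis(2.0, src, k)
ref = analytic_on_axis(2.0, 1.0, k)
```

With sources spaced well below a wavelength the superposition tracks the analytic field closely; degraded accuracy very close to the aperture is exactly the near-field problem the quantum-source families are introduced to improve.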
APA, Harvard, Vancouver, ISO, and other styles
19

Aljaism, Wadah. "Control method for renewable energy generators /." View thesis, 2002. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20031223.093139/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Sikdar, Anamika. "An objective method for the assessment of the impacts of odourous emissions from stationary sources." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ62284.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Noudjiep, Djiepkop Giresse Franck. "Feeder reconfiguration scheme with integration of renewable energy sources using a Particle Swarm Optimisation method." Thesis, Cape Peninsula University of Technology, 2018. http://hdl.handle.net/20.500.11838/2712.

Full text
Abstract:
Thesis (Master of Engineering in Electrical Engineering)--Cape Peninsula University of Technology, 2018.
A smart grid is an intelligent power delivery system integrating traditional and advanced control, monitoring, and protection systems for enhanced reliability, improved efficiency, and quality of supply. To achieve a smart grid, technical challenges such as voltage instability, power loss, and unscheduled power interruptions should be mitigated. Therefore, future smart grids will require intelligent solutions at the transmission and distribution levels, and optimal placement & sizing of grid components for optimal steady-state and dynamic operation of the power system. At the distribution level, feeder reconfiguration and Distributed Generation (DG) can be used to improve the distribution network performance. Feeder reconfiguration consists of readjusting the topology of the primary distribution network by remote control of the tie and sectionalizing switches under normal and abnormal conditions. Its main applications include service restoration after a power outage, load balancing by relieving overloads from some feeders to adjacent feeders, and power loss minimisation for better efficiency. The DG placement problem, on the other hand, entails finding the optimal location and size of a DG unit in a distribution network to boost the network performance. This research aims to develop Particle Swarm Optimization (PSO) algorithms to solve the distribution network feeder reconfiguration and DG placement & sizing problems. Initially, the feeder reconfiguration problem is treated as a single-objective optimisation problem (real power loss minimisation) and then converted into a multi-objective optimisation problem (real power loss minimisation and load balancing). Similarly, the DG placement problem is treated as a single-objective problem (real power loss minimisation) and then converted into a multi-objective optimisation problem (real power loss minimisation, voltage deviation minimisation, and voltage stability index maximisation).
The developed PSO algorithms are implemented and tested for the 16-bus, the 33-bus, and the 69-bus IEEE distribution systems. Additionally, a parallel computing method is developed to study the operation of a distribution network with a feeder reconfiguration scheme under dynamic loading conditions.
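A minimal global-best PSO conveys the optimisation engine used in the thesis. The objective here is an invented quadratic stand-in for a real power-loss function; an actual feeder-reconfiguration or DG-placement study would encode switch states and DG locations/sizes in the particle position and penalize constraint violations.

```python
import random

def power_loss(x):
    """Invented stand-in for a network loss function; minimum 0.5 at (1, -2)."""
    return 0.5 + (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def pso(obj, dim=2, n_particles=20, n_iter=100, lo=-5.0, hi=5.0, seed=3):
    """Global-best PSO: inertia plus cognitive and social pulls."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal best positions
    Pf = [obj(x) for x in X]
    g = min(range(n_particles), key=Pf.__getitem__)
    G, Gf = P[g][:], Pf[g]                   # global best
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # keep in bounds
            f = obj(X[i])
            if f < Pf[i]:
                P[i], Pf[i] = X[i][:], f
                if f < Gf:
                    G, Gf = X[i][:], f
    return G, Gf

best, best_f = pso(power_loss)
```

For the multi-objective variants described above, the single scalar objective is typically replaced by a weighted sum or a Pareto-archive update, while the particle update rule stays the same.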
APA, Harvard, Vancouver, ISO, and other styles
22

Cardoso, Tamre Porter. "A hierarchical Bayes model for combining precipitation measurements from different sources /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/6372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Kim, Dae Sin. "Monte Carlo Modeling of Carrier Dynamics in Photoconductive Terahertz Sources." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11526.

Full text
Abstract:
Carrier dynamics in GaAs-based photoconductive terahertz (THz) sources is investigated using Monte Carlo techniques to optimize the emitted THz transients. A self-consistent Monte Carlo-Poisson solver is developed for the spatio-temporal carrier transport properties. The screening contributions to the THz radiation associated with the Coulomb and radiation fields are obtained self-consistently by incorporating the three-dimensional Maxwell equations into the solver. In addition, the enhancement of THz emission by a large trap-enhanced field (TEF) near the anode in semi-insulating (SI) photoconductors is investigated. The transport properties of the photoexcited carriers in photoconductive THz sources depend markedly on the initial spatial distribution of those carriers. Thus, considerable control of the emitted THz spectrum can be attained by judiciously choosing the optical excitation spot shape on the photoconductor, since the carrier dynamics that provide the source of the THz radiation are strongly affected by the ensuing screenings. The screening contributions due to the Coulomb and radiation parts of the electromagnetic field acting back on the carrier dynamics are distinguished. The dominant component of the screening field crosses over at an excitation aperture size with full width at half maximum (FWHM) of ~100 µm for a range of reasonable excitation levels. In addition, the key mechanisms responsible for the TEF near the anode of SI photoconductors are elucidated in detail. For a given optical excitation power, an enhancement of THz radiation power can be obtained using a maximally broadened excitation aperture in the TEF area elongated along the anode due to the reduction in the Coulomb and radiation screening of the TEF.
APA, Harvard, Vancouver, ISO, and other styles
24

Xiang, Jianguang. "High resolution seismic imaging of the near-surface : comparison of energy sources /." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0018/MQ55550.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Molavi, Tabrizi Amirhossein. "Elastic and Viscoelastic Responses of Anisotropic Media Subjected to Dislocation Sources." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1448218517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Yikmaz, Riza Fikret. "Development Of Gis Based Trajectory Statistical Analysis Method To Identify Potential Sources Of Regional Air Pollution." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611969/index.pdf.

Full text
Abstract:
DEVELOPMENT OF GIS BASED TRAJECTORY STATISTICAL ANALYSIS METHOD TO IDENTIFY POTENTIAL SOURCES OF REGIONAL AIR POLLUTION Yikmaz, Riza Fikret M.Sc., Department of Geodetic and Geographic Information Technologies Supervisor: Prof. Dr. Gürdal Tuncel Co-supervisor: Assoc. Prof. Dr. Zuhal Akyürek May 2010, 186 pages. Apportionment of the source regions affecting a certain receptor at the regional scale is necessary information for air quality management and for the development of national policy on the exchange of air pollutants with other countries. Source region apportionment can be studied either through numerical modeling or by using trajectory statistics, a hybrid methodology of modeling and measurements. Each of these approaches has its advantages and disadvantages. In this study, the treatment of back-trajectory segments in the Potential Source Contribution Function (PSCF), one of the tools used in trajectory statistics, will be investigated to increase the reliability of the apportionment process. In the current method run in GIS, two parameters gain particular importance. One is that the vertical locations of trajectory segments are not taken into account at present; this study will assess how evaluating the segments in 3-D instead of 2-D could improve the results. The other parameter, rainfall at each segment, will be included in the PSCF calculations and its effects on the spatial distribution of PSCF values will be evaluated. A user interface in a Geographical Information System (GIS) will be developed for effective use of the improved methodology.
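The PSCF statistic itself is simple: for each grid cell, the fraction m_ij/n_ij of trajectory points falling in that cell that belong to "polluted" arrivals at the receptor. A minimal 2-D sketch follows (the thesis extends this to 3-D segments and rainfall weighting; the function and data names are illustrative):

```python
from collections import defaultdict

def pscf(trajectories, concentrations, threshold, cell=1.0):
    """Potential Source Contribution Function on a lat/lon grid.

    trajectories: one list of (lat, lon) back-trajectory points per arrival;
    concentrations: the pollutant measurement at the receptor for each arrival.
    A cell's score is the fraction of its trajectory points coming from
    arrivals whose concentration exceeded the threshold.
    """
    n_count = defaultdict(int)   # all trajectory points per cell (n_ij)
    m_count = defaultdict(int)   # points from polluted arrivals per cell (m_ij)
    for points, conc in zip(trajectories, concentrations):
        polluted = conc > threshold
        for lat, lon in points:
            key = (int(lat // cell), int(lon // cell))
            n_count[key] += 1
            if polluted:
                m_count[key] += 1
    return {key: m_count[key] / n_count[key] for key in n_count}
```

In practice a weighting function is applied to down-weight cells with few endpoints, since a cell crossed by a single polluted trajectory would otherwise score a misleading 1.0; that reliability issue is part of what the GIS-based reworking targets.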
APA, Harvard, Vancouver, ISO, and other styles
27

Nayak, Gurudutt A. "Development of a test method to measure "in-use" emissions from stationary and portable diesel sources." Morgantown, W. Va. : [West Virginia University Libraries], 2004. https://etd.wvu.edu/etd/controller.jsp?moduleName=documentdata&jsp%5FetdId=3652.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2004.
Title from document title page. Document formatted into pages; contains xiii, 123 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 105-107).
APA, Harvard, Vancouver, ISO, and other styles
28

Amini, Shahram. "Development and application of the method of distributed volumetric sources to the problem of unsteady-state." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Chauhan, Apoorva. "Social Media Use During Crisis Events: A Mixed-Method Analysis of Information Sources and Their Trustworthiness." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7570.

Full text
Abstract:
This dissertation consists of three studies that examine online communications during crisis events. The first study identified and examined the information sources that provided official information online during the 2014 Carlton Complex Wildfire. Specifically, after the wildfire, a set of webpages and social media accounts were discovered that were named after the wildfire—called Crisis Named Resources (or CNRs). CNRs shared the highest percentage of wildfire-relevant information. Because CNRs are named after a crisis event, they are easier to find and appear to be dedicated and/or official sources around an event. They can, however, be created and deleted in a short time, and the creators of CNRs are often unknown, which raises questions of trust and credibility regarding the information CNRs provide. To better understand the role of CNRs in crisis response, the second study examined CNRs that were named after the 2016 Fort McMurray Wildfire. Findings showed that many CNRs were created around the wildfire, most of which either became inactive or were closed after the wildfire containment. These CNRs shared wildfire-relevant information and served a variety of purposes from information dissemination to offers of help to expressions of solidarity. Additionally, even though most CNR owners remained anonymous, these resources received good reviews and were followed by many people. These observations about CNRs laid the foundation for the third study that sought to determine the factors that influence the trustworthiness of these resources. The third study involved 17 interviews and 105 surveys with members of the public and experts in Crisis Informatics, Communication Studies, and Emergency Management. Participants were asked to evaluate the trustworthiness of CNRs that were named after the 2017 Hurricane Irma. 
Findings indicate that participants evaluated the trustworthiness of CNRs based on their perceptions of CNR content, information source(s), owner, and profile.
APA, Harvard, Vancouver, ISO, and other styles
30

Mann, Jasminder Jason. "The enzymatic in vitro evaluation of protein sources for monogastric animals using the pH-stat method." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28021.

Full text
Abstract:
Three experiments were conducted to study the sensitivity of the pH-stat (in vitro) method in the prediction of true digestibility (TD), as measured by the amount of base added, of plant proteins, either alone or in the presence of specific additives (nitrogen-free mixture, vitamin mixture and/or mineral mixture) as part of a complete diet of plant proteins that had been subjected to various levels and forms of heating. The in vitro TD values were then compared with TD values obtained in vivo (Wistar rats). In experiment 1, the effect of temperature (dry-heating at 80, 100, 120, 150, 180 and 240° C or autoclaving at 121° C) and time (30, 60, 120 and 240 minutes) of heat application on in vitro base consumption (BC) was measured in 3 grains (wheat, barley and sorghum) and whole defatted soybeans. The largest increase in BC measured by the pH-stat method was that of soybeans in response to 30 minutes of autoclaving. Dry heating had various effects on the BC of soybeans, depending upon temperature and time of application, but none of the treatments was as beneficial as autoclaving. Mild dry-heating of grains at 80-120° C improved BC slightly; the improvement was most marked for wheat. Both dry-heating of grain at temperatures above 120° C and autoclaving reduced the BC significantly for all durations. In experiment 2, the effect of the inclusion of non-protein dietary components (minerals, vitamins and a nitrogen-free mixture, singly and in combination) on the in vitro BC, measured by the pH-stat method, of wheat and fat-extracted soybeans (both proteins in the raw and autoclaved forms) was monitored. For the wheat treatments, the inclusion of a mineral mixture significantly (p<0.001) increased digestibility. This effect was greatest with autoclaved wheat. It was concluded that, in general, the presence of minerals increased the rate of hydrolysis. With raw soybeans, the distinction between treatments was less well-defined.
The treatments containing vitamin or nitrogen-free and mineral combination mixtures were digested to a significantly greater extent than the raw soybeans alone. With autoclaved soybeans, additives had no effect. This lack of response to additives may have been due to the rather large amount of base required by the autoclaved soybean protein alone. In experiment 3, a series of rat-feeding trials was conducted in conjunction with in vitro digestions. Diets were fed to groups of Wistar rats to determine TD, Biological Value (BV), and Net Protein Utilization (NPU) in vivo. Although BV was measured, it was not relevant for this work. Concurrently, the same diets were tested for in vitro TD by the pH-stat method. Specific regression equations were developed for each protein type tested, after it was determined that a much lower correlation coefficient was obtained when one general equation was utilized. The newly developed equations followed the format y = a + bx, where y = TD (as a fraction of one), a = the y-intercept, b = the slope of the function and x = ml 0.10 N NaOH added during the 10-minute digestion. Regression equations, correlation coefficients (r) and standard errors of each regression (s) between in vitro and in vivo true digestibility of proteins were as follows:
Soybean, soybean (autoclaved), soybean/wheat combinations (n = 6): r = 0.93, TD = 0.7868 + 0.2175x, s = 0.018.
Sorghum (raw, autoclaved, 90° C, 120° C, 180° C dry-heated, steamed) (n = 6): r = 0.92, TD = 0.4575 + 1.8841x, s = 0.058.
Alfalfa pellets/hay in combination with either wheat or barley (n = 13): r = 0.91, TD = 0.3446 + 1.0356x, s = 0.046.
Alfalfa hay and barley combinations (n = 5): r = 0.96, TD = 0.2360 + 1.3194x, s = 0.048.
Grains (19 barleys, 10 triticales, 6 sorghums, and 2 wheats) (n = 37): r = 0.74, TD = 0.7419 + 0.4759x, s = 0.044.
In general, it can be stated that the pH-stat method is a useful method for screening proteins for the effect of various treatments on digestibility.
Damage due to abnormally severe processing conditions (i.e. heating) is readily detected by the pH-stat technique as indicated by a decrease in the amount of base consumed during enzymatic hydrolysis.
Land and Food Systems, Faculty of
Graduate
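The calibration equations reported in the abstract are ordinary least-squares lines of the form TD = a + bx. The sketch below applies the quoted coefficients and shows how such a line is fitted; the example base volume and the fitting data in the test are invented.

```python
def predict_td(ml_naoh, a, b):
    """True digestibility (fraction of 1) from ml of 0.10 N NaOH added."""
    return a + b * ml_naoh

# (intercept a, slope b) pairs as quoted in the abstract
EQUATIONS = {
    "soybean/wheat (n=6)": (0.7868, 0.2175),
    "sorghum (n=6)": (0.4575, 1.8841),
    "alfalfa + wheat/barley (n=13)": (0.3446, 1.0356),
    "alfalfa hay + barley (n=5)": (0.2360, 1.3194),
    "grains (n=37)": (0.7419, 0.4759),
}

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, i.e. how the pairs above
    would be derived from matched in vitro / in vivo measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```

Fitting a separate line per protein type, as the abstract reports, is what keeps the correlation high; pooling all proteins into one equation drops r to 0.74.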
APA, Harvard, Vancouver, ISO, and other styles
31

Labare, Mathieu. "Search for cosmic sources of high energy neutrinos with the AMANDA-II detector." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210183.

Full text
Abstract:
AMANDA-II est un télescope à neutrinos composé d'un réseau tri-dimensionnel de senseurs optiques déployé dans la glace du Pôle Sud.

Son principe de détection repose sur la mise en évidence de particules secondaires chargées émises lors de l'interaction d'un neutrino de haute énergie (> 100 GeV) avec la matière environnant le détecteur, sur base de la détection de rayonnement Cerenkov.

Ce travail est basé sur les données enregistrées par AMANDA-II entre 2000 et 2006, afin de rechercher des sources cosmiques de neutrinos.

Le signal recherché est affecté d'un bruit de fond important de muons et de neutrinos issus de l'interaction du rayonnement cosmique primaire dans l'atmosphère. En se limitant à l'observation de l'hémisphère nord, le bruit de fond des muons atmosphériques, absorbés par la Terre, est éliminé.

Par contre, les neutrinos atmosphériques forment un bruit de fond irréductible constituant la majorité des 6100 événements sélectionnés pour cette analyse.

Il est cependant possible d'identifier une source ponctuelle de neutrinos cosmiques en recherchant un excès local se détachant du bruit de fond isotrope de neutrinos atmosphériques, couplé à une sélection basée sur l'énergie, dont le spectre est différent pour les deux catégories de neutrinos.

Une approche statistique originale est développée dans le but d'optimiser le pouvoir de détection de sources ponctuelles, tout en contrôlant le taux de fausses découvertes, donc le niveau de confiance d'une observation.

Cette méthode repose uniquement sur la connaissance de l'hypothèse de bruit de fond, sans aucune hypothèse sur le modèle de production de neutrinos par les sources recherchées. De plus, elle intègre naturellement la notion de facteur d'essai rencontrée dans le cadre de test d'hypothèses multiples. La procédure a été appliquée sur l'échantillon final d'évènements récoltés par AMANDA-II.

---------

AMANDA-II is a neutrino telescope which comprises a three-dimensional array of optical sensors deployed in the South Pole glacier.

Its principle rests on the detection of the Cherenkov radiation emitted by charged secondary particles produced by the interaction of a high energy neutrino (> 100 GeV) with the matter surrounding the detector.

This work is based on data recorded by the AMANDA-II detector between 2000 and 2006 in order to search for cosmic sources of neutrinos. A potential signal must be extracted from the overwhelming background of muons and neutrinos originating from the interaction of primary cosmic rays within the atmosphere.

The observation is limited to the northern hemisphere in order to be free of the atmospheric muon background, which is stopped by the Earth. However, atmospheric neutrinos constitute an irreducible background composing the main part of the 6100 events selected for this analysis.

It is nevertheless possible to identify a point source of cosmic neutrinos by looking for a local excess breaking away from the isotropic background of atmospheric neutrinos.

This search is coupled with a selection based on the energy, whose spectrum is different from that of the atmospheric neutrino background.

An original statistical approach has been developed in order to optimize the detection of point sources, whilst controlling the false discovery rate -- hence the confidence level -- of an observation. This method is based solely on the knowledge of the background hypothesis, without any assumption on the production model of neutrinos in the sought sources. Moreover, the method naturally accounts for the trial factor inherent in multiple testing. The procedure was applied to the final sample of events collected by AMANDA-II.
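The false-discovery-rate control over many tested sky positions that this abstract describes is the setting of the classical Benjamini-Hochberg step-up procedure. The sketch below shows that generic procedure only; the thesis develops its own statistical approach, so this is purely illustrative:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices
    of the hypotheses rejected at false discovery rate alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha ...
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            k_max = rank
    # ... and reject the k_max smallest p-values.
    return sorted(order[:k_max])

# Example: two clear excesses among ten hypothetical sky positions.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
rejected = benjamini_hochberg(pvals)  # indices 0 and 1 survive
```

Unlike a Bonferroni-style correction of the trial factor, the threshold adapts to the number of small p-values actually observed.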
Doctorat en Sciences

APA, Harvard, Vancouver, ISO, and other styles
32

Picquenot, Adrien. "Introduction and application of a new blind source separation method for extended sources in X-ray astronomy." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP028.

Full text
Abstract:
Certaines sources étendues, telles que les vestiges de supernovae, présentent en rayons X une remarquable diversité de morphologie que les téléscopes de spectro-imagerie actuels parviennent à détecter avec un exceptionnel niveau de précision. Cependant, les outils d’analyse actuellement utilisés dans l’étude des phénomènes astrophysiques à haute énergie peinent à exploiter pleinement le potentiel de ces données : les méthodes d’analyse standard se concentrent sur l’information spectrale sans exploiter la multiplicité des morphologies ni les corrélations existant entre les dimensions spatiales et spectrales ; pour cette raison, leurs capacités sont souvent limitées, et les mesures de paramètres physiques peuvent être largement contaminées par d’autres composantes.Dans cette thèse, nous explorerons une nouvelle méthode de séparation de source exploitant pleinement les informations spatiales et spectrales contenues dans les données X, et leur corrélation. Nous commencerons par présenter son fonctionnement et les principes mathématiques sur lesquels il repose, puis nous étudierons ses performances sur des modèles de vestiges de supernovae. Nous nous pencherons ensuite sur la vaste question de la quantification des erreurs, domaine encore largement inexploré dans le milieu bouillonnant de l’analyse de données. Enfin, nous appliquerons notre méthode à l’étude de trois problèmes physiques : les asymétries dans la distribution des éléments lourds du vestige Cassiopeia A, les structures filamentaires dans l’émission synchrotron du même vestige, et la contrepartie X des structures filamentaires visibles en optique dans l’amas de galaxies Perseus
Some extended sources, among which we find the supernova remnants, present an outstanding diversity of morphologies that the current generation of spectro-imaging telescopes can detect with an unprecedented level of detail. However, the data analysis tools currently in use in the high energy astrophysics community fail to take full advantage of these data: most of them focus only on the spectral information, without using the many spatial specificities or the correlation between the spectral and spatial dimensions. For that reason, the physical parameters that are retrieved are often widely contaminated by other components. In this thesis, we will explore a new blind source separation method exploiting fully both the spatial and spectral information of X-ray data, and their correlations. We will begin with an exposition of the mathematical concepts on which the algorithm relies, particularly the wavelet transforms. Then, we will benchmark its performance on supernova remnant models, and we will investigate the vast question of the error bars on non-linear estimators, still largely unanswered yet essential for data analysis and machine learning methods. Finally, we will apply our method to the study of three physical problems: the asymmetries in the heavy element distribution of the supernova remnant Cassiopeia A, the filamentary structures in the synchrotron emission of the same remnant, and the X-ray counterpart of the optical filamentary structures in the Perseus galaxy cluster.
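As a rough illustration of what a blind source separation of this kind does, the toy sketch below factorizes data X into a mixing matrix A and sparse sources S by alternating least squares with soft thresholding. The actual algorithm of the thesis works on wavelet coefficients of X-ray data cubes and differs in its details; everything here is an illustrative assumption:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: promotes sparsity in the sources."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_bss(X, n_sources, n_iter=200, thresh=0.1, seed=0):
    """Toy blind source separation: factorize X (m x n) into a mixing
    matrix A (m x k) and sparse sources S (k x n) by alternating least
    squares with soft thresholding on S."""
    rng = np.random.default_rng(seed)
    m, _ = X.shape
    A = rng.standard_normal((m, n_sources))
    S = None
    for _ in range(n_iter):
        # Sources: least-squares fit, then sparsify.
        S = soft_threshold(np.linalg.lstsq(A, X, rcond=None)[0], thresh)
        # Mixing matrix: least-squares fit, then normalize its columns.
        A = np.linalg.lstsq(S.T, X.T, rcond=None)[0].T
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12
    return A, S
```

In the astrophysical setting, the columns of A would play the role of component spectra and the rows of S their spatial distributions (or vice versa), recovered up to scaling and permutation.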
APA, Harvard, Vancouver, ISO, and other styles
33

Damon, Raphael Wesley. "Determination of the photopeak detection efficiency of a HPGe detector, for volume sources, via Monte Carlo simulations." Thesis, University of the Western Cape, 2005. http://etd.uwc.ac.za/index.php?module=etd&amp.

Full text
Abstract:
The Environmental Radioactivity Laboratory (ERL) at iThemba LABS undertakes experimental work using a high purity germanium (HPGe) detector for laboratory measurements. In this study the Monte Carlo transport code MCNPX, a general-purpose Monte Carlo N-Particle code that extends the capabilities of the MCNP code, developed at the Los Alamos National Laboratory in New Mexico, was used. The study considers how various parameters such as (1) coincidence summing, (2) volume, (3) atomic number (Z) and (4) density affect the absolute photopeak efficiency of the ERL's HPGe detector in a close geometry (Marinelli beaker) for soil, sand, KCl and liquid samples. The results from these simulations are presented here, together with an intercomparison exercise of two MC codes (MCNPX and a C++ program developed for this study) that determine the energy deposition of a point source in germanium spheres of radii 1 cm and 5 cm.

A sensitivity analysis on the effect of the detector dimensions (dead layer and core of detector crystal) on the photopeak detection efficiency in a liquid sample and the effect of moisture content on the photopeak detection efficiency in sand and soil samples, was also carried out. This study has shown evidence that the dead layer of the ERL HPGe detector may be larger than stated by the manufacturer, possibly due to warming up of the detector crystal. This would result in a decrease in the photopeak efficiency of up to 8 % if the dead layer of the crystal were doubled from its original size of 0.05 cm. This study shows the need for coincidence summing correction factors for the gamma lines (911.1 keV and 968.1 keV) in the 232Th series for determining accurate activity concentrations in environmental samples. For the liquid source the gamma lines, 121.8 keV, 244.7 keV, 444.1 keV and 1085.5 keV of the 152Eu series, together with the 1173.2 keV and 1332.5 keV gamma lines of the 60Co, are particularly prone to coincidence summing. In the investigation into the effects of density and volume on the photopeak efficiency for the KCl samples, it has been found that the simulated results are in good agreement with experimental data. For the range of sample densities that are dealt with by the ERL it has been found that the drop in photopeak efficiency is less than 5 %. This study shows that the uncertainty of the KCl sample activity measurement due to the effect of different filling volumes in a Marinelli beaker is estimated in the range of 0.6 % per mm and is not expected to vary appreciably with photon energy. In the case of the effect of filling height on the efficiency for the soil sample, it was found that there is a large discrepancy in the trends of the simulated and experimental curves. This discrepancy could be a result of the use of only one sand sample in this study and therefore the homogeneity of the sample has to be investigated. 
The effect of atomic number has been found to be negligible for the soil and sand compositions for energies above 400 keV; however, if the composition of the heavy elements is not properly considered when simulating soil and sand samples, the effect of atomic number on the absolute photopeak efficiency in the low energy (< 400 keV) region can make a 14 % difference.
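The kind of Monte Carlo estimate compared in the intercomparison exercise can be illustrated with a toy calculation: an isotropic point source near a homogeneous sphere, where each sampled photon direction contributes the probability of interacting along its chord through the sphere. Real codes such as MCNPX track the full photon physics; the attenuation coefficient `mu` and the geometry below are illustrative assumptions, not the thesis's configuration:

```python
import math
import random

def detection_efficiency(mu, radius, dist, n=100_000, seed=42):
    """Crude Monte Carlo estimate of the total detection efficiency for an
    isotropic point source at distance `dist` from the centre of a
    homogeneous sphere (attenuation coefficient `mu` in 1/cm, `radius` in
    cm). Each photon is scored with its probability of interacting at
    least once along its chord through the sphere."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n):
        # Isotropic direction: cos(theta) uniform in [-1, 1]; by symmetry
        # about the source-centre axis the azimuth does not matter.
        dz = rng.uniform(-1.0, 1.0)
        # Source at (0, 0, -dist), sphere centred at the origin:
        # |o + t*d|^2 = radius^2 gives the chord [t1, t2] inside.
        oz = -dist
        b = dz * oz                          # d . o
        disc = b * b - (oz * oz - radius * radius)
        if disc <= 0.0:
            continue                         # the ray misses the sphere
        t1 = -b - math.sqrt(disc)
        t2 = -b + math.sqrt(disc)
        chord = max(t2, 0.0) - max(t1, 0.0)
        if chord > 0.0:
            score += 1.0 - math.exp(-mu * chord)
    return score / n
```

For a source at the sphere's centre every chord equals the radius, so the estimate reduces to the analytic value 1 - exp(-mu * radius), which is a convenient sanity check.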
APA, Harvard, Vancouver, ISO, and other styles
34

Guse, Paige Marie. "VOC Interference with Standard Diesel Particulate Analysis for Mine Samples: Exploring Sources and Possible Solutions." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/97993.

Full text
Abstract:
Exposure to diesel engine exhaust is linked to chronic and acute illness. In underground mines, workers can be exposed to high concentrations for extended periods of time. Therefore, the Mine Safety and Health Administration (MSHA) enforces personal exposure and engine emission limits. These regulations target just the solid portion of diesel exhaust, known as diesel particulate matter (DPM). The majority of DPM mass is attributed to particulate organic carbon (POC) and elemental carbon (EC). Total carbon (TC) is the sum of POC and EC and is currently used as the surrogate to represent DPM as a whole. The NIOSH Method 5040 is the standard sample collection and analysis procedure. It outlines collection of submicron particulate matter samples on a quartz filter, then measurement of POC and EC using a thermal-optical analysis. Error in DPM measurement occurs when volatile organic carbon (VOC) sorbs onto the particulate matter deposit and filter, resulting in a positive sampling artifact. To correct for this, a dynamic blank method with two quartz filters (i.e., primary and secondary) in tandem is used. However, the accuracy of the dynamic blank correction method is dependent on equal sorption of VOC onto each filter. Observed instances of higher VOC on the secondary filter result in underestimated POC measurements and, in some cases, negative POC. The work presented in this thesis investigates the sources of VOC interference in particulate matter sampling and possible solutions. Three existing datasets containing information from blank samples and laboratory and field DPM samples were analyzed to look into instances of higher VOC sorption onto the secondary filter. Negative total POC results were limited to blank samples, but negative results for the POC of individual isotherms were observed in blank and DPM samples. A follow-up study looked into the possibility of sampling materials as a source of VOC that preferentially sorbs onto the secondary filter.
Blank samples were assembled to test five sampling materials (i.e., two types of sample cassette, cellulose support pads, impactor cassettes, and impactors). In addition, sample storage conditions (i.e., temperature and duration) were tested for their impact on VOC sorption. It was discovered that all of the sample materials tested contributed VOC and, as expected, higher storage temperatures and longer storage durations increase the amount of VOC. Preferential sorption onto the secondary filter was observed in most conditions as well. A field study explored thermal separation of VOC and POC as a possible alternative to the dynamic blank correction method. Two sets of DPM samples were collected from two locations in an underground stone mine, and one set of ambient particulate matter samples was collected from a highly trafficked truck stop. A temperature of 175 °C was used for this preliminary investigation. The effectiveness of a temperature separation may depend on sample location. To better understand VOC and POC evolution characteristics, further testing with a wide range of sample mass and composition as well as different temperatures is suggested. It seems unlikely that a correction method using a separation temperature would be more effective than the standard dynamic blank in occupational DPM monitoring. The work presented in this thesis highlights the difficulty in accurately measuring POC.
Master of Science
Diesel particulate matter (DPM) is the solid portion of diesel exhaust and can cause chronic and acute illness. Underground miners can regularly be exposed to high concentrations of DPM over long periods of time; therefore, DPM must be monitored. Total carbon (TC) is the sum of particulate organic and elemental carbon (POC and EC) and is used as the surrogate measurement to represent DPM. The standard method of DPM sample analysis is subject to volatile organic carbon (VOC) interference, therefore a dynamic blank correction is used. However, in some cases, the dynamic blank over- or under-corrects. This thesis presents studies to better understand the source(s) of VOC interference and possible solutions. Three existing datasets containing information from blank samples and laboratory and field DPM samples were investigated for instances of VOC interference resulting in an overcorrection. Such instances were limited to blank and low mass samples. A field study looked into the possibility of sampling materials as a source of VOC that may cause overcorrection when using the dynamic blank method. Blank samples were assembled to test five sampling materials as well as various sample storage conditions. It was discovered that all of the sample materials tested contributed VOC and, as expected, higher storage temperatures and longer storage durations increase the amount of VOC. A second field study explored thermal separation of VOC and POC as a possible alternative to the dynamic blank correction method. Two sets of DPM samples were collected from two locations in an underground stone mine, and one set of ambient particulate matter samples was collected from a highly trafficked truck stop. A temperature of 175 °C was used for this preliminary investigation. Results indicate that the effectiveness of temperature separation may depend on sample concentration and composition.
To better understand VOC and POC evolution characteristics, further testing with a wide range of sample mass and composition, as well as different temperatures, is suggested. The work presented in this thesis highlights the difficulty in accurately measuring POC.
APA, Harvard, Vancouver, ISO, and other styles
35

Chun, Seokjoon. "Using MIMIC Methods to Detect and Identify Sources of DIF among Multiple Groups." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5352.

Full text
Abstract:
This study investigated the efficacy of multiple indicators, multiple causes (MIMIC) methods in detecting uniform and nonuniform differential item functioning (DIF) among multiple groups, where the underlying causes of DIF were different. Three different implementations of MIMIC DIF detection were studied: sequential free baseline, free baseline, and constrained baseline. In addition, the robustness of the MIMIC methods against violation of their assumption of equal factor variance across comparison groups was investigated. We found that the sequential free baseline method provided Type I error and power rates similar to those of the free baseline method with a designated anchor, and much better Type I error and power rates than the constrained baseline method across four groups with co-occurring background variables. However, when the equal factor variance assumption was violated, the MIMIC methods yielded inflated Type I error rates. Also, the MIMIC procedure had problems correctly identifying the sources of DIF, so further methodological developments are needed.
APA, Harvard, Vancouver, ISO, and other styles
36

Mathey, Aimeric. "An application of the Value Stream Mapping method in order to identify sources of wastes and opportunities for improvements." Thesis, KTH, Industriell produktion, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122576.

Full text
Abstract:
Nowadays, the global economy imposes intense competition between companies. Due to this increasing competition, companies have to keep improving their processes to become more efficient, lowering costs while improving their quality level and providing better service. A company that does not improve every year is a company that is dying. Although Kraft Foods Inc. is the world's second-largest food company, it also needs to keep improving in order to maintain its competitive position on the market, and this is why it has been implementing the Lean culture in France since 2007. The company wants to become more efficient by eliminating all the activities that the customer is not willing to pay for, defined as waste. Providing more value with fewer resources, the goal of Lean manufacturing, should make Kraft Foods France even more competitive. However, in order to become Leaner, the nine plants of Kraft Foods Biscuit needed a structured method, because the first difficulty of eliminating wastes lies in identifying them. The main purpose of this thesis was to develop one of the most important Lean tools for identifying wastes, called "Value Stream Mapping". This master thesis will therefore explain and describe how mapping the value stream can help identify and eliminate wastes, but also how it can be a priceless support for sharing a common vision of the value stream among managers. This tool should be seen as the starting point of any improvement project, since it allows identifying opportunities for improvement that will improve the bottom line of the company. VSM should be used to challenge the status quo and the behaviors of all employees in order to improve Quality, Cost, Delivery, Safety, Sustainability and Morale. This master thesis will deal with the two main missions I led in order to reach this goal. The first mission was to develop a standard of the Value Stream Mapping for Kraft Foods France.
I was asked to participate in the structuring and standardization of the VSM tool for the company. The tool was first analyzed, and then a tailor-made VSM tool was developed to meet the characteristics and needs of the food industry. The last step of this mission was to train managers and engineers so that they can lead VSM projects themselves, in any plant, to help them identify improvements to be made. The second mission was to apply this method to different production lines to show managers the effectiveness of the tool for identifying room for improvement that could increase the productivity of these production lines. For example, the project at Granville's plant, described in the last part of the thesis, shows what types of improvements can be identified thanks to a Value Stream Mapping project, since this project led to a productivity improvement of 40 k€. On completion of this thesis, I wish to have contributed to the emergence of the good use of the Value Stream Mapping tool at Kraft Foods, which will help the company keep improving and focusing on customers.
APA, Harvard, Vancouver, ISO, and other styles
37

Mariano, Valeria. "A study of tetracycline resistant Escherichia coli in impala (Aepyceros melampus) and their water sources." Diss., Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-02192009-140903/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

COELHO, TALITA S. "Desenvolvimento de um sistema de dosimetria para aplicadores de betaterapia de 90Sr+90Y." reponame:Repositório Institucional do IPEN, 2010. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9568.

Full text
Abstract:
Dissertacao (Mestrado)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
39

Ungureanu, Alina. "Synthèse de sources rayonnantes large bande, par la méthode TLM inverse." PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00767009.

Full text
Abstract:
This thesis deals with the synthesis of radiating electromagnetic (EM) sources by the inverse TLM (Transmission Line Matrix) method, with a particular focus on wide-band applications. The objective is to use the theory of time reversal of EM waves in order to implement and develop a new method for synthesizing radiating sources from a known radiation pattern. The back-propagation of the waves is carried out numerically by the inverse TLM method, in three dimensions (3D), with symmetric condensed nodes (SCN). The proposed algorithm is used to recover primary EM sources, both point-like and distributed, emitting wide-band signals [26 GHz - 34 GHz] and placed in free space (lossless, homogeneous and non-dispersive). The foundations, potential and limits of this inverse approach are studied. An additional step is added to improve the spatial resolution of the reconstruction of point-like and distributed sources; a resolution below half the excitation wavelength is thereby obtained. The reconstruction of the 1D and 2D secondary sources induced on the metallic surfaces of antennas is then studied. These studies led to the development of a new simulation tool based on a hybrid TLM-analytical method. The synthesis of the sources induced on the surface of a monopole antenna is thus performed from the measured far field. The orientation and position of the sources are found. The advantages and limitations of the technique are finally discussed.
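The principle of numerical back-propagation by time reversal can be illustrated in one dimension with a simple finite-difference wave solver (the thesis itself uses 3D inverse TLM with SCN nodes, so this is only an analogue): run the forward scheme, swap the last two field snapshots, and run again, and the field refocuses at the original source position:

```python
import numpy as np

def step(u, u_prev):
    """One leapfrog step of the 1D wave equation with c*dt/dx = 1 and
    fixed (Dirichlet) boundaries; the scheme is exactly time-reversible."""
    u_next = np.zeros_like(u)
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + (u[2:] - 2 * u[1:-1] + u[:-2])
    return u_next

def forward_then_reverse(n=201, src=60, steps=70):
    """Radiate from a point excitation at index src, then time-reverse
    the last two field snapshots and propagate again: the energy
    refocuses at the original source position."""
    u_prev = np.zeros(n)
    u_prev[src] = 1.0
    u = u_prev.copy()
    for _ in range(steps):
        u, u_prev = step(u, u_prev), u
    u, u_prev = u_prev, u          # time reversal of the field history
    for _ in range(steps):
        u, u_prev = step(u, u_prev), u
    return u
```

Here the whole field history is reversed for simplicity; the thesis instead back-propagates a measured radiation pattern, which is what makes the spatial-resolution question non-trivial.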
APA, Harvard, Vancouver, ISO, and other styles
40

Carmona, Vasquez Leonardo R. "Numerical Modeling of Lifting Flows in the Presence of a Free Surface." ScholarWorks@UNO, 2012. http://scholarworks.uno.edu/td/1426.

Full text
Abstract:
This thesis work started as an attempt to create a computational tool to model hydrodynamic problems involving lifting flows. The method employed to solve the problem is potential flow theory. Despite the fast evolution of computers and the latest developments in Navier-Stokes solvers, such as RANSE methods, potential flow theory offers the possibility to create or use existing computational tools which allow us to model hydrodynamic problems in a simpler manner. Navier-Stokes solvers can be very expensive from the computational point of view, and require a high level of expertise in order to achieve reliable models. Based on the above, we have developed a lifting flow modeling tool that we hope can serve as the starting point of a more elaborate method, and a valuable alternative, for the solution of different hydrodynamic problems. Key words highlighting important concepts related to this thesis work are: vortex, circulation, potential flow, panel methods, sources, doublets.
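The superposition of elementary solutions named in the key words (uniform stream, doublet, vortex) can be sketched for the classical lifting cylinder. This is the textbook building block behind panel methods, not the thesis's own solver; `gamma` is the circulation, and by the Kutta-Joukowski theorem the lift per unit span is rho * U * gamma (sign conventions vary between texts):

```python
import cmath
import math

def velocity(z, U=1.0, R=1.0, gamma=0.0):
    """Complex velocity (u - i*v) of 2D potential flow past a circular
    cylinder of radius R: uniform stream U + doublet + point vortex of
    circulation gamma placed at the origin."""
    return U * (1.0 - (R * R) / (z * z)) - 1j * gamma / (2.0 * math.pi * z)
```

With gamma = 0 the stagnation points sit at z = +/-R; adding circulation moves them along the surface while the no-penetration condition (zero radial velocity on |z| = R) is preserved, which is exactly how lift arises in this model.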
APA, Harvard, Vancouver, ISO, and other styles
41

Pereira, Antonio. "Acoustic imaging in enclosed spaces." Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0066/document.

Full text
Abstract:
Ce travail de recherche porte sur le problème de l'identification des sources de bruit en espace clos. La motivation principale était de proposer une technique capable de localiser et quantifier les sources de bruit à l'intérieur des véhicules industriels, d'une manière efficace en temps. Dans cette optique, la méthode pourrait être utilisée par les industriels à des fins de réduction de bruit, et donc construire des véhicules plus silencieux. Un modèle simplifié basé sur la formulation par sources équivalentes a été utilisé pour résoudre le problème. Nous montrerons que le problème est mal conditionné, dans le sens où il est très sensible face aux erreurs de mesure, et donc des techniques dites de régularisation sont nécessaires. Une étude détaillée de cette question, en particulier le réglage de ce qu'on appelle de paramètre de régularisation, a été important pour assurer la stabilité de la solution. En particulier, un critère de régularisation basé sur une approche bayésienne s'est montré très robuste pour ajuster le paramètre de régularisation de manière optimale. L'application cible concernant des environnements intérieurs relativement grands, nous a imposé des difficultés supplémentaires, à savoir: (a) le positionnement de l'antenne de capteurs à l'intérieur de l'espace; (b) le nombre d'inconnues (sources potentielles) beaucoup plus important que le nombre de positions de mesure. Une formulation par pondération itérative a ensuite été proposé pour surmonter les problèmes ci-dessus de manière à: (1) corriger pour le positionnement de l'antenne de capteurs dans l'habitacle ; (2) obtenir des résultats corrects en terme de quantification des sources identifiées. Par ailleurs, l'approche itérative nous a conduit à des résultats avec une meilleure résolution spatiale ainsi qu'une meilleure dynamique. Plusieurs études numériques ont été réalisées afin de valider la méthode ainsi que d'évaluer sa sensibilité face aux erreurs de modèle. 
En particulier, nous avons montré que l'approche est affectée par des conditions non-anéchoïques, dans le sens où les réflexions sont identifiées comme des vraies sources. Une technique de post-traitement qui permet de distinguer entre les chemins directs et réverbérants a été étudiée. La dernière partie de cette thèse porte sur des validations expérimentales et applications pratiques de la méthode. Une antenne sphérique constituée d'une sphère rigide et 31 microphones a été construite pour les tests expérimentaux. Plusieurs validations académiques ont été réalisées dans des environnements semi-anéchoïques, et nous ont illustré les avantages et limites de la méthode. Enfin, l'approche a été testé dans une application pratique, qui a consisté à identifier les sources de bruit ou faiblesses acoustiques à l'intérieur d'un bus
This thesis is concerned with the problem of noise source identification in closed spaces. The main motivation was to propose a technique which allows one to locate and quantify noise sources within industrial vehicles in a time-effective manner. In turn, the technique might be used by manufacturers for noise abatement purposes, such as to provide quieter vehicles. A simplified model based on the equivalent source formulation was used to tackle the problem. It was shown that the problem is ill-conditioned, in the sense that it is very sensitive to errors in measurement data, thus regularization techniques were required. A detailed study of this issue, in particular the tuning of the so-called regularization parameter, was of importance to ensure the stability of the solution. In particular, a Bayesian regularization criterion was shown to be a very robust approach to optimally adjust the regularization parameter in an automated way. The target application concerns very large interior environments, which imposes additional difficulties, namely: (a) the positioning of the measurement array inside the enclosure; (b) a number of unknowns ("candidate" sources) much larger than the number of measurement positions. An iterative weighted formulation was then proposed to overcome the above issues by first correcting for the positioning of the array within the enclosure, and then iteratively solving the problem in order to obtain a correct source quantification. In addition, the iterative approach has provided results with an enhanced spatial resolution and dynamic range. Several numerical studies have been carried out to validate the method as well as to evaluate its sensitivity to modeling errors. In particular, it was shown that the approach is affected by non-anechoic conditions, in the sense that reflections are identified as "real" sources. A post-processing technique which helps to distinguish between direct and reverberant paths has been discussed.
The last part of the thesis was concerned with experimental validations and practical applications of the method. A custom spherical array consisting of a rigid sphere and 31 microphones was built for the experimental tests. Several academic experimental validations were carried out in semi-anechoic environments, which illustrated the advantages and limits of the method. Finally, the approach was tested in a practical application, which consisted in identifying noise sources inside a bus under driving conditions.
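The regularized equivalent-source inversion described above can be sketched as Tikhonov filtering via the SVD of the transfer matrix, with the regularization parameter chosen automatically. Generalized cross-validation is used below as a common stand-in; the thesis advocates a Bayesian criterion instead, and the matrix G, pressures p and candidate values are illustrative:

```python
import numpy as np

def tikhonov_svd(G, p, lam):
    """Equivalent-source strengths q = argmin ||G q - p||^2 + lam^2 ||q||^2,
    computed from the SVD of the transfer matrix G."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    f = s / (s * s + lam * lam)       # filtered inverse singular values
    return Vt.T @ (f * (U.T @ p))

def gcv_lambda(G, p, candidates):
    """Pick the regularization parameter by a simplified generalized
    cross-validation score (residual over effective residual dimension)."""
    U, s, _ = np.linalg.svd(G, full_matrices=False)
    b = U.T @ p
    best_score, best_lam = np.inf, None
    for lam in candidates:
        filt = (lam * lam) / (s * s + lam * lam)  # residual filter factors
        score = np.sum((filt * b) ** 2) / np.sum(filt) ** 2
        if score < best_score:
            best_score, best_lam = score, lam
    return best_lam
```

As lam grows, the solution norm shrinks monotonically while the residual grows; automatic criteria such as GCV (or the Bayesian one in the thesis) pick the trade-off point without user input.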
APA, Harvard, Vancouver, ISO, and other styles
42

Amailland, Sylvain. "Caractérisation de sources acoustiques par imagerie en écoulement d'eau confiné." Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA1037/document.

Full text
Abstract:
Les exigences en matière de bruit rayonné par les navires de la Marine ou de recherche engendrent le développement de nouvelles méthodes pour améliorer leurs caractérisations. Le propulseur, qui est la source la plus importante en champ lointain, est généralement étudié en tunnel hydrodynamique. Cependant, compte tenu de la réverbération dans le tunnel et du niveau élevé du bruit de couche limite turbulente (CLT), la caractérisation peut s'avérer délicate. L'objectif de la thèse est d'améliorer les capacités de mesures acoustiques du Grand Tunnel Hydrodynamique (GTH) de la DGA en matière de bruits émis par les maquettes testées dans des configurations d'écoulement. Un modèle de propagation basé sur la théorie des sources images est utilisé afin de prendre en compte le confinement du tunnel. Les coefficients de réflexion associés aux parois du tunnel sont identifiés par méthode inverse et à partir de la connaissance de quelques fonctions de transfert. Un algorithme de débruitage qui repose sur l'Analyse en Composantes Principales Robuste est également proposé. Il s'agit de séparer, de manière aveugle ou semi-aveugle, l'information acoustique du bruit de CLT en exploitant, respectivement, la propriété de rang faible et la structure parcimonieuse des matrices interspectrales du signal acoustique et du bruit. Ensuite, une technique d'imagerie basée sur la méthode des sources équivalentes est appliquée afin de localiser et quantifier des sources acoustiques corrélées ou décorrélées. Enfin, la potentialité des techniques proposées est évaluée expérimentalement dans le GTH en présence d'une source acoustique et d'un écoulement contrôlé.
The noise requirements for naval and research vessels lead to the development of new characterization methods. The propeller, which is the most important source in the far field, is usually studied in a water tunnel. However, due to the reverberation in the tunnel and the high level of flow noise, the characterization may be difficult. The aim of the thesis is to improve the measurement capabilities of the DGA Hydrodynamic Tunnel (GTH) in terms of noise radiated by models in flow configurations. The propagation model is described through the image source method. Unfortunately, the reflection coefficients of the tunnel walls are generally unknown, and it is proposed to estimate these parameters using an inverse method and the knowledge of some reference transfer functions. The boundary layer noise (BLN) may be stronger than the acoustic signal, so a Robust Principal Component Analysis is introduced in order to separate, blindly or semi-blindly, the acoustic signal from the noise. This algorithm takes advantage of the low-rank and sparse structures of the acoustic and BLN cross-spectrum matrices, respectively. Then an acoustic imaging technique based on the equivalent source method is applied in order to localize and quantify correlated or decorrelated sources. Finally, the potentiality of the proposed techniques is evaluated experimentally in the GTH in the presence of an acoustic source and a controlled flow.
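The image source method at the heart of this propagation model (and of the document's overall topic) can be sketched for two parallel walls: the reverberant field is a sum of free-field monopoles at mirrored positions, each weighted by the wall reflection coefficient raised to its number of reflections. The geometry below is simplified to source and receiver on the axis normal to the walls, with a single frequency-independent `beta`, both assumptions of this sketch:

```python
import cmath
import math

def image_source_pressure(xs, xr, L, beta, k, n_max=200):
    """Complex pressure at receiver xr due to a monopole at xs between two
    parallel walls at x = 0 and x = L, summed over imaginary sources.
    beta: wall reflection coefficient, k: acoustic wavenumber."""
    p = 0j
    for n in range(-n_max, n_max + 1):
        # Each period 2nL contributes two images; the exponent of beta
        # counts the number of wall reflections of that image.
        for x_img, n_refl in ((2 * n * L + xs, abs(2 * n)),
                              (2 * n * L - xs, abs(2 * n - 1))):
            r = abs(xr - x_img)
            p += (beta ** n_refl) * cmath.exp(1j * k * r) / (4 * math.pi * r)
    return p
```

With beta = 0 the sum collapses to the direct free-field Green's function, and the model satisfies acoustic reciprocity (swapping source and receiver leaves the pressure unchanged), both convenient checks on an implementation.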
APA, Harvard, Vancouver, ISO, and other styles
43

Du, Liangfen. "Characterisation of air-borne sound sources using surface coupling techniques." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI028/document.

Full text
Abstract:
La thèse se base sur la recherche des possibilités de caractérisation du son aérien de sources sonores arbitraires. À cette fin, une approche particulière est étudiée, où la caractérisation de la source est faite via une surface d'interface qui enveloppe totalement ou partiellement la source physique. Deux descripteurs qui dépendent de la fréquence sont définis au travers d'une telle surface : la pression sonore bloquée et l'impédance de la source. Le premier représente la pression sonore créée par la source en fonctionnement, agissant sur la surface enveloppante quand celle-ci est rendue immobile. La seconde représente le rapport des amplitudes de réponse en pression et des amplitudes de vitesse d'excitation normales au travers de la surface. La surface enveloppante définit un volume d'air qui contient la source physique, appelé l'espace source. Les deux descripteurs définis sur l'espace source, la pression bloquée et l'impédance de la source, sont montrés comme étant intrinsèques à la source, c'est-à-dire indépendants de l'espace acoustique environnant. Une fois définis, ces descripteurs permettent de trouver la pression sonore et la vitesse particulaire normale à la surface de l'interface quand l'espace source est couplé à un espace récepteur arbitraire, c'est-à-dire une pièce. Cela permet alors la prédiction du son dans l'espace récepteur. Les conditions de couplage nécessitent que l'espace récepteur soit caractérisé en utilisant la même surface enveloppante que l'espace source. En gardant à l'esprit la simplicité de la mesure, la surface enveloppante a été conçue comme comportant une ou plusieurs surfaces rectangulaires planes. Le défi de la recherche était alors d'obtenir une impédance de surface significative au travers de la surface plane rectangulaire (continue), ainsi qu'une pression bloquée compatible avec la formulation de l'impédance.
Cela a conduit à une décomposition spatiale de la pression sonore et de la vitesse particulaire en un nombre fini de composantes, chacune définie par une amplitude complexe et une distribution spatiale particulière. De cette façon, la pression bloquée se réduit à un vecteur d'amplitudes de pression complexes, tandis que l'impédance devient une matrice de rapports d'amplitudes complexes de pression et de vitesse. Deux types de décompositions ont été étudiés dans le détail : la méthode harmonique de surface et la méthode des patchs. La première approche la pression de surface et la vitesse normale par des combinaisons de fonctions de surface trigonométriques en 2D, tandis que la seconde partage la surface en petites parcelles et représente chaque parcelle de façon discrète en utilisant les valeurs moyennes du patch.
The thesis investigates possibilities of air-borne sound characterisation of arbitrary sound sources. To this end a particular approach is studied where the source characterisation is done via an interface surface which fully or partially envelopes the physical source. Two frequency dependent descriptors are defined across such a surface: the blocked sound pressure and the source impedance. The former represents the sound pressure created by the operating source which acts on the enveloping surface when this is made immobile. The latter represents the ratio of pressure response amplitudes and normal velocity excitation amplitudes across the surface. The enveloping surface defines an air volume containing the physical source, called the source space. The two source descriptors defined on the source space, the blocked pressure and the source impedance, are shown to be intrinsic to the source, i.e. independent of the surrounding acoustical space. Once defined, these descriptors allow one to find the sound pressure and normal particle velocity at the interface surface when the source space is coupled to an arbitrary receiver space, i.e. a room. This in turn allows for sound prediction in the receiver space. The coupling conditions require that the receiver space is characterised using the same enveloping surface as the source space. Bearing the measurement simplicity in mind, the enveloping surface has been conceived as consisting of one or several rectangular plane surfaces. The research challenge was then to obtain meaningful surface impedance across a (continuous) rectangular plane surface as well as the blocked pressure compatible with impedance formulation. This has led to a spatial decomposition of sound pressure and particle velocity into finite number of components, each defined by a complex amplitude and a particular spatial distribution. 
In this way the blocked pressure reduces to a vector of complex pressure amplitudes while the impedance becomes a matrix of pressure and velocity complex amplitude ratios. Two decomposition methods have been investigated in detail: the surface harmonic method and the patch method. The former approximates the surface pressure and normal velocity by combinations of 2D trigonometric surface functions while the latter splits the surface into small patches and treats each patch in a discrete way, using patch-averaged values
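The source-receiver coupling described by these descriptors can be sketched numerically (a minimal illustration with hypothetical 4-component values; in practice the impedance matrices are measured or computed per patch or surface harmonic). Writing the source side as p = pb - Zs·v and the receiver side as p = Zr·v, the interface velocity solves (Zs + Zr)·v = pb:

```python
import numpy as np

# Hypothetical 4-component example of source/receiver coupling.
# pb : blocked pressure vector (source descriptor)
# Zs : source impedance matrix, Zr : receiver impedance matrix
rng = np.random.default_rng(0)
n = 4
pb = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Zs = np.eye(n) * (2.0 + 0.5j)            # toy, diagonal impedances
Zr = np.eye(n) * (1.0 + 0.2j)

# Source side:   p = pb - Zs @ v
# Receiver side: p = Zr @ v
# Coupling:      (Zs + Zr) @ v = pb
v = np.linalg.solve(Zs + Zr, pb)         # interface normal velocity
p = Zr @ v                               # interface pressure

# Consistency check: both sides give the same interface pressure
assert np.allclose(p, pb - Zs @ v)
```

Once p and v are known on the interface, the sound field in the receiver space follows from its own characterisation over the same surface.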
APA, Harvard, Vancouver, ISO, and other styles
44

CINTRA, FELIPE B. de. "Avaliacao da metodologia de calculo de dose em microdosimetria com fontes de eletrons com o uso do codigo MCNP5." reponame:Repositório Institucional do IPEN, 2010. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9619.

Full text
Abstract:
Master's dissertation (IPEN/D), Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN/SP)
APA, Harvard, Vancouver, ISO, and other styles
45

Carpentier, Justine. "Identification des sources aéroacoustiques à partir de mesures vibratoires sur vitrages automobiles." Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1019.

Full text
Abstract:
In a constant search for user comfort, the automotive industry seeks to reduce annoying noises inside the passenger compartment of cars. One of the main sources of this annoyance is the turbulent flow that develops around the vehicle, characterised by particularly high wall-pressure fluctuations localised especially on the front windows. The aim of this study is to measure and characterise the aeroacoustic load on the car window glass using an inverse vibratory method, RIC (Résolution Inverse Corrigée), a corrected form of the Force Analysis Technique (FAT). The principle is to measure the displacement field of the glass and inject it into the inverse equation of motion of the structure in order to compute the wall pressure exciting it. Spatial derivatives are approximated by a finite difference scheme judiciously chosen according to the desired filtering; the filtering performed by the scheme can then be controlled and chosen by modifying its coefficients. The approach relies on digital filter synthesis, and several design methods are proposed. The resulting finite difference schemes are applied, in numerical simulations and experiments, to the simple case of a plate and to the various windows of a vehicle placed in an anechoic wind tunnel.
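The basic inverse relation behind FAT-type methods can be sketched as follows (a minimal illustration for a thin homogeneous plate in the harmonic regime, with made-up material parameters and a plain centred-difference stencil rather than the corrected, filter-designed schemes of the thesis): the excitation pressure is recovered as p = D·∇⁴w − ρhω²·w from the measured displacement field w.

```python
import numpy as np

def fat_pressure(w, dx, D, rho, h, omega):
    """Basic FAT: recover the excitation pressure from a measured
    displacement field w (complex, harmonic regime) on a square grid.
    p = D * biharmonic(w) - rho*h*omega**2 * w, centred differences."""
    # Biharmonic as Laplacian applied twice (periodic rolls for brevity;
    # real measurements need interior-point handling at the edges)
    lap = (np.roll(w, 1, 0) + np.roll(w, -1, 0)
           + np.roll(w, 1, 1) + np.roll(w, -1, 1) - 4 * w) / dx**2
    bih = (np.roll(lap, 1, 0) + np.roll(lap, -1, 0)
           + np.roll(lap, 1, 1) + np.roll(lap, -1, 1) - 4 * lap) / dx**2
    return D * bih - rho * h * omega**2 * w

# Toy check on a sinusoidal deflection, periodic over the grid so the
# rolls are exact: w is then an eigenvector of the discrete operator.
nx, dx = 64, 0.01
x = np.arange(nx) * dx
kx = ky = 2 * np.pi / (nx * dx) * 3
w = np.sin(kx * x)[:, None] * np.sin(ky * x)[None, :]
D, rho, h, omega = 60.0, 2700.0, 2e-3, 2 * np.pi * 200   # toy values
p = fat_pressure(w, dx, D, rho, h, omega)
```

For this periodic mode the recovered pressure is exactly proportional to w; the contribution of the thesis is precisely in replacing the raw stencil above with schemes whose coefficients implement a controlled spatial filtering.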
APA, Harvard, Vancouver, ISO, and other styles
46

Vergara, Blanco Alejandro. "Administrative Law and legal method. The role of the legal doctrine." THĒMIS-Revista de Derecho, 2017. http://repositorio.pucp.edu.pe/index/handle/123456789/107340.

Full text
Abstract:
Legal education is not a subject of much discussion; however, it is fundamental in the training of lawyers, and therefore important for students and teachers alike. In this article, the author concentrates on the teaching of Administrative Law, focusing on the role of legal doctrine and concluding that the form and method of Administrative Law must be specific to this discipline.
APA, Harvard, Vancouver, ISO, and other styles
47

Cerda-Arias, José Luis [Verfasser]. "Planning method for integration and expansion of renewable energy sources with special attention to security supply in distribution system / José Luis Cerda-Arias." Aachen : Shaker, 2012. http://d-nb.info/1069047279/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Mansouri, Wafa. "Problèmes inverses de localisation de sources et d'identification de puits et de paramètres." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI078.

Full text
Abstract:
This work deals with the development of algorithms and the application of numerical methods for solving inverse problems of parameter estimation, identification of boundary conditions and localisation of sources in porous media. These tools will be useful for managing groundwater resources and preserving them from degradation. The objective of this thesis is to solve these inverse problems using several approaches. The first is based on topological shape optimisation: finding the geometry of an object that is optimal with respect to a given criterion, without any a priori assumption about its topology, i.e. about the number of "holes" it may contain, these holes representing the wells being sought. To this end, the topological gradient method is adopted, which studies the behaviour of an objective function when a small hole is created inside the domain. The second is based on minimising a constitutive law gap (energy error) functional, using overspecified data on part of the boundary of the domain to complete the data on the whole boundary and to determine the positions, flow rates and number of wells inside the domain. The third couples the adaptive parameterisation method, which has the advantage of minimising the number of unknown parameters needed to best interpret the available data, with the topological gradient method. This coupling makes it possible simultaneously to identify the geological zones, determine the hydraulic transmissivity in each zone and locate the wells' positions.
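The source-identification idea can be illustrated with a deliberately simple stand-in (a toy forward model and synthetic data, not the thesis's topological gradient algorithm): a well position is recovered by sweeping candidate locations and minimising a data-misfit functional built from boundary observations.

```python
import numpy as np

# Toy forward model: 2D potential field of a unit point sink at s,
# observed at boundary sensors (logarithmic free-space kernel).
def forward(s, sensors):
    r = np.linalg.norm(sensors - s, axis=1)
    return -np.log(r)

sensors = np.array([[0.0, y] for y in np.linspace(0.1, 0.9, 9)])
true_pos = np.array([0.6, 0.4])
data = forward(true_pos, sensors)                 # synthetic observations

# Grid sweep of candidate well locations (stand-in for a topological
# sensitivity sweep over candidate hole positions)
grid = np.linspace(0.05, 0.95, 91)
best, best_misfit = None, np.inf
for gx in grid:
    for gy in grid:
        m = np.sum((forward(np.array([gx, gy]), sensors) - data) ** 2)
        if m < best_misfit:
            best, best_misfit = np.array([gx, gy]), m
```

With noise-free data the misfit vanishes at the true position; the topological gradient replaces this brute-force sweep with an asymptotic expansion of the objective function for a vanishing hole.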
APA, Harvard, Vancouver, ISO, and other styles
49

Tian, Yuan. "Modélisation des sources de bruit d'une éolienne et propagation à grande distance." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLY003/document.

Full text
Abstract:
The purpose of this work is to model wind turbine noise sources and their propagation in the atmosphere, in order to better understand the characteristics of wind turbine noise at long range and to help wind turbine manufacturers and wind farm developers meet noise regulations. By coupling physically based aeroacoustic source and propagation models, we are able to predict wind turbine noise spectra, directivity and amplitude modulation in various atmospheric conditions. Broadband noise generated aerodynamically, namely turbulent inflow noise, trailing edge noise and separation/stall noise, is generally dominant for a modern wind turbine. Amiet's analytical model is chosen to predict turbulent inflow noise and trailing edge noise, with several improvements to the original theory: first, an empirical leading edge thickness correction is introduced in the turbulent inflow noise calculation; second, a wall pressure fluctuation spectrum model recently proposed for adverse pressure gradient flows is used in the trailing edge noise predictions. The two models are validated against several wind tunnel experiments from the literature using fixed airfoils. Amiet's model is then applied to a full-size wind turbine to predict the noise emission level in the near field, taking blade rotation and the Doppler effect into account. Cases with constant wind profiles and no turbulence are considered first; wind shear and atmospheric turbulence effects obtained from Monin-Obukhov similarity theory are then included. Good agreement with field measurements is found when both turbulent inflow noise and trailing edge noise are considered. Classical features of wind turbine noise, such as directivity and amplitude modulation, are recovered by the calculations. Comparisons with a semi-empirical model show that separation noise can be significant in some circumstances. Next, Amiet's theory is coupled with propagation models to estimate the noise immission level in the far field.
An analytical model for propagation over an impedance ground in homogeneous conditions is studied first. The ground effect is shown to modify the shape of the noise spectra and to enhance the amplitude modulation in some third-octave bands. A method to couple the source model to a parabolic equation code is then proposed and validated in order to take atmospheric refraction effects into account. Depending on the propagation direction, noise levels vary because the ground effect is influenced by wind shear and a shadow zone is present upwind. Finally, the point source assumption is reviewed using both the analytical and numerical propagation models.
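The ground effect mentioned above can be illustrated with the classic two-ray (image source) model, a sketch with an assumed geometry and a hypothetical real reflection coefficient rather than the impedance-ground model used in the thesis: the received pressure is the sum of a direct path and a ground-reflected path from the image of the source, producing frequency-dependent interference.

```python
import numpy as np

def two_ray_level(f, hs, hr, d, Q=0.95, c=340.0):
    """Relative SPL (dB re free field) of direct + ground-reflected paths.
    hs, hr: source/receiver heights; d: horizontal distance;
    Q: reflection coefficient (assumed real and frequency-independent)."""
    r1 = np.hypot(d, hs - hr)            # direct path
    r2 = np.hypot(d, hs + hr)            # reflected path (image source)
    k = 2 * np.pi * f / c
    p = np.exp(1j * k * r1) / r1 + Q * np.exp(1j * k * r2) / r2
    p_free = np.exp(1j * k * r1) / r1    # free-field reference
    return 20 * np.log10(np.abs(p) / np.abs(p_free))

f = np.linspace(20, 2000, 500)
dL = two_ray_level(f, hs=80.0, hr=1.5, d=500.0)   # hub-height source
```

Constructive interference approaches +6 dB while destructive dips cut deeply into some bands, which is how the ground effect reshapes the spectrum and can reinforce amplitude modulation in particular third-octave bands.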
APA, Harvard, Vancouver, ISO, and other styles
50

Pujol, Hadrien. "Antennes microphoniques intelligentes : localisation de sources acoustiques par Deep Learning." Thesis, Paris, HESAM, 2020. http://www.theses.fr/2020HESAC025.

Full text
Abstract:
For my PhD thesis, I propose to explore supervised learning for the task of acoustic source localisation. To do so, I have developed a new deep neural network architecture. To optimise the millions of learnable parameters of this network, a large database of examples is needed, and two complementary approaches are proposed to build it. The first is to carry out numerical simulations of microphone array recordings. The second is to place a microphone array at the centre of a sphere of loudspeakers that spatialises sounds in 3D, and to record directly on the array the signals emitted by this experimental 3D sound wave simulator. The neural network was tested under different conditions, and its performance was compared with that of conventional acoustic source localisation algorithms. The results show that this approach generally yields more precise localisation, and is also much faster than the conventional algorithms in the literature.
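For context, the conventional baseline such networks are compared against can be as simple as narrowband delay-and-sum (phase-shift) beamforming. The sketch below assumes a uniform linear array in the far field with made-up parameters; the thesis uses its own array geometry and algorithms.

```python
import numpy as np

# Narrowband delay-and-sum DOA estimate, uniform linear array, far field.
c, f = 340.0, 2000.0
n_mics, spacing = 8, 0.05
mic_x = np.arange(n_mics) * spacing
k = 2 * np.pi * f / c

true_doa = np.deg2rad(30.0)
# Complex amplitudes at each mic for a unit plane wave from true_doa
x = np.exp(-1j * k * mic_x * np.sin(true_doa))

# Scan candidate directions; power peaks where the steering vector
# matches the incoming wavefront
angles = np.deg2rad(np.linspace(-90, 90, 721))
steer = np.exp(-1j * k * mic_x[None, :] * np.sin(angles)[:, None])
power = np.abs(steer.conj() @ x) ** 2        # beamformer output power

est = np.rad2deg(angles[np.argmax(power)])   # estimated direction
```

A learned model replaces the scan with a single forward pass over the array signals, which is one reason the neural approach can be much faster at inference time.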
APA, Harvard, Vancouver, ISO, and other styles
