
Dissertations / Theses on the topic 'Spike phase/time analysis'


Consult the top 50 dissertations / theses for your research on the topic 'Spike phase/time analysis.'


1

TROMBIN, FEDERICA. "Mechanisms of ictogenesis in an experimental model of temporal lobe seizures." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2010. http://hdl.handle.net/10281/11032.

Full text
Abstract:
Epilepsy is not a single disorder but presents with a constellation of symptoms that are not always easy to identify and classify. About 50 million people worldwide have epilepsy. Seizures are more likely to occur in young children or in people over the age of 65 years. The mainstay of treatment is preventive anticonvulsant medication with antiepileptic drugs (AEDs). Despite the proven efficacy of most of these drugs, it is estimated that over 30% of people with epilepsy do not achieve complete seizure control, and this category of patients is eligible for surgical therapy. Among them, people suffering from focal seizures, and in particular temporal lobe epilepsy, are candidates for surgery. In recent years, surgical ablation of the epileptogenic focus has come to be regarded as the best way to cure seizures in patients with intractable focal epilepsy. Diagnostic scalp and intracranial stereo-EEG recordings can provide direct information from the epileptogenic focus and surrounding areas in order to circumscribe the zone to be surgically removed. Analysis of the patients' EEG data has led to the identification of specific ictal patterns, which in turn have helped to refine the classification of clinically defined seizure types. These patterns can be reproduced in animal models of epilepsy and/or seizures. Focal seizures in the temporal lobe of the isolated in vitro guinea pig brain can be induced by perfusion of proconvulsant drugs. Electrophysiological recordings from the limbic structures of this animal model inform about the mechanisms leading to seizure onset (ictogenesis) and their progression. These phenomena are studied from both a neurophysiological and a functional point of view; histology and other anatomo-functional techniques also give a global picture of the activities occurring in different brain compartments during seizure-like events.
The ultimate goal of this research is to further clarify the causes by which a focal seizure is generated and the regulatory mechanisms that govern the different patterns similar to those identified in humans. Intracellular recordings from principal neurons in the superficial and deep layers of the entorhinal cortex showed a different involvement of these two regions in seizure initiation and development. We demonstrate that at seizure onset there is a strong activation of GABAergic interneurons (Gnatkovsky et al., 2008). This finding points to a primary role of GABAergic inhibition in seizure generation. We further showed that slow potentials recorded during the first steps of ictal activity are a typical sign of modifications of the ionic composition of the extracellular medium and describe very well the shape of low-voltage shifts with fast activity (Trombin et al., in preparation). Spike shapes identified by intracellular recordings during seizures were also analyzed to evaluate the epileptogenic network. The correlation of action potential changes during seizures with the field potential and the increase in extracellular [K+] clearly indicates that both neuronal and non-neuronal processes take place during the initiation and termination of a seizure (Trombin et al., in preparation). Taken together, these data point to a multi-factorial scenario in which inhibitory networks play a crucial role in seizure generation, in association with changes in glial function and extracellular homeostasis. The impairment of one of these elements can be a triggering event in the development of seizures (ictogenesis) and can in turn start a cascade of permanent modifications that maintain a hyperexcitable condition, leading to epileptogenesis. Precise knowledge of each step needed to transform normal tissue into epileptogenic tissue is fundamental for recognizing and classifying the different syndromic manifestations of epilepsy.
Further, the possibility of interfering with one of the above-mentioned processes is of evident relevance for the modulation of seizure onset and establishment.
APA, Harvard, Vancouver, ISO, and other styles
2

Na, Yu. "Stochastic phase dynamics in neuron models and spike time reliability." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7383.

Full text
Abstract:
The present thesis is concerned with the stochastic phase dynamics of neuron models and spike time reliability. It is well known that noise exists in all natural systems, and some beneficial effects of noise, such as coherence resonance and noise-induced synchrony, have been observed. However, it is usually difficult to separate the effect of the nonlinear system itself from the effect of noise on the system's phase dynamics. In this thesis, we present a stochastic theory to investigate the stochastic phase dynamics of a nonlinear system. The method we use here, called "the stochastic multi-scale method", allows a stochastic phase description of a system in which the contributions from the deterministic system itself and from the noise are clearly seen. First, we use this method to study the noise-induced coherence resonance of a single quiescent "neuron" (i.e. an oscillator) near a Hopf bifurcation. By calculating the expected values of the neuron's stochastic amplitude and phase, we derive analytically the dependence of the frequency of coherent oscillations on the noise level for different types of models. These analytical results are in good agreement with the numerical results we obtained. The analysis provides an explanation for the occurrence of a peak in coherence measured at an intermediate noise level, which is a defining feature of coherence resonance. Second, this work is extended to study the interaction and competition of coupling and noise on synchrony in two weakly coupled neurons. Through numerical simulations, we demonstrate that noise-induced mixed-mode oscillations occur due to the existence of multistable states in the deterministic oscillators with weak coupling. We also use the standard multi-scale method to approximate the multistable states of a normal form of such a weakly coupled system.
Finally, we focus on spike time reliability, which refers to the phenomenon that repetitive application of a stochastic stimulus to a neuron generates spikes with remarkably reliable timing, whereas repetitive injection of a constant current fails to do so. In contrast to many numerical and experimental studies, which consider parameter ranges corresponding to repetitive spiking, we show that the intrinsic frequency of extrinsic noise has no direct relationship with spike time reliability for parameters corresponding to quiescent states in the underlying system. We also present an "energy" concept to explain the mechanism of spike time reliability. "Energy" is defined as the integral of the waveform of the input preceding a spike. The comparison of the "energy" of reliable and unreliable spikes suggests that fluctuating stimuli with higher "energy" generate reliable spikes. The investigation of individual spike-evoking epochs demonstrates that they have a more favorable time profile, capable of triggering reliably timed spikes at relatively lower energy levels.
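"Energy" as defined above is just the integral of the stimulus over a window preceding a spike. A minimal sketch of how such a quantity could be computed; the window length and sampling step are illustrative assumptions, not values fixed by the thesis:

```python
import numpy as np

def spike_energy(stimulus, spike_index, window, dt=1.0):
    """Integrate the input waveform over the `window` samples strictly
    preceding a spike (rectangular rule). This mirrors the thesis's
    "energy" concept: the integral of the input preceding a spike."""
    start = max(0, spike_index - window)
    return float(np.sum(stimulus[start:spike_index]) * dt)
```

Comparing this quantity across spike-evoking epochs from repeated trials would then separate higher-energy (reliable) from lower-energy (unreliable) spikes, in the spirit of the analysis described.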
3

Brinson, L. C. Knauss Wolfgang Gustav. "Time-temperature response of multi-phase viscoelastic solids through numerical analysis /." Diss., Pasadena, Calif. : California Institute of Technology, 1990. http://resolver.caltech.edu/CaltechETD:etd-10292003-112909.

Full text
4

Vannicola, Catherine Marie. "Analysis of medical time series data using phase space analysis : a complex systems approach /." Diss., Online access via UMI:, 2007.

Find full text
5

Thakkar, Kairavee K. "A Geometric Analysis of Time Varying Electroencephalogram Vectors." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613745734396658.

Full text
6

Nasr, Walid. "Analysis and Approximations of Time Dependent Queueing Models." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/26090.

Full text
Abstract:
Developing equations to compute congestion measures for the general G/G/s/c queueing model and networks of such nodes has always been a challenge. One approach to analyzing such systems is to approximate the model-specified general input processes and distributions by processes and distributions from the more computationally friendly family of phase-type processes and distributions. We develop numerical approximation methods for analysis of general time-dependent queueing nodes by introducing new approximations for the time-dependent first two moments of the number-in-system and departure-count processes.
Ph. D.
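For orientation only, a crude fluid approximation (not the phase-type moment approximations developed in the thesis) shows how a time-dependent mean number-in-system can be obtained by direct integration; the arrival-rate function and parameters below are hypothetical:

```python
def mean_number_in_system(lam, mu, s, t_end, dt=1e-3):
    """Euler-integrate the fluid approximation dm/dt = lam(t) - mu * min(m, s)
    for the time-dependent mean number in system of an M(t)/M/s queue.
    lam: arrival-rate function of time; mu: per-server rate; s: servers."""
    m, t = 0.0, 0.0
    while t < t_end:
        m += dt * (lam(t) - mu * min(m, s))
        t += dt
    return m
```

With a constant arrival rate the fluid mean settles at the usual offered load, which makes the sketch easy to sanity-check before trying time-varying rates.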
7

Sole, Christopher J. "Analysis of Countermovement Vertical Jump Force-Time Curve Phase Characteristics in Athletes." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etd/2549.

Full text
Abstract:
The purposes of this dissertation were to examine the phase characteristics of the countermovement jump force-time curve between athletes based on jumping ability, to examine the influence of maximal muscular strength on the countermovement jump force-time curve phase characteristics of athletes, and to examine the behavior of the countermovement jump force-time curve phase characteristics over the course of a training process in athletes of varying strength levels. The following are the major findings of this dissertation. The analysis of athletes by jumping ability suggested that proficient jumpers are associated with greater relative phase magnitude and phase impulse throughout the phases contained in the positive impulse of the countermovement jump force-time curve. Additionally, phase duration was not found to differ between athletes based on jumping ability or between male and female athletes. The analysis of athletes based on maximal muscular strength suggested that only unweighted phase duration differs between strong and less-strong athletes. Interestingly, both investigations, based on jumping ability and maximal strength, indicated that the relative shape of the stretching phase, representing the rise in positive force, was related to an athlete's jumping ability (jump height). The results of the longitudinal analysis of countermovement jump force-time phase characteristics indicated that these variables can be assessed frequently throughout a training process to provide information regarding an athlete's performance state. Furthermore, based on the contrasting behaviors of many of the countermovement jump force-time curve phase characteristics over time, an athlete's level of muscular strength may influence how these characteristics are expressed in the context of a training process.
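Phase impulse in the sense used above is the time integral of net vertical force over one phase of the force-time curve. A minimal sketch (the body-weight subtraction and sampling step are illustrative assumptions, not the dissertation's processing pipeline):

```python
import numpy as np

def phase_impulse(force, body_weight, dt):
    """Net impulse (N*s) over one phase of a countermovement jump
    force-time curve: the integral of (vertical ground reaction force
    minus body weight), here by the rectangular rule."""
    return float(np.sum(np.asarray(force) - body_weight) * dt)
```

Relative measures such as relative phase magnitude would then normalize quantities like peak net force by body weight, but the normalization convention is study-specific.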
8

Malvestio, Irene. "Detection of directional interactions between neurons from spike trains." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/666226.

Full text
Abstract:
An important problem in neuroscience is the assessment of the connectivity between neurons from their spike trains. One recent approach developed for the detection of directional couplings between dynamics based on recorded point processes is the nonlinear interdependence measure L. In this thesis we first use the Hindmarsh-Rose model system to test L in the presence of noise and for different spiking regimes of the dynamics. We then compare the performance of L against the linear cross-correlogram and two spike train distances. Finally, we apply all measures to neuronal spiking data from an intracranial whole-night recording of a patient with epilepsy. When applied to simulated data, L proves to be versatile, robust and more sensitive than the linear measures. Instead, in the real data the linear measures find more connections than L, in particular for neurons in the same brain region and during slow wave sleep.
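Among the linear baselines mentioned, the cross-correlogram has a particularly compact definition: histogram the pairwise spike-time differences within a lag window. A minimal sketch (bin width and maximum lag are illustrative choices):

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_width, max_lag):
    """Histogram of spike-time differences t_b - t_a within +/- max_lag.
    A peak at positive lags suggests that train A tends to lead train B."""
    diffs = [tb - ta for ta in spikes_a for tb in spikes_b
             if -max_lag <= tb - ta <= max_lag]
    bins = np.arange(-max_lag, max_lag + bin_width, bin_width)
    return np.histogram(diffs, bins=bins)
```

Directional measures such as the nonlinear interdependence L go beyond this pairwise counting, which is part of why the thesis compares them on both simulated and clinical spike trains.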
9

Love, Christina Elena. "Design and Analysis for the DarkSide-10 Two-Phase Argon Time Projection Chamber." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/214821.

Full text
Abstract:
Physics
Ph.D.
Astounding evidence for invisible "dark" matter has been found from galaxy clusters, cosmic and stellar gas motion, gravitational lensing studies, cosmic microwave background analysis, and large-scale galaxy surveys. Although all studies indicate that there is a dominant presence of non-luminous matter in the universe (about 22 percent of the total energy density, with 5 times more dark matter than baryonic matter), its identity and its "direct" detection (through non-gravitational effects) have not yet been achieved. Dark matter in the form of massive, weakly interacting particles (WIMPs) could be detected through their collisions with target nuclei. This requires detectors to be sensitive to very low-energy (less than 100 keV) nuclear recoils with very low expected rates (a few interactions per year per ton of target). Reducing the background in a direct dark matter detector is the biggest challenge. A detector capable of seeing such low-energy nuclear recoils is difficult to build because of the necessary size and radio- and chemical purity. Therefore it is imperative to first construct small-scale prototypes to develop the necessary technology and systems, before attempting to deploy large-scale detectors in underground laboratories. Our collaboration, the DarkSide Collaboration, utilizes argon in two-phase time projection chambers (TPCs). We have designed, built, and commissioned DarkSide-10, a 10 kg prototype detector, and are designing and building DarkSide-50, a 50 kg dark matter detector. The present work is an account of my contribution to these efforts. The two-phase argon TPC technology allows powerful discrimination between dark matter nuclear recoils and background events. Presented here are simulations, designs, and analyses involving the electroluminescence in the gas phase from extracted ionization charge for both DarkSide-10 and DarkSide-50.
This work involves the design of the HHV systems, including field cages, that are responsible for producing the electric fields that drift, accelerate, and extract ionization electrons. Detecting the ionization electrons is an essential element of the background discrimination and gives event location through position reconstruction. The TPC electric fields were simulated using COMSOL Multiphysics software. The maximum radial displacement a drifting electron would undergo was found to be 0.2 mm for DarkSide-10 and 1 mm for DarkSide-50. Using the electroluminescence signal from an optical Monte Carlo, position reconstruction in these two-phase argon TPCs was studied. Using principal component analysis paired with a multidimensional fit, the position reconstruction resolution was found to be less than 0.5 cm for DarkSide-10 and less than 2.5 cm for DarkSide-50 for events occurring near the walls. DarkSide-10 is fully built and has gone through several campaigns of operation and upgrading, both at Princeton University and in an underground laboratory (Gran Sasso National Laboratory in Assergi, Italy). Key DarkSide two-phase argon TPC technologies, such as a successful HHV system, have been demonstrated. Specific studies from DarkSide-10 data, including analysis of the field homogeneity and of the dependence of the electroluminescence signal on the field, are reported here.
Temple University--Theses
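The position-reconstruction recipe described above, principal component analysis paired with a multidimensional fit, can be sketched generically: project light patterns onto their leading principal components and regress event position on the scores. The array shapes and the plain linear fit below are illustrative stand-ins for the detector-specific implementation:

```python
import numpy as np

def pca_position_fit(patterns, positions, n_components):
    """patterns: (n_events, n_channels) light patterns; positions: (n_events, 2).
    Returns a predictor mapping a pattern to an (x, y) estimate via PCA
    scores followed by a linear least-squares fit."""
    mean = patterns.mean(axis=0)
    centered = patterns - mean
    # Principal axes from the SVD of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    comps = vt[:n_components]
    scores = centered @ comps.T
    design = np.hstack([scores, np.ones((len(scores), 1))])  # add intercept
    coef, *_ = np.linalg.lstsq(design, positions, rcond=None)

    def predict(pattern):
        s = (pattern - mean) @ comps.T
        return np.hstack([s, [1.0]]) @ coef
    return predict
```

A real analysis would replace the linear fit with the multidimensional fit tuned to the detector response, but the PCA-then-fit structure is the same.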
10

McCullough, Michael Paul. "Phase space reconstruction : methods in applied economics and econometrics /." Online access for everyone, 2008. http://www.dissertations.wsu.edu/Dissertations/Spring2008/M_McCullough_122707.pdf.

Full text
11

Mehmetcik, Erdal. "Speech Enhancement Utilizing Phase Continuity Between Consecutive Analysis Windows." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613605/index.pdf.

Full text
Abstract:
It is commonly accepted that the noise induced on the DFT phase spectrum has a negligible effect on speech intelligibility for short analysis windows, as the early intelligibility studies pointed out. This fact is confirmed by recent intelligibility studies as well. Based on this phenomenon, classical speech enhancement algorithms do not modify the DFT phase spectrum and only make changes in the DFT magnitude spectrum. However, recent studies also indicate that these classical speech enhancement algorithms are not capable of improving the intelligibility scores of noise-degraded speech signals. In other words, the information contained in a noise-degraded signal cannot be increased by classical enhancement methods; instead, the ease of listening, i.e. quality, can be improved. Hence additional effort can be made to increase the amount of quality improvement using both the DFT magnitude and the DFT phase. Therefore, if the performance of the classical methods is to be improved in terms of speech quality, the effect of the DFT phase on speech quality needs to be studied. In this work, the contribution of the DFT phase to speech quality is investigated through simulations using an objective quality assessment criterion. It is concluded from these simulations that the phase spectrum has a significant effect on speech quality for short analysis windows. Furthermore, the phase values of low-frequency components are found to make the largest contribution to this quality improvement. Motivated by these results, a new enhancement method is proposed which modifies the phase of certain low-frequency components as well as the magnitude spectrum. The proposed algorithm is implemented in the MATLAB environment. The results indicate that the proposed system improves the performance of the classical methods in terms of speech quality.
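The core idea, re-imposing phase continuity between consecutive analysis windows on a few low-frequency DFT bins while leaving magnitudes untouched, can be sketched as follows. The frame layout, hop size, and number of modified bins are illustrative assumptions rather than the thesis's tuned values:

```python
import numpy as np

def enforce_low_freq_phase_continuity(frames, hop, n_low, fs):
    """frames: (n_frames, frame_len) array of windowed time-domain frames.
    Returns modified one-sided spectra in which the phase of the lowest
    `n_low` bins of each frame is replaced by the previous frame's phase
    advanced by 2*pi*f*hop/fs; magnitudes are left unchanged."""
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / fs)
    for t in range(1, frames.shape[0]):
        phase[t, :n_low] = phase[t - 1, :n_low] + 2 * np.pi * freqs[:n_low] * hop / fs
    return mag * np.exp(1j * phase)
```

An enhancement system would apply this after magnitude-domain noise suppression and then overlap-add the inverse FFTs of the modified spectra.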
12

Maraun, Douglas. "What can we learn from climate data? : Methods for fluctuation, time/scale and phase analysis." Phd thesis, [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=981698980.

Full text
13

Frigui, Imed. "Analysis of a time-limited polling system with Markovian arrival process and phase type service." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23604.pdf.

Full text
14

CASTELLANI, FEDERICO, and ANDREA GEREGOVA. "THE MARKETING IN EASTERN EUROPE. AN ANALYSIS FROM THE COMMUNIST PHASE TO THE PRESENT TIME." Thesis, Umeå universitet, Företagsekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-124856.

Full text
Abstract:
The purpose of this thesis is to deliver an integrated overview of the development of marketing in the Eastern European countries from the communist era to the present time and to answer the main research question: How has marketing in the Eastern European countries changed from the communist era to the present time? In addition, three propositions are composed and further investigated. The research philosophy of this thesis is based on a subjectivist ontological view and an interpretivist epistemological approach. A deductive research approach is applied to the research question, while adopting a qualitative research method. The practical research undertaken for the purpose of this thesis is based on conducting multiple semi-structured interviews with respondents representing a diverse sample of firms. The firms are divided according to their country of origin, providing both an inside and an outside view of the development of marketing in the Eastern European countries. At the same time, all the firms interviewed fulfill the criterion of being present in the Eastern European market. The final results are obtained by combining the primary data collected during the interviews with secondary data gained from a literature review undertaken by the authors. The theoretical contribution of this thesis is represented by the empirical findings. They provide a comprehensive overview of the most important political, economic, social and cultural changes in Eastern Europe from the communist era to the present time, while linking them to the development of marketing within this selected geographical area.
15

Tsoupidi, Rodothea Myrsini. "Two-phase WCET analysis for cache-based symmetric multiprocessor systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-222362.

Full text
Abstract:
The estimation of the worst-case execution time (WCET) of a task is a problem that concerns the field of embedded systems and, especially, real-time systems. Estimating a safe WCET for single-core architectures without speculative mechanisms is a challenging task and an active research topic. However, the advent of advanced hardware mechanisms, which often lack predictability, complicates current WCET analysis methods. The field of embedded systems has high safety requirements and is, therefore, conservative with speculative mechanisms. Nowadays, however, even safety-critical applications are moving in the direction of multiprocessor systems. In a multiprocessor system, each task that runs on a processing unit might affect the execution time of tasks running on different processing units. In shared-memory symmetric multiprocessor systems, this interference occurs through the shared memory and the common bus. The presence of private caches introduces cache-coherence issues that result in further dependencies between the tasks. The purpose of this thesis is twofold: (1) to evaluate the feasibility of an existing one-pass WCET analysis method with an integrated cache analysis and (2) to design and implement a cache-based multiprocessor WCET analysis by extending the single-core method. The single-core analysis is part of KTH's Timing Analysis (KTA) tool. The WCET analysis of KTA uses Abstract Search-based WCET Analysis, a one-pass technique based on abstract interpretation. The evaluation of the feasibility of this analysis includes the integration of microarchitectural features, such as the cache and pipeline, into KTA. These features are necessary for extending the analysis to hardware models of modern embedded systems. The multiprocessor analysis of this work uses the single-core analysis in two stages to estimate the WCET of a task running in the presence of temporally and spatially interfering tasks.
The first phase records the memory accesses of all the temporally interfering tasks, and the second phase uses this information to perform the multiprocessor WCET analysis. The multiprocessor analysis assumes the presence of private caches and a shared communication bus, and implements the MESI protocol to maintain cache coherence.
16

Osmanoglu, Batuhan. "Applications and Development of New Algorithms for Displacement Analysis Using InSAR Time Series." Scholarly Repository, 2011. http://scholarlyrepository.miami.edu/oa_dissertations/622.

Full text
Abstract:
Time series analysis of Synthetic Aperture Radar Interferometry (InSAR) data has become an important scientific tool for monitoring and measuring the displacement of Earth's surface due to a wide range of phenomena, including earthquakes, volcanoes, landslides, changes in ground water levels, and wetlands. Time series analysis is a product of interferometric phase measurements, which become ambiguous when the observed motion is larger than half of the radar wavelength. Thus, phase observations must first be unwrapped in order to obtain physically meaningful results. The Persistent Scatterer Interferometry (PSI), Stanford Method for Persistent Scatterers (StaMPS), Short Baselines Interferometry (SBAS) and Small Temporal Baseline Subset (STBAS) algorithms solve for this ambiguity using a series of spatio-temporal unwrapping algorithms and filters. In this dissertation, I improve upon current phase unwrapping algorithms and apply the PSI method to study subsidence in Mexico City. PSI was used to obtain unwrapped deformation rates in Mexico City (Chapter 3), where ground water withdrawal in excess of natural recharge causes subsurface, clay-rich sediments to compact. This study is based on 23 satellite SAR scenes acquired between January 2004 and July 2006. Time series analysis of the data reveals a maximum line-of-sight subsidence rate of 300 mm/yr at a high enough resolution that individual subsidence rates for large buildings can be determined. Differential motion and related structural damage along an elevated metro rail were evident from the results. Comparison of PSI subsidence rates with data from permanent GPS stations indicates root mean square (RMS) agreement of 6.9 mm/yr, about the level expected based on joint data uncertainty. The Mexico City results suggest negligible recharge, implying continuing degradation and loss of the aquifer in the third largest metropolitan area in the world.
Chapters 4 and 5 illustrate the link between time series analysis and three-dimensional (3-D) phase unwrapping. Chapter 4 focuses on the unwrapping path. Unwrapping algorithms can be divided into two groups: path-dependent and path-independent algorithms. Path-dependent algorithms use local unwrapping functions applied pixel-by-pixel to the dataset. In contrast, path-independent algorithms use global optimization methods such as least squares, and return a unique solution. However, when aliasing and noise are present, path-independent algorithms can underestimate the signal in some areas due to global fitting criteria. Path-dependent algorithms do not underestimate the signal, but, as the name implies, the unwrapping path can affect the result. A comparison between existing path algorithms and a newly developed algorithm based on Fisher information theory was conducted. Results indicate that Fisher information theory does indeed produce lower misfit for most tested cases. Chapter 5 presents a new time series analysis method based on 3-D unwrapping of SAR data using extended Kalman filters. Existing methods for time series generation using InSAR data employ special filters to combine two-dimensional (2-D) spatial unwrapping with one-dimensional (1-D) temporal unwrapping results. The new method, however, combines observations in azimuth, range and time for repeat-pass interferometry. Due to the pixel-by-pixel character of the filter, the unwrapping path is selected based on a quality map. This unwrapping algorithm is the first application of extended Kalman filters to the 3-D unwrapping problem. Time series analyses of InSAR data are used in a variety of applications with different characteristics. Consequently, it is difficult to develop a single algorithm that can provide optimal results in all cases, given that different algorithms possess unique sets of strengths and weaknesses.
Nonetheless, filter-based unwrapping algorithms such as the one presented in this dissertation have the capability of joining multiple observations into a uniform solution, which is becoming an important feature with continuously growing datasets.
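All of these methods build on the elementary one-dimensional unwrapping step, which resolves the 2π ambiguity along a single axis before any spatial or temporal extension. A minimal sketch of that building block:

```python
import numpy as np

def unwrap_1d(wrapped):
    """Add integer multiples of 2*pi so that successive phase differences
    fall in (-pi, pi]; equivalent to numpy.unwrap for 1-D input."""
    out = np.array(wrapped, dtype=float)
    for i in range(1, len(out)):
        jump = out[i] - out[i - 1]
        out[i] -= 2 * np.pi * np.round(jump / (2 * np.pi))
    return out
```

The hard part in InSAR is that this per-axis rule becomes path-dependent in 2-D and 3-D, which is exactly the problem the path-selection and Kalman-filter approaches above address.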
APA, Harvard, Vancouver, ISO, and other styles
17

Ghumman, Chaudhry Amjad Ali. "Time-of-flight secondary ion mass spectrometry: new application for urinary stones analysis." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/8796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Ramsey, Philip J. "UMVU estimation of phase and group delay with small samples." Diss., Virginia Polytechnic Institute and State University, 1989. http://hdl.handle.net/10919/54400.

Full text
Abstract:
Group delay between two univariate time series is a measure, in units of time, of how one series leads or lags the other at specific frequencies. The only published method of estimating group delay is Hannan and Thomson (1973); however, their method is highly asymptotic and does not allow inference to be performed on the group delay parameter in finite samples. In fact, spectral analysis in general does not allow for small sample inference, which is a difficulty with the frequency domain approach to time series analysis. The reason that no statistical inference may be performed in small samples is the fact that distribution theory for spectral estimates is highly asymptotic, and one can never be certain in a particular application what finite sample size is required to justify the asymptotic result. In the dissertation the asymptotic distribution theory is circumvented by use of the Box-Cox power transformation on the observed sample phase function. Once transformed, it is assumed that the sample phase is approximately normally distributed and the relationship between phase and frequency is modelled by a simple linear regression model. In order to estimate group delay it is necessary to inversely transform the predicted values to the original scale of measurement, and this is done by expanding the inverse Box-Cox transformation function in a Taylor series expansion. The group delay estimates are generated by using the derivative of the Taylor series expansion for phase. The UMVUE property comes from the fact that the Taylor series expansions are functions of complete, sufficient statistics from the transformed domain, and the Lehmann-Scheffé result (1950) is invoked to justify the UMVUE property.
Ph. D.
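The phase-versus-frequency regression at the heart of this approach can be sketched simply: for a pure delay, cross-spectral phase is linear in frequency, and the negative slope of a least-squares fit of phase on frequency recovers the group delay. A noiseless Python illustration with made-up numbers (not the Box-Cox/UMVU procedure of the thesis):

```python
import numpy as np

# Group delay is the negative derivative of phase with respect to angular
# frequency; for a pure delay the phase is linear in frequency, so ordinary
# least squares on (frequency, phase) pairs recovers the delay.
delay = 3.0                                # samples (assumed, for illustration)
freqs = np.linspace(0.05, 1.0, 50)         # angular frequencies, rad/sample
phase = -delay * freqs                     # noiseless linear phase response
slope = np.polyfit(freqs, phase, 1)[0]     # simple linear regression
estimated_delay = -slope
assert abs(estimated_delay - delay) < 1e-9
```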
APA, Harvard, Vancouver, ISO, and other styles
19

Ratan, Naren. "Complex phase space representation of plasma waves : theory and applications." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:af5654e3-3137-4d9a-b41d-574cd72103b2.

Full text
Abstract:
This thesis presents results on the description of plasma waves in terms of wavepackets. The wave field is decomposed into a distribution of wavepackets in a space of position, wavevector, time, and frequency. A complex structure joining each pair of Fourier conjugate variables into a single complex coordinate allows the efficient derivation of equations of motion for the phase space distribution by exploiting its analytic properties. The Wick symbol calculus, a mathematical tool generalizing many convenient properties of the Fourier transform to a local setting, is used to derive new exact phase space equations which maintain full information on the phase of the waves and include effects nonlocal in phase space such as harmonic generation. A general purpose asymptotic expansion of the Wick symbol product formula is used to treat dispersion, refraction, photon acceleration, and ponderomotive forces. Examples studied include the nonlinear Schrödinger equation, mode conversion, and the Vlasov equation. The structure of partially coherent wave fields is understood in terms of zeros in the phase space distribution caused by dislocations in its complex phase which are shown to be correlated with the field entropy. Simulations of plasma heating by crossing electron beams are understood by representing the resulting plasma waves in phase space. The local coherence properties of the beam driven Langmuir waves are studied numerically.
APA, Harvard, Vancouver, ISO, and other styles
20

Cappiello, Grazia. "A Phase Space Box-counting based Method for Arrhythmia Prediction from Electrocardiogram Time Series." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5180/.

Full text
Abstract:
Arrhythmia is a class of cardiovascular disease that accounts for a large number of deaths and poses a potentially untreatable danger. Arrhythmia is a life-threatening condition originating from disorganized propagation of electrical signals in the heart, resulting in desynchronization among its different chambers. Fundamentally, the synchronization process means that the phase relationship of electrical activities between the chambers remains coherent, maintaining a constant phase difference over time. If desynchronization occurs due to arrhythmia, the coherent phase relationship breaks down, resulting in a chaotic rhythm affecting the regular pumping mechanism of the heart. This phenomenon was explored by using the phase space reconstruction technique, which is a standard analysis technique for time series data generated from a nonlinear dynamical system. In this project a novel index is presented for predicting the onset of ventricular arrhythmias. Analysis of continuously captured long-term ECG recordings was conducted up to the onset of arrhythmia by the phase space reconstruction method, obtaining 2-dimensional images analysed by the box counting method. The method was tested using ECG data sets of three different kinds, including normal (NR), Ventricular Tachycardia (VT) and Ventricular Fibrillation (VF), extracted from the Physionet ECG database. Statistical measures like mean (μ), standard deviation (σ) and coefficient of variation (σ/μ) for the box-counting in phase space diagrams are derived for a sliding window of 10 beats of the ECG signal. From the results of these statistical analyses, a threshold was derived as an upper bound on the Coefficient of Variation (CV) for box-counting of ECG phase portraits, which is capable of reliably predicting the impending arrhythmia long before its actual occurrence. 
As future work, it is planned to validate this prediction tool over a wider population of patients affected by different kinds of arrhythmia, such as atrial fibrillation and bundle branch block, and to set different thresholds for them, in order to confirm its clinical applicability.
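The two steps the abstract describes, 2-D phase space reconstruction by delay embedding followed by box counting, can be sketched as follows; the signals, delay, and grid size are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

def phase_portrait(signal, tau=5):
    # 2-D delay embedding: pair each sample x(t) with x(t + tau).
    return signal[:-tau], signal[tau:]

def box_count(x, y, n_boxes=32):
    # Count occupied cells of an n_boxes x n_boxes grid covering the portrait.
    H, _, _ = np.histogram2d(x, y, bins=n_boxes)
    return int(np.count_nonzero(H))

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
regular = np.sin(2 * np.pi * t)                               # periodic rhythm
chaotic = np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)  # disorganized rhythm

x1, y1 = phase_portrait(regular)
x2, y2 = phase_portrait(chaotic)
# A disorganized rhythm fills far more of the phase plane than a periodic one:
assert box_count(x1, y1) < box_count(x2, y2)
```

Tracking a statistic of the box count (such as its coefficient of variation over a sliding window) is then what turns this picture into a prediction index.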
APA, Harvard, Vancouver, ISO, and other styles
21

Schwabedal, Justus Tilmann Caspar. "Phase dynamics of irregular oscillations." Phd thesis, Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2011/5011/.

Full text
Abstract:
In der vorliegenden Dissertation wird eine Beschreibung der Phasendynamik irregulärer Oszillationen und deren Wechselwirkungen vorgestellt. Hierbei werden chaotische und stochastische Oszillationen autonomer dissipativer Systeme betrachtet. Für eine Phasenbeschreibung stochastischer Oszillationen müssen zum einen unterschiedliche Werte der Phase zueinander in Beziehung gesetzt werden, um ihre Dynamik unabhängig von der gewählten Parametrisierung der Oszillation beschreiben zu können. Zum anderen müssen für stochastische und chaotische Oszillationen diejenigen Systemzustände identifiziert werden, die sich in der gleichen Phase befinden. Im Rahmen dieser Dissertation werden die Werte der Phase über eine gemittelte Phasengeschwindigkeitsfunktion miteinander in Beziehung gesetzt. Für stochastische Oszillationen sind jedoch verschiedene Definitionen der mittleren Geschwindigkeit möglich. Um die Unterschiede der Geschwindigkeitsdefinitionen besser zu verstehen, werden auf ihrer Basis effektive deterministische Modelle der Oszillationen konstruiert. Hierbei zeigt sich, dass die Modelle unterschiedliche Oszillationseigenschaften, wie z. B. die mittlere Frequenz oder die invariante Wahrscheinlichkeitsverteilung, nachahmen. Je nach Anwendung stellt die effektive Phasengeschwindigkeitsfunktion eines speziellen Modells eine zweckmäßige Phasenbeziehung her. Wie anhand einfacher Beispiele erklärt wird, kann so die Theorie der effektiven Phasendynamik auch kontinuierlich und pulsartig wechselwirkende stochastische Oszillationen beschreiben. Weiterhin wird ein Kriterium für die invariante Identifikation von Zuständen gleicher Phase irregulärer Oszillationen zu sogenannten generalisierten Isophasen beschrieben: Die Zustände einer solchen Isophase sollen in ihrer dynamischen Entwicklung ununterscheidbar werden. Für stochastische Oszillationen wird dieses Kriterium in einem mittleren Sinne interpretiert. 
Wie anhand von Beispielen demonstriert wird, lassen sich so verschiedene Typen stochastischer Oszillationen in einheitlicher Weise auf eine stochastische Phasendynamik reduzieren. Mit Hilfe eines numerischen Algorithmus zur Schätzung der Isophasen aus Daten wird die Anwendbarkeit der Theorie anhand eines Signals regelmäßiger Atmung gezeigt. Weiterhin zeigt sich, dass das Kriterium der Phasenidentifikation für chaotische Oszillationen nur approximativ erfüllt werden kann. Anhand des Rössleroszillators wird der tiefgreifende Zusammenhang zwischen approximativen Isophasen, chaotischer Phasendiffusion und instabilen periodischen Orbits dargelegt. Gemeinsam ermöglichen die Theorien der effektiven Phasendynamik und der generalisierten Isophasen eine umfassende und einheitliche Phasenbeschreibung irregulärer Oszillationen.
Many natural systems embedded in a complex surrounding show irregular oscillatory dynamics. The oscillations can be parameterized by a phase variable in order to obtain a simplified theoretical description of the dynamics. Importantly, a phase description can be easily extended to describe the interactions of the system with its surrounding. It is desirable to define an invariant phase that is independent of the observable or the arbitrary parameterization, in order to make, for example, the phase characteristics obtained from different experiments comparable. In this thesis, we present an invariant phase description of irregular oscillations and their interactions with the surrounding. The description is applicable to stochastic and chaotic irregular oscillations of autonomous dissipative systems. For this it is necessary to interrelate different phase values in order to allow for a parameterization-independent phase definition. On the other hand, a criterion is needed that invariantly identifies the system states that are in the same phase. To allow for a parameterization-independent definition of phase, we interrelate different phase values by the phase velocity. However, the treatment of stochastic oscillations is complicated by the fact that different definitions of average velocity are possible. For a better understanding of their differences, we analyse effective deterministic phase models of the oscillations based upon the different velocity definitions. Dependent on the application, a certain effective velocity is suitable for a parameterization-independent phase description. In this way, continuous as well as pulse-like interactions of stochastic oscillations can be described, as is demonstrated with simple examples. On the other hand, an invariant criterion of identification is proposed that generalizes the concept of standard (Winfree) isophases. 
System states of the same phase are identified as belonging to the same generalized isophase using the following invariant criterion: all states of an isophase shall become indistinguishable in the course of time. The criterion is interpreted in an average sense for stochastic oscillations. It allows for a unified treatment of different types of stochastic oscillations. Using a numerical estimation algorithm for isophases, the applicability of the theory is demonstrated on a signal of regular human respiration. For chaotic oscillations, generalized isophases can only be obtained up to a certain approximation. The intimate relationship between these approximate isophases, chaotic phase diffusion, and unstable periodic orbits is explained with the example of the chaotic Rössler oscillator. Together, the concept of generalized isophases and the effective phase theory allow for a unified and invariant phase description of stochastic and chaotic irregular oscillations.
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Yong. "New Calibration Approaches in Solid Phase Microextraction for On-Site Analysis." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1285.

Full text
Abstract:
Calibration methods for quantitative on-site sampling using solid phase microextraction (SPME) were developed based on diffusion mass transfer theory. This was investigated using adsorptive polydimethylsiloxane/divinylbenzene (PDMS/DVB) and Carboxen/polydimethylsiloxane (CAR/PDMS) SPME fiber coatings with volatile aromatic hydrocarbons (BTEX: benzene, toluene, ethylbenzene, and o-xylene) as test analytes. Parameters that affected the extraction process (sampling time, analyte concentration, water velocity, and temperature) were investigated. Very short sampling times (10-300 s) and sorbents with a strong affinity and large capacity were used to ensure a 'zero sink' effect during the calibration process. It was found that mass uptake of analyte changed linearly with concentration. Increasing the water velocity increased mass uptake, though the increase was not linear. Temperature did not affect mass uptake significantly under typical field sampling conditions. To further describe rapid SPME analysis of aqueous samples, a new model translated from heat transfer to a circular cylinder in cross flow was used. An empirical correlation to this model was used to predict the mass transfer coefficient. Findings indicated that the predicted mass uptake compared well with experimental mass uptake. The new model also predicted rapid air sampling accurately. To further integrate the sampling and analysis processes, especially for on-site or in-vivo investigations where the composition of the sample matrix is very complicated and/or agitation of the sample matrix is variable or unknown, a new approach for calibration was developed. This involved loading internal standards onto the extraction fiber prior to the extraction step. During sampling, the standard partially desorbs into the sample matrix, and the rate at which this process occurs was used for calibration. 
The kinetics of the absorption/desorption was investigated, and the isotropy of the two processes was demonstrated, thus validating this approach for calibration. A modified SPME device was used as a passive sampler to determine the time-weighted average (TWA) concentration of volatile organic compounds (VOCs) in air. The sampler collects the VOCs by the mechanism of molecular diffusion and sorption onto a coated fiber as collection medium. This process was shown to be described by Fick's first law of diffusion, whereby the amount of analyte accumulated over time enabled measurement of the TWA concentration to which the sampler was exposed. TWA passive sampling with an SPME device was shown to be almost independent of face velocity, and to be more tolerant of high and low analyte concentrations and long and short sampling times, because of the ease with which the diffusional path length could be changed. Environmental conditions (temperature, pressure, relative humidity, and ozone) had little or no effect on sampling rate. When the SPME device was tested in the field and the results compared with those from National Institute for Occupational Safety and Health (NIOSH) method 1501, good agreement was obtained. To facilitate the use of SPME for field sampling, a new field sampler was designed and tested. The sampler was versatile and user-friendly. The SPME fiber can be positioned precisely inside the needle for TWA sampling, or exposed completely outside the needle for rapid sampling. The needle is protected within a shield at all times, thereby eliminating the risk of operator injury and fiber damage. A replaceable Teflon cap is used to seal the needle to preserve sample integrity. Factors that affect the preservation of sample integrity (sorbent efficiency, temperature, and sealing materials) were studied. The use of a highly efficient sorbent is recommended as the first choice for the preservation of sample integrity. 
Teflon was a good material for sealing the fiber needle, had little memory effect, and could be used repeatedly. To address adsorption of high boiling point compounds on fiber needles, several kinds of deactivated needles were evaluated. RSC-2 blue fiber needles were the most effective. A preliminary field sampling investigation demonstrated the validity of the new SPME device for field applications.
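The TWA calculation implied by Fick's first law can be written out directly: the mass n collected through a needle of cross-section A over diffusion path length Z in time t satisfies n = (D·A/Z)·C·t, so the time-weighted average concentration is C = nZ/(DAt). The numbers below are illustrative assumptions, not values from the thesis.

```python
# Fick's first law for a diffusive passive sampler:
#   n = (D * A / Z) * C_twa * t   =>   C_twa = n * Z / (D * A * t)
D = 0.088        # diffusion coefficient in air, cm^2/s (assumed)
A = 0.00086      # needle cross-sectional area, cm^2 (assumed)
Z = 0.3          # diffusion path length, cm (assumed)
t = 8 * 3600     # 8-hour exposure, s

C_true = 2.0e-8                   # g/cm^3, assumed TWA concentration
n = (D * A / Z) * C_true * t      # mass the fiber would collect
C_recovered = n * Z / (D * A * t)
assert abs(C_recovered / C_true - 1) < 1e-12
```

Note how the path length Z enters the calibration constant: this is why retracting the fiber (changing Z) tunes the sampler to high or low concentrations and long or short exposures.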
APA, Harvard, Vancouver, ISO, and other styles
23

Xu, Yang. "Performance Analysis of Point Source Model with Coincident Phase Centers in FDTD." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/214.

Full text
Abstract:
The Finite Difference Time Domain (FDTD) Method has been a powerful tool in numerical simulation of electromagnetic (EM) problems for decades. In recent years, it has also been applied to biomedical research to investigate the interaction between EM waves and biological tissues. In Wireless Body Area Network (WBAN) studies, to better understand the localization problem within the body, an accurate source/receiver model must be investigated. However, the traditional source models in FDTD occupy an effective volume and may cause errors in the near field for arbitrary source orientations. This thesis reviews the basic mathematical and numerical foundation of the Finite Difference Time Domain method and the material properties needed when modeling a human body in FDTD. Then Coincident Phase Center (CPC) point source models are introduced, which provide nearly the same accuracy at distances as small as 3 unit cells from the phase center. Simultaneously, this model outperforms the usual sources in the near field when an arbitrary direction of the electric or magnetic dipole moment is required.
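As a bare-bones illustration of the leapfrog update scheme FDTD is built on, here is a 1-D free-space loop with a soft point source, in normalized units; none of this reflects the thesis's 3-D CPC model, and all grid and source parameters are assumptions.

```python
import numpy as np

# Minimal 1-D FDTD (Yee) leapfrog loop with a soft Gaussian point source.
# Normalized units with Courant number 1 (the "magic time step" in 1-D);
# dt and dx factors are folded into the update coefficients.
nx, nt, src = 200, 150, 100
ez = np.zeros(nx)        # electric field at integer grid points
hy = np.zeros(nx - 1)    # magnetic field at half-integer grid points
for n in range(nt):
    hy += np.diff(ez)                          # H update from curl of E
    ez[1:-1] += np.diff(hy)                    # E update (PEC at both ends)
    ez[src] += np.exp(-((n - 30) / 8.0) ** 2)  # soft point source at one cell

assert np.isfinite(ez).all() and np.abs(ez).max() > 0.0
```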
APA, Harvard, Vancouver, ISO, and other styles
24

Sando, Simon Andrew. "Estimation of a class of nonlinear time series models." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15985/1/Simon_Sando_Thesis.pdf.

Full text
Abstract:
The estimation and analysis of signals that have polynomial phase and constant or time-varying amplitudes with additive noise is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and there are a number of estimation schemes available. The fundamental problem when trying to estimate the parameters of these types of signals is the nonlinear characteristics of the signal, which lead to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well known problem of the unobservability of the true noise phase curve. The methods that are currently most popular involve differencing in phase followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal to noise ratios, their performance worsens at low signal to noise, and there may be significant bias. One of the biggest obstacles to efficient estimation of these models is that the majority of methods rely on sequential estimation of the phase coefficients, in that the highest-order parameter is estimated first, its contribution removed via demodulation, and the same procedure applied to estimation of the next parameter and so on. This is clearly an issue in that errors in estimation of high order parameters affect the ability to estimate the lower order parameters correctly. As a result, statistical analysis of the parameters is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by considering full parameter iterative refinement techniques, i.e. given a possibly biased initial estimate of the phase coefficients, we aim to create computationally efficient iterative refinement techniques to produce statistically efficient estimators at low signal to noise ratios. 
Updating will be done in a multivariable manner to remove inaccuracies and biases due to sequential procedures. Statistical analysis and extensive simulations attest to the performance of the schemes that are presented, which include likelihood, least squares and Bayesian estimation schemes. Other results of importance to the full estimation problem, namely when there is error in the time variable, when the amplitude is not constant, and when the model order is not known, are also considered.
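The joint (rather than sequential) estimation idea can be illustrated in the noiseless case: unwrapping the signal's phase and fitting all polynomial coefficients at once by least squares recovers them exactly, with no order-by-order demodulation. The coefficients below are arbitrary assumptions for illustration.

```python
import numpy as np

# A noiseless polynomial-phase signal s(t) = exp(j*(a0 + a1*t + a2*t^2)).
# With the phase unwrapped, ordinary least squares estimates all
# coefficients jointly, avoiding error propagation between orders.
t = np.linspace(0, 1, 500)
a = np.array([0.4, 8.0, 20.0])                  # assumed true coefficients
s = np.exp(1j * (a[0] + a[1] * t + a[2] * t**2))
phase = np.unwrap(np.angle(s))                  # valid: phase steps < pi here
a_hat = np.polyfit(t, phase, 2)[::-1]           # reorder to [a0, a1, a2]
assert np.allclose(a_hat, a, atol=1e-6)
```

With additive noise the unwrapping itself becomes unreliable at low SNR, which is precisely the regime where the thesis's iterative refinement schemes are aimed.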
APA, Harvard, Vancouver, ISO, and other styles
25

Sando, Simon Andrew. "Estimation of a class of nonlinear time series models." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15985/.

Full text
Abstract:
The estimation and analysis of signals that have polynomial phase and constant or time-varying amplitudes with additive noise is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and there are a number of estimation schemes available. The fundamental problem when trying to estimate the parameters of these types of signals is the nonlinear characteristics of the signal, which lead to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well known problem of the unobservability of the true noise phase curve. The methods that are currently most popular involve differencing in phase followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal to noise ratios, their performance worsens at low signal to noise, and there may be significant bias. One of the biggest obstacles to efficient estimation of these models is that the majority of methods rely on sequential estimation of the phase coefficients, in that the highest-order parameter is estimated first, its contribution removed via demodulation, and the same procedure applied to estimation of the next parameter and so on. This is clearly an issue in that errors in estimation of high order parameters affect the ability to estimate the lower order parameters correctly. As a result, statistical analysis of the parameters is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by considering full parameter iterative refinement techniques, i.e. given a possibly biased initial estimate of the phase coefficients, we aim to create computationally efficient iterative refinement techniques to produce statistically efficient estimators at low signal to noise ratios. 
Updating will be done in a multivariable manner to remove inaccuracies and biases due to sequential procedures. Statistical analysis and extensive simulations attest to the performance of the schemes that are presented, which include likelihood, least squares and Bayesian estimation schemes. Other results of importance to the full estimation problem, namely when there is error in the time variable, when the amplitude is not constant, and when the model order is not known, are also considered.
APA, Harvard, Vancouver, ISO, and other styles
26

Allen, Jake. "Comparison of Time Series and Functional Data Analysis for the Study of Seasonality." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etd/1349.

Full text
Abstract:
Classical time series analysis has well-known methods for the study of seasonality. A more recent method of functional data analysis has proposed phase-plane plots for the representation of each year of a time series. However, the study of seasonality within functional data analysis has not been explored extensively. Time series analysis is first introduced, followed by phase-plane plot analysis; the two are then compared by looking at the insight that both methods offer, particularly with respect to the seasonal behavior of a variable. Also, the possible combination of both approaches is explored, specifically with the analysis of the phase-plane plots. The methods are applied to observations measuring water flow in cubic feet per second collected monthly in Newport, TN from the French Broad River. Simulated data corresponding to typical time series cases are then used for comparison and further exploration.
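A phase-plane plot in this functional-data sense graphs the first derivative of a smoothed yearly curve against its second derivative, so one loop around the origin corresponds to one seasonal cycle. A toy sketch with an idealized sinusoidal flow series (the numbers are illustrative assumptions, not the Newport data):

```python
import numpy as np

# Phase-plane plot axes: velocity (first derivative) vs. acceleration
# (second derivative) of a smoothed seasonal curve. A sinusoid traces an
# ellipse, one full loop per year.
t = np.linspace(0, 1, 365, endpoint=False)   # one year, daily resolution
flow = 100 + 30 * np.sin(2 * np.pi * t)      # idealized seasonal water flow
vel = np.gradient(flow, t)                   # first derivative
acc = np.gradient(vel, t)                    # second derivative
# For a pure sinusoid, peak acceleration magnitude is amplitude * (2*pi)^2:
assert np.isclose(np.abs(acc).max(), 30 * (2 * np.pi) ** 2, rtol=0.05)
```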
APA, Harvard, Vancouver, ISO, and other styles
27

Rankine, Luke. "Newborn EEG seizure detection using adaptive time-frequency signal processing." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16200/1/Luke_Rankine_Thesis.pdf.

Full text
Abstract:
Dysfunction in the central nervous system of the neonate is often first identified through seizures. The difficulty in detecting clinical seizures, which involves the observation of physical manifestations characteristic of newborn seizures, has placed greater emphasis on the detection of newborn electroencephalographic (EEG) seizure. The high incidence of newborn seizure has resulted in considerable mortality and morbidity rates in the neonate. Accurate and rapid diagnosis of neonatal seizure is essential for proper treatment and therapy. This has impelled researchers to investigate possible methods for the automatic detection of newborn EEG seizure. This thesis is focused on the development of algorithms for the automatic detection of newborn EEG seizure using adaptive time-frequency signal processing. The assessment of newborn EEG seizure detection algorithms requires large datasets of nonseizure and seizure EEG which are not always readily available and often hard to acquire. This has led to the proposition of realistic models of newborn EEG which can be used to create large datasets for the evaluation and comparison of newborn EEG seizure detection algorithms. In this thesis, we develop two simulation methods which produce synthetic newborn EEG background and seizure. The simulation methods use nonlinear and time-frequency signal processing techniques to allow for the demonstrated nonlinear and nonstationary characteristics of the newborn EEG. Atomic decomposition techniques incorporating redundant time-frequency dictionaries are exciting new signal processing methods which deliver adaptive signal representations or approximations. In this thesis we have investigated two prominent atomic decomposition techniques, matching pursuit and basis pursuit, for their possible use in an automatic seizure detection algorithm. In our investigation, it was shown that matching pursuit generally provided the sparsest (i.e. 
most compact) approximation for various real and synthetic signals over a wide range of signal approximation levels. For this reason, we chose MP as our preferred atomic decomposition technique for this thesis. A new measure, referred to as structural complexity, which quantifies the level or degree of correlation between signal structures and the decomposition dictionary, was proposed. Using the change in structural complexity, a generic method of detecting changes in signal structure was proposed. This detection methodology was then applied to the newborn EEG for the detection of state transition (i.e. nonseizure to seizure state) in the EEG signal. To optimize the seizure detection process, we developed a time-frequency dictionary that is coherent with the newborn EEG seizure state based on the time-frequency analysis of the newborn EEG seizure. It was shown that using the new coherent time-frequency dictionary and the change in structural complexity, we can detect the transition from nonseizure to seizure states in synthetic and real newborn EEG. Repetitive spiking in the EEG is a classic feature of newborn EEG seizure. Therefore, the automatic detection of spikes can be fundamental in the detection of newborn EEG seizure. The capacity of two adaptive time-frequency signal processing techniques to detect spikes was investigated. It was shown that a relationship between the EEG epoch length and the number of repetitive spikes governs the ability of both matching pursuit and the adaptive spectrogram in detecting repetitive spikes. However, it was demonstrated that the law was less restrictive for the adaptive spectrogram, which was shown to outperform matching pursuit in detecting repetitive spikes. The method of adapting the window length associated with the adaptive spectrogram used in this thesis was the maximum correlation criterion. 
It was observed that for the time instants where signal spikes occurred, the optimal window lengths selected by the maximum correlation criterion were small. Therefore, spike detection directly from the adaptive window optimization method was demonstrated and also shown to outperform matching pursuit. An automatic newborn EEG seizure detection algorithm was proposed based on the detection of repetitive spikes using the adaptive window optimization method. The algorithm shows excellent performance with real EEG data. A comparison of the proposed algorithm with four well documented newborn EEG seizure detection algorithms is provided. The results of the comparison show that the proposed algorithm has significantly better performance than the existing algorithms (our proposed algorithm achieved a good detection rate (GDR) of 94% and a false detection rate (FDR) of 2.3%, compared with the leading algorithm, which only produced a GDR of 62% and an FDR of 16%). In summary, the novel contribution of this thesis to the fields of time-frequency signal processing and biomedical engineering is the successful development and application of sophisticated algorithms based on adaptive time-frequency signal processing techniques to the solution of automatic newborn EEG seizure detection.
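The greedy matching-pursuit (MP) decomposition the thesis builds on can be sketched in a few lines: at each iteration the atom most correlated with the residual is selected and its projection subtracted. An orthonormal cosine dictionary is used here purely for illustration (the thesis uses redundant time-frequency dictionaries).

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    # Greedy atomic decomposition: pick the atom with the largest inner
    # product with the residual, subtract its projection, repeat.
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_iter):
        inner = dictionary @ residual
        k = int(np.argmax(np.abs(inner)))
        coeffs[k] += inner[k]
        residual -= inner[k] * dictionary[k]
    return coeffs, residual

# Dictionary of unit-norm atoms: rows of a DCT-like cosine basis.
N = 64
n = np.arange(N)
atoms = np.array([np.cos(np.pi * (n + 0.5) * k / N) for k in range(N)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

signal = 3.0 * atoms[5] + 1.5 * atoms[12]   # sparse in the dictionary
coeffs, residual = matching_pursuit(signal, atoms, n_iter=5)
assert np.linalg.norm(residual) < 1e-8      # two atoms suffice here
assert abs(coeffs[5] - 3.0) < 1e-8 and abs(coeffs[12] - 1.5) < 1e-8
```

With a redundant (non-orthogonal) dictionary, MP is no longer exact in two steps, and the sparsity of the resulting approximation is what measures like structural complexity quantify.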
APA, Harvard, Vancouver, ISO, and other styles
28

Rankine, Luke. "Newborn EEG seizure detection using adaptive time-frequency signal processing." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16200/.

Full text
Abstract:
Dysfunction in the central nervous system of the neonate is often first identified through seizures. The diffculty in detecting clinical seizures, which involves the observation of physical manifestations characteristic to newborn seizure, has placed greater emphasis on the detection of newborn electroencephalographic (EEG) seizure. The high incidence of newborn seizure has resulted in considerable mortality and morbidity rates in the neonate. Accurate and rapid diagnosis of neonatal seizure is essential for proper treatment and therapy. This has impelled researchers to investigate possible methods for the automatic detection of newborn EEG seizure. This thesis is focused on the development of algorithms for the automatic detection of newborn EEG seizure using adaptive time-frequency signal processing. The assessment of newborn EEG seizure detection algorithms requires large datasets of nonseizure and seizure EEG which are not always readily available and often hard to acquire. This has led to the proposition of realistic models of newborn EEG which can be used to create large datasets for the evaluation and comparison of newborn EEG seizure detection algorithms. In this thesis, we develop two simulation methods which produce synthetic newborn EEG background and seizure. The simulation methods use nonlinear and time-frequency signal processing techniques to allow for the demonstrated nonlinear and nonstationary characteristics of the newborn EEG. Atomic decomposition techniques incorporating redundant time-frequency dictionaries are exciting new signal processing methods which deliver adaptive signal representations or approximations. In this thesis we have investigated two prominent atomic decomposition techniques, matching pursuit and basis pursuit, for their possible use in an automatic seizure detection algorithm. In our investigation, it was shown that matching pursuit generally provided the sparsest (i.e. 
most compact) approximation for various real and synthetic signals over a wide range of signal approximation levels. For this reason, we chose MP as our preferred atomic decomposition technique for this thesis. A new measure, referred to as structural complexity, which quantifies the level or degree of correlation between signal structures and the decomposition dictionary was proposed. Using the change in structural complexity, a generic method of detecting changes in signal structure was proposed. This detection methodology was then applied to the newborn EEG for the detection of state transition (i.e. nonseizure to seizure state) in the EEG signal. To optimize the seizure detection process, we developed a time-frequency dictionary that is coherent with the newborn EEG seizure state based on the time-frequency analysis of the newborn EEG seizure. It was shown that using the new coherent time-frequency dictionary and the change in structural complexity, we can detect the transition from nonseizure to seizure states in synthetic and real newborn EEG. Repetitive spiking in the EEG is a classic feature of newborn EEG seizure. Therefore, the automatic detection of spikes can be fundamental in the detection of newborn EEG seizure. The capacity of two adaptive time-frequency signal processing techniques to detect spikes was investigated. It was shown that a relationship between the EEG epoch length and the number of repetitive spikes governs the ability of both matching pursuit and the adaptive spectrogram in detecting repetitive spikes. However, it was demonstrated that this law was less restrictive for the adaptive spectrogram, which was shown to outperform matching pursuit in detecting repetitive spikes. The method of adapting the window length associated with the adaptive spectrogram used in this thesis was the maximum correlation criterion. 
It was observed that for the time instants where signal spikes occurred, the optimal window lengths selected by the maximum correlation criterion were small. Therefore, spike detection directly from the adaptive window optimization method was demonstrated and also shown to outperform matching pursuit. An automatic newborn EEG seizure detection algorithm was proposed based on the detection of repetitive spikes using the adaptive window optimization method. The algorithm shows excellent performance with real EEG data. A comparison of the proposed algorithm with four well documented newborn EEG seizure detection algorithms is provided. The results of the comparison show that the proposed algorithm has significantly better performance than the existing algorithms (the proposed algorithm achieved a good detection rate (GDR) of 94% and a false detection rate (FDR) of 2.3%, compared with the leading existing algorithm, which only produced a GDR of 62% and an FDR of 16%). In summary, the novel contribution of this thesis to the fields of time-frequency signal processing and biomedical engineering is the successful development and application of sophisticated algorithms based on adaptive time-frequency signal processing techniques to the solution of automatic newborn EEG seizure detection.
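The greedy matching-pursuit (MP) decomposition discussed in this abstract can be illustrated with a minimal sketch. Here a toy orthonormal DCT dictionary stands in for the thesis's redundant time-frequency dictionaries; the dictionary choice, sizes, and names are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    # Greedy MP: repeatedly pick the atom (column) most correlated with
    # the residual, record its coefficient, and subtract its projection.
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

# Toy dictionary: DCT-II atoms as columns, normalized to unit energy.
N = 64
n = np.arange(N)
D = np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
D /= np.linalg.norm(D, axis=0)

# A signal built from two atoms is recovered exactly in two iterations.
signal = 3.0 * D[:, 5] - 2.0 * D[:, 20]
coeffs, residual = matching_pursuit(signal, D, n_atoms=2)
print(np.count_nonzero(coeffs))          # 2: a maximally sparse approximation
print(np.linalg.norm(residual) < 1e-9)   # True
```

For an orthonormal dictionary MP terminates exactly; with the redundant dictionaries discussed in the thesis, the same loop would instead run until a sparsity or residual-energy criterion is met.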
APA, Harvard, Vancouver, ISO, and other styles
29

Kendziorra, Carsten [Verfasser]. "Implementation of a phase detection algorithm for dynamic cardiac computed tomography analysis based on time dependent contrast agent distribution / Carsten Kendziorra." Berlin : Medizinische Fakultät Charité - Universitätsmedizin Berlin, 2016. http://d-nb.info/1111558744/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Peterson, Zlatuse Durda. "Analysis of Clinically Important Compounds Using Electrophoretic Separation Techniques Coupled to Time-of-Flight Mass Spectrometry." BYU ScholarsArchive, 2004. https://scholarsarchive.byu.edu/etd/23.

Full text
Abstract:
Capillary electrophoretic (CE) separations were successfully coupled to time-of-flight mass spectrometric (TOFMS) detection for the analysis of three families of biological compounds that act as mediators and/or indicators of disease, namely, catecholamines (dopamine, epinephrine, norepinephrine) and their O-methoxylated metabolites (3-methoxytyramine, metanephrine, and normetanephrine), indolamines (serotonin, tryptophan, and 5-hydroxytryptophan), and angiotensin peptides. While electrophoretic separation techniques provided high separation efficiency, mass spectrometric detection afforded specificity unsurpassed by other types of detectors. Both catecholamines and indolamines are present in body fluids at concentrations that make it possible for them to be determined by capillary zone electrophoresis coupled to TOFMS without employing any preconcentration scheme beyond sample work up by solid phase extraction (SPE). Using this hyphenated approach, submicromolar levels of catecholamines and metanephrines in normal human urine and indolamines in human plasma were detected after the removal of the analytes from their biological matrices and after preconcentration by SPE on mixed mode cation-exchange sorbents. The CE-TOFMS and SPE methods were individualized for each group of compounds. While catecholamines and metanephrines in urine samples were quantitated using 3,4-dihydroxybenzylamine as an internal standard, deuterated isotopes, considered ideal internal standards, were used for the quantitation of indolamines. Because the angiotensin peptides are present in biological fluids at much lower concentrations than the previous two families of analytes, their analysis required the application of additional preconcentration techniques. In this work, the coupling of either of two types of electrophoretic preconcentration methods - field amplified injection (FAI) and isotachophoresis (ITP) - to capillary zone electrophoresis with both UV and MS detection was evaluated. 
Using FAI-CE-UV, angiotensins were detected at ~1 nM concentrations. Using similar conditions but TOFMS detection, the detection limits were below 10 nM. ITP was evaluated in both single-column and two-column comprehensive arrangements. The detection limits achieved for the ITP-based techniques were approximately one order of magnitude higher than for the FAI-based preconcentration. While the potential usefulness of these techniques was demonstrated using angiotensin standards, substantial additional research would be required to allow these approaches to be applied to plasma as part of clinical assays.
APA, Harvard, Vancouver, ISO, and other styles
31

Huang, Fei. "3D Time-lapse Analysis of Seismic Reflection Data to Characterize the Reservoir at the Ketzin CO2 Storage Pilot Site." Doctoral thesis, Uppsala universitet, Geofysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-301003.

Full text
Abstract:
3D time-lapse seismics, also known as 4D seismics, have great potential for monitoring the migration of CO2 at underground storage sites. This thesis focuses on time-lapse analysis of 3D seismic reflection data acquired at the Ketzin CO2 geological storage site in order to improve understanding of the reservoir and how CO2 migrates within it. Four 3D seismic surveys have been acquired to date at the site, one baseline survey in 2005 prior to injection, two repeat surveys in 2009 and 2012 during the injection period, and one post-injection survey in 2015. To accurately simulate time-lapse seismic signatures in the subsurface, detailed 3D seismic property models for the baseline and repeat surveys were constructed by integrating borehole data and the 3D seismic data. Pseudo-boreholes between and beyond well control were built. A zero-offset convolution seismic modeling approach was used to generate synthetic time-lapse seismograms. This allowed simulations to be performed quickly and limited the introduction of artifacts in the seismic responses. Conventional seismic data have two limitations, uncertainty in detecting the CO2 plume in the reservoir and limited temporal resolution. In order to overcome these limitations, complex spectral decomposition was applied to the 3D time-lapse seismic data. Monochromatic wavelet phase and reflectivity amplitude components were decomposed from the 3D time-lapse seismic data. Wavelet phase anomalies associated with the CO2 plume were observed in the time-lapse data and verified by a series of seismic modeling studies. Tuning frequencies were determined from the balanced amplitude spectra in an attempt to discriminate between pressure effects and CO2 saturation. Quantitative assessment of the reservoir thickness and CO2 mass were performed. 
Time-lapse analysis of the post-injection survey was carried out and the results showed a CO2 migration pattern consistent with the previous repeat surveys, but with a decrease in the size of the amplitude anomaly. No systematic anomalies above the caprock were detected. Analysis of the signal-to-noise ratio and seismic simulations using the detailed 3D property models were performed to explain the observations. The CO2 mass and the uncertainties in its estimation were investigated using two different approaches based on different velocity-saturation models.
APA, Harvard, Vancouver, ISO, and other styles
32

Song, Shin Miin, and shinmiin@singnet.com.sg. "Comprehensive two-dimensional gas chromatography (GC×GC) for drug analysis." RMIT University. Applied Sciences, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080627.114511.

Full text
Abstract:
Separation technologies have occupied a central role in the current practices of analytical methods used for drug analysis today. As the emphasis in contemporary drug analysis shifts towards ultra-trace concentrations, the contribution from unwanted matrix interferences takes on greater significance. In order to single out a trace substance with confidence from a rapidly expanding list of drug compounds (and their metabolites) in real complex specimens, analytical technologies must evolve to keep up with such trends. Today, the task of unambiguous identification in forensic toxicology still relies heavily upon chromatographic methods based on mass spectrometric detection, in particular GC-MS in electron ionisation (EI) mode. Although the combined informing power of (EI) GC-MS has served faithfully in a myriad of drug application studies to date, we may ask whether (EI) GC-MS will remain competitive in meeting the impending needs of ultra-trace drug analysis in the future. To what extent of reliability can sample clean-up strategies be used in ultra-trace analysis without risking the loss of important analytes of interest? The increasing use of tandem mass spectrometry with one-dimensional (1D) chromatographic techniques (e.g. GC-MS/MS), at its simplest, considers that single-column chromatographic analysis with mass spectrometry alone is not sufficient in providing unambiguous confirmation of the identity of any given peak, particularly when there is peak overlap. Where the mass spectra of the individual overlapping peaks are highly similar, confounding interpretation of their identities may arise. By introducing an additional resolution element in the chromatographic domain of a 1D chromatographic system, the informing power of the analytical system can also be effectively raised by the boost in resolving power from two chromatographic elements. 
Thus this thesis sets out to address the analytical challenges of modern drug analysis through the application of high-resolution comprehensive two-dimensional gas chromatography (GC×GC) to a series of representative drug studies of relevance to forensic sciences.
APA, Harvard, Vancouver, ISO, and other styles
33

Joseph, Joshua Allen Jr. "Computational Tools for Improved Analysis and Assessment of Groundwater Remediation Sites." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28458.

Full text
Abstract:
Remediation of contaminated groundwater remains a high-priority national goal in the United States. Water is essential to life, and new sources of water are needed for an expanding population. Groundwater remediation remains a significant technical challenge despite decades of research into this field. New approaches are needed to address the most severely-polluted aquifers, and cost-effective solutions are required to meet remediation objectives that protect human health and the environment. Source reduction combined with Monitored Natural Attenuation (MNA) is a remediation strategy whereby the source of contamination is aggressively treated or removed and the residual groundwater plume depletes due to natural processes in the subsurface. The USEPA requires long-term performance monitoring of groundwater at MNA sites over the remediation timeframe, which often takes decades to complete. Presently, computational tools are lacking to adequately integrate source remediation with economic models. Furthermore, no framework has been developed to highlight the tradeoff between the degree of remediation versus the level of benefit within a cost structure. Using the Natural Attenuation Software (NAS) package developed at Virginia Tech, a set of formulae has been developed for calculating the time of remediation (TOR) for petroleum-contaminated aquifers (specifically tracking benzene and MTBE) through statistical techniques. With the knowledge of source area residual saturation, groundwater velocity, and contaminant plume source length, the time to remediate a site contaminated with either benzene or MTBE can be determined across a range of regulatory maximum contaminant levels. After developing formulae for TOR, an integrated and interactive decision tool for framing the decision analysis component of the remediation problem was developed. 
While MNA can be a stand-alone groundwater remediation technology, significant benefits may be realized by layering a more traditional source zone remedial technique with MNA. Excavation and soil vapor extraction, when applied to the front end of a remedial action plan, can decrease the time to remediation and, while generally more expensive than an MNA-only approach, may accrue long-term economic advantages that would otherwise be foregone. The value of these research components can be realized within the engineering and science communities, as well as through government, business and industry, and communities where groundwater contamination and remediation are at issue. Together, these tools constitute the SEEPAGE paradigm, founded upon the concept of sound science for an environmental engineering, effectual economics, and public policy agenda. The TOR formulation simplifies the inputs necessary to determine the number of years that an MNA strategy will require before project closure and thus reduces the specialized skills and training required to perform a numerical analysis that for one set of conditions could require many hours of simulation time. The economic decision tool, which utilizes a life cycle model to evaluate a set of feasible alternatives, highlights the tradeoffs between time and economics that can be realized over the lifetime of the remedial project.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
34

STALLO, COSIMO. "Wireless technologies for future multi-gigabit communications beyond 60 GHz: design issues and performance analysis for terrestrial and satellite applications." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2010. http://hdl.handle.net/2108/1302.

Full text
Abstract:
Demand for very high-speed wireless communications is growing in proportion to the increasing data rates reachable by optical fibers. In fact, the emerging research trend in computer networks is to cut more and more cables and to provide mobile and nomadic users with a data rate at least comparable with that of wired Ethernet. The GbE standard is now widespread and the 10 GbE standard has been available since 2002. While established and well-known fiber-optic data-transfer devices can provide multigigabit per second data rates, infrastructure costs and deployment time can be too expensive for some applications. Wireless links can be used to bridge the gaps in the fiber network and they can be deployed very rapidly, without the need for costly and complex trenching actions. Multigigabit wireless applications will include fiber segment replacement in future 3G and 4G backhauls, in distributed antenna systems, in enterprise connectivity, and in consumer-level applications, such as HDTV. Future home and building environments are a domain where, in the coming decade, large quantitative and qualitative changes can be expected in services and applications that ultimately will benefit from multigigabit/s wireless communication. Therefore, the need for such high data rates arises both in short-range scenarios and in medium-long range scenarios. Where can the huge bandwidth needed for multigigabit wireless communications be made available as free spectrum, without interference issues? The only possibility is to look at the EHF (extremely high frequency) bands. Recently, there has been a lot of interest in the development of 60 GHz systems for indoor and outdoor applications, because this bandwidth has been allocated in many countries as free spectrum. However, because of the high propagation loss due to oxygen absorption in this band, it is not suitable for very long links. 
Further, the FCC has made available 13 GHz of spectrum in the 70-95 GHz range (away from the oxygen absorption band, in order to facilitate longer range communication) for semi-unlicensed use for directional point-to-point "last mile" links. However, above 60 GHz, both for long and short range, there is a lack of discussion on modulation, equalization, and algorithm design at the physical layer. This work mainly aims at investigating the possibility of using innovative and advanced radio interfaces, such as one based on the IR-UWB transmission technique, to realise multigigabit/s communications beyond 60 GHz. In particular, this work shows how an IR-UWB communication system is sensitive to typical hardware non-idealities beyond 60 GHz (phase noise, timing jitter, LNA and HPA distortions) and compares its performance with that of a more classical continuous-wave communication system based on FSK modulation. The exploitation of such higher frequencies represents the most suitable solution to develop a cooperative global information infrastructure in order to guarantee the so-called "Gigabit Connectivity" through aerospace links, making such a radio segment a potential "backbone on the air" for global wireless connectivity. Therefore, the use of "beyond Q/V bands" will be the necessary condition to develop a multipurpose network, as an integration of terrestrial and space systems, in order to support forthcoming high-data-rate service demands. The W band (75-110 GHz, corresponding to wavelengths of 4-2.7 mm) could represent the answer to these needs due to its high bandwidth availability, short wavelength, reduced interference, and small antenna size, making it possible to propose many innovative services that need high-volume data transfers. Currently, however, the performance behaviour of any solution for data transportation over W band frequencies across the Troposphere is still unknown, since no scientific and/or telecommunication mission has been realised, either on an experimental basis or in an operating mode. 
Therefore, missions in W band have to be studied in order to perform a first empirical evaluation of the Troposphere effects on the radio channel. Consequently, the last part of this work has been focused on the analysis and performance evaluation of future missions for the exploitation of the W band for satellite communications as well, aiming at the design of a full line of P/Ls operating in such a frequency range. The design and performance analysis of missions to perform a first empirical evaluation of the Troposphere effects on the W band radio channel represent the preliminary useful step for realising a "System of Systems" which is able to meet the high-quality data transmission requirements for a large number of end-users and data-oriented services.
APA, Harvard, Vancouver, ISO, and other styles
35

Ida, Björs. "Development of separation method for analysis of oligonucleotides using LC-UV/MS." Thesis, Uppsala universitet, Analytisk farmaceutisk kemi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-403381.

Full text
Abstract:
Introduction Oligonucleotides are short nucleic acid chains, usually 19-27 nucleotides long. They bind to their corresponding chain, making a specific inhibition possible. In pharmaceuticals, this can be used to inhibit the expression of a gene or protein of interest. Oligonucleotides are usually analyzed based on separation using both hydrophobic and ion-exchange properties. In this project, the possibility of using a mixed-mode column to separate oligonucleotides and their impurities was explored. Method Liquid chromatography is used as the separation method, with both mass spectrometric and UV detection. Three different columns are evaluated: C18, DNAPac RP, and mixed-mode RP/WAX. Results and discussion Different compositions of mobile phases and gradients are evaluated based on a literature study. Triethylamine, triethylammonium acetate, ammonium formate, and hexafluoroisopropanol are used along with both methanol and acetonitrile. Phosphate buffer is evaluated on LC-UV. The results from the C18 column display good separation of the oligonucleotides, whilst the DNAPac RP does not perform as well with the same mobile phases. The mixed-mode column provides good separation and selectivity using phosphate buffer and UV detection. Conclusion The mixed-mode column has the potential to be used for separation of oligonucleotides, and one future focus would be to make the mobile phase compatible with mass spectrometry. Phosphate buffer with UV detection seems to be the go-to combination for the mixed-mode column, even though MS is a more powerful tool for the characterization and identification of oligonucleotides. This hints at the challenge of making the mobile phase MS-compatible.
APA, Harvard, Vancouver, ISO, and other styles
36

Kunadian, Illayathambi. "NUMERICAL INVESTIGATION OF THERMAL TRANSPORT MECHANISMS DURING ULTRA-FAST LASER HEATING OF NANO-FILMS USING 3-D DUAL PHASE LAG (DPL) MODEL." UKnowledge, 2004. http://uknowledge.uky.edu/gradschool_theses/324.

Full text
Abstract:
Ultra-fast laser heating of nano-films is investigated using the 3-D Dual Phase Lag (DPL) heat transport equation with laser heating at different locations on the metal film. The energy absorption rate, which is used to model femtosecond laser heating, is modified to accommodate three-dimensional laser heating. A numerical solution based on an explicit finite-difference method is employed to solve the DPL equation. The stability criterion for selecting a time step size is obtained using von Neumann eigenmode analysis, and grid function convergence tests are performed. DPL results are compared with classical diffusion and hyperbolic heat conduction models and significant differences among these three approaches are demonstrated. We also develop an implicit finite-difference scheme of Crank-Nicolson type for solving the 1-D and 3-D DPL equations. The proposed numerical technique solves one equation, unlike other techniques available in the literature, which split the DPL equation into a system of two equations and then apply discretization. Stability is again assessed via von Neumann analysis. In 3-D, the discretized equation is solved using delta-form Douglas and Gunn time splitting. The performance of the proposed numerical technique is compared with the numerical techniques available in the literature.
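The explicit finite-difference approach described in this abstract can be sketched in one dimension. The DPL equation tau_q*T_tt + T_t = alpha*(T_xx + tau_T*T_xxt) is advanced with central differences in time and space; the nondimensional parameter values, grid, and Dirichlet boundaries below are illustrative assumptions, not the thesis's actual nano-film setup:

```python
import numpy as np

def dpl_step(T, T_prev, alpha, tau_q, tau_T, dx, dt):
    # One explicit step of the 1-D dual-phase-lag equation
    #   tau_q*T_tt + T_t = alpha*(T_xx + tau_T*T_xxt)
    # using central differences; end values are held fixed (Dirichlet).
    lap = lambda u: (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    L_now, L_prev = lap(T), lap(T_prev)
    a = tau_q / dt**2 + 1.0 / (2.0 * dt)
    rhs = (alpha * L_now
           + alpha * tau_T * (L_now - L_prev) / dt
           + tau_q * (2.0 * T - T_prev) / dt**2
           + T_prev / (2.0 * dt))
    T_next = rhs / a
    T_next[0], T_next[-1] = T[0], T[-1]   # overwrite the wrapped ends
    return T_next

# Gaussian temperature pulse on a unit rod, zero initial rate of change.
x = np.linspace(0.0, 1.0, 51)
T0 = np.exp(-((x - 0.5) / 0.1) ** 2)
T_prev, T = T0.copy(), T0.copy()
dt = 5e-5                                  # small enough for stability here
for _ in range(1000):
    T, T_prev = dpl_step(T, T_prev, alpha=1.0, tau_q=0.05, tau_T=0.1,
                         dx=x[1] - x[0], dt=dt), T
print(np.isfinite(T).all() and T.max() < T0.max())  # True: the pulse decays
```

The time step is constrained by a von Neumann-type stability restriction of the kind the abstract mentions; sufficiently larger steps make this explicit scheme blow up, which is the motivation for the implicit Crank-Nicolson variant.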
APA, Harvard, Vancouver, ISO, and other styles
37

Wilson, Walter. "Novel Developments on the Extraction and Analysis of Polycyclic Aromatic Hydrocarbons in Environmental Samples." Doctoral diss., University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6384.

Full text
Abstract:
This dissertation focuses on the development of analytical methodology for the analysis of polycyclic aromatic hydrocarbons (PAHs) in water samples. Chemical analysis of PAHs is of great environmental and toxicological importance. Many of them are highly suspect as etiological agents in human cancer. Among the hundreds of PAHs present in the environment, the U.S. Environmental Protection Agency (EPA) lists sixteen as "Consent Decree" priority pollutants. Their routine monitoring in environmental samples is recommended to prevent human contamination risks. A primary route of human exposure to PAHs is the ingestion of contaminated water. The rather low PAH concentrations in water samples make the analysis of the sixteen priority pollutants particularly challenging. Current EPA methodology follows the classical pattern of sample extraction and chromatographic analysis. The method of choice for PAH extraction and pre-concentration is solid-phase extraction (SPE). PAH determination is carried out via high-performance liquid chromatography (HPLC) or gas chromatography/mass spectrometry (GC/MS). When HPLC is applied to highly complex samples, EPA recommends the use of GC/MS to verify compound identification and to check peak-purity of HPLC fractions. Although EPA methodology provides reliable data, the routine monitoring of numerous samples via fast, cost effective and environmentally friendly methods remains an analytical challenge. Typically, 1 L of water is processed through the SPE device in approximately 1 h. The rather large water volume and long sample processing time are recommended to reach detectable concentrations and quantitative removal of PAHs from water samples. Chromatographic elution times of 30-60 min are typical and standards must be run periodically to verify retention times. If concentrations of targeted PAHs are found to lie outside the detector's response range, the sample must be diluted (or concentrated), and the process repeated. 
In order to prevent environmental risks and human contamination, the routine monitoring of the sixteen EPA-PAHs is not sufficient anymore. Recent toxicological studies attribute a significant portion of the biological activity of PAH-contaminated samples to the presence of high molecular weight (HMW) PAHs, i.e. PAHs with MW ≥ 300. Because the carcinogenic properties of HMW-PAHs differ significantly from isomer to isomer, it is of paramount importance to determine the most toxic isomers even if they are present at much lower concentrations than their less toxic isomers. Unfortunately, established methodology cannot always meet the challenge of specifically analyzing HMW-PAHs at the low concentration levels of environmental samples. The main problems that confront classic methodology arise from the relatively low concentration levels and the large number of structural isomers with very similar elution times and similar, possibly even virtually identical, fragmentation patterns. This dissertation summarizes significant improvements on various fronts. Its first original component deals with the unambiguous determination of four HMW-PAHs via laser-excited time-resolved Shpol'skii spectroscopy (LETRSS) without previous chromatographic separation. The second original component is the improvement of a relatively new PAH extraction method - solid-phase nanoextraction (SPNE) - which uses gold nanoparticles as the extracting material for PAHs. The advantages of the improved SPNE procedure are demonstrated for the analysis of EPA-PAHs and HMW-PAHs in water samples via GC/MS and LETRSS, respectively.
Ph.D.
Doctorate
Chemistry
Sciences
Chemistry
APA, Harvard, Vancouver, ISO, and other styles
38

Magron, Paul. "Reconstruction de phase par modèles de signaux : application à la séparation de sources audio." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0078/document.

Full text
Abstract:
Many audio signal processing methods operate on a Time-Frequency (TF) representation of the data. When the output of these algorithms is a magnitude spectrum, the question arises, in order to resynthesize a time-domain signal, of estimating the corresponding phase field. This is the case, for instance, in source separation applications, which estimate the spectrograms of the individual sources from the mixture; the so-called Wiener filtering method, widely used in practice, provides satisfactory results but fails when the sources overlap in the TF plane. This thesis addresses the problem of phase reconstruction of signals in the TF domain applied to audio source separation. A preliminary study reveals the need to develop new phase reconstruction techniques in order to improve source separation quality. We propose to base these on signal models. Our approach consists in exploiting information derived from models underlying the data, such as mixtures of sinusoids. Taking this information into account makes it possible to preserve certain desirable properties, such as temporal continuity or the precision of attacks. We integrate these constraints into mixture models for source separation, in which the phase of the mixture is exploited. The magnitudes of the sources may be assumed known, or jointly estimated in a model inspired by complex nonnegative matrix factorization. Finally, a probabilistic model of sources with non-uniform phase is developed. It makes it possible to exploit priors derived from signal modeling and to account for uncertainty about them. These methods are tested on numerous databases of realistic music signals.
Their performance, in terms of estimated signal quality and computation time, exceeds that of traditional methods. In particular, we observe a reduction of interference between estimated sources and a reduction of artifacts in the low frequencies, which confirms the relevance of signal models for phase reconstruction.
A variety of audio signal processing techniques act on a Time-Frequency (TF) representation of the data. When the result of those algorithms is a magnitude spectrum, it is necessary to reconstruct the corresponding phase field in order to resynthesize time-domain signals. For instance, in the source separation framework the spectrograms of the individual sources are estimated from the mixture; the widely used Wiener filtering technique then provides satisfactory results, but its performance decreases when the sources overlap in the TF domain. This thesis addresses the problem of phase reconstruction in the TF domain for audio source separation. From a preliminary study we highlight the need for novel phase recovery methods. We therefore introduce new phase reconstruction techniques that are based on music signal modeling: our approach consists in exploiting phase information that originates from signal models such as mixtures of sinusoids. Taking those constraints into account enables us to preserve desirable properties such as temporal continuity or transient precision. We integrate these into several mixture models where the mixture phase is exploited; the magnitudes of the sources are either assumed to be known, or jointly estimated in a complex nonnegative matrix factorization framework. Finally, we design a phase-dependent probabilistic mixture model that accounts for model-based phase priors. Those methods are tested on a variety of realistic music signals. They compare favorably with or outperform traditional source separation techniques in terms of signal reconstruction quality and computational cost. In particular, we observe a decrease in interference between the estimated sources and a reduction of artifacts in the low-frequency components, which confirms the benefit of signal model-based phase reconstruction methods.
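The temporal-continuity constraint this abstract mentions can be illustrated with a toy sketch (all parameter values below are invented for the example, not taken from the thesis): for a stationary sinusoidal partial, the STFT phase at the bin nearest the partial advances by 2πf·hop/fs between frames, so the phase field can be propagated forward from a single observed frame.

```python
import numpy as np

fs, f0 = 16000.0, 440.0          # sample rate and partial frequency (Hz), illustrative
hop, nfft = 256, 1024            # STFT hop and window size
x = np.sin(2 * np.pi * f0 * np.arange(int(fs)) / fs)  # 1 s stationary sinusoid

# Observed STFT phase at the bin closest to f0
win = np.hanning(nfft)
k = int(round(f0 * nfft / fs))
frames = range(0, len(x) - nfft, hop)
true_phase = np.array([np.angle(np.fft.rfft(x[i:i+nfft] * win)[k]) for i in frames])

# Model-based propagation: each frame advances the phase by 2*pi*f0*hop/fs
est_phase = np.empty_like(true_phase)
est_phase[0] = true_phase[0]     # initial phase taken from the observation
for n in range(1, len(est_phase)):
    est_phase[n] = est_phase[n-1] + 2 * np.pi * f0 * hop / fs

# Wrapped phase error stays tiny because the sinusoidal model is exact here
err = np.angle(np.exp(1j * (est_phase - true_phase)))
print(np.max(np.abs(err)))
```

Real mixtures require estimating the partial frequencies first, which is where the model-based machinery of the thesis comes in.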
APA, Harvard, Vancouver, ISO, and other styles
39

Monteiro, José Humberto Araújo 1981. "Resposta transitória no domínio do tempo de uma linha de transmissão trifásica considerando uma nova implementação do efeito pelicular = Time domain transient response analysis of three-phase transmission line considering a new skin effect model." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260915.

Full text
Abstract:
Advisors: José Pissolato Filho, Eduardo Coelho Marques da Costa
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Made available in DSpace on 2018-08-25T04:23:41Z (GMT). No. of bitstreams: 1 Monteiro_JoseHumbertoAraujo_D.pdf: 2599049 bytes, checksum: 247f68cef0523b44c3d1e9c69dc48119 (MD5) Previous issue date: 2014
Abstract: This work presents the development of a three-phase transmission line model using a new implementation of the skin effect, together with a study of the transient response obtained from this model when switching surges are simulated. The methodology traditionally used to compute the internal impedance of solid cylindrical conductors relies on Bessel functions, which makes it complex. The methodology described by Gatous is as accurate as the Bessel-function approach and has the advantage of being simpler, since the final solution is a summation whose precision depends on the frequency under study. The model developed in this work applies Gatous's methodology to a three-phase transmission line whose independent propagation modes are obtained by applying Clarke's matrix. To validate Gatous's methodology, the resistance and internal inductance of conductors with various radii were computed over a wide frequency range. The results were compared with those obtained from the traditional methodology. Gatous's methodology accurately reproduced the variation of the internal impedance with frequency. To evaluate the behavior of the three-phase transmission line model in the time domain, a base case was established. A 69 kV, single-circuit, three-phase transmission line was subjected to load switching in two distinct situations: switching at the zero crossing of the voltage, and switching at the 90° point of the voltage. The voltage and current transients obtained from the developed model were compared with those produced by the transient analysis software ATP. The results faithfully reproduce the transient behavior described by that software.
Abstract: This work presents the development of a three-phase transmission line model using a new skin effect calculation, and its transient response when switching surges are applied to it. The methodology commonly used to calculate the internal impedance of solid conductors with a cylindrical cross section employs Bessel functions, which makes it a hard task to accomplish. Gatous, in his doctoral work, presented a new method to calculate the skin effect impedance that is as accurate as the Bessel methodology, with the advantage of simplicity, since the final solution is an algebraic sum whose precision depends on the frequency studied. The transmission line model developed in this work uses Gatous's method for the skin effect impedance calculation in a three-phase transmission line, whose independent modes of propagation are obtained from the application of Clarke's matrix. In order to validate the mentioned methodology, internal resistances and inductances of cables with different radii were calculated over a wide range of frequencies. The results were compared with those obtained through the traditional method, correctly reproducing the variation of the internal impedance with frequency. A base case was established to evaluate the operation of the three-phase transmission line model in the time domain. A 69 kV, single-circuit, three-phase transmission line was subjected to load switching in two distinct situations: switching at the voltage zero crossing and switching at the voltage peak. Voltage and current transients were obtained from the developed model and compared with those derived from the transient analysis software ATP. The results faithfully reproduced the transient behavior described by the above software.
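The modal decoupling via Clarke's matrix mentioned in this abstract can be sketched as follows; the impedance values are illustrative, not taken from the thesis. For an ideally transposed line (equal self and mutual terms), the similarity transformation with Clarke's matrix diagonalizes the phase impedance matrix into independent propagation modes.

```python
import numpy as np

# Power-invariant Clarke (alpha-beta-zero) transformation matrix
T = np.sqrt(2.0 / 3.0) * np.array([
    [1.0,            -0.5,            -0.5],
    [0.0,   np.sqrt(3)/2,   -np.sqrt(3)/2],
    [1/np.sqrt(2), 1/np.sqrt(2), 1/np.sqrt(2)],
])

# Per-unit-length series impedance of an ideally transposed three-phase line:
# identical self terms Zs on the diagonal, identical mutual terms Zm elsewhere.
Zs, Zm = 0.35 + 1.2j, 0.15 + 0.55j       # ohm/km, invented values
Z = np.full((3, 3), Zm, dtype=complex)
np.fill_diagonal(Z, Zs)

# Similarity transformation yields the modal (decoupled) impedance matrix:
# two aerial modes Zs - Zm and one ground mode Zs + 2*Zm.
Zmode = T @ Z @ T.T
print(np.round(np.diag(Zmode), 4))
```

Each decoupled mode can then be treated as a single-phase line, which is how the thesis combines the modal approach with the per-conductor skin-effect impedance.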
Doctorate
Electrical Energy
Doctor in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
40

Holuša, Adam. "Návrh mostu na dálnici D48." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2019. http://www.nusl.cz/ntk/nusl-412919.

Full text
Abstract:
The diploma thesis deals with the design of a new bridge on motorway D48, located on the bypass of Frýdek-Místek. The total length of the bridge is 113 meters. The thesis includes 3 studies. A concrete box girder construction with 3 spans was chosen for further assessment. The bridge is built on falsework. The structural analysis includes the effects of staged construction using the TDA method. Load effects are solved in Scia Engineer 18.0. The assessment of the bridge was made according to EC.
APA, Harvard, Vancouver, ISO, and other styles
41

Parrish, Douglas K. "Application of solid phase microextraction with gas chromatography-mass spectrometry as a rapid, reliable, and safe method for field sampling and analysis of chemical warfare agent precursors /." Download the dissertation in PDF, 2005. http://www.lrc.usuhs.mil/dissertations/pdf/Parrish2005.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Lima, Simone Rodrigues. "Análise tempo-frequência de ondas acústicas em escoamentos monofásicos." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/18/18147/tde-04022011-152245/.

Full text
Abstract:
The main objective of this dissertation is to study acoustic propagation in single-phase flows. To this end, transient pressure signals provided by sensors installed at known positions along the test line are analyzed using signal analysis techniques, in order to investigate whether variations in the spectral content of the signals are influenced by the occurrence of leaks in the pipe. The signals were analyzed in the time, frequency, time-frequency, and statistical domains. The experimental results were obtained in the pilot pipeline of NETeF (Center of Thermal Engineering and Fluids) at the University of São Paulo, with a 1500-meter test section of 51.2 mm diameter carrying single-phase water flow. The results obtained with time-frequency analysis proved satisfactory: the technique is able to identify the instantaneous spectral composition of a signal, i.e., it was efficient in identifying frequency amplitude peaks along the time axis. In addition, the probabilistic analysis, based on the standard deviation of the signal, also proved efficient, showing a significant disparity between signals with and without leakage.
The present dissertation reports on the study of acoustic propagation in single-phase flow. It analyzes the transient signals provided by pressure sensors at known locations in the test line through signal analysis techniques, to investigate whether the variations in spectral content of the signals are influenced by the occurrence of leaks in the pipe. The analysis of the signals was performed in the time, frequency, time-frequency, and statistical domains. The experimental results were obtained in a 1500 meter-long and 51.2 millimeter-diameter pilot pipeline at the Center of Thermal Engineering and Fluids, with single-phase flow of water. The results obtained by time-frequency analysis were satisfactory, making it possible to identify the instantaneous spectral composition of a signal, i.e., the analysis was effective in identifying the frequency amplitude peaks along the time axis. Moreover, probabilistic analysis using the standard deviation of the signal was also efficient, displaying a significant disparity between the signals with and without leakage.
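The kind of time-frequency analysis described here can be illustrated with a standard spectrogram. In this toy sketch, a transient high-frequency burst, standing in for a leak-induced acoustic event, appears against a steady background tone; all signals, frequencies, and rates are invented for the example.

```python
import numpy as np
from scipy import signal

fs = 2000.0                                # Hz, illustrative sampling rate
t = np.arange(0, 4, 1/fs)

# Steady 80 Hz background tone plus a stronger 300 Hz burst between 2.0 s and 2.5 s
x = np.sin(2 * np.pi * 80 * t)
burst = (t > 2.0) & (t < 2.5)
x[burst] += 2 * np.sin(2 * np.pi * 300 * t[burst])

f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=192)

# Dominant frequency in each time slice: jumps from ~80 Hz to ~300 Hz during the event
peak_f = f[np.argmax(Sxx, axis=0)]
before = peak_f[tt < 2.0].mean()
during = peak_f[(tt > 2.05) & (tt < 2.45)].mean()
print(before, during)
```

Localizing when the dominant spectral content changes along the time axis is exactly the capability the abstract credits to the time-frequency approach.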
APA, Harvard, Vancouver, ISO, and other styles
43

Hargis, Brent H. "Analysis of Long-Term Utah Temperature Trends Using Hilbert-Huang Transforms." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/5490.

Full text
Abstract:
We analyzed long-term temperature trends in Utah using a relatively new signal processing method called Empirical Mode Decomposition (EMD). We evaluated the available weather records in Utah and selected 52 stations with records longer than 60 years for analysis. We analyzed daily temperature data, both minimums and maximums, using the EMD method, which decomposes non-stationary data (data with a trend) into periodic components and the underlying trend. Most decomposition algorithms require stationary data (no trend) with constant periods, and temperature data do not meet these constraints. In addition to identifying the long-term trend, we also identified other periodic processes in the data. While the immediate goal of this research is to characterize long-term temperature trends and identify periodic processes and anomalies, these techniques can be applied to any time series data. For example, this approach could be used to separate the effects of dams or other regulatory structures on river flow from natural flow, or to characterize the underlying trends, anomalies, and periodic fluctuations in other water quality data. If these periodic fluctuations can be associated with physical processes, the causes or drivers might be discovered, helping to better understand the system. We used EMD to separate and analyze long-term temperature trends, supporting a better evaluation of climate extremes and characterizing nonlinear, nonstationary behavior in the data. This research was successful and identified several areas in which it could be extended, including reconstruction of data for time periods with missing records. The analysis tool can be applied to various other time series records.
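The sifting step at the heart of EMD, subtracting the mean of spline envelopes drawn through the local extrema, can be sketched minimally. This is a toy one-IMF sift on a synthetic signal, not the full algorithm or data used in the thesis; the frequencies and amplitudes are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One EMD sifting pass: subtract the mean of the extrema envelopes."""
    up = [i for i in range(1, len(x)-1) if x[i] >= x[i-1] and x[i] > x[i+1]]
    lo = [i for i in range(1, len(x)-1) if x[i] <= x[i-1] and x[i] < x[i+1]]
    # anchor the splines at the endpoints to tame boundary behaviour
    up = [0] + up + [len(x) - 1]
    lo = [0] + lo + [len(x) - 1]
    upper = CubicSpline(t[up], x[up])(t)
    lower = CubicSpline(t[lo], x[lo])(t)
    return x - (upper + lower) / 2.0

t = np.linspace(0, 1, 2000)
fast = np.sin(2 * np.pi * 40 * t)        # oscillation the first IMF should capture
trend = 2 * np.sin(2 * np.pi * 1.5 * t)  # slow underlying "trend"
x = fast + trend

imf = sift_once(t, x)
for _ in range(4):                        # a few extra passes sharpen the IMF
    imf = sift_once(t, imf)
residual = x - imf                        # the slow trend is left behind

core = slice(100, -100)                   # ignore spline edge effects
print(np.corrcoef(imf[core], fast[core])[0, 1])
```

A production EMD repeats this extraction for successive IMFs with proper stopping criteria and boundary handling; the sketch only shows why the residual isolates the trend.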
APA, Harvard, Vancouver, ISO, and other styles
44

ONeal, Ryan J. "Seismic and Well Log Attribute Analysis of the Jurassic Entrada/Curtis Interval Within the North Hill Creek 3D Seismic Survey, Uinta Basin, Utah, A Case History." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/1025.

Full text
Abstract:
3D seismic attribute analysis of the Jurassic Entrada/Curtis interval within the North Hill Creek (NHC) survey has been useful in delineating reservoir quality eolian-influenced dune complexes. Amplitude, average reflection strength and spectral decomposition appear to be most useful in locating reservoir quality dune complexes, outlining their geometry and possibly displaying lateral changes in thickness. Cross sectional views displaying toplap features likely indicate an unconformity between Entrada clinoforms below and Curtis planar beds above. This relationship may aid the explorationist in discovering this important seismic interval. Seismic and well log attribute values were cross plotted and have revealed associations between these data. Cross plots are accompanied by regression lines and R2 values which support our interpretations. Although reservoir quality dune complexes may be delineated, the Entrada/Curtis play appears to be mainly structural. The best producing wells in the survey are associated with structural or stratigraphic relief and the thickest Entrada/Curtis intervals. Structural and stratigraphic traps are not always associated with laterally extensive dune complexes. Time structure maps as well as isochron maps have proven useful in delineating the thickest and/or gas prone portions of the Entrada/Curtis interval as well as areas with structural and stratigraphic relief. We have observed that the zones of best production are associated with low gamma ray (40-60 API) values. These low values are associated with zones of high amplitude. Thus, max peak amplitude as a seismic attribute may delineate areas of higher sand content (i.e. dune complexes) whereas zones of low amplitude may represent areas of lower sand content (i.e. muddier interdune or tidal flat facies). Lack of significant average porosity does not seem to be related to a lack of production. 
In fact, the best producing wells have been drilled in Entrada/Curtis intervals where average porosity is near 4 %. There are however zones within the upper portion of the Entrada/Curtis that are 40 ft. (12.2 m) thick and have porosities between 14% and 20%. By combining derived attribute maps with observed cross plot relationships, it appears that the best producing intervals within the Entrada/Curtis are those associated with high amplitudes, API values from 40-60 and structural relief.
APA, Harvard, Vancouver, ISO, and other styles
45

Oliveira, Venicio Soares de. "Aplicação do Método dos Elementos Finitos 3D na Caracterização Eletromagnética Estática de Motores de Relutância Variável com Validação Experimental." Universidade Federal do Ceará, 2013. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=9299.

Full text
Abstract:
None
This work presents a study of the static magnetization characteristics of a 1 hp VRRM (Variable Reluctance Rotating Motor), based on simulation of the machine design using 3D Finite Element Analysis (FEA) with tetrahedral and hexahedral meshes, in order to investigate which is best suited to this study. Three experimental methods were used to validate the finite-element design: determination of the impedance with AC voltage, the phase DC current rise-time method, and the phase DC current fall-time method. A comparative study was carried out to complete the validation. All simulation and measurement tasks were performed using a microcomputer. The design simulation used numerical simulation software with finite element analysis (CST STUDIO SUITE 2010) in three dimensions, employing tetrahedral and hexahedral meshes. For the measurement tasks, a data acquisition (DAQ) board was used, integrated with two interfaces, LabView and Signal Express, both developed by the manufacturer of the acquisition board (National Instruments), in order to determine the inductance per phase of the VRRM. From the inductance values obtained per phase, the flux linkage per phase was calculated. Graphs of flux linkage versus current and inductance profiles for seven positions are presented and compared with the FEA simulation. Tables showing the percentage differences of some values between the methods are presented and discussed. All methods were evaluated, highlighting their positive and negative aspects, limitations, and suggestions for improvement.
The machine studied was a 6/4 Variable Reluctance Rotating Motor (6 stator poles and 4 rotor poles), three-phase, 1 hp, with rated current of 10 A and speed of 2,000 rpm, designed by the Research Group on Automation and Robotics of the Department of Electrical Engineering, Federal University of Ceará.
This work presents a study on the static magnetization characteristics of a 1 hp VRRM (Variable Reluctance Rotating Motor), based on the simulation of the machine design using 3D Finite Element Analysis (FEA) with tetrahedral and hexahedral meshes, intending to investigate which mesh is best suited to this study. Three experimental methods were used to validate the design via finite elements: determining the impedance with AC voltage, the rise-time method of the phase DC current, and the fall-time method of the phase DC current. All simulation and measurement tasks were performed using a personal computer. The design simulation used numerical simulation software with finite element analysis (CST STUDIO SUITE 2010) in three dimensions, using both tetrahedral and hexahedral meshes. For measurement tasks, a data acquisition board (DAQ) integrated with two interfaces was used: LabView and Signal Express, both developed by the manufacturer of the acquisition board (National Instruments), in order to determine the inductance per phase of the VRRM. From the obtained values of inductance per phase, the flux linkage per phase was calculated. Graphs of flux linkage versus current and inductance profiles for seven positions are shown and compared with the FEA simulation. Tables showing the percentage differences of some values between the methods are presented and discussed. An evaluation of all methods was made, showing positive and negative aspects, limitations, and suggestions for improving them. The machine studied was a 6/4 Variable Reluctance Rotating Motor (6 stator poles and 4 rotor poles), three-phase, 1 hp, with rated current of 10 A and speed of 2,000 rpm, designed by the Research Group on Automation and Robotics, Department of Electrical Engineering, Federal University of Ceará.
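The entries above (this one and the transmission-line thesis) both contrast series approximations with the classical Bessel-function form of a round conductor's internal impedance. Gatous's series formulation is not reproduced here; as a reference point, the traditional Bessel form can be sketched as follows, with invented resistivity and radius values.

```python
import numpy as np
from scipy.special import iv   # modified Bessel functions of the first kind

def internal_impedance(f, radius, rho, mu=4e-7 * np.pi):
    """Per-metre internal impedance of a solid round conductor (classical Bessel form)."""
    gamma = np.sqrt(2j * np.pi * f * mu / rho)          # complex propagation constant
    ga = gamma * radius
    return (gamma * rho / (2 * np.pi * radius)) * iv(0, ga) / iv(1, ga)

rho, a = 1.72e-8, 5e-3             # copper resistivity (ohm*m) and a 5 mm radius
R_dc = rho / (np.pi * a**2)        # DC resistance per metre

Z_lo = internal_impedance(1.0, a, rho)    # skin depth >> radius: behaves like R_dc
Z_hi = internal_impedance(1e6, a, rho)    # skin depth << radius: resistance rises sharply
print(Z_lo.real / R_dc, Z_hi.real / R_dc)
```

The low-frequency limit recovering the DC resistance is a convenient sanity check for any alternative (series) implementation of the same impedance.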
APA, Harvard, Vancouver, ISO, and other styles
46

Bidaj, Klodjan. "Modélisation du bruit de phase et de la gigue d'une PLL, pour les liens séries haut débit." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0355/document.

Full text
Abstract:
Bit rates of high speed serial links (USB, SATA, PCI-express, etc.) have reached multiple gigabits per second and continue to increase. Two of the main electrical parameters used to characterize SerDes performance are the jitter transmitted at a given bit error rate and the receiver's ability to track jitter at a given bit error rate. Modeling the phase noise of the various SerDes components, and extracting and decomposing the time-domain jitter, would help circuit designers achieve better results for future SerDes versions. Generating synthetic jitter patterns from white or colored noise would allow better analysis of jitter effects in the system during the verification phase. The phase-locked loop is one of the contributors of random and deterministic clock jitter inside the system. This thesis presents a method for modeling the phase-locked loop with phase noise injection and estimation of the time-domain jitter. A time-domain model including the nonlinear effects of the loop was created to estimate this jitter. A novel method for generating synthetic jitter patterns with a Gaussian distribution from colored phase-noise profiles is proposed. The standards specify separate budgets for random and deterministic jitter. To decompose the jitter at the output of the phase-locked loop (or the jitter generated by the presented method), a new technique for analyzing and decomposing jitter is proposed. Modeling results correlate well with measurements, and this technique will help designers properly identify and quantify jitter sources and their impact on SerDes systems. We developed a method for specifying the phase-locked loop in terms of phase noise.
This method is applicable to any standard (USB, SATA, PCIe, …) and defines the phase-noise profiles of the different parts of the phase-locked loop, to ensure that the standard requirements are satisfied in terms of jitter. These models also allowed us to generate PLL specifications for different standards.
Bit rates of high speed serial links (USB, SATA, PCI-express, etc.) have reached multiple gigabits per second and continue to increase. Two of the major electrical parameters used to characterize SerDes integrated circuit performance are the transmitted jitter at a given bit error rate (BER) and the receiver's capacity to track jitter at a given BER. Modeling the phase noise of the different SerDes components, extracting the time jitter and decomposing it, would help designers achieve the desired Figure of Merit (FoM) for future SerDes versions. Generating white and colored noise synthetic jitter patterns would allow better analysis of the effect of jitter in a system for design verification. The phase locked loop (PLL) is one of the contributors of random and periodic clock jitter inside the system. This thesis presents a method for modeling the PLL with phase noise injection and estimating the time-domain jitter. A time-domain model including PLL loop nonlinearities is created in order to estimate jitter. A novel method for generating Gaussian-distribution synthetic jitter patterns from colored noise profiles is also proposed. Standards organizations specify random and deterministic jitter budgets. In order to decompose the PLL output jitter (or the generated jitter from the proposed method), a new technique for jitter analysis and decomposition is proposed. Modeling simulation results correlate well with measurements, and this technique will help designers properly identify and quantify the sources of deterministic jitter and their impact on the SerDes system. We have developed a method for specifying PLLs in terms of phase noise. This method works for any standard (USB, SATA, PCIe, …) and defines phase-noise profiles of the different parts of the PLL, in order to be sure that the standard requirements are satisfied in terms of jitter.
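The link between a phase-noise profile and a jitter budget discussed in this abstract is commonly made by integrating the single-sideband phase-noise profile over the offset band. A minimal sketch of that textbook relation follows; the profile, band, and clock frequency are invented, not from the thesis.

```python
import numpy as np

def rms_jitter(f, L_dBc, f0):
    """RMS jitter in seconds from a single-sideband phase-noise profile L(f) in dBc/Hz."""
    S_phi = 2.0 * 10.0 ** (np.asarray(L_dBc) / 10.0)    # rad^2/Hz, both sidebands
    # trapezoidal integration of the phase-noise power over the offset band
    var_phi = np.sum(0.5 * (S_phi[1:] + S_phi[:-1]) * np.diff(f))
    return np.sqrt(var_phi) / (2.0 * np.pi * f0)        # rad -> seconds

f0 = 1e9                                  # 1 GHz clock, illustrative
f = np.linspace(1e3, 1e6, 10000)          # integration band: 1 kHz to 1 MHz offset
L = np.full_like(f, -100.0)               # flat -100 dBc/Hz profile, invented

jit = rms_jitter(f, L, f0)
print(jit)                                # about 2.25 ps for this flat profile
```

For a flat profile the integral is analytic, which makes this a handy self-check before feeding measured phase-noise profiles into the same routine.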
APA, Harvard, Vancouver, ISO, and other styles
47

Muševič, Sašo. "Non-stationary sinusoidal analysis." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/123809.

Full text
Abstract:
Many types of signals we encounter every day belong to the category of non-stationary sinusoids. A large portion of these signals are sounds exhibiting a wide variety of characteristics: acoustic/electronic, harmonic/percussive instrumental sounds, speech/singing, and the mixture of all of these that we find in music. For decades the scientific community has studied and analyzed this type of signal. The main reason is the great usefulness of the scientific advances to a wide variety of areas, from medical, financial, and optical applications to radar or sonar processing, as well as system analysis. Accurate estimation of the parameters of non-stationary sinusoids is one of the most common tasks in digital signal processing, and therefore a fundamental and indispensable element for a wide variety of applications. Classical time-frequency transforms are appropriate only for signals with slowly varying amplitude and frequency. This assumption rarely holds in practice, which leads to quality degradation and the appearance of artifacts. Moreover, time and frequency resolution cannot be increased arbitrarily due to the well-known Heisenberg uncertainty principle. The main objective of this thesis is to review and improve existing methods for the analysis of non-stationary sinusoids, and also to propose new strategies and approaches. This dissertation contributes substantially to existing sinusoidal analysis: a) it critically evaluates the state of the art and describes existing analysis methods in great detail, b) it substantially improves some of the most promising existing methods, c) it proposes several new approaches for the analysis of existing sinusoidal models, and d) it proposes a very general and flexible sinusoidal model with a direct and fast analysis algorithm.
Many types of everyday signals fall into the non-stationary sinusoids category. A large family of such signals represents audio, including acoustic/electronic, pitched/transient instrument sounds, human speech/singing voice, and a mixture of all: music. Analysis of such signals has been in the focus of the research community for decades. The main reason for such intense focus is the wide applicability of the research achievements to medical, financial and optical applications, as well as radar/sonar signal processing and system analysis. Accurate estimation of sinusoidal parameters is one of the most common digital signal processing tasks and thus represents an indispensable building block of a wide variety of applications. Classic time-frequency transformations are appropriate only for signals with slowly varying amplitude and frequency content - an assumption often violated in practice. In such cases, reduced readability and the presence of artefacts represent a significant problem. Time and frequency resolution cannot be increased arbitrarily due to the well-known Heisenberg uncertainty principle. The main goal of this thesis is to revisit and improve existing methods for non-stationary sinusoidal analysis, and to propose new strategies and approaches. The dissertation contributes a critical evaluation of the state of the art, substantial improvements to some of the most promising existing methods, several new approaches to the analysis of existing sinusoidal models, and a very general and flexible sinusoidal model with a direct and fast analysis algorithm.
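One of the classic sinusoidal-parameter estimators this literature builds on is FFT peak picking refined by parabolic interpolation of the log-magnitude spectrum, which recovers a frequency between bins. The sketch below is a generic textbook baseline, not one of the thesis's algorithms; all values are illustrative.

```python
import numpy as np

fs, n = 8000.0, 4096
f_true = 1234.56                    # Hz, deliberately between FFT bins
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_true * t)

# Windowed FFT; the Hann window keeps nearby-bin leakage well behaved
spec = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(np.argmax(spec))

# Parabolic interpolation of the log-magnitude around the peak bin
a, b, c = np.log(spec[k-1]), np.log(spec[k]), np.log(spec[k+1])
delta = 0.5 * (a - c) / (a - 2*b + c)    # fractional-bin offset in [-0.5, 0.5]
f_est = (k + delta) * fs / n
print(f_est)
```

For a stationary sinusoid this already beats the raw bin resolution by orders of magnitude; the thesis is concerned with what happens once amplitude and frequency vary within the analysis frame.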
APA, Harvard, Vancouver, ISO, and other styles
48

Bacon, Philippe. "Graphes d'ondelettes pour la recherche d'ondes gravitationnelles : application aux binaires excentriques de trous noirs." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC113/document.

Full text
Abstract:
In December 2015 the LIGO detectors detected for the first time a gravitational wave emitted during the coalescence of a pair of black holes 1.3 billion years ago. This first in the brand-new field of gravitational astronomy was followed by several other observations. The most recent is the merger of two neutron stars, whose electromagnetic counterpart was observed by several observatories around the world; on this occasion, gravitational waves became part of multi-messenger astronomy. These observations were made possible by advanced data analysis techniques, thanks to which the weak imprint left by a gravitational wave in the detector data can be isolated. The work of this thesis is dedicated to developing a gravitational-wave detection technique that relies only on minimal knowledge of the signal to be isolated. More precisely, the development of this method consists in introducing information about the phase of the gravitational-wave signal according to a given astrophysical context. The first part of this thesis presents the method. In the second part, the method is applied to the search for gravitational-wave signals from stellar-mass binary black hole systems in Gaussian noise. The study is then repeated in detector noise collected during the first data-taking period. Finally, the third part is dedicated to the search for binary black holes whose orbit deviates from circular geometry, which complicates the signal morphology. Such orbits are called eccentric. This third analysis establishes first results for the proposed method when the signal of interest is poorly known.
In December 2015 the LIGO detectors first detected a gravitational wave emitted by a pair of coalescing black holes 1.3 billion years ago. Many more observations have been realised since then and have heralded gravitational waves as a new messenger in astronomy. The latest detection is the merger of two neutron stars, whose electromagnetic counterpart has been followed up by many observatories around the globe. These direct observations have been made possible by the development of advanced data analysis techniques, with which the weak gravitational-wave imprint in the detectors may be recovered. The work realised during this thesis aims at developing an existing gravitational-wave detection method which relies on minimal assumptions about the targeted signal. More precisely, it consists in introducing information on the signal phase depending on the astrophysical context. The first part is dedicated to a presentation of the method. The second part presents the results obtained when applying the method to the search for stellar-mass binary black holes in simulated Gaussian noise data. The study is repeated on real instrumental data collected during the first run of LIGO. Finally, the third part presents the method applied to the search for eccentric binary black holes. Their orbit exhibits a deviation from the quasi-circular case considered so far and thus complicates the signal morphology. This third analysis establishes first results with the proposed method in the case of a poorly modeled signal.
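Among the "advanced data analysis techniques" this abstract mentions, the standard baseline is matched filtering, which for white noise reduces to a sliding correlation with a template. The toy sketch below uses an invented chirp-like template and injection, not the wavelet-graph method or waveforms of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 4096, 8 * 4096                   # 8 s of data, illustrative rate

# Toy chirp-like template: frequency and amplitude rising toward "merger"
tt = np.arange(4096) / fs
template = np.sin(2 * np.pi * (30 * tt + 40 * tt**2)) * (0.2 + tt)

data = rng.normal(0.0, 1.0, n)           # white Gaussian "detector" noise
t0 = 5 * 4096                             # injection epoch (sample index)
data[t0:t0 + len(template)] += 3.0 * template

# Matched filtering against white noise reduces to normalized sliding correlation
snr = np.correlate(data, template, mode="valid") / np.sqrt(np.sum(template**2))
t_hat = int(np.argmax(np.abs(snr)))
print(t_hat, abs(snr[t_hat]))
```

The correlation peak recovers the injection time; minimal-assumption searches like the one in the thesis trade some of this peak signal-to-noise ratio for robustness against poorly modeled waveforms.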
APA, Harvard, Vancouver, ISO, and other styles
49

Hussain, Zahir M. "Adaptive instantaneous frequency estimation: Techniques and algorithms." Thesis, Queensland University of Technology, 2002. https://eprints.qut.edu.au/36137/7/36137_Digitised%20Thesis.pdf.

Full text
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (such as frequency-shift-keying signals) the approach of digital phase-locked loops (DPLLs) is considered; this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered; this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs, and in the last ten years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter; this realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed-point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed-point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered.
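The substitution at the core of the TDTL — a time delay in place of the Hilbert transformer — can be sketched in a few lines. The fragment below is an illustrative reconstruction, not code from the thesis; the function name is invented. The delayed sample is rotated back by the frequency-dependent shift ω·τ, so arctan phase detection recovers the instantaneous phase without any FIR Hilbert filter.

```python
import math

def tdtl_phase(x_now, x_delayed, omega, tau):
    """Arctan phase detection from a time-delayed sample.

    For s(t) = cos(theta) with theta = omega*t + phi:
        s(t - tau) = cos(theta)*cos(omega*tau) + sin(theta)*sin(omega*tau)
    so the quadrature component sin(theta) can be solved for
    algebraically -- no Hilbert transformer needed.  The phase
    shift omega*tau is signal-dependent, which is the key
    difference from the signal-independent 90-degree HT shift.
    Requires sin(omega*tau) != 0.
    """
    shift = omega * tau
    quad = (x_delayed - x_now * math.cos(shift)) / math.sin(shift)
    return math.atan2(quad, x_now)

# recover the phase of a 50 Hz tone sampled at t = 0.1 s
omega, phi, tau = 2 * math.pi * 50.0, 0.7, 0.003
theta = omega * 0.1 + phi
est = tdtl_phase(math.cos(theta), math.cos(theta - omega * tau), omega, tau)
```

Note that the delay need not correspond to an exact 90-degree shift: any ω·τ with a non-zero sine works, which is what lets the loop track the input frequency.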
This idea of replacing the HT by a time delay may be of interest in other signal-processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on that analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For the instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of multicomponent nature, and there is little in the literature concerning IF estimation of such signals; this is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only (one-dimensional), rather than the time-lag (two-dimensional) kernels of conventional quadratic TFDs; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
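A minimal non-parametric IF estimator in the spirit of Part-II can be sketched by peak-picking a short-time spectrum. This is only a crude stand-in for the adaptive quadratic-TFD algorithm (the thesis uses T-class distributions, not a plain windowed DFT), with invented names, but it shows the peak-tracking principle on a linear chirp.

```python
import math
import cmath

def if_by_tfd_peak(x, fs, win=64, hop=32):
    """Estimate the instantaneous frequency over time as the peak
    of a short-time spectrum: for each window, pick the DFT bin
    (below Nyquist) with the largest magnitude."""
    estimates = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win]
        best_k, best_mag = 0, -1.0
        for k in range(win // 2):
            z = sum(s * cmath.exp(-2j * math.pi * k * n / win)
                    for n, s in enumerate(seg))
            if abs(z) > best_mag:
                best_k, best_mag = k, abs(z)
        estimates.append(best_k * fs / win)   # bin index -> Hz
    return estimates

fs = 1000.0
# linear chirp: IF sweeps from 100 Hz to 200 Hz over one second
x = [math.cos(2 * math.pi * (100 * t + 50 * t * t))
     for t in (n / fs for n in range(1000))]
track = if_by_tfd_peak(x, fs)
```

The frequency resolution here is fs/win (about 16 Hz), which is exactly the kind of limitation that motivates higher-resolution TFDs with better energy concentration around the IF.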
APA, Harvard, Vancouver, ISO, and other styles
50

Khodor, Nadine. "Analyse de la dynamique des séries temporelles multi-variées pour la prédiction d’une syncope lors d’un test d’inclinaison." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S123/document.

Full text
Abstract:
Syncope is a sudden loss of consciousness. Although it is not usually fatal, it has an economic impact on the health-care system and on the personal lives of those who suffer from it. The purpose of this study is to reduce the duration of the clinical tilt test (approximately one hour) and to spare patients from developing syncope by predicting it early. The work fits into a data-mining approach combining feature extraction, feature selection, and classification. Three complementary approaches are proposed: the first exploits nonlinear analysis methods applied to time series extracted from signals acquired during the test; the second focuses on the time-frequency (TF) relations between cardiovascular signals and proposes new indexes; and the third, the most original, takes their temporal dynamics into account.
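The feature-extraction, feature-selection, and classification chain can be made concrete with a toy sketch. Everything below is a stand-in of our invention (pure Python, synthetic data): the features and the nearest-centroid classifier are not the indexes or classifiers actually used in the thesis, and the feature-selection stage is omitted.

```python
import math
import random

def extract_features(rr):
    """Toy features from an RR-interval series: mean, standard
    deviation, and RMSSD (a crude beat-to-beat variability index),
    standing in for the nonlinear and time-frequency indexes."""
    n = len(rr)
    mean = sum(rr) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in rr) / n)
    rmssd = math.sqrt(sum((rr[i + 1] - rr[i]) ** 2
                          for i in range(n - 1)) / (n - 1))
    return [mean, sd, rmssd]

def nearest_centroid(train_X, train_y, x):
    """Minimal classification stage: assign x to the class whose
    feature centroid is closest in Euclidean distance."""
    groups = {}
    for xi, yi in zip(train_X, train_y):
        groups.setdefault(yi, []).append(xi)
    best, best_d = None, float("inf")
    for label, rows in groups.items():
        centroid = [sum(col) / len(rows) for col in zip(*rows)]
        d = math.dist(centroid, x)
        if d < best_d:
            best, best_d = label, d
    return best

# synthetic cohort: syncope-prone subjects show larger RR variability
rng = random.Random(1)
def series(jitter, n=60):
    return [0.80 + rng.gauss(0, jitter) for _ in range(n)]

X = ([extract_features(series(0.01)) for _ in range(5)]
     + [extract_features(series(0.08)) for _ in range(5)])
y = [0] * 5 + [1] * 5                    # 0 = negative, 1 = syncope
pred = nearest_centroid(X, y, extract_features(series(0.08)))
```

In the real pipeline each stage is far richer (nonlinear series analysis, TF indexes, dynamics), but the flow from raw series to feature vector to class label is the same.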
APA, Harvard, Vancouver, ISO, and other styles