Dissertations on the topic "Analytical signal"

To see other types of publications on this topic, follow the link: Analytical signal.

Cite sources in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations for your research on the topic "Analytical signal".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate automatically the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the online abstract, if it is available in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Pai, Hung-Chuan. "Analytical methods for mixed signal processing systems /." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487949508368344.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Schiavi, Simona. "Homogenized and analytical models for the diffusion MRI signal." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX083/document.

Abstract:
Diffusion magnetic resonance imaging (dMRI) is an imaging modality that probes the diffusion characteristics of a sample via the application of magnetic field gradient pulses. More specifically, it encodes water displacement due to diffusion and is therefore a powerful tool for obtaining information on the tissue microstructure. The signal measured by the MRI scanner is a mean-value measurement over a physical volume, called a voxel, whose size, for technical reasons, is much larger than the scale of the microscopic variations of the cellular structure. It follows that the microscopic components of the tissues are not visible at the spatial resolution of dMRI; rather, their geometric features are aggregated into the macroscopic signal coming from the voxels. An important quantity measured in dMRI in each voxel is the Apparent Diffusion Coefficient (ADC), and it is well established from imaging experiments that, in the brain, in vivo, the ADC depends on the diffusion time. There is a large variety of macroscopic models for the ADC in the literature (phenomenological, probabilistic, geometrical, PDE-based, etc.), ranging from simple to complicated, each valid under a certain set of assumptions. The goal of this thesis is to derive simple (but sufficiently sound for applications) models starting from fine PDE modelling of diffusion at the microscopic scale, using homogenization techniques. In a previous work, the homogenized FPK model was derived from the Bloch-Torrey PDE under the assumption that the membrane permeability is small and the diffusion time is large. We first analyse this model and establish a convergence result to the well-known Kärger model as the magnetic pulse duration goes to 0. In that sense, our analysis shows that the FPK model is a generalisation of the Kärger one for the case of arbitrary duration of the magnetic pulses.
We also give a mathematically justified new definition of the diffusion time for the Kärger model (the one that provides the highest rate of convergence). The ADC for the FPK model is time-independent, which is not compatible with some experimental observations. Our next goal is therefore to correct this model for small so-called b-values, so that the resulting homogenized ADC is sensitive to both the pulse duration and the diffusion time. To achieve this goal, we employ a homogenization technique similar to that used for FPK, but include suitable time and gradient-intensity scalings for the range of b-values considered. Numerical simulations show that the new asymptotic model provides a very accurate approximation of the dMRI signal at low b-values. We also obtain analytical approximations of the asymptotic model (using short-time expansions of surface potentials for the heat equation and eigenvalue decompositions) that yield explicit formulas for the time dependence of the ADC. Our results are in concordance with classical ones in the literature, and we improve some of them by accounting for the pulse duration. Finally, we explore the inverse problem of determining qualitative information on the cell volume fractions from measured dMRI signals. While finding sphere distributions seems feasible from measurement of the whole dMRI signal, we show that the ADC alone would not be sufficient to obtain this information.
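The ADC discussed in this abstract is conventionally extracted from the attenuation of the dMRI signal at low b-values. As an illustration of that step only (the thesis works with homogenized PDE models; the mono-exponential fit, the function name, and all numbers below are assumptions for this sketch):

```python
import numpy as np

# Sketch: estimate the ADC from the standard mono-exponential attenuation
# S(b) = S0 * exp(-b * ADC) by a linear fit in log-space.
# All numbers are synthetic; they do not come from the thesis.
def estimate_adc(b_values, signals):
    """Return (ADC, S0) from a least-squares fit of log S against b."""
    slope, intercept = np.polyfit(b_values, np.log(signals), 1)
    return -slope, np.exp(intercept)

b = np.array([0.0, 200.0, 500.0, 1000.0])   # b-values in s/mm^2
true_adc = 1.0e-3                            # mm^2/s, a typical tissue value
s = 100.0 * np.exp(-b * true_adc)            # noiseless synthetic signal
adc, s0 = estimate_adc(b, s)
```

The thesis goes further: it models how this apparent coefficient depends on pulse duration and diffusion time, which the simple fit above cannot capture.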
3

Vitanov, Ivan. "Kernel-based fault diagnosis of inertial sensors using analytical redundancy." Thesis, Cranfield University, 2017. http://dspace.lib.cranfield.ac.uk/handle/1826/12741.

Abstract:
Kernel methods are able to exploit high-dimensional spaces for representational advantage, while only operating implicitly in such spaces, thus incurring none of the computational cost of doing so. They appear to have the potential to advance the state of the art in control and signal processing applications and are increasingly seeing adoption across these domains. Applications of kernel methods to fault detection and isolation (FDI) have been reported, but few in aerospace research, though they offer a promising way to perform or enhance fault detection. It is mostly in process monitoring, in the chemical processing industry for example, that these techniques have found broader application. This research work explores the use of kernel-based solutions in model-based fault diagnosis for aerospace systems. Specifically, it investigates the application of these techniques to the detection and isolation of IMU/INS sensor faults – a canonical open problem in the aerospace field. Kernel PCA, a kernelised non-linear extension of the well-known principal component analysis (PCA) algorithm, is implemented to tackle IMU fault monitoring. An isolation scheme is extrapolated based on the strong duality known to exist between probably the most widely practiced method of FDI in the aerospace domain – the parity space technique – and linear principal component analysis. The algorithm, termed partial kernel PCA, benefits from the isolation properties of the parity space method as well as the non-linear approximation ability of kernel PCA. Further, a number of unscented non-linear filters for FDI are implemented, equipped with data-driven transition models based on Gaussian processes - a non-parametric Bayesian kernel method. A distributed estimation architecture is proposed, which besides fault diagnosis can contemporaneously perform sensor fusion. It also allows for decoupling faulty sensors from the navigation solution.
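The kernel PCA monitoring idea described above can be sketched numerically: fit on nominal data, then flag samples whose reconstruction error in the kernel feature space is large. The following self-contained sketch uses plain kernel PCA with an RBF kernel on invented 2D data; it does not reproduce the thesis's partial kernel PCA isolation scheme, and every parameter here is an assumption:

```python
import numpy as np

# A minimal kernel-PCA novelty score: large reconstruction error in feature
# space suggests a fault. Data and parameters are hypothetical stand-ins for
# the IMU/INS sensor residuals considered in the thesis.
def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kpca(X, n_components=2, gamma=1.0):
    n = len(X)
    K = rbf(X, X, gamma)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]   # keep leading components
    alphas = vecs[:, idx] / np.sqrt(vals[idx])    # normalised dual coefficients
    return dict(X=X, K=K, alphas=alphas, gamma=gamma)

def novelty_score(model, x):
    """Feature-space reconstruction error of a single sample x."""
    X, K, gamma = model["X"], model["K"], model["gamma"]
    k = rbf(x[None, :], X, gamma).ravel()
    kc = k - K.mean(0) - k.mean() + K.mean()      # centred test kernel vector
    kxx = 1.0 - 2.0 * k.mean() + K.mean()         # centred k(x, x) for RBF
    y = model["alphas"].T @ kc                    # projections on components
    return kxx - (y ** 2).sum()

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 0.5, size=(60, 2))      # nominal operating data
model = fit_kpca(X_train)
nominal = novelty_score(model, np.array([0.1, -0.2]))
faulty = novelty_score(model, np.array([4.0, 4.0]))  # far from nominal cloud
```

In a monitoring setting, the score would be thresholded (e.g., from its empirical distribution on held-out nominal data) to declare a fault.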
4

Sun, Jingyuan. "Optimization of high-speed CMOS circuits with analytical models for signal delay." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0002/MQ43548.pdf.

5

Desmond, Allan Peter. "An analytical signal transform derived from the Walsh Transform for efficient detection of dual tone multiple frequency (DTMF) signals." Thesis, Bucks New University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.401474.

6

Pachnicke, Stephan [Verfasser]. "Fast Analytical Assessment of the Signal Quality in Transparent Optical Networks / Stephan Pachnicke." Aachen : Shaker, 2005. http://d-nb.info/1186576782/34.

7

MacKay, James D. "Analytical method for turbine blade temperature mapping to estimate a pyrometer input signal." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45797.

Abstract:

The purpose of this thesis is to develop a method to estimate local blade temperatures in a gas turbine for comparison with the output signal of an experimental pyrometer. The goal of the method is to provide a temperature measurement benchmark based on a knowledge of blade geometry and engine operating conditions. A survey of currently available methods is discussed, including both experimental and analytical techniques.

An analytical approach is presented as an example, using the output from a cascade flow solver to estimate local blade temperatures from local flow conditions. With the local blade temperatures, a grid is constructed which maps the temperatures onto the blade. A predicted pyrometer trace path is then used to interpolate temperature values from the grid, predicting the temperature history a pyrometer would record as the blade rotates through the pyrometer line of sight. Plotting the temperature history models a pyrometer input signal.
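The grid-sampling step described above can be sketched as follows; the temperature grid, the trace path, and the choice of bilinear interpolation are illustrative assumptions, not the thesis's cascade-solver-based implementation:

```python
import numpy as np

# Sketch: sample a blade temperature grid along a pyrometer trace path to
# model the input signal. Grid and path coordinates are invented.
def sample_along_path(temp_grid, path):
    """Bilinear interpolation of grid values at fractional (row, col) points."""
    out = []
    for r, c in path:
        r0, c0 = int(r), int(c)
        fr, fc = r - r0, c - c0
        t = (temp_grid[r0, c0] * (1 - fr) * (1 - fc)
             + temp_grid[r0 + 1, c0] * fr * (1 - fc)
             + temp_grid[r0, c0 + 1] * (1 - fr) * fc
             + temp_grid[r0 + 1, c0 + 1] * fr * fc)
        out.append(t)
    return np.array(out)

rows, cols = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
grid = 900.0 + 10.0 * rows + 5.0 * cols         # synthetic temperatures, K
trace = [(1.0, 1.0), (2.5, 3.5), (4.25, 6.0)]   # hypothetical trace path
signal = sample_along_path(grid, trace)          # modelled pyrometer history
```

Plotting `signal` against path position would give the modelled pyrometer trace the abstract refers to.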


Master of Science
8

Воргуль, О. В. "Approaches Half Band Filter Realization for Means FPGA." Thesis, NURE, MC&FPGA, 2019. https://mcfpga.nure.ua/conf/2019-mcfpga/10-35598-mcfpga-2019-015.

10

Wang, Liang. "Myocardial motion estimation from 2D analytical phases and preliminary study on the hypercomplex signal." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0140/document.

Abstract:
Different mathematical tools, such as multidimensional analytic signals, provide possibilities to calculate multidimensional phases and modules. However, little work can be found on multidimensional analytic signals that offer an appropriate extensibility for applications to both 2D and 3D medical data processing. In this thesis, starting from Hahn's 1D complex analytic signal, we aim to propose a multidimensional extension from 2D to a new 3D hypercomplex analytic signal in the framework of Clifford algebra. With the complex/hypercomplex analytic signals, we propose new 2D/3D medical image processing methods for the applications of ultrasound envelope detection and cardiac motion estimation. Firstly, a general representation of the 2D quaternion signal is proposed in the framework of Clifford algebra, and this idea is extended to generate a 3D hypercomplex analytic signal. The proposed method shows that the complex/hypercomplex 2D analytic signals, together with the 3D hypercomplex analytic signal, are equal to different combinations of the original signal and its partial and total Hilbert transforms, which means that the hypercomplex Clifford analytic signal can be calculated by the classical Fourier transform. Based on the proposed 3D Clifford analytic signal, an application to 3D ultrasound envelope detection is presented. The results show a contrast improvement of about 7% compared with 1D and 2D envelope detection methods. Secondly, this thesis proposes an approach based on the two spatial phases of the 2D analytic signal, applied to cardiac sequences. By combining the information of these phases issued from the analytic signals of two successive frames, we propose an analytical estimator for 2D local displacements. To improve the accuracy of the motion estimation, a local bilinear deformation model is used within an iterative estimation scheme.
This phase-based method allows the displacement to be estimated with subpixel accuracy and is robust to image intensity variations in time. Results from seven realistic simulated tagged magnetic resonance imaging (MRI) sequences show that our method is more accurate than state-of-the-art methods using the phase of the monogenic signal or classical optical-flow approaches. The motion estimation errors (end-point error) of the proposed method are reduced by about 33% compared with those of the tested methods. In addition, the frame-to-frame displacements are accumulated in time to allow for the calculation of myocardial point trajectories. Indeed, for trajectories estimated on two patients with infarcts, the trajectories of myocardial points belonging to pathological regions are clearly reduced in magnitude compared with the ones from normal regions. Myocardial point trajectories, estimated with our phase-based analytic signal approach, are therefore a good indicator of the local cardiac dynamics. Moreover, they are shown to be coherent with the estimated deformation of the myocardium.
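The 1D analytic signal underlying these constructions can be sketched directly: envelope (module) and instantaneous phase are read off from s + i H{s}, computed here via the FFT. The amplitude-modulated test signal below is invented for illustration:

```python
import numpy as np

# Sketch: 1D analytic signal via the FFT one-sided spectrum trick; the
# hypercomplex signals in the thesis generalise this construction to 2D/3D.
def analytic_signal(s):
    n = len(s)
    S = np.fft.fft(s)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0          # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(S * h)      # s + i * Hilbert(s)

t = np.linspace(0.0, 1.0, 512, endpoint=False)
carrier = np.cos(2 * np.pi * 60 * t)
envelope = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)   # slow amplitude modulation
z = analytic_signal(envelope * carrier)
recovered = np.abs(z)                               # detected envelope
phase = np.unwrap(np.angle(z))                      # instantaneous phase
```

For this band-limited periodic signal the detected envelope matches the true modulation; the phase-based motion estimator in the thesis exploits the same phase information, but in two spatial dimensions.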
11

Zhu, Xinjie, and 朱信杰. "START : a parallel signal track analytical research tool for flexible and efficient analysis of genomic data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2015. http://hdl.handle.net/10722/211136.

Abstract:
The Signal Track Analytical Research Tool (START) is a parallel system for analyzing large-scale genomic data. Currently, genomic data analyses are usually performed using custom scripts developed by individual research groups, and/or by the integrated use of multiple existing tools (such as BEDTools and Galaxy). The goals of START are 1) to provide a single tool that supports a wide spectrum of genomic data analyses commonly done by analysts; and 2) to greatly simplify these analysis tasks by means of a simple declarative language (STQL), with which users only need to specify what they want to do, rather than the detailed computational steps as to how the analysis task should be performed. START consists of four major components: 1) A declarative language called Signal Track Query Language (STQL), a SQL-like language we specifically designed to suit the needs of analyzing genomic signal tracks. 2) An STQL processing system built on top of a large-scale distributed architecture. The system is based on the Hadoop distributed storage and the MapReduce Big Data processing framework. It processes each user query using multiple machines in parallel. 3) A simple and user-friendly web site that helps users construct and execute queries, upload/download compressed data files in various formats, manage stored data, queries and analysis results, and share queries with other users. It also provides a complete help system, a detailed specification of STQL, and a large number of sample queries for users to learn STQL and try START easily. Private files and queries are not accessible by other users. 4) A repository of public data popularly used for large-scale genomic data analysis, including data from ENCODE and Roadmap Epigenomics, that users can use in their analyses.
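A core operation any signal-track query system must support is overlapping two interval tracks. The sketch below shows that operation in plain Python, not in STQL (whose syntax is specified on the START web site); the track contents are invented examples:

```python
# Sketch: intersect two sorted genomic signal tracks, where each record is
# (start, end, value). Intervals are half-open; data are invented examples.
def intersect_tracks(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo = max(a[i][0], b[j][0])
        hi = min(a[i][1], b[j][1])
        if lo < hi:                          # overlapping region found
            out.append((lo, hi, a[i][2], b[j][2]))
        if a[i][1] <= b[j][1]:               # advance the track that ends first
            i += 1
        else:
            j += 1
    return out

peaks = [(100, 200, 5.0), (300, 420, 2.5)]
genes = [(150, 350, "geneA"), (400, 500, "geneB")]
hits = intersect_tracks(peaks, genes)
```

A system like START would express this kind of join declaratively and execute it in parallel over Hadoop/MapReduce rather than in a single-machine loop.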
Doctor of Philosophy (Computer Science)
12

Sunnegårdh, Johan. "Combining analytical and iterative reconstruction in helical cone-beam CT." Licentiate thesis, Computer Vision, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8286.

Abstract:

Contemporary algorithms employed for reconstruction of 3D volumes from helical cone-beam projections are so-called non-exact algorithms. This means that the reconstructed volumes contain artifacts irrespective of the detector resolution and the number of projection angles employed in the process. In this thesis, three iterative schemes for suppression of these so-called cone artifacts are investigated.

The first scheme, iterative weighted filtered backprojection (IWFBP), is based on iterative application of a non-exact algorithm. For this method, artifact reduction, as well as spatial resolution and noise properties are measured. During the first five iterations, cone artifacts are clearly reduced. As a side effect, spatial resolution and noise are increased. To avoid this side effect and improve the convergence properties, a regularization procedure is proposed and evaluated.

In order to reduce the cost of the IWFBP scheme, a second scheme is created by combining IWFBP with the so-called ordered subsets technique, which we call OSIWFBP. This method divides the projection data set into subsets and operates sequentially on each of these in a certain order, hence the name “ordered subsets”. We investigate two different ordering schemes and numbers of subsets, as well as the possibility to accelerate cone artifact suppression. The main conclusion is that the ordered subsets technique indeed reduces the number of iterations needed, but that it suffers from the drawback of noise amplification.

The third scheme starts by dividing input data into high- and low-frequency parts, followed by non-iterative reconstruction of the high-frequency part and IWFBP reconstruction of the low-frequency part. This opens the possibility of acceleration by reducing the amount of data in the iterative part. The results show that a suppression of artifacts similar to that of the IWFBP method can be obtained, even if a significant part of the high-frequency data is non-iteratively reconstructed.
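The iterative-correction structure shared by these schemes can be sketched with a toy linear model, x <- x + B(p - A x), where A stands in for projection and B for a non-exact reconstruction operator. The random matrices below are illustrative assumptions, not a CT geometry:

```python
import numpy as np

# Toy analogue of iterative correction with a non-exact operator: iterate
# x <- x + B (p - A x), where B is only an approximate inverse of A.
rng = np.random.default_rng(2)
A = rng.normal(size=(80, 40)) / np.sqrt(80)               # forward model
B = np.linalg.pinv(A) + 0.02 * rng.normal(size=(40, 80))  # imperfect inverse
x_true = rng.normal(size=40)
p = A @ x_true                                            # measured data

x = B @ p                          # initial non-exact reconstruction
errors = [np.linalg.norm(x - x_true)]
for _ in range(10):
    x = x + B @ (p - A @ x)        # correct using the data residual
    errors.append(np.linalg.norm(x - x_true))
```

As long as the iteration matrix I - BA is a contraction, the residual correction drives the reconstruction error down, which mirrors the cone-artifact suppression observed in the first iterations of IWFBP.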

13

Hoang, Thang Nam. "Analytical methods for signal separation and localisation from single-trial event related potentials to investigate brain dynamics." Thesis, Liverpool John Moores University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402944.

14

Ruusunen, M. (Mika). "Signal correlations in biomass combustion – an information theoretic analysis." Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526201924.

Abstract:
Increasing environmental and economic awareness is driving the development of combustion technologies toward efficient biomass use and clean burning. To accomplish these goals, quantitative information about combustion variables is needed. However, for small-scale combustion units the existing monitoring methods are often expensive or complex. This study aimed to quantify correlations between flue gas temperatures and combustion variables, namely typical emission components, heat output, and efficiency. For this, data acquired from four small-scale combustion units and a large circulating fluidised bed boiler were studied. The fuel range varied from wood logs, wood chips, and wood pellets to biomass residue. The original signals and a defined set of their mathematical transformations were applied to the data analysis. In order to evaluate the strength of the correlations, a multivariate distance measure based on information theory was derived. The analysis further assessed time-varying signal correlations and relative time delays. Ranking of the analysis results was based on the distance measure. The uniformity of the correlations in the different data sets was studied by comparing the 10-quantiles of the measured signals. The method was validated with two benchmark data sets. The flue gas temperatures and the combustion variables measured carried similar information. The strongest correlations were mainly linear with the transformed signal combinations and explicable by combustion theory. Remarkably, the results showed uniformity of the correlations across the data sets for several signal transformations. This was also indicated by simulations using a linear model with a constant structure to monitor carbon dioxide in the flue gas. Acceptable performance was observed according to three validation criteria used to quantify the modelling error in each data set.
In general, the findings demonstrate that the presented signal transformations enable real-time approximation of the studied combustion variables. The potential of flue gas temperatures for monitoring the quality and efficiency of combustion allows development toward cost-effective control systems. Moreover, the uniformity of the presented signal correlations could enable straightforward replication of such systems. This would have a cumulative impact on the reduction of emissions and fuel consumption in small-scale biomass combustion.
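An information-theoretic distance between two signals, in the spirit of the measure the thesis derives, can be sketched with a histogram-based variation-of-information estimate. The exact measure in the thesis differs, and the signals below are synthetic; this is only an illustration of the idea that dependent signals are "close":

```python
import numpy as np

# Sketch: variation of information VI = H(X|Y) + H(Y|X), estimated from a
# joint histogram. Small values indicate strong statistical dependence.
def variation_of_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1), pxy.sum(0)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    mi = entropy(px) + entropy(py) - entropy(pxy.ravel())
    return entropy(pxy.ravel()) - mi     # equals H(X,Y) - I(X;Y)

rng = np.random.default_rng(1)
t = rng.normal(size=5000)                              # "flue gas temperature"
related = variation_of_information(t, 2.0 * t + 0.1 * rng.normal(size=5000))
unrelated = variation_of_information(t, rng.normal(size=5000))
```

A ranking of candidate signal transformations by such a distance is analogous to the ranking step described in the abstract.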
15

Lu, Chenxi. "Improving Analytical Travel Time Estimation for Transportation Planning Models." FIU Digital Commons, 2010. http://digitalcommons.fiu.edu/etd/237.

Abstract:
This dissertation aimed to improve travel time estimation for the purpose of transportation planning by developing a travel time estimation method that incorporates the effects of signal timing plans, which are difficult to consider in planning models. For this purpose, an analytical model was developed. The model parameters were calibrated based on data from CORSIM microscopic simulation, with signal timing plans optimized using the TRANSYT-7F software. Independent variables in the model are link length, free-flow speed, and traffic volumes from the competing turning movements. The developed model has three advantages compared to traditional link-based or node-based models. First, the model considers the influence of signal timing plans for a variety of traffic volume combinations without requiring signal timing information as input. Second, the model describes the non-uniform spatial distribution of delay along a link, making it possible to estimate the impacts of queues at different locations upstream of an intersection and to attribute delays to the subject link and the upstream link. Third, the model shows promise in improving the accuracy of travel time prediction. The mean absolute percentage error (MAPE) of the model is 13% for a set of field data from the Minnesota Department of Transportation (MDOT); this is close to the MAPE of the uniform delay in the HCM 2000 method (11%). The HCM is the accepted industry analytical model in the existing literature, but it requires signal timing information as input for calculating delays. The developed model also outperforms the HCM 2000 method for a set of Miami-Dade County data representing congested traffic conditions, with a MAPE of 29%, compared to 31% for the HCM 2000 method. The advantages of the proposed model make it feasible to apply to a large network without the burden of signal timing input, while improving the accuracy of travel time estimation.
An assignment model with the developed travel time estimation method has been implemented in a South Florida planning model, which improved assignment results.
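The MAPE criterion quoted in this abstract follows the standard definition; a minimal sketch (the travel times below are made up for illustration, not the MDOT field data):

```python
import numpy as np

def mape(observed, predicted):
    """Mean absolute percentage error, in percent."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((observed - predicted) / observed))

# Hypothetical link travel times (seconds): field observations vs. model output.
field = [120.0, 95.0, 210.0]
model = [132.0, 90.0, 200.0]
error = mape(field, model)  # ~6.7% for this toy data
```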
16

Louisell, William. "A Framework and Analytical Methods for Evaluation of Preferential Treatment for Emergency and Transit Vehicles at Signalized Intersections." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26820.

Abstract:
Preferential treatments are employed to provide preemption for emergency vehicles (EV) and conditional priority for transit vehicles at signalized intersections. EV preemption employs technologies and signal control strategies seeking to reduce emergency vehicle crash potential and response times. Transit priority employs the same technologies with signal control strategies seeking to reduce travel time and travel time variability. Where both preemption and transit technologies are deployed, operational strategies deconflict simultaneous requests. Thus far, researchers have developed separate evaluation frameworks for preemption and priority. This research addresses the issue of preemption and priority signal control strategies in breadth and depth. In breadth, this research introduces a framework that reveals planning interdependence and operational interaction between preemption and priority from the controlling strategy down to roadway hardware operation under the inclusive title: preferential treatment. This fills a current gap in evaluation. In depth, this research focuses on evaluation of EV preemption. There are two major analytical contributions resulting from this research. The first is a method to evaluate the safety benefits of preemption based on conflict analysis. The second is an algorithm, suitable for use in future traffic simulation models, that incorporates the impact of auto driver behavior into the determination of travel time savings for emergency vehicles operating on signalized arterial roadways. These two analytical methods are a foundation for future research that seeks to overcome the principal weakness of current EV preemption evaluation. Current methods, which rely on modeling and simulation tools, do not consider the unique auto driver behaviors observed when emergency vehicles are present.
This research capitalizes on data collected during a field operational test in Northern Virginia, which included field observations of emergency vehicles traversing signalized intersections under a wide variety of geometric, traffic flow, and signal operating conditions. The methods provide a means to quantify the role of EV preemption in reducing the number and severity of conflict points and the delay experienced at signalized intersections. This forms a critical basis for developing deployment and operational guidelines, and eventually, warrants.
Ph. D.
17

Wettstein, Christoph [Verfasser], and Ursula [Akademischer Betreuer] Wollenberger. "Cytochrome c-DNA and cytochrome c-enzyme interactions for the construction of analytical signal chains / Christoph Wettstein ; Betreuer: Ursula Wollenberger." Potsdam : Universität Potsdam, 2015. http://d-nb.info/1218399406/34.

18

Gopalappa, Chaitra. "Three Essays on Analytical Models to Improve Early Detection of Cancer." Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1647.

Abstract:
Development of approaches for early detection of cancer requires a comprehensive understanding of the cellular functions that lead to cancer, as well as implementing strategies for population-wide early detection. Cell functions are supported by proteins that are produced by active or expressed genes. Identifying cancer biomarkers, i.e., the genes that are expressed and the corresponding proteins present only in a cancer state of the cell, can lead to its use for early detection of cancer and for developing drugs. There are approximately 30,000 genes in the human genome producing over 500,000 proteins, thereby posing significant analytical challenges in linking specific genes to proteins and subsequently to cancer. Along with developing diagnostic strategies, effective population-wide implementation of these strategies is dependent on the behavior and interaction between entities that comprise the cancer care system, like patients, physicians, and insurance policies. Hence, obtaining effective early cancer detection requires developing models for a systemic study of cancer care. In this research, we develop models to address some of the analytical challenges in three distinct areas of early cancer detection, namely proteomics, genomics, and disease progression. The specific research topics (and models) are: 1) identification and quantification of proteins for obtaining biomarkers for early cancer detection (mixed integer-nonlinear programming (MINLP) and wavelet-based model), 2) denoising of gene values for use in identification of biomarkers (wavelet-based multiresolution denoising algorithm), and 3) estimation of disease progression time of colorectal cancer for developing early cancer intervention strategies (computational probability model and an agent-based simulation).
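The dissertation's MINLP and wavelet-based denoising models are not reproduced here; as a toy stand-in for the multiresolution-denoising idea, a one-level Haar soft-threshold filter in numpy (the data and threshold are illustrative assumptions):

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet soft-threshold denoising (signal length must be even)."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass (approximation) coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass (detail) coefficients
    # Soft-threshold the detail coefficients, where measurement noise concentrates.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # Inverse one-level Haar transform.
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

# Toy "gene expression" vector with small pairwise jitter; small details are removed,
# so each pair is replaced by its average.
noisy = np.array([1.0, 1.1, 4.0, 3.9, 2.0, 2.1, 0.5, 0.4])
clean = haar_denoise(noisy, threshold=0.2)
```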
19

NANATTUCHIRAYIL, VIJAYAN ANJALY. "Synthesis and Characterization of Nanoparticles for Sensing Applications." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627658763145713.

20

Tornador, Antolin Cristian 1979. "Prognosis and risk models of depression are built from analytical components of the rs-fMRI activity in patients." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/383067.

Abstract:
Depression is the most common type of emotional disorder among the world's population. It is characterized by negative sentiments, feelings of guilt, low self-esteem, a loss of interest, a high level of reflection, and in general by a decrease of the individual's psychic functions. New non-invasive neuroimaging techniques have increased the ability to study possible variations in patients' brain activity. In particular, functional magnetic resonance imaging (fMRI) has become the most important method for studying human brain function in the past two decades, being non-invasive and posing no risk to human health. Biswal and others in 1995, and later Lowe and his colleagues in 1998, showed the existence of continuous spontaneous activity in the brain at rest. These fluctuations have also been verified in other species such as macaques (Vincent JL et al., 2007). Studying the brain's activity at rest (rs-fMRI) by means of neuroimaging techniques has become a powerful tool for the investigation of diseases, since, on the one hand, it has demonstrated a better signal-to-noise ratio than task-based approaches, and, on the other hand, certain patients may have difficulties performing cognitive, language or motor tasks. However, it seems that because of certain inconsistencies found among studies, rs-fMRI techniques have not reached practical clinical use for personalised monitoring, prognosis or pre-diagnosis in individuals with depression. In this respect, even though Greicius MD set out the benefits of rs-fMRI techniques in 2008, he also commented that the signal-to-noise ratio remains to be improved before they can be used in clinical routine. Greicius suggested lengthening the resting-state time series and improving the analysis procedures. The aim of this thesis is to elucidate whether certain factors or components in the resting-state functional signal could be used at the clinical level.
In order to achieve this, we use rs-fMRI data on two sets of samples. In the first set, composed of 27 patients with major depression (MDD) and 27 control individuals, we design descriptors that capture both static and dynamic aspects of the resting-state signal for the construction of prediction models. With the second set of samples (48 twins), we analyse the relation between possible genetic and environmental factors which could explain certain depressive components in resting-state activity. On the one hand, the results show that depression could simultaneously affect different brain networks located in the prefrontal-limbic area, in the DMN, and between the frontoparietal lobes. Moreover, it seems that the alterations in these networks can be explained by both static and dynamic aspects of the resting signal. Finally, we create models that partially explain certain clinical phenomena present in depressive patients by means of global descriptors of these networks. These network descriptors could be used for personalised monitoring of patients with major depression. On the other hand, using the twin sample, we construct a risk model from amygdalar activity which evaluates the risk or predisposition of an individual from analytical components of the activity at rest. The cerebellum of this sample was also analysed, and the environment was found to possibly modify the activity in these regions.
21

Björk, Anders. "Chemometric and signal processing methods for real time monitoring and modeling : applications in the pulp and paper industry." Doctoral thesis, KTH, Kemi, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4383.

Abstract:
In the production of paper, the quality of the pulp is an important factor both for productivity and for the final product quality. Reliable real-time measurements of pulp quality are therefore needed. One way is to use acoustic or vibration sensors that give information-rich signals and to place the sensors at suitable locations in a pulp production line. However, these sensors are not selective for the pulp properties of interest. Therefore, advanced signal processing and multivariate calibration are essential tools. The current work has been focused on the development of calibration routes for extraction of information from acoustic sensors and on signal processing algorithms for enhancing the information-selectivity for a specific pulp property or class of properties. Multivariate analysis methods like Principal Component Analysis (PCA), Partial Least Squares (PLS), and Orthogonal Signal Correction (OSC) have been used for visualization and calibration. Signal processing methods like the Fast Fourier Transform (FFT), Fast Wavelet Transform (FWT) and Continuous Wavelet Transform (CWT) have been used in the development of novel signal processing algorithms for extraction of information from vibration/acoustic sensors. It is shown that the use of OSC combined with PLS for prediction of Canadian Standard Freeness (CSF), using FFT spectra produced from vibration data on a Thermo Mechanical Pulping (TMP) process, gives lower prediction errors and a more parsimonious model than PLS alone. The combination of FFT and PLS was also used for monitoring the beating of kraft pulp and for screen monitoring. When using regular FFT spectra of process acoustic data, the obtained information tends to overlap. To circumvent this, two new signal processing methods were developed: Wavelet Transform Multi Resolution Spectra (WT-MRS) and Continuous Wavelet Transform Fibre Length Extraction (CWT-FLE).
Applying WT-MRS gave PLS models that were more parsimonious, with lower prediction error for CSF, than using regular FFT spectra. For a Medium Consistency (MC) pulp stream, WT-MRS gave prediction errors comparable to the reference methods for CSF and brightness. The CWT-FLE method was validated against a commercial fibre length analyzer and good agreement was obtained. The CWT-FLE curves could therefore be used instead of other fibre distribution curves for process control. Further, the CWT-FLE curves were used for PLS modelling of tensile strength and optical parameters, with good results. In addition to the results mentioned, a comprehensive overview of technologies used with acoustic sensors and related applications has been compiled.
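The FFT-spectrum features that feed the PLS models in this thesis can be sketched as follows; the accelerometer frame below is synthetic, not process data, and the frame length and sampling rate are arbitrary choices:

```python
import numpy as np

def power_spectrum(frame, fs):
    """One-sided FFT power spectrum of a vibration frame, usable as PLS input features."""
    frame = np.asarray(frame, dtype=float)
    frame = frame - frame.mean()                       # remove DC offset
    spec = np.abs(np.fft.rfft(frame)) ** 2 / len(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return freqs, spec

# Hypothetical accelerometer frame: a 200 Hz tone in noise, sampled at 2 kHz.
fs = 2000.0
t = np.arange(1024) / fs
rng = np.random.default_rng(0)
frame = np.sin(2 * np.pi * 200.0 * t) + 0.1 * rng.standard_normal(t.size)
freqs, spec = power_spectrum(frame, fs)
peak_hz = freqs[np.argmax(spec)]                       # dominant frequency, near 200 Hz
```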
22

Kuna, Zdeněk. "Detekce komplexů QRS v signálech EKG." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218655.

Abstract:
This project deals with methods for constructing QRS detectors. It focuses on QRS complex detection in single leads and in the spatial velocity signal computed from three orthogonal leads. The theoretical part reviews various methods that lead to the detector design. Two algorithms (with constant and adaptive detection thresholds) were designed and implemented in a detector whose input signal is preprocessed by the Hilbert transform. The algorithms were further modified to improve detection performance, and their function was tested on all signals of the CSE database (leads V2, V5, aVF).
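The thesis' detectors are not reproduced here; as a minimal sketch of the Hilbert-envelope preprocessing with a constant detection threshold, applied to a synthetic trace with two spikes standing in for QRS complexes:

```python
import numpy as np

def envelope(x):
    """Signal envelope via the analytic signal (Hilbert transform computed with the FFT)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)                 # frequency-domain weights for the analytic signal
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)
    return np.abs(analytic)

def detect_qrs(ecg, fraction=0.5):
    """Toy constant-threshold detector: mark samples whose envelope exceeds
    a fixed fraction of the maximum envelope value."""
    env = envelope(ecg)
    return np.flatnonzero(env > fraction * env.max())

# Synthetic "ECG": flat baseline with two sharp spikes standing in for QRS complexes.
ecg = np.zeros(200)
ecg[50] = 1.0
ecg[150] = 1.0
beats = detect_qrs(ecg)
```

An adaptive variant would update the threshold from the envelope statistics of recent beats instead of using a global maximum.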
23

Cheaito, Ali. "Analytical analysis of in-band and out-of-band distortions for multicarrier signals : Impact of non-linear amplification, memory effects and predistortion." Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0001/document.

Abstract:
OFDM multicarrier techniques are now widely deployed in most wireless communication systems, in particular in cellular networks (LTE), broadcast networks (DVB) and WiFi networks. However, multi-carrier modulations are characterized by a very large dynamic amplitude, measured by the Peak to Average Power Ratio (PAPR), which prevents radio frequency designers from feeding the signal at the optimal point of the Power Amplifier (PA) and thus reduces the PA energy efficiency. In the literature, PAPR reduction and linearization techniques are the main approaches to solve the PAPR problem, the PA nonlinearities problem, as well as the low PA efficiency problem. The approach developed in this thesis was to study an intelligent solution for future implementations to control the PAPR reduction and linearization steps in a flexible way, according to some predefined parameters, so that they become adaptive and self-configurable. More specifically, our work focused on the analytical analysis of in-band distortions measured by the Error Vector Magnitude (EVM) and out-of-band distortions measured by the Adjacent Channel Power Ratio (ACPR) for clipped multicarrier signals, taking into account the impact of non-linear amplification, memory effects and predistortion. In particular, many analytical results complemented by simulation results to evaluate the EVM and ACPR are proposed. These analytical expressions depend on the PA characteristics, taking into account or not the PA memory effects and the use of clipping and pre-distortion techniques. It is worthwhile to note that our proposed theoretical analyses could be very useful for optimizing future transmitter efficiency and linearity in the field of broadcasting applications, for the deployment of DVB-T2 transmitters as well as for LTE cellular networks.
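The EVM and PAPR figures of merit discussed in this abstract can be illustrated on a clipped synthetic OFDM signal; the subcarrier count and clipping ratio below are arbitrary choices, not the thesis' settings:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def clip(x, ratio):
    """Amplitude clipping: limit |x| to ratio * RMS, preserving phase."""
    limit = ratio * np.sqrt(np.mean(np.abs(x) ** 2))
    return np.minimum(np.abs(x), limit) * np.exp(1j * np.angle(x))

def evm_percent(reference, distorted):
    """Error vector magnitude of the distorted signal relative to the reference, in percent."""
    err = distorted - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))

# Synthetic OFDM-like signal: IFFT of random QPSK subcarriers.
rng = np.random.default_rng(1)
symbols = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
ofdm = np.fft.ifft(symbols) * np.sqrt(256)
clipped = clip(ofdm, ratio=1.5)
evm = evm_percent(ofdm, clipped)   # clipping lowers PAPR but introduces in-band distortion
```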
24

Mansour, Ali. "Contribution à la séparation aveugle de sources." Grenoble INPG, 1997. http://www.theses.fr/1997INPG0012.

Abstract:
Source separation is a relatively recent problem in signal processing, which consists of separating statistically independent sources observed through an array of sensors. In this thesis, several approaches were studied. Two direct approaches, valid only for the instantaneous linear mixture, were proposed: the first, analytical, is based on the statistics of the observed signals; the other, geometric, is based on the distributions of these signals, whose probability density is assumed to have bounded support. For sources whose kurtosis shares the same sign, an adaptive algorithm based only on (2x2) cross-cumulants was proposed. This criterion is valid for instantaneous as well as convolutive mixtures. The assumption on the sign of the kurtosis is fairly common in the source separation literature; studies of this assumption, and of its relation to the nature of the sources, are presented in this thesis. Finally, drawing on blind identification methods and using two different parameterizations of the Sylvester matrix, we show the possibility of separating a convolutive mixture, or of transforming it into an instantaneous one, using second-order statistics. In this framework, three subspace algorithms are proposed.
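The sign-of-kurtosis assumption used in the thesis can be checked numerically: a uniform source is sub-Gaussian (negative excess kurtosis) while a Laplacian source is super-Gaussian (the samples below are toy data, not the thesis' signals):

```python
import numpy as np

def excess_kurtosis(x):
    """Normalized fourth-order cumulant: E[x^4]/E[x^2]^2 - 3 (zero for a Gaussian)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

rng = np.random.default_rng(2)
uniform_src = rng.uniform(-1, 1, 100_000)   # sub-Gaussian: excess kurtosis ~ -1.2
laplace_src = rng.laplace(0, 1, 100_000)    # super-Gaussian: excess kurtosis ~ +3
k_u = excess_kurtosis(uniform_src)
k_l = excess_kurtosis(laplace_src)
```

Algorithms that assume a common kurtosis sign would accept the pair (uniform, uniform) but not the mixed pair (uniform, Laplacian).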
25

Young, Andrew Coady. "A Consensus Model for Electroencephalogram Data Via the S-Transform." Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etd/1424.

Abstract:
A consensus model combines statistical methods with signal processing to create a better picture of a family of related signals. In this thesis, we consider 32 signals produced by a single electroencephalogram (EEG) recording session. The consensus model is produced by taking the S-Transform of the individual signals and normalizing each to unit energy. A bootstrapping process is then used to produce a consensus spectrum, which leads to the consensus model via the inverse S-Transform. The method is applied to both a control and an experimental EEG to show how the results can be used in clinical settings to analyze experimental outcomes.
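The unit-energy normalization and bootstrap-averaging steps described above can be sketched as follows; the random matrix stands in for the 32 S-Transform spectra, which are not reproduced here:

```python
import numpy as np

def consensus_spectrum(spectra, n_boot=200, rng=None):
    """Bootstrap consensus of per-channel spectra, each normalized to unit energy.
    `spectra` is (channels, bins); returns the mean of bootstrap-resampled averages."""
    if rng is None:
        rng = np.random.default_rng(0)
    s = np.asarray(spectra, dtype=float)
    s = s / np.sqrt((s ** 2).sum(axis=1, keepdims=True))   # unit energy per channel
    n = s.shape[0]
    boots = np.empty((n_boot, s.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resample channels with replacement
        boots[b] = s[idx].mean(axis=0)
    return boots.mean(axis=0)

# Hypothetical stand-in for 32 EEG channel spectra (e.g., S-Transform magnitudes).
rng = np.random.default_rng(3)
spectra = np.abs(rng.standard_normal((32, 64)))
consensus = consensus_spectrum(spectra, rng=rng)
```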
26

Xu, Su Huai. "Random analytic signals." Thesis, University of Macau, 2009. http://umaclib3.umac.mo/record=b1944056.

27

Júnior, Alcebíades Dal Col. "Visual analytics via graph signal processing." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-22102018-112358/.

Abstract:
The classical wavelet transform has been widely used in image and signal processing, where a signal is decomposed into a combination of basis signals. By analyzing the individual contribution of the basis signals, one can infer properties of the original signal. This dissertation presents an overview of the extension of classical signal processing theory to graph domains. Specifically, we review the graph Fourier transform and graph wavelet transforms, both of which are based on spectral graph theory, and explore their properties through illustrative examples. The main features of the spectral graph wavelet transforms are presented using synthetic and real-world data. Furthermore, we introduce in this dissertation a novel method for visual analysis of dynamic networks, which relies on graph wavelet theory. Dynamic networks naturally appear in a multitude of applications from different domains. Analyzing and exploring dynamic networks in order to understand and detect patterns and phenomena is challenging, fostering the development of new methodologies, particularly in the field of visual analytics. Our method enables the automatic analysis of a signal defined on the nodes of a network, making the detection of network properties viable. Specifically, we use a fast approximation of the graph wavelet transform to derive a set of wavelet coefficients, which are then used to identify activity patterns on large networks, including their temporal recurrence. The wavelet coefficients naturally encode spatial and temporal variations of the signal, leading to an efficient and meaningful representation. This method allows for the exploration of the structural evolution of the network and its patterns over time. The effectiveness of our approach is demonstrated using different scenarios and comparisons involving real dynamic networks.
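The graph Fourier transform reviewed in this dissertation diagonalizes the graph Laplacian; a minimal numpy sketch on a toy path graph (not one of the dissertation's networks):

```python
import numpy as np

def graph_fourier(adjacency, signal):
    """Graph Fourier transform: project a node signal onto the eigenvectors
    of the combinatorial Laplacian L = D - A."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)       # eigh: L is symmetric
    return eigvals, eigvecs.T @ np.asarray(signal, dtype=float)

# Toy path graph on 4 nodes carrying a constant signal.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
freqs, coeffs = graph_fourier(A, np.ones(4))
# A constant signal is "all low frequency": only the coefficient on the
# zero-eigenvalue (constant) eigenvector is nonzero.
```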
28

Sanghani, Aditya Deepak. "QUANTIFICATION OF BLOOD FLOW VELOCITY USING COLOR SENSING." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1490.

Abstract:
Blood flow velocity is an important parameter that can give information on several pathologies including atherosclerosis, glaucoma, Raynaud’s phenomenon, and ischemic stroke [2,5,6,10]. Present techniques for measuring blood flow velocity involve expensive procedures such as Doppler echocardiography, Doppler ultrasound, and magnetic resonance imaging [11,12], which cost from $8,500 to $20,000. A low-cost yet equally effective solution for measuring blood flow velocity is therefore desirable, and this thesis aims to create a proof-of-concept device for measuring it. Finger blood flow velocity is investigated in this project; the close proximity of the finger’s arteries to the skin makes it a practical selection. A Red Green Blue (RGB) color sensor is integrated with an Arduino Uno microcontroller to analyze color on skin. The initial analysis used the red RGB values to measure heart rate; this was performed to validate the sensor. This test achieved results similar to an experimental control, with measurement error ranging from 0% to 6.67%. The main analysis measured blood flow velocity using two RGB color sensors. The range of velocity found was 5.20 cm/s to 12.22 cm/s, with an average of 7.44 cm/s. This compared well with the ranges found in published data, which varied from 4 cm/s to 19 cm/s. However, there is an error associated with the device that affects the accuracy of the results. The apparatus can only collect data between sensors every 102-107 ms, so there is a maximum sampling error of 107 ms. The average finger blood flow velocity of 7.44 cm/s may therefore actually lie between 6.17 cm/s and 9.39 cm/s. In addition, mean squared error analysis found that the most likely time difference between pulses among those found is 739 ms, which corresponds to 5.21 cm/s.
Although there is error in the system, the heart-rate tests, together with the obtained range and average for finger blood velocity, demonstrate a viable method for analyzing blood flow velocity. Finger blood velocity was examined far more economically than with the traditional methods costing $8,500-$20,000: the cost of this entire thesis was $99.66, at most 1.17% of that cost.
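The velocity and error-bound arithmetic in the abstract can be reproduced in a few lines. The sensor spacing below (3.87 cm) is a hypothetical value chosen so that the numbers match the reported 7.44 cm/s average; the thesis does not state it here.

```python
# Sketch: blood flow velocity from transit time between two color sensors,
# with bounds implied by the ~107 ms worst-case sampling delay.
SENSOR_SPACING_CM = 3.87   # hypothetical distance between the two sensors
SAMPLING_ERROR_S = 0.107   # worst-case inter-sensor sampling delay (seconds)

def velocity_cm_s(transit_time_s):
    """Velocity of the pulse wave travelling between the two sensors."""
    return SENSOR_SPACING_CM / transit_time_s

def velocity_bounds(transit_time_s, err_s=SAMPLING_ERROR_S):
    """Interval of velocities consistent with the sampling error."""
    return (SENSOR_SPACING_CM / (transit_time_s + err_s),
            SENSOR_SPACING_CM / (transit_time_s - err_s))

dt = SENSOR_SPACING_CM / 7.44   # transit time implied by the 7.44 cm/s average
lo, hi = velocity_bounds(dt)
print(round(velocity_cm_s(dt), 2), round(lo, 2), round(hi, 2))
```

With this assumed spacing, the lower bound lands on the 6.17 cm/s quoted in the abstract, and the upper bound close to the quoted 9.39 cm/s.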
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Saghafi, Abolfazl. "Real-time Classification of Biomedical Signals, Parkinson’s Analytical Model". Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6946.

Повний текст джерела
Анотація:
The reach of technological innovation continues to grow, changing all industries as it evolves. In healthcare, technology increasingly plays a role in almost all processes, from patient registration to data monitoring, from lab tests to self-care tools. The increase in the amount and diversity of generated clinical data requires the development of new technologies and procedures capable of integrating and analyzing this big data, as well as providing support in its interpretation. To that end, this dissertation focuses on the analysis and processing of biomedical signals, specifically brain and heart signals, using advanced machine learning techniques: the design and implementation of automatic biomedical signal pre-processing and monitoring algorithms, the design of novel feature extraction methods, and the design of classification techniques for specific decision-making processes. In the first part of this dissertation, Electroencephalogram (EEG) signals recorded at 14 different locations on the scalp are utilized to detect random eye state changes in real time. In summary, the cross-channel maximum and minimum is used to monitor the real-time EEG signal across the 14 channels. Upon detection of a possible change, Multivariate Empirical Mode Decomposition splits the last two seconds of the signal into narrow-band Intrinsic Mode Functions. Common Spatial Pattern is then employed to create discriminating features for classification. Logistic Regression, Artificial Neural Network, and Support Vector Machine classifiers could all detect the eye state change with 83.4% accuracy in less than two seconds. The detection accuracy increases to 88.2% by extracting relevant features from the Intrinsic Mode Functions and feeding them directly to the classification algorithms.
Our approach takes less than two seconds to detect an eye state change, a significant improvement with promising real-life applications when compared to the slow and computationally intensive instance-based classification algorithms proposed in the literature. Increasing the number of training examples could further improve the accuracy of our analytic algorithms. We also employ the proposed method to detect the three different dance moves that honey bees perform to communicate the location of a food source; the results are significantly better than alternative methods in the literature in terms of both accuracy and run time. The last chapter of the dissertation presents collaborative research on Parkinson's disease. As a Parkinson’s Progression Markers Initiative (PPMI) investigator, I had access to the vast database of The Michael J. Fox Foundation for Parkinson's Research. We utilized the available data to study the heredity factors leading to Parkinson's disease using Maximum Likelihood and Bayesian approaches. Through sophisticated modeling, we combined information from healthy individuals and those diagnosed with Parkinson's disease (PD) with historical data on their grandparents' families to draw Bayesian estimates of the chances of developing PD in five types of families: families with a negative history of PD (type 1), and families with a positive history in which none of the parents (type 2), one of the parents (types 3 and 4), or both of the parents (type 5) carried the disease. The results show that for families with a negative history of PD the prevalence is estimated at 20%, meaning that a child in such a family has a 20% chance of developing Parkinson's. If there is a positive history of PD in the family, the chance increases to 33% when neither parent had PD and to 44% when both parents had the disease.
The chance of developing PD in a family in which only the mother is diagnosed with the disease is estimated at 26%, compared to 31% when only the father is diagnosed.
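The cross-channel monitoring step described above (the max-min spread across the 14 EEG channels, thresholded to flag a possible eye-state change) can be sketched as follows; the threshold and toy samples are illustrative, not values from the dissertation.

```python
# Sketch: flag candidate eye-state changes from the cross-channel spread
# of a 14-channel EEG stream. Real detection would then hand the flagged
# window to the decomposition and classification stages.

def cross_channel_spread(sample):
    """Max-min spread across channels for one time sample."""
    return max(sample) - min(sample)

def flag_candidates(samples, threshold):
    """Indices of samples whose cross-channel spread exceeds the threshold."""
    return [i for i, s in enumerate(samples) if cross_channel_spread(s) > threshold]

# Toy stream of 14-channel samples; sample 2 simulates a large deflection.
stream = [
    [0.1] * 14,
    [0.1] * 13 + [0.2],
    [0.1] * 13 + [2.5],   # large frontal deflection in one channel
    [0.1] * 14,
]
print(flag_candidates(stream, threshold=1.0))
```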
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Li, Ting. "Contributions to Mean Shift filtering and segmentation : Application to MRI ischemic data." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00768315.

Повний текст джерела
Анотація:
Medical studies increasingly use multi-modality imaging, which produces multidimensional data that bring additional information but are also challenging to process and interpret. For example, in predicting salvageable tissue, ischemic studies combining multiple MRI modalities (DWI, PWI) have produced more conclusive results than studies using a single modality. However, the multi-modality approach requires more advanced algorithms to perform otherwise regular image processing tasks such as filtering, segmentation and clustering. A robust method for addressing the problems associated with multi-modality imaging data is Mean Shift, which is based on feature space analysis and non-parametric kernel density estimation and can be used for multi-dimensional filtering, segmentation and clustering. In this thesis, we sought to optimize the Mean Shift process by analyzing the factors that influence it and optimizing its parameters. We examine the effect of noise on processing the feature space and how Mean Shift can be tuned for optimal de-noising and reduced blurring. The large success of Mean Shift is mainly due to the intuitive tuning of the bandwidth parameters, which describe the scale at which features are analyzed. Based on univariate Plug-In (PI) bandwidth selectors for kernel density estimation, we propose a bandwidth matrix estimation method based on multivariate PI selection for Mean Shift filtering. We study the merits of diagonal versus full bandwidth matrices through experiments on synthesized and natural images. We also propose a new, automatic volume-based segmentation framework that combines Mean Shift filtering, Region Growing segmentation and Probability Map optimization. The framework was developed using synthesized MRI images as test data and yielded a perfect segmentation, with DICE similarity values reaching the highest possible value of 1.
Testing was then extended to real MRI data obtained from animals and patients, with the aim of predicting the evolution of the ischemic penumbra several days after the onset of ischemia using only information from the very first scan. The results obtained are an average DICE of 0.8 for the animal MRI scans and 0.53 for the patients' MRI scans; the reference images in both cases were manually segmented by a team of medical experts. In addition, the most relevant combination of parameters for the MRI modalities is determined.
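A minimal one-dimensional sketch of the mean shift iteration underlying the filtering framework above, assuming a Gaussian kernel; the scalar bandwidth stands in for the bandwidth matrix whose Plug-In selection the thesis studies.

```python
# Sketch: 1-D mean shift with a Gaussian kernel. Each point is iteratively
# moved to the kernel-weighted mean of the data, converging to a density
# mode; points sharing a mode form one cluster/segment.
import math

def mean_shift_point(x, data, bandwidth, iters=50):
    """Iterate the mean shift update until x converges to a density mode."""
    for _ in range(iters):
        weights = [math.exp(-((x - d) / bandwidth) ** 2 / 2.0) for d in data]
        x = sum(w * d for w, d in zip(weights, data)) / sum(weights)
    return x

data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]   # two clusters around 1 and 5
modes = sorted({round(mean_shift_point(x, data, bandwidth=0.5), 1) for x in data})
print(modes)
```

A too-large bandwidth would merge the two modes into one (over-smoothing), which is exactly the trade-off bandwidth selection addresses.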
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Fisher, Julia Marie. "Classification Analytics in Functional Neuroimaging: Calibrating Signal Detection Parameters." Thesis, The University of Arizona, 2015. http://hdl.handle.net/10150/594646.

Повний текст джерела
Анотація:
Classification analyses are a promising way to localize signal, especially scattered signal, in functional magnetic resonance imaging data. However, there is not yet a consensus on the most effective analysis pathway. We explore the efficacy of k-Nearest Neighbors classifiers on simulated functional magnetic resonance imaging data. We utilize a novel construction of the classification data. Additionally, we vary the spatial distribution of signal, the design matrix of the linear model used to construct the classification data, and the feature set available to the classifier. Results indicate that the k-Nearest Neighbors classifier is not sufficient under the current paradigm to adequately classify neural data and localize signal. Further exploration of the data using k-means clustering indicates that this is likely due in part to the amount of noise present in each data point. Suggestions are made for further research.
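A minimal k-nearest-neighbors classifier of the kind evaluated above can be written from scratch; the toy 2-D features and labels are invented for illustration.

```python
# Sketch: k-NN classification by majority vote among the k closest
# training points (squared Euclidean distance).
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (features, label); returns majority label of k nearest."""
    by_dist = sorted(train, key=lambda fl: sum((a - b) ** 2 for a, b in zip(fl[0], query)))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "noise"), ((0.1, 0.2), "noise"), ((0.2, 0.1), "noise"),
         ((1.0, 1.0), "signal"), ((1.1, 0.9), "signal"), ((0.9, 1.1), "signal")]
print(knn_predict(train, (0.95, 1.05)))
print(knn_predict(train, (0.05, 0.05)))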
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Wiggins, Bryan Blake. "Using Induced Signals to Develop a Position-Sensitive Microchannel Plate Detector." Thesis, Indiana University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10686059.

Повний текст джерела
Анотація:

A novel concept to provide position-sensitivity to a microchannel plate (MCP) is described. While several designs exist to make MCPs position sensitive, all these designs are based upon collection of the electrons. In contrast, this approach utilizes an induced signal as the electron cloud emanates from an MCP and passes a wire plane. We demonstrate the validity of the concept by constructing a device that provides single electron detection with 98 μm position resolution (FWHM) over an area of 50 mm × 50 mm. The characteristics of the detector are described through both bench-top tests and simulation. After characterization of the detector, the sense wire detector was utilized for slow-neutron radiography. Furthermore, we utilized our knowledge of position-sensitive techniques to realize a beam-imaging MCP detector useful for radioactive beam facilities.
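One common way to turn induced wire-plane signals into a position, consistent with the approach described above, is a charge-weighted centroid; the wire pitch and amplitudes below are illustrative, not the detector's actual geometry.

```python
# Sketch: reconstruct a hit position from the signal the electron cloud
# induces on several sense wires, via a charge-weighted centroid.

def centroid_position(wire_positions_mm, amplitudes):
    """Charge-weighted centroid of the induced signal across the wire plane."""
    total = sum(amplitudes)
    return sum(x * a for x, a in zip(wire_positions_mm, amplitudes)) / total

wires = [0.0, 1.0, 2.0, 3.0, 4.0]         # wire positions (mm)
induced = [0.05, 0.60, 1.00, 0.55, 0.05]  # induced amplitudes around x ~ 2 mm
print(round(centroid_position(wires, induced), 3))
```

Because the induced charge spreads over several wires, the centroid interpolates between wires, which is how sub-pitch (here, sub-millimetre) resolution becomes possible.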

Стилі APA, Harvard, Vancouver, ISO та ін.
33

Kocian, Ondřej. "Detekce komplexů QRS s využitím vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217974.

Повний текст джерела
Анотація:
This project investigates methods of constructing a wavelet-based QRS-complex detector. QRS-complex detection is important because it enables automatic heart-rate calculation and, in some cases, ECG signal compression. A QRS detector can be designed in many ways; in this project only a few variants were considered and subsequently tested. The designed detector uses a wavelet-based decomposition of the original ECG signal into several frequency bands. These bands are transformed to absolute values, and the positions of presumed QRS complexes are marked with the help of a threshold. The presumed positions from all bands are then compared with one another: if a position is confirmed in at least one nearby band, it is marked as a true QRS complex. To increase the detector's efficiency, two modifications were additionally considered. The first, using the signal envelope, had a rather negative effect on the detector's performance. The second, using a combined signal from three pseudo-orthogonal leads, conversely had a very positive effect. Finally, the designed detector and all its modifications were tested on signals from the CSE library (specifically on leads II, V2 and V6).
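The detector logic described above can be sketched with simple difference filters standing in for the wavelet decomposition (the thesis uses true wavelet bands); thresholds and the toy ECG are illustrative.

```python
# Sketch: rectify each "band", threshold it, and keep a candidate position
# only if a nearby band confirms it within a small tolerance.

def band_filter(signal, lag):
    """Crude band-pass stand-in: rectified differenced signal at a given lag."""
    return [abs(signal[i] - signal[i - lag]) for i in range(lag, len(signal))]

def candidates(band, threshold, offset):
    return {i + offset for i, v in enumerate(band) if v > threshold}

def detect_qrs(signal, threshold=0.5, tol=2):
    b1 = candidates(band_filter(signal, 1), threshold, 1)
    b2 = candidates(band_filter(signal, 2), threshold, 2)
    # keep a position only if the other band agrees within `tol` samples
    confirmed = {i for i in b1 if any(abs(i - j) <= tol for j in b2)}
    return sorted(confirmed)   # a full detector would merge adjacent hits

ecg = [0.0] * 10 + [0.1, 1.0, 0.1] + [0.0] * 10   # one QRS-like spike at index 11
print(detect_qrs(ecg))
```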
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Feuillet, Thomas. "Développement de capteurs optimisés pour l'IRM à champ magnétique faible (0.2T) : application à l'imagerie de l'animal." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10302/document.

Повний текст джерела
Анотація:
L'imagerie par résonance magnétique (IRM) appliquée au domaine vétérinaire exploite des systèmes à bas champ magnétostatique qui ont de nombreux avantages, notamment leur faible coût d'achat et d'entretien. Mais sur ces machines, les capteurs radiofréquence (RF) sont initialement dédiées à l'homme et ne permettent pas une qualité d'image optimale. Dans le cadre de cette thèse, des méthodes simples d'optimisation de capteurs à 0,2 T ont été développées, puis exploitées pour des applications de recherche et préclinique. Le travail d'optimisation a été partagé en deux axes. Dans un premier temps, un modèle analytique a été développé sous MATLAB pour l'estimation du rapport signal sur bruit intrinsèque à un capteur paramétré par ses dimensions et les propriétés de l'objet imagé. La validation du modèle a été obtenue par la comparaison entre mesures et simulations du facteur de qualité. Cette méthode d'optimisation a été appliquée pour deux études spécifiques qui ont fait l'objet d'une publication. Dans un second temps, un travail sur le découplage actif a été mené. En effet, sur l'IRM 0,2 T à notre disposition, le découplage passif est la méthode retenue par le constructeur. Mais pour certaines applications des artefacts d'imagerie sont inévitables et le facteur de qualité réduit. Des moyens de découplage actif ont donc été développés. Les performances des capteurs ainsi équipés se sont avérées meilleures qu'en découplage passif. Ce système de découplage associé à un dispositif de connexion par couplage inductif du signal de résonance magnétique a été également démontré à 3 T comme une preuve de concept d'un dispositif de connexion universelle. Ce dispositif a fait l'objet d'un article récemment soumis pour publication
Magnetic resonance imaging (MRI) in veterinary practice employs low magnetostatic field devices which have numerous advantages such as their low maintenance and initial cost. Yet, the radiofrequency (RF) coils commercially provided with these devices are dedicated to human morphology, therefore reducing image quality. In this work, simple optimization methods for 0.2 T RF coils were developed for implementation in research and preclinical studies. The optimization protocol was subdivided into two main steps. First, an analytical model was developed using MATLAB in order to estimate how the intrinsic signal-to-noise ratio varies with coil and imaged-sample characteristics. The model was validated by comparing simulated and measured quality factors. The use of the analytical model for two specific studies was described in a recently accepted publication. Second, active decoupling was investigated. Indeed, passive decoupling is the decoupling method implemented on the 0.2 T MR device at our disposal, but this technique can lack efficiency in some experiments, inducing imaging artifacts and a reduced quality factor. An active decoupling method was therefore implemented. The electronic performance of the coils equipped this way was better than with passive decoupling. This active decoupling device, combined with an inductive coupling connecting system, was also tested at 3 T to demonstrate the technical feasibility of a new universal connecting device, for which an article was recently submitted.
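The quality-factor comparison used to validate the analytical model can be sketched for a series-resonant coil, where Q = 2πf₀L/R; the component values below are assumptions for illustration, not the parameters of the 0.2 T coils in the thesis.

```python
# Sketch: compare a simulated and a measured quality factor for a coil
# resonating at the ~8.5 MHz proton Larmor frequency of a 0.2 T magnet.
import math

def quality_factor(f0_hz, inductance_h, resistance_ohm):
    """Q = 2*pi*f0*L / R for a series-resonant coil."""
    return 2 * math.pi * f0_hz * inductance_h / resistance_ohm

f0 = 8.5e6                                  # ~0.2 T proton Larmor frequency
Q_sim = quality_factor(f0, 250e-9, 0.060)   # hypothetical simulated coil
Q_meas = quality_factor(f0, 250e-9, 0.066)  # measured, with extra losses
print(round(Q_sim), round(Q_meas))
```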
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Djedidi, Oussama. "Modélisation incrémentale des processeurs embarqués pour l'estimation des caractéristiques et le diagnostic." Electronic Thesis or Diss., Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0639.

Повний текст джерела
Анотація:
Les systèmes-sur-puce (Systems on Chip, SoC) sont de plus en plus embarqués dans des systèmes à risque comme les systèmes aéronautiques et les équipements de production d’énergie. Cette évolution technologique permet un gain de temps et de performance, mais présente des limites en termes de fiabilité et de sécurité. Ainsi, le développement d’outils de surveillance et de diagnostic des systèmes électroniques embarqués, en particuliers les SoC, est devenu l’un des verrous scientifiques à lever pour assurer une large utilisation de ces systèmes dans les équipements à risque en toute sécurité. Ce travail de thèse s’inscrit dans ce contexte, et a pour objectif le développement d’une approche de détection et identification des dérives des performances des SoC embarqués. L’approche proposée est basée sur un modèle incrémental, construit à partir de modules réutilisables et échangeables pour correspondre à la large gamme de SoC existants sur le marché. Le modèle est ensuite utilisé pour estimer un ensemble de caractéristiques relatives à l’état de fonctionnement du SoC. L’algorithme de diagnostic développé dans ce travail consiste à générer des indices de dérives par la comparaison en ligne des caractéristiques estimées à celles mesurées. L’évaluation des résidus et la prise de décision sont réalisées par des méthodes statistiques appropriées à la nature de chaque indice de dérive. L’approche développée a été validée expérimentalement sur des SoC différents, ainsi que sur un démonstrateur développé dans le cadre de ce travail. Les résultats expérimentaux obtenus, montrent l’efficacité et la robustesse de l’approche développée
Systems on Chip are increasingly embedded in safety-critical systems, such as aeronautical systems and energy production equipment. Such technological evolution allows for significant improvements in performance but presents limits in terms of reliability and security. Therefore, the development of new tools for the monitoring and diagnosis of embedded electronic systems, Systems on Chip in particular, is currently one of the scientific challenges to overcome in order to ensure a broader and safer use of these systems in safety-critical equipment. The work presented in this thesis aims to develop an approach for detecting and identifying drifts in the characteristics and performance of embedded Systems on Chip. The proposed approach is based on an incremental model built from reusable and exchangeable modules able to accommodate the broad range of Systems on Chip available on the market. This model is then used to estimate a set of characteristics relating to the operating state of the SoC. The diagnostic algorithm developed in this work consists of generating drift signals through the online comparison of the estimated characteristics to those measured. The assessment of residuals and decision making are then performed by statistical methods appropriate to the nature of each drift. The developed approach has been experimentally validated on different Systems on Chip, as well as on a demonstrator developed as part of this work. The experimental results obtained validate and show the efficiency and robustness of the incremental model and the monitoring algorithm.
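The residual-generation and decision step described above can be sketched as a comparison of measured and model-estimated characteristics with a simple z-score test; the thesis applies statistical methods matched to each drift index, so the fixed threshold and toy data here are only illustrative.

```python
# Sketch: residual = measured - estimated characteristic; a residual is
# flagged as a drift when it deviates from the nominal (baseline) residual
# statistics by more than a z-score threshold.
import statistics

def drift_flags(residuals, baseline_n, z_threshold=3.0):
    """Flag residuals that deviate from the nominal (baseline) behaviour."""
    base = residuals[:baseline_n]
    mu, sigma = statistics.mean(base), statistics.stdev(base)
    return [abs(r - mu) / sigma > z_threshold for r in residuals[baseline_n:]]

# Residuals for one characteristic (e.g. estimated vs measured temperature).
residuals = [0.1, -0.2, 0.0, 0.15, -0.1, 0.05,   # nominal behaviour
             0.1, 1.9]                            # last sample drifts
print(drift_flags(residuals, baseline_n=6))
```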
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Gupta, Damayanti. "WLAN signal characteristics in an indoor environment - an analytic model and experiments." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2876.

Повний текст джерела
Анотація:
Thesis (M.S.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Dept. of Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Odani, Motoi. "A Bayesian meta-analytic approach for safety signal detection in randomized clinical trials." 京都大学 (Kyoto University), 2017. http://hdl.handle.net/2433/225514.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Alm, Erik. "Solving the correspondence problem in analytical chemistry : Automated methods for alignment and quantification of multiple signals." Doctoral thesis, Stockholms universitet, Institutionen för analytisk kemi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-74556.

Повний текст джерела
Анотація:
When applying statistical data analysis techniques to analytical chemical data, all variables must correspond across the samples dimension for the analysis to generate meaningful results. Peak shifts in NMR and chromatography destroy that correspondence and create data matrices that have to be aligned before analysis. In this thesis, new methods are introduced that allow for automated transformation from unaligned raw data to aligned data matrices in which each column corresponds to a unique signal. These methods are based on linear multivariate models for the peak shifts and the Hough transform for establishing the parameters of these linear models. Methods for quantification under difficult conditions, such as crowded spectral regions, noisy data and unknown peak identities, are also introduced, including automated peak selection and a robust method for background subtraction. This thesis focuses on the processing of the data; the experimental work is secondary and is not discussed in great detail. All the developed methods are put together in a full procedure that takes us from raw data to a table of concentrations in a matter of minutes. The procedure is applied to 1H-NMR data from biological samples, one of the toughest alignment tasks in analytical chemistry. It is shown that the procedure performs consistently on the same level as much more labor-intensive manual techniques such as Chenomx NMRSuite spectral profiling. Several kinds of datasets are evaluated using the procedure. Most of the data come from the field of metabolomics, where the goal is to establish the concentrations of as many small molecules as possible in biological samples.
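The Hough-transform idea described above can be sketched for a single linear shift model: each (reference peak, observed peak) pair votes for the (a, b) parameters consistent with shift = a·position + b, and the most-voted cell gives the alignment parameters. The grids and peak lists are illustrative, not the multivariate models of the thesis.

```python
# Sketch: Hough voting in a discretized (a, b) parameter space for a
# linear peak-shift model. Spurious pairings scatter their votes; only
# the true model accumulates a vote per peak.
from collections import Counter

def hough_linear_shift(ref_peaks, obs_peaks, a_grid, b_grid, tol=0.05):
    votes = Counter()
    for r in ref_peaks:
        for o in obs_peaks:
            shift = o - r
            for a in a_grid:
                for b in b_grid:
                    if abs(a * r + b - shift) <= tol:
                        votes[(a, b)] += 1
    return votes.most_common(1)[0][0]

ref = [1.0, 2.0, 3.0, 4.0]                       # reference peak positions (ppm)
obs = [p + (0.1 * p + 0.2) for p in ref]         # true model: a=0.1, b=0.2
a_grid = [round(0.05 * i, 2) for i in range(5)]  # 0.00 … 0.20
b_grid = [round(0.1 * i, 1) for i in range(5)]   # 0.0 … 0.4
print(hough_linear_shift(ref, obs, a_grid, b_grid))
```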
Стилі APA, Harvard, Vancouver, ISO та ін.
39

ZHU, XIANGDONG. "WAVELET-BASED SIGNAL ANALYSIS FOR THE ENVIRONMENTAL HEALTH RESEARCH." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1085064472.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Trippas, Dries. "Motivated reasoning and response bias : a signal detection approach." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/2853.

Повний текст джерела
Анотація:
The aim of this dissertation was to address a theoretical debate on belief bias. Belief bias is the tendency for people to be influenced by their prior beliefs when engaged in deductive reasoning. Deduction is the act of drawing necessary conclusions from premises which are meant to be assumed as true. Given that the logical validity of an argument is independent of its content, being influenced by your prior beliefs in such content is considered a bias. Traditional theories posit there are two belief bias components. Motivated reasoning is the tendency to reason better for arguments with unbelievable conclusions relative to arguments with believable conclusions. Response bias is the tendency to accept believable arguments and to reject unbelievable arguments. Dube et al. (2010) pointed out critical methodological problems that undermine evidence for traditional theories. Using signal detection theory (SDT), they found evidence for response bias only. We adopted the SDT method to compare the viability of the traditional and the response bias accounts. In Chapter 1 the relevant literature is reviewed. In Chapter 2 four experiments which employed a novel SDT-based forced choice reasoning method are presented, showing evidence compatible with motivated reasoning. In Chapter 3 four experiments which used the receiver operating characteristic (ROC) method are presented. Crucially, cognitive ability turned out to be linked to motivated reasoning. In Chapter 4 three experiments are presented in which we investigated the impact of cognitive ability and analytic cognitive style on belief bias, concluding that cognitive style mediated the effects of cognitive ability on motivated reasoning. In Chapter 5 we discuss our findings in light of a novel individual differences account of belief bias. 
We conclude that using the appropriate measurement method and taking individual differences into account are two key elements to furthering our understanding of belief bias, human reasoning, and cognitive psychology in general.
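The signal detection quantities behind these analyses, discriminability d′ (reasoning accuracy) and criterion c (response bias), follow directly from hit and false-alarm rates; the rates below are invented for illustration, not data from the experiments.

```python
# Sketch: standard equal-variance SDT estimates from endorsement rates,
# where "hits" are accepted valid arguments and "false alarms" are
# accepted invalid ones.
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)          # discriminability
    criterion = -(z(hit_rate) + z(fa_rate)) / 2.0  # response bias
    return d_prime, criterion

# Illustrative pattern: believable conclusions invite liberal responding.
d_b, c_b = dprime_and_criterion(0.85, 0.40)   # believable arguments
d_u, c_u = dprime_and_criterion(0.75, 0.25)   # unbelievable arguments
print(round(d_b, 2), round(c_b, 2), round(d_u, 2), round(c_u, 2))
```

Under the response-bias account, belief should move c while leaving d′ unchanged; under motivated reasoning, d′ itself differs between believable and unbelievable arguments.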
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Valdivia, Paola Tatiana Llerena. "Graph signal processing for visual analysis and data exploration." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-15102018-165426/.

Повний текст джерела
Анотація:
Signal processing is used in a wide variety of applications, ranging from digital image processing to biomedicine. Recently, some tools from signal processing have been extended to the context of graphs, allowing their use on irregular domains. Among others, the Fourier Transform and the Wavelet Transform have been adapted to this context. Graph signal processing (GSP) is a new field with many potential applications in data exploration. In this dissertation we show how tools from graph signal processing can be used for visual analysis. Specifically, we proposed a data filtering method, based on spectral graph filtering, that led to high-quality visualizations, attested both qualitatively and quantitatively. We also relied on the graph wavelet transform to enable the visual analysis of massive time-varying data, revealing interesting phenomena and events. The proposed applications of GSP to visually analyzing data are a first step towards incorporating this theory into information visualization methods. Many possibilities of GSP remain to be explored, improving the understanding of static and time-varying phenomena yet to be uncovered.
O processamento de sinais é usado em uma ampla variedade de aplicações, desde o processamento digital de imagens até a biomedicina. Recentemente, algumas ferramentas do processamento de sinais foram estendidas ao contexto de grafos, permitindo seu uso em domínios irregulares. Entre outras, a Transformada de Fourier e a Transformada Wavelet foram adaptadas nesse contexto. O Processamento de Sinais em Grafos (PSG) é um novo campo com muitas aplicações potenciais na exploração de dados. Nesta dissertação mostramos como ferramentas de processamento de sinais em grafos podem ser usadas para análise visual. Especificamente, o método de filtragem de dados proposto, baseado na filtragem espectral em grafos, levou a visualizações de alta qualidade que foram atestadas qualitativa e quantitativamente. Por outro lado, usamos a transformada wavelet em grafos para permitir a análise visual de dados massivos variantes no tempo, revelando fenômenos e eventos interessantes. As aplicações propostas do PSG para analisar visualmente os dados são um primeiro passo para incorporar o uso desta teoria nos métodos de visualização da informação. Muitas possibilidades do PSG podem ser exploradas melhorando a compreensão de fenômenos estáticos e variantes no tempo que ainda não foram descobertos.
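A minimal spectral graph low-pass filter in the spirit of the filtering method above: the signal is expanded on the Laplacian eigenbasis (the graph Fourier transform) and its high graph frequencies are zeroed. The path graph, signal and cutoff are illustrative.

```python
# Sketch: graph Fourier transform via the Laplacian eigendecomposition,
# then an ideal low-pass filter that keeps only the lowest frequencies.
import numpy as np

def graph_lowpass(adjacency, signal, keep):
    """Keep only the `keep` lowest graph frequencies of `signal`."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending frequencies
    coeffs = eigvecs.T @ signal                    # graph Fourier transform
    coeffs[keep:] = 0.0                            # zero out high frequencies
    return eigvecs @ coeffs                        # inverse transform

# Path graph on 5 nodes; a smooth ramp with one outlier node.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
noisy = np.array([0.0, 1.0, 5.0, 3.0, 4.0])   # node 2 is an outlier
smooth = graph_lowpass(A, noisy, keep=3)
print(np.round(smooth, 2))
```

The zero-frequency (constant) component is preserved, so the filter keeps the signal's mean while attenuating the outlier.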
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Goosen, Ryno Johannes. "Sense, signal and software : a sensemaking analysis of meaning in early warning systems." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/96132.

Повний текст джерела
Анотація:
Thesis (MPhil)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: This thesis considers the contribution that Karl Weick’s notion of sensemaking can make to an improved understanding of weak signals, cues, warning analysis, and software within early warning systems. Weick’s sensemaking provides a framework through which the above-mentioned concepts are discussed and analysed. The concepts of weak signals, early warning systems, and Visual Analytics are investigated from within current business and formal intelligence viewpoints. Intelligence failure has been a characteristic of events such as 9/11, the recent financial crisis triggered by the collapse of Lehman Brothers, and the so-called Arab Spring. Popular methodologies such as early warning analysis, weak signal analysis and environmental scanning, employed within both the business and government spheres, failed to provide adequate early warning of many of these events. These failures warrant renewed attention as to what improvements can be made and how new technology can enhance early warning analysis. Chapter One is introductory: it states the research question and methodology and delimits the thesis. Chapter Two sets the scene by investigating current conceptions of the main constructs. Chapter Three explores Weick’s theory of sensemaking and provides the analytical framework against which these concepts are then analysed in Chapter Four. The emphasis is directed towards the extent to which frames are integrated within the analysis phase of early warning systems and how frames may be incorporated within the theoretical foundation of Visual Analytics to enhance warning systems. The findings of this thesis suggest that Weick’s conceptualisation of sensemaking brings conceptual clarity to weak signal analysis, in that Weick’s “seed” metaphor, representing the embellishment and elaboration of cues, epitomizes the progressive nature of weak signals.
The importance of Weick’s notion of belief-driven sensemaking, and specifically of the role of expectation in the elaboration of frames, as discussed and confirmed by various researchers in different study areas, is a core feature underlined in this thesis. The centrality of the act of noticing, and the effect that framing and re-framing have on it, is highlighted as a primary notion in the process of not only making sense of warning signals but identifying them in the first place. This ties in with the valuable contribution Weick’s sensemaking makes to understanding the effect that a specification has on identifying transients and signals in the resulting visualization in Visual Analytic software.
AFRIKAANSE OPSOMMING: Hierdie tesis ondersoek hoe Karl Weick se konsep van singewing ons insig teenoor swak seine, tekens, waarskuwingsanalise en sagteware binne vroeë waarskuwingstelsels verbeter. Weick se bydrae verskaf ‘n raamwerk waarbinne hierdie konsepte geanaliseer en ondersoek kan word. Die konsep van swak seine, vroeë-waarskuwing en visuele analise word binne huidige besigheidsuitgangspunte, en die formele intelligensie arena ondersoek. Die mislukking van intelligensie is kenmerkend van gebeure soos 9/11, die onlangse finansiёle krisis wat deur die ondergang van Lehman Brothers ingelei is, en die sogenaamde “Arab Spring”. Hierdie gebeure het ‘n wêreldwye opskudding op ekonomiese en politiese vlak veroorsaak. Moderne metodologieё soos vroeë waarskuwingsanalise, swaksein-analise en omgewingsaanskouing binne regerings- en besigheidsverband het duidelik in hul doelstelling misluk om voortydig te waarsku oor hierdie gebeurtenisse. Dit is juis hierdie mislukkings wat dit noodsaaklik maak om meer aandag te skenk aan hierdie konsepte, asook nuwe tegnologie wat dit kan verbeter. Hoofstuk Een is inleidend en stel die navorsingsvraagstuk, doelwitte en afbakkening. Hoofstuk Twee lê die fondasie van die tesis deur ‘n ondersoek van die hoof konsepte. Hoofstuk Drie verskaf die teoretiese raamwerk, die van Weick se singewingsteorie, waarteen die hoof konsepte in Hoofstuk Twee ondersoek word in Hoofstuk Vier. Klem word gelê op die diepte van integrasie en die toepassing van raamwerke in die analisefase van vroeё waarskuwingstelsels en hoe dit binne die teoretiese beginsels van visuele analise geïnkorporeer word. Die bevindinge van hierdie tesis spreek die feit aan dat Weick se konsepsualisering van singewing konseptuele helderheid rakende die begrip “swakseine” verskaf. In hierdie verband verteenwoordig Weick se “saad”- metafoor die samewerking en uitbouing van seine en “padpredikante” wat die progressiewe aard van swakseine weerspieёl. 
Die kernbeskouing van hierdie tesis is die belangrikheid van Weick se geloofsgedrewesingewing, veral die uitkoms van die bou van raamwerke asook die bespreking hiervan deur verskeie navorsers. Die belangrikheid van die aksie om seine op te merk, en die effek wat dit op die herbeskouing van raamwerke het, asook die raaksien daarvan in die eerste plek word beklemtoon. Laasgenoemde dui ook aan tot watter mate Weick se singewingsteorie ‘n bydrae maak tot visuele analise veral in ons begrip van die gevolg wat data of inligtingspesifikasie het op die identifisering van seine en onsinnighede in visualisering binne visuele analise-sagteware.
43

Jalali, Shahrzad. "Estimating Bus Passengers' Origin-Destination of Travel Route Using Data Analytics on Wi-Fi and Bluetooth Signals." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39210.

Abstract:
Accurate estimation of the Origin and Destination (O-D) of passengers has been an essential objective for public transit agencies, because knowledge of passenger flow enables them to forecast ridership and plan bus schedules and routes. However, obtaining O-D information in traditional ways, such as conducting surveys, cannot fulfil today’s requirements of intelligent transportation and route planning in smart cities. Estimating bus passengers’ O-D using Wi-Fi and Bluetooth signals detected from their mobile devices is the primary objective of this project. For this purpose, we collected anonymized passenger data using the SMATS TrafficBox™ sensor provided by the “SMATS Traffic Solutions” company. We then performed pre-processing steps including data cleaning, feature extraction, and data normalization, and built various models using data mining techniques. The main challenge in this project was to distinguish between passengers’ and non-passengers’ signals, since the sensor captures all signals in its surrounding environment, including substantial noise from devices outside the bus. To address this challenge, we applied Hierarchical and K-Means clustering algorithms to separate passengers’ from non-passengers’ signals automatically. By assigning GPS data to passengers’ signals, we could find commuters’ O-D. Moreover, we developed a second method based on an online analysis of sequential data, where specific thresholds were set to recognize passengers’ signals in real time. This method could create the O-D matrix online. Finally, in the validation phase, we compared the ground truth data with the estimated O-D matrices of both approaches and calculated their accuracy. Based on the final results, our proposed approaches can detect more than 20% of passengers (compared to the 5% detection rate of traditional survey-based methods), and estimate the origin and destination of passengers with an accuracy of about 93%.
With such promising results, these approaches are suitable alternatives to traditional and time-consuming ways of obtaining O-D data. This enables public transit companies to enhance their service offering by efficiently planning and scheduling bus routes, improving ride comfort, and lowering the operating costs of urban transportation.
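The k-means separation step lends itself to a short illustration. The sketch below is not the project's code: the per-device features (dwell time near the on-board sensor and detection count), the synthetic numbers, and the plain k-means routine are all assumptions made for the example. The only idea it demonstrates is that passengers' devices are detected far longer than roadside devices, so two clusters separate them.

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    """Plain k-means; returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid, then nearest assignment.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical per-device features: [dwell time (s), detection count].
# Passengers stay on the bus (long dwell, many detections); roadside
# devices are seen only briefly as the bus passes.
rng = np.random.default_rng(1)
passengers = np.column_stack([rng.normal(600, 60, 30), rng.normal(40, 5, 30)])
bystanders = np.column_stack([rng.normal(20, 5, 70), rng.normal(2, 1, 70)])
X = np.vstack([passengers, bystanders])
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)   # normalize features

labels, cent = kmeans(X_norm, k=2)
# The cluster whose centroid has the larger dwell time is "passengers".
pax_cluster = cent[:, 0].argmax()
print((labels[:30] == pax_cluster).mean())  # fraction of true passengers recovered
```

In the real pipeline the features would come from the sensor logs, and hierarchical clustering could be substituted for the k-means step.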
44

Sacchi, Rodrigo. "Política de operação preditiva estabilizada via termo inercial utilizando \"analytic signal\", \"dynamic modelling\" e sistemas inteligentes na previsão de vazões afluentes em sistemas hidrotérmicos de potência." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-25092009-111629/.

Abstract:
This research work aimed at obtaining a new operation policy which could better describe the optimal behavior of hydropower systems, even when faced with the most varied hydrological conditions. The research had two lines of investigation. The first dealt with the monthly water inflow forecasting problem, searching for approaches and techniques which could define efficient forecasting models. Three important aspects of defining a forecasting model were investigated: data pre-processing techniques, automatic definition of the embedding, and the performance assessment of several artificial neural networks and Fuzzy systems. Hence, the use of principal components analysis was investigated and, by treating the water inflow time series as a discrete signal, the analytic signal representation could be used to preprocess the data. Furthermore, the embedding was automatically defined with the dynamic modelling approach, using the average mutual information and false nearest neighbors techniques. The forecasting models were implemented with four intelligent models: the SONARX network, the SONARX-RBF network, the ANFIS model and the ESN network. The other line of investigation came up with a new operation policy for the operation planning problem, defining a more stable, reliable and less costly sequence of operative decisions. An approach was proposed to stabilize the thermoelectric generation dispatches and, as a result, the operative marginal cost. The predictive operation policy stabilized via an inertial term produced excellent operation results, improving the performance of the predictive policy.
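The analytic-signal preprocessing mentioned above can be sketched as follows. This is an illustrative reconstruction, not the thesis code: the synthetic seasonal inflow series is an assumption, and the FFT construction is the standard discrete analytic signal (the same one `scipy.signal.hilbert` implements).

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal via the frequency domain:
    keep DC, double positive frequencies, zero negative ones."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Hypothetical monthly-inflow-like series: a seasonal cycle plus noise.
t = np.arange(240)                       # 20 years of months
x = 100 + 30 * np.cos(2 * np.pi * t / 12) \
    + np.random.default_rng(0).normal(0, 2, 240)

z = analytic_signal(x - x.mean())        # remove the mean before the transform
envelope = np.abs(z)                     # instantaneous amplitude
phase = np.unwrap(np.angle(z))           # instantaneous phase

# The envelope of a ~30-amplitude seasonal cycle hovers near 30.
print(round(envelope.mean(), 1))
```

The real part of `z` reproduces the (demeaned) series exactly; the envelope and phase are the extra features the analytic representation adds for a forecasting model.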
45

Perevyazko, Igor [Verfasser], Ulrich Sigmar [Akademischer Betreuer] Schubert, and Alfred [Akademischer Betreuer] Fahr. "Hydrodynamic analysis of macromolecular and colloidal systems by analytical ultracentrifugation and related methods / Igor Perevyazko. Gutachter: Ulrich Sigmar Schubert ; Alfred Fahr." Jena : Thüringer Universitäts- und Landesbibliothek Jena, 2014. http://d-nb.info/1047097168/34.

46

Pajovic, Milutin. "The development and application of random matrix theory in adaptive signal processing in the sample deficient regime." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/93775.

Abstract:
Thesis: Ph. D., Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 237-243).
This thesis studies the problems associated with adaptive signal processing in the sample-deficient regime using random matrix theory. The scenarios in which the sample-deficient regime arises include, among others, cases where the number of observations available in a period over which the channel can be approximated as time-invariant is limited (wireless communications), where the number of available observations is limited by the measurement process (medical applications), or where the number of unknown coefficients is large compared to the number of observations (modern sonar and radar systems). Random matrix theory, which studies how different encodings of the eigenvalues and eigenvectors of a random matrix behave, provides suitable tools for analyzing how statistics estimated from a limited data set behave with respect to their ensemble counterparts. The applications of adaptive signal processing considered in the thesis are (1) adaptive beamforming for spatial spectrum estimation, (2) tracking of time-varying channels and (3) equalization of time-varying communication channels. The thesis analyzes the performance of the considered adaptive processors when operating in the deficient sample support regime. In addition, it gains insights into the behavior of different estimators based on the estimated second order statistics of data originating from a time-varying environment. Finally, it studies how to optimize the adaptive processors and algorithms so as to account for deficient sample support and improve performance. In particular, the random matrix quantities needed for the analysis are characterized in the first part. 
In the second part, the thesis studies the problem of regularization in the form of diagonal loading for two conventionally used spatial power spectrum estimators based on adaptive beamforming, and shows the asymptotic properties of the estimators, studies how the optimal diagonal loading behaves and compares the estimators on the grounds of performance and sensitivity to optimal diagonal loading. In the third part, the performance of the least squares based channel tracking algorithm is analyzed, and several practical insights are obtained. Finally, the performance of multi-channel decision feedback equalizers in time-varying channels is characterized, and insights concerning the optimal selection of the number of sensors, their separation and constituent filter lengths are presented.
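Diagonal loading of a sample covariance matrix, the regularization studied in the second part, can be sketched for a standard MVDR (Capon) beamformer. The array geometry, snapshot count and loading level below are illustrative assumptions, not the thesis setup; the point is only that loading keeps the weights well behaved when the number of snapshots is comparable to the number of sensors.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 10, 12                            # sensors, snapshots: sample-deficient (K ~ M)

def steering(theta, M):
    """Uniform linear array steering vector, half-wavelength spacing."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

theta_s = 0.0                            # look direction
a = steering(theta_s, M)

# Snapshots: a unit-power signal from the look direction plus white noise.
s = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)
n = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2) * 0.3
X = np.outer(a, s) + n

R_hat = X @ X.conj().T / K               # sample covariance (poorly conditioned)

def mvdr_weights(R, a, delta):
    """MVDR with diagonal loading delta:
    w = (R + delta I)^{-1} a / (a^H (R + delta I)^{-1} a)."""
    Rl = R + delta * np.eye(len(a))
    w = np.linalg.solve(Rl, a)
    return w / (a.conj() @ w)

w = mvdr_weights(R_hat, a, delta=0.1)
print(abs(w.conj() @ a))                 # distortionless constraint in the look direction
```

The thesis goes further and asks, via random matrix theory, how the optimal `delta` behaves as `M` and `K` grow together; here it is simply fixed by hand.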
by Milutin Pajovic.
Ph. D.
47

Viljoen, Suretha. "Analysis of crosstalk signals in a cylindrical layered volume conductor influence of the anatomy, detection system and physical properties of the tissues /." Diss., Pretoria : [s.n.], 2005. http://upetd.up.ac.za/thesis/available/etd-08082005-113739.

48

Xu, Yanli. "Une mesure de non-stationnarité générale : Application en traitement d'images et du signaux biomédicaux." Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0090/document.

Abstract:
The intensity variation is often used in signal or image processing algorithms after being quantified by a measurement method. The method for measuring and quantifying the intensity variation is called a « change measure », which is commonly used in methods for signal change detection, image edge detection, edge-based segmentation models, feature-preserving smoothing, etc. In these methods, the « change measure » plays such an important role that their performance is greatly affected by the result of the measurement of changes. The existing « change measures » may provide inaccurate information on changes when processing biomedical images or signals, due to the high noise level or the strong randomness of the signals. This leads to various undesirable phenomena in the results of such methods. On the other hand, new medical imaging techniques bring out new data types and require new change measures. How to robustly measure changes in these tensor-valued data becomes a new problem in image and signal processing. In this context, a « change measure », called the Non-Stationarity Measure (NSM), is improved and extended to become a general and robust « change measure » able to quantify changes existing in multidimensional data of different types, with respect to different statistical parameters. An NSM-based change detection method and an NSM-based edge detection method are proposed and respectively applied to detect changes in ECG and EEG signals, and to detect edges in cardiac diffusion weighted (DW) images. Experimental results show that the NSM-based detection methods can provide more accurate positions of change points and edges and can effectively reduce false detections. An NSM-based geometric active contour (NSM-GAC) model is proposed and applied to segment ultrasound images of the carotid. 
Experimental results show that the NSM-GAC model provides better segmentation results with fewer iterations than comparative methods and can reduce false contours and leakages. Last and most important, a new feature-preserving smoothing approach called « Nonstationarity Adaptive Filtering (NAF) » is proposed and applied to enhance human cardiac DW images. Experimental results show that the proposed method achieves a better compromise between the smoothness of the homogeneous regions and the preservation of desirable features such as boundaries, thus leading to homogeneously consistent tensor fields and consequently a more coherent reconstruction of the fibers.
49

Magaia, Luis. "Processing Techniques of Aeromagnetic Data. Case Studies from the Precambrian of Mozambique." Thesis, Uppsala universitet, Geofysik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-183714.

Abstract:
During 2002-2006, geological field work was carried out in Mozambique. The purpose was to check the preliminary geological interpretations, to resolve the problems that arose during the compilation of preliminary geological maps, and to collect samples for laboratory studies. In parallel, airborne geophysical data were collected in many parts of the country to support the geological interpretation and the compilation of geophysical maps. In the present work, the aeromagnetic data collected in 2004 and 2005 in two small areas in the northwest of Niassa province and another in the eastern part of Tete province are analysed using Geosoft™. The processing of the aeromagnetic data began with the removal of diurnal variations and corrections for the IGRF model of the Earth. The effect of height variations on the recorded magnetic field, as well as levelling and interpolation techniques, was also studied. La Porte interpolation proved to be a good tool for interpolating aeromagnetic data using the measured horizontal gradient. Depth estimation techniques are also used to obtain a semi-quantitative interpretation of geological bodies. It was shown that many features in the study areas are located at shallow depth (less than 500 m) and few geological features are located at depths greater than 1000 m. This interpretation could be used to draw conclusions about the geology or be incorporated into further investigations of these areas.
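One classical semi-quantitative depth estimator for such magnetic profiles, and the namesake of this collection, is the analytic-signal amplitude: for 2D sources the vertical derivative of the field is the Hilbert transform of the horizontal derivative (Nabighian's result), and the amplitude sqrt((dT/dx)^2 + (dT/dz)^2) peaks directly over the source. This is not necessarily the technique used in the thesis; the sketch below uses a synthetic Lorentzian anomaly (an assumption, not thesis data), for which the half-width of the amplitude peak at half maximum equals the source depth.

```python
import numpy as np

def hilbert_fft(x):
    """FFT-based Hilbert transform (imaginary part of the analytic signal)."""
    N = len(x)
    X = np.fft.fft(np.asarray(x, dtype=float))
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h).imag

# Synthetic total-field profile: Lorentzian anomaly over a source
# at x0 = 20 m buried at depth d = 10 m (illustrative units).
x = np.arange(-200.0, 201.0)             # 1 m station spacing
x0, d = 20.0, 10.0
T = d / ((x - x0) ** 2 + d ** 2)

dTdx = np.gradient(T, x)                 # measured horizontal gradient
dTdz = hilbert_fft(dTdx)                 # vertical gradient via Hilbert pair
amp = np.hypot(dTdx, dTdz)               # analytic-signal amplitude

peak_x = x[amp.argmax()]                 # lands over the source
half = x[amp >= amp.max() / 2]
depth_est = (half.max() - half.min()) / 2  # HWHM ~ source depth for this model
print(peak_x, depth_est)
```

For this particular anomaly the amplitude works out to 1/((x-x0)^2 + d^2), so the peak sits over the source and its half-width recovers the assumed depth of 10 m.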
50

Jha, Mayank Shekhar. "Diagnostic et Pronostic de Systèmes Dynamiques Incertains dans un contexte Bond Graph." Thesis, Ecole centrale de Lille, 2015. http://www.theses.fr/2015ECLI0027/document.

Abstract:
This thesis develops approaches for the diagnostics and prognostics of uncertain dynamic systems in the Bond Graph (BG) modeling framework. Firstly, properties of Interval Arithmetic (IA) and of the BG in Linear Fractional Transformation form are integrated for the representation of parametric and measurement uncertainties on an uncertain BG model. A robust fault detection methodology is developed by utilizing the rules of IA for the generation of adaptive interval-valued thresholds over the nominal residuals. The method is validated in real time on an uncertain and highly complex steam generator system. Secondly, a novel hybrid prognostic methodology is developed using BG-derived Analytical Redundancy Relationships and Particle Filtering algorithms. Estimates of the current state of health of a system parameter and of the associated hidden parameters are obtained in probabilistic terms. Prediction of the Remaining Useful Life (RUL) of the system parameter is also achieved in probabilistic terms. The associated uncertainties arising out of noisy measurements, environmental conditions etc. are effectively managed to produce a reliable prediction of the RUL with suitable confidence bounds. The method is validated in real time on an uncertain mechatronic system. Thirdly, the prognostic methodology is validated and implemented on the electrical and electro-chemical subsystem of an industrial Proton Exchange Membrane Fuel Cell, using a BG model of the latter that is suited for diagnostics and prognostics. The hybrid prognostic methodology is validated on real degradation data sets.
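The Particle Filtering component can be illustrated with a generic bootstrap filter that tracks a degrading parameter and turns the particle cloud into a probabilistic RUL estimate. Everything below — the exponential degradation model, the noise levels and the failure threshold — is an assumption made for the sketch, not the thesis model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical degradation model: the health parameter decays as
# x_{k+1} = x_k * (1 - r) + process noise, observed with additive noise.
# Failure is declared when x drops below x_fail.
r, x0, x_fail = 0.02, 1.0, 0.5
T = 25
truth = [x0]
for _ in range(T):
    truth.append(truth[-1] * (1 - r) + rng.normal(0, 0.002))
obs = np.array(truth[1:]) + rng.normal(0, 0.01, T)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
N = 2000
particles = rng.normal(x0, 0.02, N)
for z in obs:
    particles = particles * (1 - r) + rng.normal(0, 0.002, N)   # propagate
    w = np.exp(-0.5 * ((z - particles) / 0.01) ** 2)            # Gaussian likelihood
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)              # resample

x_est = particles.mean()

# RUL per particle: steps until the noise-free model crosses x_fail,
# i.e. solve x * (1 - r)^n = x_fail for n. The spread of `rul` across
# particles gives the confidence bounds on the prediction.
rul = np.log(x_fail / particles) / np.log(1 - r)
print(round(x_est, 3), round(np.median(rul), 1))
```

In the thesis the degradation trend enters through BG-derived Analytical Redundancy Relationships rather than a hand-written decay law; only the filtering skeleton is generic.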