Doctoral dissertations on the topic "Estimation de séparation"
Browse the top 50 doctoral dissertations on the topic "Estimation de séparation".
Browse doctoral dissertations from a wide variety of disciplines and compile an appropriate bibliography.
Ichir, Mahieddine Mehdi. "Estimation bayésienne et approche multi-résolution en séparation de sources". Paris 11, 2005. http://www.theses.fr/2005PA112370.
Mamouni, Nezha. "Utilisation des Copules en Séparation Aveugle de Sources Indépendantes/Dépendantes". Thesis, Reims, 2020. http://www.theses.fr/2020REIMS007.
The problem of Blind Source Separation (BSS) consists in retrieving unobserved source signals from unknown mixtures of them, when there is no, or very limited, information about the source signals and/or the mixing system. In this thesis, we present algorithms to separate instantaneous and convolutive mixtures. The principle of these algorithms is to minimize appropriate separation criteria based on copula densities, using gradient-descent-type algorithms. These methods can effectively separate instantaneous and convolutive mixtures of possibly dependent source components, even when the copula model is unknown.
Rafi, Selwa. "Chaînes de Markov cachées et séparation non supervisée de sources". Thesis, Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0020/document.
The restoration problem is encountered in various domains, in particular in signal and image processing. It consists in retrieving original data from a set of observed ones. For multidimensional data, the problem can be solved using different approaches depending on the data structure, the transformation system and the noise. In this work, we first tackled the problem in the case of discrete data and a noisy model. In this context, the problem is similar to a segmentation problem. We exploited Pairwise and Triplet Markov chain models, which generalize Hidden Markov chain models. The interest of these models lies in the possibility of generalizing the computation of the posterior probability, allowing one to perform Bayesian segmentation. We considered these methods for two-dimensional signals and applied the algorithms to the restoration of old hand-written documents which have been scanned and suffer from show-through. In the second part of this work, we considered the restoration problem as a blind source separation problem. The well-known Independent Component Analysis (ICA) method requires the assumption that the sources be statistically independent. In practice, this condition is not always verified. Consequently, we studied an extension of the ICA model to the case where the sources are not necessarily independent. We introduced a latent process which controls the dependence and/or independence of the sources. The model that we propose combines a linear instantaneous mixing model similar to that of ICA and a probabilistic model on the sources with hidden variables. In this context, we show how the usual independence assumption can be weakened, using the technique of Iterative Conditional Estimation, to a conditional independence assumption.
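The classical ICA model discussed in this abstract can be illustrated with a minimal numpy sketch (this is not the thesis's Markov-chain method): FastICA-style fixed-point iterations with a cubic nonlinearity, separating an instantaneous mixture of two independent sources. All signals and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two independent non-Gaussian (uniform) sources
s = rng.uniform(-1, 1, size=(2, n))

# Instantaneous linear mixture x = A s, as in the ICA model
A = np.array([[1.0, 0.6], [0.4, 1.0]])
x = A @ s

# Whitening: decorrelate the mixtures and normalize their variance
d, E = np.linalg.eigh(np.cov(x))
z = (E @ np.diag(d ** -0.5) @ E.T) @ x

# FastICA-style fixed-point iterations with a cubic nonlinearity,
# followed by symmetric orthogonalization of the unmixing matrix
W = rng.standard_normal((2, 2))
for _ in range(100):
    y = W @ z
    W = (y ** 3) @ z.T / n - 3.0 * np.diag(np.mean(y ** 2, axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt

y = W @ z  # recovered sources, up to permutation and sign
```

Each row of `y` should match one of the original sources up to sign and ordering, which is the usual indeterminacy of blind separation.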
Arberet, Simon. "Estimation robuste et apprentissage aveugle de modèles pour la séparation de sources sonores". Phd thesis, Rennes 1, 2008. ftp://ftp.irisa.fr/techreports/theses/2008/arberet.pdf.
Blind source separation in the underdetermined case is an ill-posed problem where it is usually assumed that the sources are independent and sparse in the time-frequency domain. Separation is then done in two steps: the estimation of the mixture parameters, followed by the estimation of the sources. The assumptions made about the sources are not valid at all time-frequency points, so approaches which naively treat all points identically and independently are not robust when estimating the mixture parameters and the sources. In this thesis we exploit the local distribution of the mixture in the neighborhood of each time-frequency point to: (i) detect the time-frequency regions where only one source is active and estimate the direction of the dominant source in these regions; and (ii) estimate the distribution of the sources at each time-frequency point using knowledge of the mixture parameters. The proposed local approach is supported by a clustering algorithm called DEMIX, which robustly estimates the mixture parameters in the instantaneous and anechoic cases. On the other hand, the local spatial distribution of the sources can be used to learn spectral GMMs, which until now required a learning step with source examples. We show that this approach improves source estimation performance by several dB in SDR.
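The single-source-zone idea behind direction estimation can be illustrated with a toy sketch (this is not the DEMIX algorithm itself): for sparse sources, the high-energy observations of a stereo mixture cluster along the mixing directions, which a simple histogram of local angles can recover. All signals and constants here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Two sparse sources (mostly zero), so most high-energy observations
# are dominated by a single source
s = rng.laplace(size=(2, n)) * (rng.random((2, n)) < 0.05)

# Instantaneous stereo mixture with mixing angles 0.3 and 1.1 rad
theta_true = np.array([0.3, 1.1])
A = np.vstack([np.cos(theta_true), np.sin(theta_true)])
x = A @ s

# Keep the highest-energy points, where one source most likely dominates
energy = np.sum(x ** 2, axis=0)
mask = energy > np.quantile(energy, 0.99)
angles = np.arctan2(x[1, mask], x[0, mask]) % np.pi

# Peaks of the angle histogram reveal the mixing directions
hist, edges = np.histogram(angles, bins=90, range=(0.0, np.pi))
centers = 0.5 * (edges[:-1] + edges[1:])
theta_hat = np.sort(centers[np.argsort(hist)[-2:]])
```

In the real algorithm the same clustering is performed on time-frequency coefficients with a local confidence measure, rather than on raw samples.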
Arberet, Simon. "Estimation robuste et apprentissage aveugle de modèles pour la séparation de sources sonores". Phd thesis, Université Rennes 1, 2008. http://tel.archives-ouvertes.fr/tel-00564052.
Essebbar, Abderrahman. "Séparation paramétrique des ondes en sismique". Phd thesis, Grenoble INPG, 1992. http://tel.archives-ouvertes.fr/tel-00785644.
Pełny tekst źródłaRosier, Julie. "Estimation de fréquences fondamentales multiples : application à la séparation de signaux de parole et musique". Phd thesis, Télécom ParisTech, 2003. http://pastel.archives-ouvertes.fr/pastel-00000723.
Rafi, Selwa. "Chaînes de Markov cachées et séparation non supervisée de sources". Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00995414.
Pełny tekst źródłaFourt, Olivier. "Traitement des signaux à phase polynomiale dans des environnements fortement bruités : séparation et estimation des paramètres". Paris 11, 2008. http://www.theses.fr/2008PA112064.
The research work of this thesis deals with the processing of polynomial phase signals in heavily corrupted environments, whether noise with high levels or impulse noise, the latter modelled by alpha-stable laws. Noise robustness is a common requirement in signal processing and, while several algorithms are able to cope with high Gaussian noise levels, the presence of impulse noise often leads to a great loss in performance or makes the algorithms unable to work. Recently, some algorithms have been designed to support impulse noise environments, but with one limitation: their achievable results degrade in Gaussian noise situations, so one first needs to select the right method for the kind of noise at hand. One of the key points of this thesis was therefore building algorithms that are robust to the kind of noise, meaning that they perform similarly under Gaussian noise and alpha-stable noise. The second key point was building fast algorithms, something difficult to combine with robustness.
Boudjellal, Abdelouahab. "Contributions à la localisation et à la séparation de sources". Thesis, Orléans, 2015. http://www.theses.fr/2015ORLE2063.
Signal detection, localization, and separation problems date back to the beginning of the twentieth century. Nowadays, this subject is still a hot topic receiving more and more attention, notably with the rapid growth of the wireless communication systems that arose in the last two decades, yet many challenging aspects remain poorly addressed in the literature on this subject. This thesis deals with signal detection, localization using temporal or directional measurements, and separation of dependent source signals. The main objective is to make use of available priors about the source signals, such as sparsity, cyclo-stationarity, non-circularity, constant modulus, autoregressive structure or training sequences in a cooperative framework. The first part is devoted to the analysis of (i) signal time-of-arrival estimation using a new minimum-error-rate-based detector, (ii) noise power estimation using an improved order-statistics estimator, and (iii) the impact of side information on direction-of-arrival estimation accuracy and resolution. In the second part, the source separation problem is investigated in the light of different priors about the original sources. Three kinds of prior are considered: (i) separation of constant-modulus communication signals, (ii) separation of dependent source signals knowing their dependency structure, and (iii) separation of dependent autoregressive sources knowing their autoregressive structure.
Liutkus, Antoine. "Processus gaussiens pour la séparation de sources et le codage informé". Electronic Thesis or Diss., Paris, ENST, 2012. http://www.theses.fr/2012ENST0069.
Source separation consists in recovering different signals that are only observed through their mixtures. To solve this difficult problem, any available prior information about the sources must be used so as to better identify them among all possible solutions. In this thesis, I propose a general framework which permits the inclusion of a large diversity of prior information into source separation. In this framework, the source signals are modeled as the outcomes of independent Gaussian processes, which are powerful and general nonparametric Bayesian models. This approach has many advantages: it permits the separation of sources defined on arbitrary input spaces, it can take many kinds of prior knowledge into account, and it leads to automatic parameter estimation. This theoretical framework is applied to the informed source separation of audio sources. In this setup, side information is computed beforehand on the sources themselves during a so-called encoding stage, where both sources and mixtures are available. In a subsequent decoding stage, the sources are recovered using this information and the mixtures only. Provided this information can be encoded efficiently, it enables popular applications such as karaoke or active listening at a very small bitrate compared to separate transmission of the sources. It became clear that informed source separation is closely akin to a multichannel coding problem. With this in mind, it was cast into information theory as a particular source-coding problem, which permits deriving its optimal performance as rate-distortion functions, as well as practical coding algorithms achieving these bounds.
Song, Yingying. "Amélioration de la résolution spatiale d’une image hyperspectrale par déconvolution et séparation-déconvolution conjointes". Electronic Thesis or Diss., Université de Lorraine, 2018. http://www.theses.fr/2018LORR0207.
A hyperspectral image is a 3D data cube in which every pixel provides local spectral information about a scene of interest across a large number of contiguous bands. The observed images may suffer from degradation due to the measuring device, resulting in a convolution or blurring of the images. Hyperspectral image deconvolution (HID) consists in removing the blurring to improve the spatial resolution of the images as far as possible. A Tikhonov-like HID criterion with a non-negativity constraint is considered here. This method uses separable spatial and spectral regularization terms whose strengths are controlled by two regularization parameters. The first part of this thesis proposes the maximum curvature criterion (MCC) and the minimum distance criterion (MDC) to estimate these regularization parameters automatically, by formulating the deconvolution problem as a multi-objective optimization problem. The second part of this thesis proposes the sliding block regularized (SBR-LMS) algorithm for the online deconvolution of hyperspectral images as provided by whiskbroom and pushbroom scanning systems. The proposed algorithm accounts for the non-causality of the convolution kernel and includes non-quadratic regularization terms, while maintaining a linear complexity compatible with real-time processing in industrial applications. The third part of this thesis proposes joint unmixing-deconvolution methods based on the Tikhonov criterion in both offline and online contexts. The non-negativity constraint is added to improve their performance.
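The Tikhonov-regularized deconvolution idea can be sketched in one dimension (the thesis treats hyperspectral cubes with separate spatial and spectral terms and a non-negativity constraint; this toy uses a single signal, a circular blur model and a single regularization parameter, all illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256

# Piecewise-smooth ground truth
x = np.zeros(n)
x[60:120] = 1.0
x[150:200] = np.linspace(0.0, 1.0, 50)

# Gaussian blur kernel and noisy blurred observation (circular model)
h = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
h /= h.sum()
H = np.fft.fft(h, n)
y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + 0.01 * rng.standard_normal(n)

# Tikhonov deconvolution in the Fourier domain:
# x_hat = argmin ||y - h * x||^2 + lam * ||d * x||^2, d = first difference
d = np.zeros(n)
d[0], d[-1] = 1.0, -1.0
D = np.fft.fft(d)
lam = 1e-2
X_hat = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam * np.abs(D) ** 2)
x_hat = np.real(np.fft.ifft(X_hat))
```

The derivative penalty keeps noise amplification under control at frequencies where the blur kernel response is tiny, at the price of slightly smoothed edges; criteria such as MCC/MDC in the thesis are precisely about choosing `lam` automatically.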
Signol, François. "Estimation de fréquences fondamentales multiples en vue de la séparation de signaux de parole mélangés dans un même canal". Phd thesis, Université Paris Sud - Paris XI, 2009. http://tel.archives-ouvertes.fr/tel-00618687.
Anthoine, Sandrine. "Plusieurs approches en ondelettes pour la séparation et déconvolution de composantes. Application à des données astrophysiques". Phd thesis, Ecole Polytechnique X, 2005. http://pastel.archives-ouvertes.fr/pastel-00001556.
Pełny tekst źródłaEfligenir, Anthony. "Estimation des propriétés électriques/diélectriques et des performances de séparation d'ions métalliques de membranes d'ultrafiltration et/ou de nanofiltration". Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2039/document.
The characterization of the electrical and dielectric properties of UF and NF membranes is an essential step in understanding their filtration performance. A new approach has been developed to determine the dielectric properties of a NF membrane by impedance spectroscopy. The approach is based on the isolation of the membrane active layer and the use of mercury as conductive material, which allowed us to prove that the dielectric constant of the solution inside the nanopores is lower than that of the external solution. Two cell configurations (fibers immersed in the solution or fibers embedded in an insulating gel) were investigated for the implementation of tangential electrokinetic measurements with hollow fibers, and the solution around the fibers was found to influence both the streaming current and the cell electrical conductance. Moreover, the substantial contribution of the porous fiber body to the streaming current does not allow the conversion of the latter into a luminal zeta potential. The advantageous properties of these membranes were finally used to decontaminate solutions containing metal ions. Decontamination performance, in terms of both pollutant retention and ecotoxicological impact, was studied on synthetic solutions and on a discharge water from the surface treatment industry. Although retention performance was remarkable, the toxicity of the real effluent could not be totally eliminated. A thorough study of the retention of non-metallic contaminants is thus required.
Degottex, Gilles. "Séparation de la source glottique des influences du conduit vocal". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00554763.
Liutkus, Antoine. "Processus gaussiens pour la séparation de sources et le codage informé". Phd thesis, Télécom ParisTech, 2012. http://pastel.archives-ouvertes.fr/pastel-00790841.
Pełny tekst źródłaCohen-Hadria, Alice. "Estimation de descriptions musicales et sonores par apprentissage profond". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS607.
In Music Information Retrieval (MIR) and voice processing, the use of machine learning tools has become more and more standard in the last few years. In particular, many state-of-the-art systems now rely on neural networks. In this thesis, we propose a wide overview of four different MIR and voice processing tasks, using systems built with neural networks. More precisely, we use convolutional neural networks, a class of neural networks designed for images. The first task presented is music structure estimation. For this task, we show how the choice of input representation can be critical when using convolutional neural networks. The second task is singing voice detection. We present how to use a voice detection system to automatically align lyrics and audio tracks. With this alignment mechanism, we have created the largest synchronized audio and speech data set, called DALI. Singing voice separation is the third task. For this task, we present a data augmentation strategy, a way to significantly increase the size of a training set. Finally, we tackle voice anonymization. We present an anonymization method that both obfuscates content and masks the speaker identity, while preserving the acoustic scene.
Boyer, Eric. "Estimation paramétrique de moments spectraux d'échos doppler : application aux radars strato-troposphériques". Cachan, Ecole normale supérieure, 2002. http://www.theses.fr/2002DENS0037.
Turgeon, Keven. "Séparation des éléments de terres rares par extraction par solvant : estimation des constantes d'équilibre d'extraction pour la simulation du procédé". Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/28326.
This thesis discusses the separation of rare earth elements (REE) using the solvent extraction process. The objective is to develop a flow sheet allowing the separation of the REE present in a leaching solution of REE ore. To achieve this, a simulator predicting the distribution of REE between the phases of a mixer-settler at chemical equilibrium was developed. This simulator uses the chemical equilibrium constants of each element present in solution. A method for the accurate and precise analysis of REE with a microwave-induced plasma atomic emission spectrometer (MP-AES) was also developed. Solvent extraction tests were performed in the laboratory and the data processed to produce the extraction equilibrium constants used in the simulation. A study of the kinetics of the extraction process shows that thirty seconds of agitation in a separating funnel are sufficient to reach chemical equilibrium with the extractants used. An innovative data reconciliation method for balancing the extraction test results and estimating the equilibrium constants of the species transferred between the phases is applied to the raw experimental data. It is shown that the proposed method provides more reproducible extraction equilibrium constants than standard data processing methods. A simulator based on the estimated equilibrium constants was then written to predict the equilibrium concentrations in the phases of one or several consecutive extractions. A flow sheet allowing the separation of REE into two subgroups (light REE, and SEG with heavy REE) is developed. The flow sheet, consisting of three extraction stages, three scrubbing stages and two stripping stages, is simulated and tested at laboratory scale on a leaching solution of REE ore. The experimental results obtained are similar to the predictions of the simulator for the whole process, showing the validity of the simulation method based on the use of the equilibrium constants.
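The single-stage equilibrium computation underlying such simulators can be sketched with a linear distribution model (the constants and element set below are hypothetical illustrations, not the thesis's estimated values): a mass balance combined with the equilibrium law C_org = K · C_aq gives the split of each element between the two phases.

```python
# Hypothetical linear extraction model: C_org = K * C_aq at equilibrium.
K = {"La": 0.5, "Nd": 2.0, "Dy": 20.0}   # illustrative equilibrium constants
V_aq, V_org = 1.0, 1.0                   # phase volumes (L)
feed = {el: 1.0 for el in K}             # mol of each REE in the aqueous feed

def one_stage(feed_aq):
    """Split each element between phases for one equilibrium stage."""
    aq, org = {}, {}
    for el, m in feed_aq.items():
        # mass balance m = C_aq*V_aq + K*C_aq*V_org solved for C_aq
        c_aq = m / (V_aq + K[el] * V_org)
        aq[el] = c_aq * V_aq
        org[el] = K[el] * c_aq * V_org
    return aq, org

aq, org = one_stage(feed)
# Fraction extracted is K/(1+K): elements with large K report to the organic phase
```

Chaining such stages counter-currently, with scrubbing and stripping sections, is how a full flow sheet like the one described above is simulated.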
Che, Viet Nhat Anh. "Cyclostationary analysis : cycle frequency estimation and source separation". Thesis, Saint-Etienne, 2011. http://www.theses.fr/2011STET4035.
The blind source separation problem aims to recover a set of statistically independent source signals from a set of sensor observations. These observations can be modeled as an instantaneous or convolutive mixture of the same sources. In this dissertation, the source signals are assumed to be cyclostationary, with cycle frequencies that may be known or unknown a priori. First, we establish relations between the spectrum and power spectrum of a source signal and those of its components, and then we propose two novel algorithms to estimate its cycle frequencies. Next, for blind separation of instantaneous mixtures of sources, we present four algorithms based on orthogonal (or non-orthogonal) approximate diagonalization of the multiple cyclic temporal moment matrices, and on the matrix pencil approach, to extract the source signals. We also introduce and prove a new identifiability condition showing which kinds of cyclostationary input sources can be separated based on second-order cyclostationary statistics. For blind separation of convolutive mixtures of source signals, or blind deconvolution of FIR MIMO systems, we present a two-step algorithm based on a time-domain approach for recovering the source signals. Numerical simulations are used throughout this thesis to demonstrate the effectiveness of the proposed approaches and to compare their performance with previous methods.
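The second-order idea behind cycle frequency estimation can be sketched as follows (a generic illustration, not the thesis's algorithms): for an amplitude-modulated signal, the cyclic autocorrelation at lag zero — the DFT of the instantaneous power — peaks at the cycle frequency.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
t = np.arange(n)

# Amplitude-modulated noise: a cyclostationary signal whose instantaneous
# power oscillates at twice the carrier frequency (cycle frequency 2*f0)
f0 = 0.05
x = np.cos(2 * np.pi * f0 * t) * rng.standard_normal(n)
x += 0.1 * rng.standard_normal(n)

# Cyclic autocorrelation at lag 0 is the DFT of the instantaneous power;
# its peaks away from alpha = 0 are the cycle frequencies
p = x ** 2
R = np.abs(np.fft.rfft(p - p.mean())) / n
alphas = np.fft.rfftfreq(n)
alpha_hat = alphas[int(np.argmax(R))]
```

Here the estimated cycle frequency should land on 2·f0, since squaring converts the hidden periodicity of the variance into an ordinary spectral line.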
Guidara, Rima. "Méthodes markoviennes pour la séparation aveugle de signaux et images". Toulouse 3, 2009. http://thesesups.ups-tlse.fr/705/.
This thesis presents new Markovian methods for the blind separation of instantaneous linear mixtures of one-dimensional signals and images. In the first part, we propose several improvements to an existing method for separating temporal signals. The new method exploits simultaneously the non-Gaussianity, autocorrelation and non-stationarity of the sources. Excellent performance is obtained for the separation of artificial mixtures of speech signals, and we succeed in separating real mixtures of astrophysical spectra. An extension to image separation is then proposed. The dependence between image pixels is modelled by non-symmetrical half-plane Markov random fields. Very good performance is obtained for the separation of artificial mixtures of natural images and of noiseless observations of the Planck satellite. The results obtained with low-level noise are acceptable.
Betoule, Marc. "Analyse des données du fond diffus cosmologique : simulation et séparation de composantes". Phd thesis, Observatoire de Paris, 2009. http://tel.archives-ouvertes.fr/tel-00462157.
Pełny tekst źródłaMeseguer, Brocal Gabriel. "Multimodal analysis : informed content estimation and audio source separation". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS111.
This dissertation proposes the study of multimodal learning in the context of musical signals. Throughout, we focus on the interaction between audio signals and text information. Among the many text sources related to music that can be used (e.g. reviews, metadata, or social network feedback), we concentrate on lyrics. The singing voice directly connects the audio signal and the text information in a unique way, combining melody and lyrics, where a linguistic dimension complements the abstraction of musical instruments. Our study focuses on the audio and lyrics interaction, targeting source separation and informed content estimation. Real-world stimuli are produced by complex phenomena and their constant interaction in various domains. Multimodal learning describes methods that analyse phenomena from different modalities and their interaction in order to tackle complex tasks, learning useful abstractions that fuse the modalities into a joint representation. This results in better and richer representations that improve the performance of current machine learning methods. To develop our multimodal analysis, we first need to address the lack of data containing singing voice with aligned lyrics, which is indispensable for developing our ideas. Therefore, we investigate how to create such a dataset automatically, leveraging resources from the World Wide Web. Creating this type of dataset is a challenge in itself that raises many research questions: we are constantly facing the classic "chicken or the egg" problem, since acquiring and cleaning this data requires accurate models, but it is difficult to train models without data. We propose to use the teacher-student paradigm to develop a method where dataset creation and model learning are not seen as independent tasks but rather as complementary efforts.
In this process, non-expert karaoke annotations describe the lyrics as a sequence of time-aligned notes with their associated textual information. We then link each annotation to the correct audio track and globally align the annotations to it. For this purpose, we use the normalized cross-correlation between the voice annotation sequence and a singing voice probability vector obtained automatically with a deep convolutional neural network. Using the collected data, we progressively improve that model: every time we obtain an improved version, we can in turn correct and enhance the data.
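The global alignment step described above can be sketched with synthetic data (the activity curves and the offset are illustrative; in the real system the probability vector is the output of a deep network, not simulated noise):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# Annotation-derived singing-activity sequence (1 = voice annotated)
ann = np.zeros(n)
ann[100:250] = 1.0
ann[400:550] = 1.0
ann[700:800] = 1.0

# Simulated singing-voice probability curve: the same activity pattern,
# globally shifted by 37 frames and corrupted by noise
shift_true = 37
prob = np.clip(np.roll(ann, shift_true) + 0.2 * rng.standard_normal(n), 0.0, 1.0)

def ncc(a, b):
    """Normalized cross-correlation of two sequences."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Score every candidate global offset and keep the best one
offsets = np.arange(-100, 101)
scores = [ncc(np.roll(ann, k), prob) for k in offsets]
shift_hat = int(offsets[int(np.argmax(scores))])
```

The offset maximizing the normalized cross-correlation recovers the global misalignment between the annotation timeline and the audio.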
Gouldieff, Vincent. "Etude des techniques de séparation de signaux satellitaires dans un contexte d'interférences". Thesis, CentraleSupélec, 2017. http://www.theses.fr/2017CSUP0004.
Recent satellite telecommunication systems use advanced technologies, in particular to increase their spectral efficiency. In this context, several transmission scenarios lead to severe co-channel interference situations, both in the uplink and in the downlink. Thus, the signal-to-interference ratio (SIR) of the received mixture may prove to be low or even negative. In order to recover the signals transmitted by each of the co-channel users, it is necessary to process the carrier of interest with suitable methods. In this thesis, we study blind signal processing techniques for the detection, characterization and separation of satellite signals subject to co-channel interference. The complexity and the performance of the proposed methods are decisive factors for their integration in intelligent satellite modems.
Betoule, Marc. "Analyse des données du fond diffus cosmologique : simulation et séparation de composantes". Phd thesis, Observatoire de Paris, 2009. https://theses.hal.science/tel-00462157v2.
The next generation of experiments dedicated to measuring the temperature and polarization anisotropies of the cosmic microwave background (CMB), inaugurated with the launch of the Planck satellite, will enable the detection and study of increasingly subtle effects. However, the superposition of astrophysical foreground emissions hinders the analysis of the cosmological signal and will be the main source of uncertainty in the forthcoming measurements. Improved modeling of foreground emissions and the development of statistical methods to extract the cosmological information from this contamination are thus crucial steps in the scientific analysis of incoming datasets. In this work we describe the development of the Planck Sky Model, a tool for modeling and simulating the sky emission. We then make use of these simulations to develop and evaluate statistical treatments of foreground emission. We explore the efficiency of wavelet analysis on the sphere (needlets) for spectral estimation on incomplete data with inhomogeneous contamination, and design a method for treating the small-scale contamination induced by point sources in the Planck and WMAP data. We also study the impact of foregrounds on our ability to detect the primordial gravitational waves predicted by inflation, and offer forecasts of the performance of future dedicated experiments.
Umiltà, Caterina. "Development and assessment of a blind component separation method for cosmological parameter estimation". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066453/document.
The Planck satellite observed the whole sky at various frequencies in the microwave range. These data are of high value to cosmology, since they help in understanding the primordial universe through the observation of the cosmic microwave background (CMB) signal. To extract the CMB information, astrophysical foreground emissions need to be removed via component separation techniques. In this work I use the blind component separation method SMICA to estimate the CMB angular power spectrum, with the aim of using it for the estimation of cosmological parameters. To do so, small-scale limitations such as the residual contamination from unresolved point sources and the noise need to be addressed. In particular, the point sources are modelled as two independent populations, each with a flat angular power spectrum: by adding this information, the SMICA method is able to recover the joint emission law of the point sources. Auto-spectra derived from a single sky map have a noise bias at small scales, while cross-spectra show no such bias. This is particularly true for cross-spectra between data splits, i.e. sky maps with the same astrophysical content but different noise properties. I thus adapt SMICA to use data-split cross-spectra only. The CMB spectra obtained from simulations and from the Planck 2015 data are used to estimate cosmological parameters. Results show that this estimation can be biased if the shape of the (weak) foreground residuals in the angular power spectrum is not well known. Finally, I also present results of a study of a modified gravity model called Induced Gravity.
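The noise-bias argument for preferring data-split cross-spectra can be checked with a toy simulation (one-dimensional signals standing in for sky maps; all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4096

# A common smooth "sky" signal observed in two data splits whose noise
# realizations are independent
signal = np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")
m1 = signal + 0.5 * rng.standard_normal(n)
m2 = signal + 0.5 * rng.standard_normal(n)

def spectrum(a, b):
    """Cross power spectrum estimate (auto-spectrum when a is b)."""
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    return np.real(A * np.conj(B)) / n

auto = spectrum(m1, m1)
cross = spectrum(m1, m2)

# At high frequencies the smooth signal is negligible, so any residual
# power there measures the noise bias of the estimator
high = slice(n // 4, n // 2)
bias_auto = auto[high].mean()    # ~ noise variance of one split
bias_cross = cross[high].mean()  # ~ 0: independent noises do not correlate
```

The auto-spectrum carries an additive bias equal to the split's noise power, whereas the cross-spectrum between splits only fluctuates around the true signal spectrum, which is exactly why SMICA is adapted to data-split cross-spectra.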
Bertholon, Francois. "Analyse de mélanges à partir de signaux de chromatographie gazeuse". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT110.
Chromatography is a chemical technique for separating the entities of a mixture. In this thesis, we focus on gaseous mixtures, and particularly on the processing of gas chromatography signals. These signals can be acquired with different sensors; however, whatever the sensor used, we obtain a succession of peaks, where each peak corresponds to an entity present in the mixture. Our aim is then to analyze gaseous mixtures from the acquired signals by characterizing each peak. After a bibliographic survey of chromatography, we chose the Giddings and Eyring distribution to describe a peak shape. This distribution defines the probability that a molecule walking randomly through the chromatographic column exits at a given time. We then propose an analytical model of the chromatographic signal, which corresponds to a mixture model of peak shapes; as a first approximation, this model is treated as a Gaussian mixture model. To process these signals, we studied two broad families of methods, evaluated on both real and simulated data. The first family consists in a Bayesian estimation of the unknown parameters of our model. The order of the mixture model, which corresponds to the number of entities in the gaseous mixture, can be included among the unknown parameters. To estimate these parameters, we use a Gibbs sampler with Markov chain Monte Carlo sampling, or a variational approach. The second family consists in a sparse representation of the signal over a dictionary that includes a large set of peak shapes. Sparsity is then characterized by the number of dictionary components needed to describe the signal. Finally, we propose a sparse Bayesian method.
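The sparse-representation approach over a peak-shape dictionary can be sketched with matching pursuit (using Gaussian peaks as a stand-in for the Giddings and Eyring shape; all positions, widths and amplitudes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
t = np.arange(n)

def peak(center, width=8.0):
    """Unit-norm Gaussian peak, a stand-in for a chromatographic peak shape."""
    g = np.exp(-0.5 * ((t - center) / width) ** 2)
    return g / np.linalg.norm(g)

# Dictionary: one peak shape per candidate retention time
D = np.stack([peak(c) for c in range(n)], axis=1)

# Synthetic chromatogram with three entities plus measurement noise
y = 3.0 * peak(120) + 2.0 * peak(260) + 1.5 * peak(340)
y += 0.02 * rng.standard_normal(n)

# Matching pursuit: greedily pick the atom most correlated with the residual
residual = y.copy()
support, coeffs = [], []
for _ in range(3):
    corr = D.T @ residual
    k = int(np.argmax(np.abs(corr)))
    support.append(k)
    coeffs.append(float(corr[k]))
    residual = residual - corr[k] * D[:, k]
```

The selected atom indices estimate the retention times and the coefficients the peak amplitudes; the number of atoms needed to explain the signal is the sparsity, i.e. the number of entities in the mixture.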
Grosfils, Valérie. "Modelling and parametric estimation of simulated moving bed chromatographic processes (SMB)". Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210313.
Pełny tekst źródłaMathematical models describing SMB processes consist of the mass balances of the compounds to be separated. They are distributed-parameter models (described by partial differential equations). Some exhibit hybrid dynamic behaviour (i.e. involving both continuous-time dynamics and discrete events). Several models have been developed in the literature. The task is to select those that appear most attractive in terms of computation time, efficiency and number of parameters to be determined. In addition, new model structures are proposed in order to improve the accuracy versus computation-time trade-off.
These models generally contain unknown parameters. They are either physical quantities that are ill-defined from the basic data, or fictitious parameters introduced as a result of simplifying assumptions and encompassing a whole set of phenomena. The goal is to devise a systematic procedure for estimating these parameters that requires as few experiments as possible and a low computation time. The parameter values are estimated from real measurements by minimizing a cost function measuring the deviation between the model estimates and the measurements. The sensitivity of the model to parameter deviations, as well as model identifiability (the possibility of determining the model parameters uniquely) from measurements in normal operation, are studied. This provides an additional comparison criterion between the models and also allows the optimal experimental conditions (type of experiment, input signals, number and position of measurement points...) under which the measurements used for parameter estimation should be collected to be determined. Moreover, the parameter estimation errors and the simulation errors are assessed. The chosen procedure is then validated on experimental data collected on a pilot plant at the Max-Planck-Institut für Dynamik komplexer technischer Systeme (Magdeburg, Germany).
Doctorat en Sciences de l'ingénieur
info:eu-repo/semantics/nonPublished
Gloaguen, Jean-Rémy. "Estimation du niveau sonore de sources d'intérêt au sein de mixtures sonores urbaines : application au trafic routier". Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0023/document.
Pełny tekst źródłaAcoustic sensor networks are being set up in several major cities in order to obtain a more detailed description of the urban sound environment. One challenge is to estimate useful indicators, such as the road traffic noise level, from sound recordings. This task is by no means trivial because of the multitude of sound sources that compose this environment. For this, Non-negative Matrix Factorization (NMF) is considered and applied to two corpora of simulated urban sound mixtures. The interest of simulating such mixtures is that all the characteristics of each sound class, including the exact road traffic noise level, are known. The first corpus consists of 750 30-second scenes mixing a road traffic component with a calibrated sound level and a more generic sound class. The results led us to propose a new approach, called 'Thresholded Initialized NMF', which proves to be the most effective. The second corpus makes it possible to simulate sound mixtures more representative of recordings made in cities, whose realism has been validated by a perceptual test. With an average noise level estimation error of less than 1.3 dB, the Thresholded Initialized NMF remains the most suitable method across the different urban sound environments. These results open the way to the use of this method for other sound sources, such as bird whistles and voices, which could eventually lead to the creation of multi-source noise maps
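As a point of reference for the NMF approach described above, here is a plain KL-divergence NMF with multiplicative updates on synthetic data (a generic textbook sketch, not the thesis's 'Thresholded Initialized NMF', which additionally fixes and thresholds a dictionary of source spectra):

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf_kl(V, k, n_iter=200):
    """Plain NMF with Kullback-Leibler multiplicative updates: V ~ W @ H."""
    f, t = V.shape
    W = rng.random((f, k)) + 1e-3
    H = rng.random((k, t)) + 1e-3
    for _ in range(n_iter):
        WH = W @ H + 1e-12
        H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]   # update activations
        WH = W @ H + 1e-12
        W *= ((V / WH) @ H.T) / H.sum(axis=1)[None, :]   # update dictionary
    return W, H

# Synthetic nonnegative "spectrogram" with an exact rank-2 structure.
W0 = rng.random((20, 2))
H0 = rng.random((2, 50))
V = W0 @ H0

W, H = nmf_kl(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the thesis's setting, one column block of W would be a pre-learned road traffic dictionary, and the traffic level would be read off the corresponding activations in H.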
Poilleux-Milhem, Hélène. "Test de validation adaptatif dans un modèle de régression : modélisation et estimation de l'effet d'une discontinuité du couvert végétal sur la dispersion du pollen de colza". Paris 11, 2002. http://www.theses.fr/2002PA112297.
Pełny tekst źródłaThis thesis is set in the context of the spread of genetically modified organisms in the environment. Several parametric models of the individual pollen dispersal distribution have already been proposed for homogeneous experiments (plants emitting marked pollen surrounded by the same unmarked plants). In order to predict "genetic pollution" in an agricultural landscape, the effect of a discontinuity on pollen flows in a cultivated area (e.g. a road crossing a field) has to be taken into account. This effect was modelled and estimated: depending on the size of the discontinuity, it may correspond to a significant acceleration of the pollen flow. Graphical diagnostic methods show that the modelling of the individual pollen dispersal distribution and of the discontinuity effect best fits the data when using piecewise constant functions. Prior to using parametric models to predict genetic pollution, goodness-of-fit tools are essential. We therefore propose a goodness-of-fit test in a nonlinear Gaussian regression model, where the errors are independent and identically distributed. This test does not require any knowledge of the regression function or of the variance of the observations. It generalises the linear hypothesis tests proposed by Baraud et al. (Ann. Statist. 2003, Vol. 31) to nonlinear hypotheses. It is asymptotically of level α, and a set of functions over which it is asymptotically powerful is characterized. It is rate optimal among adaptive procedures over isotropic and anisotropic Hölder classes of alternatives, and consistent against directional alternatives that approach the null hypothesis at a rate close to the parametric rate. According to a simulation study, the test is powerful even for fixed sample sizes
Fourer, Dominique. "Approche informée pour l’analyse du son et de la musique". Thesis, Bordeaux 1, 2013. http://www.theses.fr/2013BOR14973/document.
Pełny tekst źródłaIn the field of audio signal processing, analysis is an essential step which allows interaction with existing signals. In fact, the quality of transformed or synthesized audio signals depends on the accuracy of the estimated model parameters. However, theoretical limits exist and show that the best accuracy reachable by a classic estimator can be insufficient for the most demanding applications (e.g. active listening of music). The work developed in this thesis revisits well-known audio analysis problems such as spectral analysis, automatic music transcription and audio source separation using the novel "informed" approach. This approach takes advantage of a specific configuration where the parameters of the elementary signals composing a mixture are known before the mixing process. Using the tools proposed in this thesis, the minimal side information is computed and transmitted with the mixture signal, which allows any kind of transformation of the mixture signal under a constraint on the resulting quality. When compatibility with existing audio formats is required, the side information is embedded directly into the analyzed audio signal using a watermarking technique. This work describes several theoretical and practical aspects of audio signal processing. We show that a classic estimator combined with sufficient side information can achieve better performance than classic approaches (pure estimation or pure coding)
Yeh, Chunghsin. "Extraction de fréquences fondamentales multiples dans des enregistrements polyphoniques". Paris 6, 2008. http://www.theses.fr/2008PA066261.
Pełny tekst źródłaOuld, Mohamed Mohamed Salem. "Contribution à la séparation aveugle de sources par utilisation des divergences entre densités de probabilité: application à l'analyse vibratoire". Reims, 2010. http://theses.univ-reims.fr/sciences/2010REIMS010.pdf.
Pełny tekst źródłaIn this thesis, we propose a new blind source separation algorithm based on the optimization of mutual information under constraints. The optimization problem is solved through its dual. The stochastic gradient estimator is based on estimating the densities by the maximum likelihood method; the densities are chosen from exponential families using the AIC criterion. We then propose a new algorithm for blind source separation based on the minimization of divergences, which generalizes the mutual information (MI) approach. We show that the algorithm using Hellinger's divergence has better properties in terms of the efficiency-robustness trade-off for noisy data. In the context of cyclostationary signals, the above separation methods were adapted using second-order statistics. We illustrate the performance of the proposed algorithms through simulations and on real rotating-machine vibration signals
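To make the blind separation setting concrete, here is a deliberately simple two-source sketch: whitening followed by a grid search over rotation angles minimizing a kurtosis contrast. This is a crude stand-in for the divergence-based criteria of the thesis, on hypothetical uniform sources (uniform sources have negative excess kurtosis, so separation minimizes the summed kurtosis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent non-Gaussian (uniform) sources, instantaneously mixed.
n = 20000
S = rng.uniform(-1, 1, size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whiten the mixtures: Z = C^(-1/2) (X - mean).
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Z = (E * (1.0 / np.sqrt(d))) @ E.T @ Xc

def total_kurtosis(Y):
    """Sum of excess kurtoses of the rows (the separation contrast used here)."""
    return np.sum(np.mean(Y ** 4, axis=1) - 3 * np.mean(Y ** 2, axis=1) ** 2)

def rot(a):
    return np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])

# The contrast is pi/2-periodic, so a grid over [0, pi/2] suffices.
angles = np.linspace(0.0, np.pi / 2, 181)
best = min(angles, key=lambda a: total_kurtosis(rot(a) @ Z))
Y = rot(best) @ Z   # estimated sources, up to permutation and scale
```

Replacing the kurtosis contrast with an estimated mutual information or Hellinger divergence, and the grid search with a (stochastic) gradient descent, gives the family of algorithms the abstract refers to.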
Wang, Wei-Min. "Estimation of component temperatures of vegetative canopy with Vis/NIR and TIR multiple-angular data through inversion of vegetative canopy radiative transfer model". Strasbourg, 2009. http://www.theses.fr/2009STRA6027.
Pełny tekst źródłaThe separation of component temperatures is the basic step for the application of two-source algorithms. Multi-angular thermal infrared measurements provide an opportunity to estimate component temperatures (namely, soil and vegetation temperatures) from remotely sensed data. The objective of this study is to explore the factors that affect the estimation of component temperatures and to propose a new algorithm for inverting canopy radiative transfer models to compute them. The objectives of this dissertation include: (1) finding appropriate candidate leaf angle distribution functions for modeling and inversion; (2) evaluating the scaling behavior of Beer's law and its effect on the estimation of component temperatures; (3) proposing an analytical model for the directional brightness temperature at the top of the canopy; (4) retrieving component temperatures with neural network and simplex algorithms. The effect of the leaf angle distribution function on the extinction coefficient, a key parameter for simulating radiative transfer through a vegetative canopy, is explored to improve the radiative transfer modeling. These contributions will enhance our understanding of basic problems in thermal infrared remote sensing and improve the simulation of the land surface energy balance. Further work can be conducted to continue the enhancement and application of the proposed algorithm to remote sensing images
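The core idea of multi-angular component temperature separation can be sketched with a strongly simplified mixture model (unit emissivities, no multiple scattering, illustrative parameter values, not the thesis's full radiative transfer model): the Beer's-law gap fraction weights the soil and vegetation contributions, and two viewing angles yield a 2x2 linear system in the component radiances.

```python
import numpy as np

def gap_fraction(theta, lai, k=0.5):
    """Beer's-law gap fraction for a spherical leaf angle distribution."""
    return np.exp(-k * lai / np.cos(theta))

# Hypothetical scene: hot soil under a moderately dense canopy.
lai = 1.5
t_soil, t_veg = 315.0, 298.0            # kelvin, assumed true values
thetas = np.radians([0.0, 55.0])        # two viewing angles

p = gap_fraction(thetas, lai)
# Simplified mixture model:  T_b(theta)^4 = p(theta)*T_soil^4 + (1 - p(theta))*T_veg^4
tb4 = p * t_soil**4 + (1 - p) * t_veg**4

# Invert the 2x2 linear system for the component radiances.
M = np.column_stack([p, 1 - p])
soil4, veg4 = np.linalg.solve(M, tb4)
t_soil_hat, t_veg_hat = soil4**0.25, veg4**0.25
```

With realistic emissivities, measurement noise and more than two angles, this becomes the ill-conditioned inversion problem that motivates the neural network and simplex retrievals studied in the dissertation.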
Durrieu, Jean-Louis. "Transcription et séparation automatique de la mélodie principale dans les signaux de musique polyphoniques". Phd thesis, Paris, Télécom ParisTech, 2010. https://pastel.hal.science/pastel-00006123.
Pełny tekst źródłaWe propose to address the problem of melody extraction together with the problem of monaural lead instrument and accompaniment separation. The first task is related to Music Information Retrieval (MIR), since it aims at indexing audio music signals with their melody. The separation problem is related to Blind Audio Source Separation (BASS), as it aims at breaking an audio mixture into several source tracks. Leading instrument source separation and main melody extraction are addressed within a unified framework. The lead instrument is modelled with a source/filter production model: its signal is generated by two hidden states, the filter state and the source state. The proposed spectral signal model therefore explicitly uses pitches both to separate the lead instrument from the others and to transcribe the pitch sequence played by that instrument, the "main melody". This model gives rise to two alternative models, a Gaussian Scaled Mixture Model (GSMM) and an Instantaneous Mixture Model (IMM). The accompaniment is modelled with a more general spectral model. Five systems are proposed: three detect the fundamental frequency sequence of the lead instrument, i.e. they estimate the main melody; one returns a musical melody transcription; and the last separates the lead instrument from the accompaniment. The results in melody transcription and source separation are at the state of the art, as shown by our participation in international evaluation campaigns (MIREX'08, MIREX'09 and SiSEC'08). The proposed extension of previous source separation work with MIR knowledge is therefore a very successful combination
Rouvière, Clémentine. "Experimental parameter estimation in incoherent images via spatial-mode demultiplexing". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS033.
Pełny tekst źródłaHistorically, the resolution of optical imaging systems was dictated by diffraction, and the Rayleigh criterion was long considered an unsurpassable limit. In superresolution microscopy, this limit is overcome by manipulating the emission properties of the object. However, in passive imaging, when sources are uncontrolled, reaching sub-Rayleigh resolution remains a challenge. Here, we implement a quantum-metrology-inspired approach for estimating the separation between two incoherent sources, achieving a sensitivity five orders of magnitude beyond the Rayleigh limit. Using a spatial mode demultiplexer, we examine scenes with bright and faint sources, through intensity measurements in the Hermite-Gauss basis. Analysing sensitivity and accuracy over an extensive range of separations, we demonstrate the remarkable effectiveness of demultiplexing for sub-Rayleigh separation estimation. These results effectively render the Rayleigh limit obsolete for passive imaging
Fuentes, Benoit. "L'analyse probabiliste en composantes latentes et ses adaptations aux signaux musicaux : application à la transcription automatique de musique et à la séparation de sources". Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0011.
Pełny tekst źródłaAutomatic music transcription consists in automatically estimating the notes in a recording, through three attributes: onset time, duration and pitch. To address this problem, there is a class of methods based on modeling a signal as a sum of basic elements carrying symbolic information. Among these analysis techniques, one can find probabilistic latent component analysis (PLCA). The purpose of this thesis is to propose variants and improvements of PLCA, so that it can better adapt to musical signals and thus better address the transcription problem. To this end, a first approach is to put forward new signal models, instead of the model inherent to PLCA, expressive enough to adapt to musical notes whose pitch and spectral envelope both vary over time. A second aspect of this work is to provide tools that help the parameter estimation algorithm converge towards meaningful solutions, through the incorporation of prior knowledge about the signals to be analyzed, as well as a new dynamic model. All the devised algorithms are applied to the task of automatic transcription. They can also be used directly for source separation, which consists in separating several sources from a mixture, and two applications are put forward in this direction
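For readers unfamiliar with PLCA, here is a minimal sketch of the baseline model the thesis extends (not its shift-invariant or dynamic variants): a normalized spectrogram is factored as P(f, t) = sum_z P(z) P(f|z) P(t|z) and fitted by EM, on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

def plca(V, n_z, n_iter=100):
    """Baseline PLCA: fit P(f, t) = sum_z P(z) P(f|z) P(t|z) to V by EM."""
    V = V / V.sum()
    nf, nt = V.shape
    Pz = np.full(n_z, 1.0 / n_z)
    Pf = rng.random((nf, n_z)); Pf /= Pf.sum(axis=0)
    Pt = rng.random((nt, n_z)); Pt /= Pt.sum(axis=0)
    for _ in range(n_iter):
        # E-step: joint P(z, f, t) and posterior P(z | f, t).
        joint = Pz[None, None, :] * Pf[:, None, :] * Pt[None, :, :]   # (f, t, z)
        post = joint / (joint.sum(axis=2)[:, :, None] + 1e-12)
        # M-step: reweight the posterior by the observed distribution V.
        w = V[:, :, None] * post
        Pz = w.sum(axis=(0, 1))
        Pf = w.sum(axis=1) / Pz[None, :]
        Pt = w.sum(axis=0) / Pz[None, :]
    return Pz, Pf, Pt

# Synthetic normalized "spectrogram" with two latent components.
a = rng.random((30, 2)); a /= a.sum(axis=0)          # spectral shapes P(f|z)
b = rng.random((2, 40)); b /= b.sum(axis=1, keepdims=True)  # activations P(t|z)
Vn = 0.5 * np.outer(a[:, 0], b[0]) + 0.5 * np.outer(a[:, 1], b[1])

Pz, Pf, Pt = plca(Vn, n_z=2)
```

The thesis's contribution is precisely to replace the fixed spectral shapes P(f|z) with pitch- and envelope-varying templates and to add priors and dynamics on the activations.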
El, Guedri Mabrouka. "Caractérisation aveugle de la courbe de charge électrique : Détection, classification et estimation des usages dans les secteurs résidentiel et tertiaire". Phd thesis, Université Paris Sud - Paris XI, 2009. http://tel.archives-ouvertes.fr/tel-00461671.
Pełny tekst źródłaLassami, Nacerredine. "Représentations parcimonieuses et analyse multidimensionnelle : méthodes aveugles et adaptatives". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0139.
Pełny tekst źródłaDuring the last decade, the mathematical and statistical study of sparse signal representations and their applications in audio, image and video processing and in source separation has been intensely active. However, exploiting sparsity in multidimensional processing contexts such as digital communications remains a largely open problem. At the same time, blind methods appear to answer many problems recently encountered by the signal processing and communications communities, such as spectral efficiency. Furthermore, in a context of mobility and non-stationarity, it is important to be able to implement adaptive processing solutions of low algorithmic complexity to ensure reduced device consumption. The objective of this thesis is to address these challenges of multidimensional processing by proposing blind solutions of low computational cost that use sparsity as an a priori. Our work revolves around three main axes: sparse principal subspace tracking, adaptive sparse source separation and identification of sparse systems. For each problem, we propose new adaptive solutions by integrating sparsity information into classical methods in order to improve their performance. Numerical simulations confirm the superiority of the proposed methods compared to the state of the art
Xiong, Wenmeng. "Localisation de sources dispersées : Performances de MUSIC en présence d'erreurs de modèle et estimation parcimonieuse à rang faible". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC076/document.
Pełny tekst źródłaThis thesis focuses on the distributed source localization problem. In a first step, the performance of high-resolution methods in the presence of model errors due to the angular distribution of the sources is studied. Theoretical expressions of the estimation bias and the mean square error of the directions of arrival are established in terms of the model error. The impact of the array geometry on performance is studied in order to optimize the robustness of the array to the model error caused by distributed sources. The theoretical results are validated by numerical simulations. In a second step, a new approach for the localization of spatially distributed sources is proposed, based on the sparsity and low-rank property of the spatial covariance matrix of the sources. The proposed method also provides an estimate of the angular distribution shapes of the sources. Simulation results exhibit the advantages of exploiting the sparsity and low-rank properties
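The point-source MUSIC baseline that the thesis perturbs with distributed-source model errors can be sketched as follows (uniform linear array, half-wavelength spacing, illustrative angles and noise level): the pseudospectrum peaks where the steering vector is orthogonal to the noise subspace.

```python
import numpy as np

rng = np.random.default_rng(2)

def steering(theta, m, spacing=0.5):
    """ULA steering vector, spacing in wavelengths, angle in radians."""
    return np.exp(-2j * np.pi * spacing * np.arange(m) * np.sin(theta))

m, n = 8, 2000
doas = np.radians([-10.0, 20.0])                     # assumed true directions
A = np.column_stack([steering(t, m) for t in doas])
S = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
X = A @ S + noise

# Sample covariance and noise subspace (m - 2 smallest eigenvalues).
R = X @ X.conj().T / n
w, E = np.linalg.eigh(R)            # eigenvalues in ascending order
En = E[:, :-2]

# MUSIC pseudospectrum over a fine angular grid, then local-maximum picking.
grid = np.radians(np.linspace(-90, 90, 1801))
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t, m)) ** 2 for t in grid])
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: P[i])[-2:])
est = np.degrees(grid[top2])        # estimated DOAs, ascending
```

When the sources are angularly spread rather than points, the steering vectors above no longer match the data model, which is exactly the bias mechanism the thesis quantifies.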
Madrolle, Stéphanie. "Méthodes de traitement du signal pour l'analyse quantitative de gaz respiratoires à partir d’un unique capteur MOX". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT065/document.
Pełny tekst źródłaExhaled breath, which can be collected non-invasively, contains many volatile organic compounds (VOCs) whose amounts depend on the health of the subject. Quantitative analysis of exhaled air is of great medical interest, whether for diagnosis or treatment follow-up. As part of my thesis, we propose to study a device to analyze exhaled breath, including these VOCs. This multidisciplinary thesis addresses various aspects, such as the choice of sensors, materials and acquisition modes, the acquisition of data using a gas bench, and the processing of the acquired signals to quantify a gas mixture. We study the response of a metal oxide (MOX) sensor to mixtures of two gases (acetone and ethanol) diluted in synthetic air (oxygen and nitrogen). We then use source separation methods in order to distinguish the two gases and determine their concentrations. To give satisfactory results, these methods first require several sensors for which the mathematical model describing the interaction of the mixture with the sensor is known, and which present sufficient diversity in the calibration measurements to estimate the model coefficients. In this thesis, we show that MOX sensors can be described by a linear-quadratic mixing model, and that a dual-temperature acquisition mode can generate two virtual sensors from a single physical sensor. To quantify the components of the mixture from measurements on these (virtual) sensors, we have developed supervised and unsupervised source separation methods applied to this nonlinear model: independent component analysis, least squares methods (the Levenberg-Marquardt algorithm) and a Bayesian method were studied. The experimental results show that these methods make it possible to estimate the VOC concentrations of a gas mixture accurately, while requiring only a few calibration points
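The linear-quadratic mixing model lends itself to a small worked sketch (all coefficients and concentrations below are hypothetical, not calibration data from the thesis): with two virtual sensors responding as y_i = a_i*s1 + b_i*s2 + c_i*s1*s2, the two concentrations can be recovered by a Newton iteration, a simpler stand-in for the Levenberg-Marquardt fit used in the thesis.

```python
import numpy as np

# Hypothetical calibration coefficients for two virtual sensors
# (one physical MOX sensor driven at two temperatures).
coef = np.array([[1.0, 0.5, 0.2],
                 [0.3, 1.2, -0.1]])   # rows: sensors; columns: a, b, c

def responses(s):
    """Linear-quadratic sensor responses for concentrations s = (s1, s2)."""
    s1, s2 = s
    return coef @ np.array([s1, s2, s1 * s2])

def jacobian(s):
    s1, s2 = s
    return np.column_stack([coef[:, 0] + coef[:, 2] * s2,
                            coef[:, 1] + coef[:, 2] * s1])

def invert(y, s0=(0.1, 0.1), n_iter=50):
    """Newton iteration recovering the two concentrations from the responses y."""
    s = np.array(s0, dtype=float)
    for _ in range(n_iter):
        s -= np.linalg.solve(jacobian(s), responses(s) - y)
    return s

true = np.array([0.8, 0.4])           # assumed concentrations (arbitrary units)
est = invert(responses(true))
```

With noisy responses, the same model would be fitted in a least-squares or Bayesian sense, as described in the abstract.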
Carlo, Diego Di. "Echo-aware signal processing for audio scene analysis". Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S075.
Pełny tekst źródłaMost audio signal processing methods regard reverberation, and in particular acoustic echoes, as a nuisance. However, echoes convey important spatial and semantic information about sound sources, and recent echo-aware methods have been proposed to exploit it. In this work we focus on two directions. First, we study how to estimate acoustic echoes blindly from microphone recordings. Two approaches are proposed, one leveraging continuous dictionaries, the other using recent deep learning techniques. Then, we focus on extending existing methods in audio scene analysis to their echo-aware forms. The multichannel NMF framework for audio source separation, the SRP-PHAT localization method, and the MVDR beamformer for speech enhancement are all extended to their echo-aware versions
Raguet, Hugo. "A Signal Processing Approach to Voltage-Sensitive Dye Optical Imaging". Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090031/document.
Pełny tekst źródłaVoltage-sensitive dye optical imaging is a promising recording modality for cortical activity, but its practical potential is limited by many artefacts and interferences in the acquisitions. Inspired by existing models in the literature, we propose a generative model of the signal based on an additive mixture of components, each one constrained within a union of linear spaces determined by its biophysical origin. Motivated by the resulting component separation problem, which is an underdetermined linear inverse problem, we develop: (1) convex, spatially structured regularizations enforcing, in particular, sparsity on the solutions; (2) a new first-order proximal algorithm for efficiently minimizing the resulting functional; (3) statistical methods for automatic parameter selection, based on Stein's unbiased risk estimate. We study these methods in a general framework and discuss their potential applications in various fields of applied mathematics, in particular for large-scale inverse problems or regressions. We subsequently develop software for noisy component separation, in an integrated environment adapted to voltage-sensitive dye optical imaging. Finally, we evaluate this software on different data sets, including synthetic and real data, showing encouraging perspectives for the observation of complex cortical dynamics
Guilloux, Frédéric. "Analyse harmonique et Estimation spectrale sur la Sphère.Applications à l'étude du Fond diffus cosmologique". Phd thesis, Université Paris-Diderot - Paris VII, 2008. http://tel.archives-ouvertes.fr/tel-00347673.
Pełny tekst źródłaThe localization of needlets (a recent wavelet construction) on the sphere is studied and optimized in terms of spatial concentration and statistical estimation. These functions are then used to build a new estimator of the angular power spectrum. Examination of the properties of this estimator, both theoretically (in the high angular frequency asymptotics) and practically, shows that it improves on existing methods in a realistic model involving missing data and heteroscedastic noise. Besides spectral estimation, the use of needlets is also introduced in a source separation problem.
After four introductory chapters (devoted respectively to the physical, analytical and statistical aspects of the study of the CMB, and then to an overview of the results), four journal articles (written in collaboration) are presented: "Practical wavelet design on the sphere"; "CMB power spectrum estimation using wavelets"; "Spectral estimation on the sphere with needlets: high frequency asymptotics"; and "A full sky, low foreground, high resolution CMB map from WMAP".
Khaddour, Fadi. "Amélioration de la production de gaz des « Tight Gas Reservoirs »". Thesis, Pau, 2014. http://www.theses.fr/2014PAUU3005/document.
Pełny tekst źródłaThe valorization of compact gas reservoirs, called tight gas reservoirs (TGR), of which substantial volumes have been discovered, would significantly increase global hydrocarbon resources. With the aim of improving the production of this type of gas, we conducted a study to achieve a better understanding of the relationship between damage and the transport properties of geomaterials. The microstructure evolution of specimens previously subjected to dynamic loading was investigated. An estimation of their permeability upon damage is first presented with the help of a bundle model of parallel capillaries coupling Poiseuille flow with Knudsen diffusion. We then carried out experimental work to estimate the permeability evolution upon damage in relation to the evolution of the pore size distribution under uniaxial compression. The permeability measurements were performed on mortar cylinders designed to mimic the typical tight rocks found in tight gas reservoirs. Microstructural characterization of the damaged mortars was performed by mercury intrusion porosimetry (MIP). To estimate the permeability evolution, a new random hierarchical model was devised. Comparisons with the experimental data show the ability of this model to estimate not only the apparent and intrinsic permeabilities but also their evolution under loading due to a change in the pore size distribution. The model and the experimental setup are intended to be extended to estimate the relative permeabilities of gas mixtures in the future. The final chapter presents a study of the adsorption of methane on different porous media fractured by electrical shocks. The results, concerning the estimation of in-place resources, show that fracturing can enhance the extraction of the initially adsorbed gas
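The parallel-capillary bundle mentioned above can be sketched in a few lines (hypothetical pore radii and a common first-order slip correction, not the thesis's calibrated model): Poiseuille flow in each capillary gives the intrinsic permeability, and a Knudsen-number factor of the form (1 + 4 Kn) gives a pressure-dependent apparent permeability, in the spirit of the Klinkenberg effect.

```python
import numpy as np

KB = 1.380649e-23          # Boltzmann constant, J/K

def mean_free_path(p, t=300.0, d=3.8e-10):
    """Hard-sphere mean free path of a gas at pressure p (Pa) and temperature t (K)."""
    return KB * t / (np.sqrt(2) * np.pi * d**2 * p)

def bundle_permeability(radii, area, p=None):
    """Permeability (m^2) of a bundle of parallel capillaries of given radii
    crossing a sample of cross-section `area`. Pure Poiseuille if p is None;
    otherwise a first-order Knudsen slip correction (1 + 4 Kn) is applied."""
    k_each = np.pi * radii**4 / (8.0 * area)      # Hagen-Poiseuille per capillary
    if p is None:
        return k_each.sum()                       # intrinsic permeability
    kn = mean_free_path(p) / (2.0 * radii)        # Knudsen number per capillary
    return (k_each * (1.0 + 4.0 * kn)).sum()      # apparent permeability

# Hypothetical pore population for a tight mortar sample.
radii = np.full(1000, 50e-9)                      # 1000 pores of radius 50 nm
area = 1e-6                                       # 1 mm^2 cross-section
k_int = bundle_permeability(radii, area)
k_app = bundle_permeability(radii, area, p=1e5)   # at atmospheric pressure
```

At these nanometric pore sizes the slip term dominates, which is why apparent gas permeabilities of tight materials depend strongly on the mean pressure, as exploited in the thesis's measurements.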
Patanchon, Guillaume. "Analyse multi-composantes d'observations du fond diffus cosmologique". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2003. http://tel.archives-ouvertes.fr/tel-00004512.
Pełny tekst źródła