Dissertations / Theses on the topic 'Separation estimation'

Consult the top 50 dissertations / theses for your research on the topic 'Separation estimation'.

1

Gunawan, David Oon Tao (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW). "Musical instrument sound source separation." Awarded by: University of New South Wales, Electrical Engineering & Telecommunications, 2009. http://handle.unsw.edu.au/1959.4/41751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The structured arrangement of sounds in musical pieces results in the creation of complex acoustic mixtures. The analysis of these mixtures, with the objective of estimating the individual sounds which constitute them, is known as musical instrument sound source separation, and has applications in audio coding, audio restoration, music production, music information retrieval and music education. This thesis principally addresses the issues related to the separation of harmonic musical instrument sound sources in single-channel mixtures. The contributions presented in this work include novel separation methods which exploit the characteristic structure and inherent correlations of pitched sound sources, as well as an exploration of the musical timbre space for the development of an objective distortion metric to evaluate the perceptual quality of separated sources. The separation methods presented in this work address the concordant nature of musical mixtures using a model-based paradigm. Model parameters are estimated for each source, beginning with a novel, computationally efficient algorithm for the refinement of frequency estimates of the detected harmonics. Harmonic tracks are formed, and overlapping components are resolved by exploiting spectro-temporal intra-instrument dependencies, integrating the spectral and temporal approaches which are currently employed in a mutually exclusive manner in existing systems. Following harmonic magnitude extraction, a closed-loop approach to source synthesis is presented, separating sources by iteratively minimizing the aggregate error of the sources, constraining the minimization to a set of estimated parameters. The proposed methods are evaluated independently, and then placed within the context of a source separation system, which is evaluated using objective and subjective measures.
The evaluation of music source separation systems is presently limited by the simplicity of objective measures and the extensive effort required to conduct subjective evaluations. To contribute to the development of perceptually relevant evaluations, three psychoacoustic experiments are also presented, exploring the perceptual sensitivity of timbre towards an objective timbre distortion metric. The experiments investigate spectral envelope sensitivity, spectral envelope morphing and noise sensitivity.
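The refinement of harmonic frequency estimates can be illustrated with a standard building block, parabolic interpolation of the log-magnitude spectrum around a coarse FFT peak. This is a generic textbook technique, not the thesis's specific algorithm; the signal and sample rate below are illustrative:

```python
import numpy as np

def refine_peak_frequency(signal, sr, coarse_bin):
    """Refine a coarse FFT-bin frequency estimate by fitting a
    parabola to the log-magnitude spectrum around the peak bin."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    a, b, c = np.log(spectrum[coarse_bin - 1 : coarse_bin + 2] + 1e-12)
    offset = 0.5 * (a - c) / (a - 2 * b + c)   # parabola vertex, in bins
    return (coarse_bin + offset) * sr / len(signal)

# Example: refine the frequency of a 440.7 Hz tone sampled at 8 kHz.
sr, n = 8000, 4096
t = np.arange(n) / sr
x = np.sin(2 * np.pi * 440.7 * t)
coarse = int(round(440.7 * n / sr))            # nearest FFT bin
f_hat = refine_peak_frequency(x, sr, coarse)   # close to 440.7 Hz
```

The raw FFT only resolves frequencies to within one bin (here about 2 Hz); the parabolic fit recovers the sub-bin offset from the three log-magnitude samples around the peak.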
2

Parfitt, Maxwell. "Estimation of magnet separation for magnetic suspension applications." Thesis, University of Reading, 2013. http://centaur.reading.ac.uk/36656/.

Abstract:
This thesis describes a form of non-contact measurement using two-dimensional Hall-effect sensing to resolve the location of a moving magnet which is part of a 'magnetic spring' type suspension system. This work was inspired by the field of space robotics, which currently relies on solid-link suspension techniques for rover stability. The thesis details the design, development and testing of a novel magnetic suspension system with possible applications in space and terrestrial robotics, especially when a robot needs to traverse rough terrain. A number of algorithms were developed, using experimental data from testing, that can approximate the separation between magnets in the suspension module through observation of the magnetic fields. Experimental hardware was also developed to demonstrate how two-dimensional Hall-effect sensor arrays could provide accurate feedback on the magnetic suspension module's operation, so that future work can include the sensor array in a real-time control system to produce dynamic ride control for space robots. The research performed has shown that two-dimensional Hall-effect sensing for magnetic suspension is accurate, effective and suitable for future testing.
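The abstract does not give the fitted sensor model, but the core idea of inverting a field reading for distance can be sketched with the textbook on-axis dipole law B = k/r³ and a single calibration point. The constants and the 1/r³ model below are illustrative assumptions, not the thesis's calibrated sensor characteristics:

```python
# Minimal sketch: calibrate the dipole constant from one known
# (field, distance) pair, then invert a Hall-sensor reading for
# magnet separation.  All numbers are illustrative.

def calibrate(b0, r0):
    """Solve B = k / r^3 for k at a known calibration point."""
    return b0 * r0 ** 3

def separation_from_field(b, k):
    """Invert B = k / r^3 to recover the magnet separation r."""
    return (k / b) ** (1.0 / 3.0)

k = calibrate(b0=0.2, r0=0.01)        # 0.2 T measured at 10 mm
r = separation_from_field(0.025, k)   # weaker reading -> larger gap
```

In practice a real suspension module would need a fitted empirical field map (and the two-dimensional sensing described in the thesis resolves lateral position as well), but the inversion step has this same shape.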
3

Che, Viet Nhat Anh. "Cyclostationary analysis: cycle frequency estimation and source separation." Thesis, Saint-Etienne, 2011. http://www.theses.fr/2011STET4035.

Abstract:
The blind source separation problem aims to recover a set of statistically independent source signals from a set of sensor observations. These observations can be modeled as an instantaneous or convolutive mixture of the same sources. In this dissertation, the source signals are assumed to be cyclostationary, where their cycle frequencies may be known or unknown a priori. First, we establish relations between the spectrum and power spectrum of a source signal and its components, then propose two novel algorithms to estimate its cycle frequencies. Next, for blind separation of instantaneous mixtures of sources, we present four algorithms based on orthogonal (or non-orthogonal) approximate joint diagonalization of multiple cyclic temporal moment matrices, and on the matrix pencil approach, to extract the source signals. We also introduce and prove a new identifiability condition showing which kinds of input cyclostationary sources can be separated based on second-order cyclostationary statistics. For blind separation of convolutive mixtures of sources, or blind deconvolution of FIR MIMO systems, we present a two-step algorithm based on a time-domain approach for recovering the source signals. Numerical simulations are used throughout this thesis to demonstrate the effectiveness of the proposed approaches and to compare their performance with previous methods.
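A brute-force baseline for cycle-frequency estimation (not one of the thesis's two algorithms) scans candidate frequencies and keeps the one that maximizes the cyclic autocorrelation magnitude. A minimal sketch, using a noisy sinusoid whose squared value carries a tone at twice its frequency:

```python
import numpy as np

def cycle_frequency(x, alphas, tau=0):
    """Return the candidate cycle frequency alpha (cycles/sample)
    maximizing |R_x^alpha(tau)|, the cyclic autocorrelation at lag tau."""
    n = np.arange(len(x) - tau)
    prod = x[tau:] * np.conj(x[: len(x) - tau])
    strengths = [np.abs(np.mean(prod * np.exp(-2j * np.pi * a * n)))
                 for a in alphas]
    return alphas[int(np.argmax(strengths))]

# A sinusoid cos(2*pi*f0*t) is cyclostationary with cycle frequency
# 2*f0 at lag 0, since cos^2 contains a tone at 2*f0.
rng = np.random.default_rng(0)
f0 = 0.05                                   # cycles per sample
x = (np.cos(2 * np.pi * f0 * np.arange(4096))
     + 0.1 * rng.standard_normal(4096))
alphas = np.linspace(0.01, 0.25, 481)
a_hat = cycle_frequency(x, alphas)          # close to 2*f0 = 0.1
```

The grid scan is O(len(alphas) · len(x)); the appeal of dedicated estimators such as those proposed in the thesis is precisely to avoid this exhaustive search.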
4

Meseguer Brocal, Gabriel. "Multimodal analysis: informed content estimation and audio source separation." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS111.

Abstract:
This dissertation proposes the study of multimodal learning in the context of musical signals. Throughout, we focus on the interaction between audio signals and text information. Among the many text sources related to music that can be used (e.g. reviews, metadata, or social network feedback), we concentrate on lyrics. The singing voice directly connects the audio signal and the text information in a unique way, combining melody and lyrics, where a linguistic dimension complements the abstraction of musical instruments. Our study focuses on the interaction between audio and lyrics, targeting source separation and informed content estimation. Real-world stimuli are produced by complex phenomena and their constant interaction in various domains. Our understanding learns useful abstractions that fuse different modalities into a joint representation. Multimodal learning describes methods that analyse phenomena from different modalities and their interaction in order to tackle complex tasks. This results in better and richer representations that improve the performance of current machine learning methods. To develop our multimodal analysis, we first need to address the lack of data containing singing voice with aligned lyrics. Such data are mandatory for developing our ideas. Therefore, we investigate how to create such a dataset automatically by leveraging resources from the World Wide Web. Creating this type of dataset is a challenge in itself that raises many research questions. We constantly face the classic "chicken or the egg" problem: acquiring and cleaning this data requires accurate models, but it is difficult to train models without data. We propose to use the teacher-student paradigm to develop a method where dataset creation and model learning are not seen as independent tasks but rather as complementary efforts.
In this process, non-expert karaoke annotations describe the lyrics as a sequence of time-aligned notes with their associated textual information. We then link each annotation to the correct audio and globally align the annotations to it. For this purpose, we use the normalized cross-correlation between the voice annotation sequence and the singing-voice probability vector, which is obtained automatically using a deep convolutional neural network. Using the collected data, we progressively improve that model; every time we have an improved version, we can in turn correct and enhance the data.
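The alignment step can be sketched as a normalized cross-correlation search over offsets. The singing-voice probability curve below is a toy vector (the thesis obtains it from a deep convolutional neural network), and the binary annotation pattern is likewise illustrative:

```python
import numpy as np

def best_offset(annotation, voice_prob):
    """Slide a voice-annotation sequence over a longer singing-voice
    probability curve; return the offset with the highest normalized
    cross-correlation."""
    n = len(annotation)
    a = (annotation - annotation.mean()) / (annotation.std() + 1e-12)
    scores = []
    for k in range(len(voice_prob) - n + 1):
        w = voice_prob[k : k + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        scores.append(np.mean(a * w))        # NCC in [-1, 1]
    return int(np.argmax(scores))

# Toy example: the annotated voiced segment actually starts at frame 7.
prob = np.zeros(40); prob[7:12] = 0.9; prob += 0.05
ann = np.array([1, 1, 1, 1, 1, 0, 0, 0], dtype=float)
shift = best_offset(ann, prob)
```

Because both sequences are standardized per window, the score is invariant to the scale and offset of the probability curve, which is what makes the measure robust for this kind of global alignment.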
5

Lahlou, Mouncef. "Color-Based Surface Reflectance Separation for Scene Illumination Estimation and Rendering." FIU Digital Commons, 2011. http://digitalcommons.fiu.edu/etd/381.

Abstract:
Given the importance of color processing in computer vision and computer graphics, estimating and rendering illumination spectral reflectance of image scenes is important to advance the capability of a large class of applications such as scene reconstruction, rendering, surface segmentation, object recognition, and reflectance estimation. Consequently, this dissertation proposes effective methods for reflection components separation and rendering in single scene images. Based on the dichromatic reflectance model, a novel decomposition technique, named the Mean-Shift Decomposition (MSD) method, is introduced to separate the specular from diffuse reflectance components. This technique provides a direct access to surface shape information through diffuse shading pixel isolation. More importantly, this process does not require any local color segmentation process, which differs from the traditional methods that operate by aggregating color information along each image plane. Exploiting the merits of the MSD method, a scene illumination rendering technique is designed to estimate the relative contributing specular reflectance attributes of a scene image. The image feature subset targeted provides a direct access to the surface illumination information, while a newly introduced efficient rendering method reshapes the dynamic range distribution of the specular reflectance components over each image color channel. This image enhancement technique renders the scene illumination reflection effectively without altering the scene’s surface diffuse attributes contributing to realistic rendering effects. As an ancillary contribution, an effective color constancy algorithm based on the dichromatic reflectance model was also developed. This algorithm selects image highlights in order to extract the prominent surface reflectance that reproduces the exact illumination chromaticity. This evaluation is presented using a novel voting scheme technique based on histogram analysis. 
In each of the three main contributions, empirical evaluations were performed on synthetic and real-world image scenes taken from three different color image datasets. The experimental results show over 90% accuracy in illumination estimation, yielding near-real-world illumination rendering effects.
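The dichromatic split into diffuse and specular reflectance can be illustrated with the simplest classical baseline, which attributes the per-pixel channel minimum to a white-illuminant specular component. This is emphatically not the Mean-Shift Decomposition proposed in the dissertation, just a minimal sketch of the kind of separation it performs:

```python
import numpy as np

def specular_free(image):
    """Classic white-illuminant baseline for the dichromatic model:
    the per-pixel channel minimum is taken as the specular
    (illuminant-colored) part and subtracted, leaving a purely
    chromatic diffuse residue."""
    spec = image.min(axis=-1, keepdims=True)   # specular estimate
    return image - spec, spec

# A red diffuse pixel with an added white specular highlight of 0.3.
pixel = np.array([[[0.9, 0.3, 0.3]]])
diffuse, spec = specular_free(pixel)           # [[[0.6, 0, 0]]], 0.3
```

The baseline fails for achromatic surfaces and colored illuminants, which is part of what motivates the more careful, segmentation-free decomposition developed in the dissertation.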
6

Becker, Saskia. "The Propagation-Separation Approach." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2014. http://dx.doi.org/10.18452/16960.

Abstract:
In statistics, nonparametric estimation is often based on local parametric modeling. For pointwise estimation of the target function, the parametric neighborhoods can be described by weights that depend on design points or on observations. As it turned out, the comparison of noisy observations at single points suffers from a lack of robustness. The Propagation-Separation Approach by Polzehl and Spokoiny [2006] overcomes this problem by using a multiscale approach with iteratively updated weights. The method has been successfully applied to a large variety of statistical problems. Here, we present a theoretical study and numerical results, which provide a better understanding of this versatile procedure. For this purpose, we introduce and analyse a novel strategy for the choice of the crucial parameter of the algorithm, namely the adaptation bandwidth. In particular, we study its variability with respect to the unknown target function. This justifies a choice independent of the data at hand. For piecewise constant and piecewise bounded functions, this choice enables theoretical proofs of the main heuristic properties of the algorithm. Additionally, we consider the case of a misspecified model. Here, we introduce a specific step function, and we establish a pointwise error bound between this function and the corresponding estimates of the Propagation-Separation Approach. Finally, we develop a method for the denoising of diffusion-weighted magnetic resonance data, which is based on the Propagation-Separation Approach. Our new procedure, called (ms)POAS, relies on a specific description of the data, which enables simultaneous smoothing in the measured positions and with respect to the directions of the applied diffusion-weighting magnetic field gradients. We define and justify two distance functions on the combined measurement space, where we follow a differential geometric approach. We demonstrate the capability of (ms)POAS on simulated and experimental data.
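The iteratively updated weights at the heart of the Propagation-Separation Approach can be sketched in one dimension. The kernels and penalty scaling below are illustrative choices (with `lam` standing in for the adaptation bandwidth the thesis analyses), not the exact scheme of Polzehl and Spokoiny:

```python
import numpy as np

def ps_smooth(y, bandwidths, lam, sigma2):
    """1-D sketch of adaptive weights: at each scale, a location
    kernel is multiplied by a penalty comparing current estimates,
    so smoothing propagates inside homogeneous regions and
    separates at discontinuities."""
    theta = y.copy()
    x = np.arange(len(y), dtype=float)
    for h in bandwidths:                            # growing scales
        d = (x[:, None] - x[None, :]) ** 2 / h ** 2
        loc = np.clip(1.0 - d, 0.0, None)           # location kernel
        pen = (theta[:, None] - theta[None, :]) ** 2 / (lam * sigma2)
        w = loc * np.clip(1.0 - pen, 0.0, None)     # adaptive weights
        theta = (w * y[None, :]).sum(axis=1) / w.sum(axis=1)
    return theta

rng = np.random.default_rng(1)
truth = np.where(np.arange(100) < 50, 0.0, 5.0)     # piecewise constant
y = truth + 0.3 * rng.standard_normal(100)
est = ps_smooth(y, bandwidths=[1.5, 3.0, 6.0, 12.0], lam=6.0, sigma2=0.09)
```

With a small `lam` almost no averaging happens (pure separation); with a huge `lam` the penalty never bites and the estimate oversmooths the jump (pure propagation), which is why the choice of this parameter is the crucial question the thesis studies.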
7

Han, Kun. "Supervised Speech Separation And Processing." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1407865723.

8

Liu, Yuzhou. "Deep CASA for Robust Pitch Tracking and Speaker Separation." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1566179636974186.

9

Umiltà, Caterina. "Development and assessment of a blind component separation method for cosmological parameter estimation." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066453/document.

Abstract:
The Planck satellite observed the whole sky at various frequencies in the microwave range. These data are of high value to cosmology, since they help in understanding the primordial universe through observation of the cosmic microwave background (CMB) signal. To extract the CMB information, astrophysical foreground emissions need to be removed via component separation techniques. In this work I use the blind component separation method SMICA to estimate the CMB angular power spectrum, with the aim of using it for the estimation of cosmological parameters. In order to do so, small-scale limitations such as the residual contamination of unresolved point sources and the noise bias need to be addressed. In particular, the point sources are modelled as two independent populations with flat angular power spectra: by adding this information, the SMICA method is able to recover the joint emission law of the point sources. Auto-spectra derived from a single sky map have a noise bias at small scales, while cross-spectra show no such bias. This is particularly true for cross-spectra between data splits, corresponding to sky maps with the same astrophysical content but different noise properties. I thus adapt SMICA to use data-split cross-spectra only. The CMB spectra obtained from simulations and Planck 2015 data are used to estimate cosmological parameters. Results show that this estimation can be biased if the shape of the (weak) foreground residuals in the angular power spectrum is not well known. Finally, I also present results from the study of a modified gravity model called Induced Gravity.
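The data-split argument can be verified numerically in a few lines: simulate two "maps" that share a common signal but carry independent noise, and compare the auto-spectrum of one split with the cross-spectrum of the two. The one-dimensional white-spectrum setup is of course a toy stand-in for sky maps:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
s = rng.standard_normal(n)                    # common "sky" signal
m1 = s + rng.standard_normal(n)               # split 1: signal + noise
m2 = s + rng.standard_normal(n)               # split 2: independent noise
f1, f2 = np.fft.fft(m1), np.fft.fft(m2)
auto = np.mean(np.abs(f1) ** 2) / n           # biased: ~ P_signal + P_noise = 2
cross = np.mean((f1 * np.conj(f2)).real) / n  # unbiased: ~ P_signal = 1
```

Because the two noise realizations are independent with zero mean, their cross term averages away, so the cross-spectrum estimates the signal power alone, while the auto-spectrum carries the full noise power on top.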
10

Landqvist, Ronnie. "Signal processing techniques in mobile communication systems : signal separation, channel estimation and equalization /." Karlskrona : Blekinge Institute of Technology, 2005. http://www.bth.se/fou/Forskinfo.nsf/allfirst2/98bf8bfb44d67d86c1257099003e2fc1?OpenDocument.

11

Herrmann, Felix J., and Eric Verschuur. "Separation of primaries and multiples by non-linear estimation in the curvelet domain." European Association of Geoscientists and Engineers, 2004. http://hdl.handle.net/2429/442.

Abstract:
Predictive multiple suppression methods consist of two main steps: a prediction step, in which multiples are predicted from the seismic data, and a subtraction step, in which the predicted multiples are matched with the true multiples in the data. The last step proves crucial in practice: an incorrect adaptive subtraction method will cause multiples to be sub-optimally subtracted, primaries to be distorted, or both. Therefore, we propose a new domain for the separation of primaries and multiples via the curvelet transform. This transform maps the data into almost orthogonal localized events with directional and spatial-temporal components. The multiples are suppressed by thresholding the input data at those curvelet components where the predicted multiples have large amplitudes. In this way the more traditional filtering of predicted multiples to fit the input data is avoided. An initial field data example shows a considerable improvement in multiple suppression.
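The thresholding idea (as opposed to adaptive subtraction) can be sketched with a stand-in transform: a plain FFT replaces the curvelet transform so the example stays self-contained, and the "events" are toy sinusoids rather than seismic arrivals. The safety factor `lam` and all amplitudes are illustrative assumptions:

```python
import numpy as np

def separate_primaries(data, predicted_multiples, lam=1.5):
    """Sketch of separation by thresholding: in a transform domain,
    mute input coefficients wherever the predicted multiples are
    strong, instead of adaptively subtracting them.  lam > 1 gives a
    safety margin, since the prediction amplitude is imperfect."""
    d, m = np.fft.fft(data), np.fft.fft(predicted_multiples)
    mask = np.abs(d) > lam * np.abs(m)        # keep only where the data
    return np.fft.ifft(d * mask).real         # exceed the prediction

t = np.arange(512) / 512.0
primary = np.sin(2 * np.pi * 5 * t)           # "primary" event
multiple = 0.8 * np.sin(2 * np.pi * 40 * t)   # "multiple" to remove
pred = 0.7 * np.sin(2 * np.pi * 40 * t)       # imperfect prediction
est = separate_primaries(primary + multiple, pred)
```

Note that the prediction only has to be strong where the multiples live, not match them in amplitude and phase; that robustness to prediction errors is exactly the advantage over least-squares adaptive subtraction that the abstract claims for the curvelet-domain approach.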
12

Chen, Jitong. "On Generalization of Supervised Speech Separation." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492038295603502.

13

Vilardaga, García-Cascón Santi. "An integrated framework for trajectory optimisation, prediction and parameter estimation for advanced aircraft separation concepts." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/668090.

Abstract:
Since the birth of commercial aviation, the applications and benefits of aircraft have grown immensely. This, in perfect synchrony with the average increase in the purchasing power of society, has rocketed the number of aircraft flying the skies. This increase comes at a cost, in both environmental and airspace-capacity terms. This thesis works towards alleviating the issues caused by the high number of flights, proposing concepts and mechanisms to safely increase airspace capacity whilst minimising the environmental impact of aviation. This complex and never-ending pursuit is omnipresent in the literature. One promising topic is four-dimensional (4D) trajectory optimisation with higher levels of automation. The research in this PhD thesis proposes an integrated framework for trajectory optimisation, trajectory prediction and parameter estimation, with which new air traffic management concepts can be assessed. This framework has the flexibility to optimise trajectories ranging from free flight to a very strict route structure, from complete freedom in the vertical profile to strict adherence to flight levels, etc. The 4D optimisation strategy results in a trajectory that complies with the scenario characteristics and minimises a given functional objective such as operational cost, time, fuel, etc. Furthermore, the same framework is used in a novel strategy to perform adaptive trajectory prediction (with conformance monitoring) and to estimate unknown parameters of an aircraft. To solve this problem, an optimal control problem is formulated, converted into a non-linear programming (NLP) problem with direct collocation methods, and resolved numerically by an NLP solver.
A comprehensive software architecture is presented, taking advantage of the best of two worlds to enable the flexibility and genericity of the developed optimisation framework: an object-oriented programming language (C++) and a very powerful algebraic modelling language (GAMS). Based on this optimisation framework, the thesis produces operationally relevant results, demonstrating that the framework can cope with a variety of problems and contributing to the ultimate goal of safely increasing airspace capacity and air traffic efficiency. Illustrative examples are presented, focused on the departure phase within a terminal manoeuvring area (TMA). First, an assessment of the efficiency of required times of arrival as a way to increase air traffic capacity is presented, providing results on the cost, in terms of fuel and time, of imposing these time requirements within a TMA (which can reach surprisingly low figures), and on their effectiveness for traffic separation. Second, the implementation of an aircraft separation methodology is presented, where an intruder trajectory is predicted and the ownship calculates its own optimal trajectory that deviates from it. A conformance monitoring strategy is implemented to ensure that separation is maintained throughout the flight, acknowledging deviations and reacting accordingly. Third, the prediction of the intruder trajectory is enhanced by the estimation of an equivalent mass using known past states; impressive accuracy is achieved soon after the beginning of the flight. Finally, the implementation of a multi-aircraft separation strategy is presented, where multiple aircraft are optimised simultaneously in the same optimisation problem, all whilst maintaining separation between them. The complexity of aligning aircraft coordinates for a fair comparison is tackled from a novel perspective.
In conclusion, the different aircraft separation strategies are compared, and, quite surprisingly, the best results for each strategy are similar. Indeed, the increase in operational cost that the different strategies present (when compared to the individual optimal trajectory) is negligible and allegedly better than the current air traffic control separation paradigm.
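The optimal-control-to-NLP conversion via direct collocation can be illustrated on a toy problem: a double integrator driven from rest at x=0 to rest at x=1 in one second, minimising integrated control effort, with trapezoidal collocation. scipy's SLSQP stands in for the thesis's GAMS-based NLP solver, and the dynamics are of course nothing like a full aircraft model:

```python
import numpy as np
from scipy.optimize import minimize

N = 21                                        # collocation nodes
h = 1.0 / (N - 1)                             # fixed 1-second horizon

def unpack(z):
    return z[:N], z[N:2 * N], z[2 * N:]       # states x, v and control u

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum(0.5 * (u[:-1] ** 2 + u[1:] ** 2))  # trapezoid of u^2

def defects(z):
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])     # x' = v
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])     # v' = u
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]                # rest-to-rest
    return np.concatenate([dx, dv, bc])

sol = minimize(objective, np.zeros(3 * N), method="SLSQP",
               constraints={"type": "eq", "fun": defects},
               options={"maxiter": 200})
x_opt, v_opt, u_opt = unpack(sol.x)
```

For this toy case the continuous optimum is known analytically (u(t) = 6 - 12t, cost 12), so the transcription can be checked directly; the thesis applies the same recipe, at much larger scale, to aircraft trajectories with separation constraints coupling several vehicles.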
Since the birth of commercial aviation, the applications and benefits of aircraft have grown immensely. This, in step with the average increase in society's purchasing power, has increased the number of aircraft flying in the sky. This increase, however, comes at a cost, both in environmental terms and in airspace capacity. This thesis is conceived to help alleviate the problems that result from the high number of flights, proposing new concepts and mechanisms to safely increase airspace capacity while minimizing the environmental impact of aviation. This complex but extremely necessary line of research is the subject of a large number of published scientific works. From propulsion to aerostructures and air traffic management, great effort is devoted nowadays to reducing the environmental impact of aviation, as well as to increasing the safety and capacity of the airspace. One promising topic is the introduction of new concepts of operation that take full advantage of four-dimensional (4D) trajectory optimization and of higher levels of automation, for both airborne and ground systems. Concepts such as continuous vertical profile operations are increasingly used in day-to-day operations. Likewise, reducing the distance flown by aircraft through more direct routes is becoming an ever more evident reality. To cover a wider scope, airborne and ground systems will have to be upgraded. This is why the quantification of the expected benefits of newly proposed concepts should be explored thoroughly before introducing them at a local or global scale. The research in this doctoral thesis proposes an integrated system for trajectory optimization, prediction, and parameter estimation, with which new air traffic management concepts can be evaluated.
This system has the flexibility to optimize trajectories ranging from free flight to a very strict route structure, from complete freedom in the vertical profile to specific adherence to flight levels, etc. The scenario definition is generic enough to allow a wide variety of flight typologies, flight phases, performance phases, and constraints along the trajectory, among many other aspects. The 4D optimization strategy yields a trajectory that not only satisfies the characteristics of the flight (and of the configured environment), but also minimizes a given objective functional, such as operational cost, time, fuel, etc. As briefly mentioned, this same optimization strategy is slightly adapted to present an innovative strategy for adaptive trajectory prediction (with conformance monitoring) and for estimating crucial, initially unknown aircraft parameters. To solve such a complex problem, an optimal control problem is formulated and converted into a nonlinear programming (NLP) problem with direct collocation methods. This problem is solved numerically with NLP solver software, and the results are extracted for analysis. A comprehensive software architecture is presented, taking the best of two worlds: an object-oriented programming language (C++) and a very powerful algebraic mathematical language (GAMS). The interaction between these two worlds enables the flexibility and genericity of the developed optimization system. Building on this optimization system, the different chapters of the thesis produce relevant operational results. This not only demonstrates that the system can cope with a wide variety of problems, but also contributes to the final objective of safely increasing airspace capacity and air traffic efficiency.
Different use cases and illustrative examples are presented, focused on departures within the terminal maneuvering area (TMA). Specifically, four stages make up this part of the thesis. First, an evaluation of the efficiency of required times of arrival (RTA) as a way of increasing air traffic capacity is presented. This study provides results on the cost, in terms of fuel and time, of imposing these time requirements within a TMA (which can reach surprisingly low figures). In addition, it shows how effective this strategy can be for traffic separation. Second, the implementation of an aircraft separation methodology using the optimization system is presented. In it, an aircraft (the ownship) generates a trajectory prediction for an external aircraft with which it anticipates a near conflict (the intruder). The ownship then computes its own optimal trajectory, which deviates from the predicted trajectory of the intruder. A conformance monitoring strategy is implemented to ensure that separation is maintained throughout the flight, recognizing deviations and reacting accordingly. Third, the intruder trajectory prediction is improved by estimating an equivalent mass from known past states (the trail). As expected, the longer this trail, the better the mass estimate. Nevertheless, impressive accuracy is achieved very shortly after the start of the flight. Finally, the implementation of a multi-aircraft separation strategy is presented. In this formulation, the trajectories of several aircraft are optimized simultaneously within the same optimization problem, while maintaining separation between them. The complexity of time-aligning aircraft coordinates for a fair comparison is addressed from an innovative perspective.
In conclusion, the different aircraft separation strategies are compared and, surprisingly, the best results of each strategy are quite similar. In fact, the increase in operational cost that the different strategies present (compared with the individually optimal trajectory) is negligible and always better than the current air traffic control separation paradigm.
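The optimization machinery described in the abstract, an optimal control problem transcribed into a nonlinear program (NLP) via direct collocation, can be sketched on a toy problem. The example below is a minimal illustration, not the thesis's system: it steers a hypothetical double-integrator "aircraft" from rest to rest over a unit distance using trapezoidal collocation, with SciPy's SLSQP solver standing in for a dedicated NLP solver (the thesis couples C++ with GAMS); all dynamics and numbers are invented for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

N = 20          # number of collocation segments (toy discretization)
T = 1.0         # fixed final time
h = T / N

# Decision vector z = [x_0..x_N, v_0..v_N, u_0..u_N]
def unpack(z):
    x = z[:N + 1]
    v = z[N + 1:2 * (N + 1)]
    u = z[2 * (N + 1):]
    return x, v, u

def objective(z):
    # control effort as a stand-in for operational cost
    _, _, u = unpack(z)
    return h * np.sum(u**2)

def defects(z):
    # trapezoidal collocation: each state step must match the dynamics
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]   # boundary conditions
    return np.concatenate([dx, dv, bc])

z0 = np.zeros(3 * (N + 1))
res = minimize(objective, z0, method='SLSQP',
               constraints={'type': 'eq', 'fun': defects})
x, v, u = unpack(res.x)
print(res.success, round(x[-1], 3))
```

The continuous-time optimum for this problem has cost 12 (with u(t) = 6 − 12t), so the discrete solution should land near that value; the same transcription pattern scales to full aircraft dynamics and separation constraints.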
14

Flake, Darl D. II. "Separation of Points and Interval Estimation in Mixed Dose-Response Curves with Selective Component Labeling." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/4697.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This dissertation develops, applies, and investigates new methods to improve the analysis of logistic regression mixture models. An interesting dose-response experiment was previously carried out on a mixed population, in which the class membership of only a subset of subjects (survivors) was subsequently labeled. In early analyses of the dataset, challenges with separation of points and asymmetric confidence intervals were encountered. This dissertation extends the previous analyses by characterizing the model in terms of a mixture of penalized (Firth) logistic regressions and by developing methods for constructing profile likelihood-based confidence and inverse intervals, and confidence bands, in the context of such a model. The proposed methods are applied to the motivating dataset and another related dataset, resulting in improved inference on the model parameters. Additionally, a simulation experiment is carried out to further illustrate the benefits of the proposed methods and to begin to explore better designs for future studies. The penalized model is shown to be less biased than the traditional model, and profile likelihood-based intervals are shown to have better coverage probability than Wald-type intervals. Some limitations, extensions, and alternatives to the proposed methods are discussed.
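As a rough illustration of why the Firth penalty helps with separation of points: on completely separated data, ordinary maximum-likelihood estimates diverge, while Jeffreys-prior (Firth) penalized estimates stay finite. The sketch below is a minimal Newton-iteration implementation on a tiny hypothetical dataset, with no mixture structure; it is not the dissertation's model, only the penalization idea.

```python
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Firth-penalized logistic regression (minimal sketch).

    Newton iterations on the modified score
    U*(b) = X'(y - p) + X' h (1/2 - p), where h holds the diagonal
    of the weighted hat matrix W^{1/2} X (X'WX)^{-1} X' W^{1/2}."""
    n, k = X.shape
    beta = np.zeros(k)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = p * (1 - p)
        XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
        # leverages h_i = W_i * x_i' (X'WX)^{-1} x_i
        h = np.einsum('ij,jk,ik->i', X, XtWX_inv, X) * W
        score = X.T @ (y - p + h * (0.5 - p))
        step = XtWX_inv @ score
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Completely separated toy data: ordinary ML estimates diverge,
# but the Firth estimate remains finite.
X = np.column_stack([np.ones(4), [-2.0, -1.0, 1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
beta = firth_logistic(X, y)
print(np.round(beta, 3))
```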
15

Agrawal, Gaurav. "Systematic optimization and experimental validation of simulated moving bed chromatography systems for ternary separations and equilibrium limited reactions." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Simulated Moving Bed (SMB) chromatography is a separation process where the components are separated due to their varying affinity towards the stationary phase. Over the past decade, many modifications have been proposed in SMB chromatography in order to effectively separate a binary mixture. However, the separation of multi-component mixtures using SMB is still one of the major challenges. Although many different strategies have been proposed, previous studies have rarely performed comprehensive investigations for finding the best ternary separation strategy from various possible alternatives. Furthermore, the concept of combining reaction with SMB has been proposed in the past for driving equilibrium-limited reactions to completion by separating the products from the reaction zone. However, the design of such systems is still challenging due to the complex dynamics of simultaneous reaction and adsorption. The first objective of the study is to find the best ternary separation strategy among various alternative SMB designs. The performance of several ternary SMB operating schemes proposed in the literature is compared in terms of the optimal productivity obtained and the amount of solvent consumed. A multi-objective optimization problem is formulated which maximizes the SMB productivity and the purity of the intermediate-eluting component at the same time. Furthermore, the concept of optimizing a superstructure formulation is proposed, where numerous SMB operating schemes can be incorporated into a single formulation. This superstructure approach has the potential to find more advantageous operating schemes than the existing operating schemes in the literature. The second objective of the study is to demonstrate the Generalized Full Cycle (GFC) operation experimentally for the first time, and to compare its performance to the JO process. A Semba Octave™ chromatography system is used as an experimental SMB unit to implement the optimal operating schemes.
In addition, a simultaneous optimization and model correction (SOMC) scheme is used to resolve the model mismatch in a systematic way. We also show a systematic comparison of both JO and GFC operations by presenting a Pareto plot of the productivity achieved against the desired purity of the intermediate-eluting component experimentally. The third objective of the study is to develop a simulated moving bed reactor (SMBR) process for an industrial-scale application, and to demonstrate the potential of the ModiCon operation for improving the performance of the SMBR compared to the conventional operating strategy. A novel industrial application involving the esterification of acetic acid and 1-methoxy-2-propanol is considered to produce propylene glycol methyl ether acetate (PMA) as the product. A multi-objective optimization study is presented to find the best reactive separation strategy for the production of the PMA product. We also present a Pareto plot that compares the ModiCon operation, which allows periodic changes of the feed composition, and the conventional operating strategy in terms of the optimal production rate of PMA that can be achieved against the desired conversion of acetic acid.
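The multi-objective (productivity versus purity) formulation can be illustrated with a weighted-sum scalarization sweep, a standard way to trace a Pareto front. The surrogate functions below are invented stand-ins for the SMB model (the real objectives require a full chromatography simulation), so this only sketches the trade-off mechanics:

```python
import numpy as np

# Hypothetical surrogates: pushing the feed rate q raises throughput
# but degrades the purity of the intermediate-eluting component.
def productivity(q):
    return q

def purity(q):
    return 1.0 / (1.0 + q**2)

q_grid = np.linspace(0.0, 5.0, 2001)
pareto = []
for w in np.linspace(0.05, 0.95, 10):
    # weighted-sum scalarization of the two (maximized) objectives
    vals = w * productivity(q_grid) + (1 - w) * purity(q_grid)
    q = q_grid[np.argmax(vals)]
    pareto.append((productivity(q), purity(q)))

for p, s in pareto:
    print(round(p, 3), round(s, 3))
```

Sweeping the weight w yields one Pareto-optimal design per value; plotting the resulting pairs gives the kind of productivity-purity Pareto plot the abstract refers to.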
16

Nguyen, Linh Trung. "Estimation and separation of linear frequency- modulated signals in wireless communications using time - frequency signal processing." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15984/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Signal processing has been playing a key role in providing solutions to key problems encountered in communications, in general, and in wireless communications, in particular. Time-Frequency Signal Processing (TFSP) provides effective tools for analyzing nonstationary signals, where the frequency content of signals varies in time, as well as for analyzing linear time-varying systems. This research aimed at exploiting the advantages of TFSP, in dealing with nonstationary signals, in the fundamental issues of signal processing, namely signal estimation and signal separation. In particular, it has investigated the problems of (i) the Instantaneous Frequency (IF) estimation of Linear Frequency-Modulated (LFM) signals corrupted by complex-valued zero-mean Multiplicative Noise (MN), and (ii) the Underdetermined Blind Source Separation (UBSS) of LFM signals, while focusing on the fast-growing area of Wireless Communications (WCom). A common problem in the issue of signal estimation is the estimation of the frequency of Frequency-Modulated signals, which are seen in many engineering and real-life applications. Accurate frequency estimation leads to accurate recovery of the true information. In some applications, random amplitude modulation shows up when the medium is dispersive and/or when the assumption of a point target is not valid; the original signal is considered to be corrupted by an MN process, thus seriously affecting the recovery of the information-bearing frequency. The IF estimation of nonstationary signals corrupted by complex-valued zero-mean MN was investigated in this research. We have proposed a Second-Order Statistics approach, rather than a Higher-Order Statistics approach, for IF estimation using Time-Frequency Distributions (TFDs). The main assumption was that the autocorrelation function of the MN is real-valued but not necessarily positive (i.e. the spectrum of the MN is symmetric but does not necessarily have its highest peak at zero frequency).
The estimation performance was analyzed in terms of bias and variance, and compared between four different TFDs: the Wigner-Ville Distribution, the Spectrogram, the Choi-Williams Distribution and the Modified B Distribution. To further improve the estimation, we proposed to use the Multiple Signal Classification algorithm and showed its better performance. It was shown that the Modified B Distribution performance was the best for Signal-to-Noise Ratios less than 10 dB. In the issue of signal separation, a new research direction called Blind Source Separation (BSS) has emerged over the last decade. BSS is a fundamental technique in array signal processing aiming at recovering unobserved signals or sources from observed mixtures, exploiting only the assumption of mutual independence between the signals. The term "blind" indicates that neither the structure of the mixtures nor the source signals are known to the receivers. Applications of BSS are seen in, for example, radar and sonar, communications, speech processing, and biomedical signal processing. In the case of nonstationary signals, a TF structure forcing approach was introduced by Belouchrani and Amin by defining the Spatial Time-Frequency Distribution (STFD), which combines both TF diversity and spatial diversity. The benefit of the STFD in an environment of nonstationary signals is the direct exploitation of the information brought by the nonstationarity of the signals. A drawback of most BSS algorithms is that they fail to separate sources in situations where there are more sources than sensors, referred to as UBSS. The UBSS of nonstationary signals was investigated in this research. We have presented a new approach for blind separation of nonstationary sources using their TFDs. The separation algorithm is based on a vector clustering procedure that estimates the source TFDs by grouping together the TF points corresponding to "closely spaced" spatial directions.
Simulations illustrate the performance of the proposed method for the underdetermined blind separation of FM signals. The method developed in this research represents a new research direction for solving the UBSS problem. The successful results obtained in the research development of the above two problems have led to the conclusion that TFSP is useful for WCom. Future research directions were also proposed.
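The idea of grouping time-frequency points by spatial direction can be sketched with a toy underdetermined mixture (two sensors, three sources). The snippet below uses fixed-frequency tones as stand-ins for FM signals and ordinary k-means in place of the thesis's vector clustering procedure, so it only illustrates the principle: at high-energy TF points one source dominates, and the sensor ratio there reveals its spatial signature.

```python
import numpy as np
from scipy.signal import stft
from scipy.cluster.vq import kmeans2

fs = 8000
t = np.arange(8000) / fs

# Three narrowband sources (stand-ins for FM signals), two sensors:
# an underdetermined 2x3 instantaneous mixture.
s = np.array([np.sin(2 * np.pi * 440 * t),
              np.sin(2 * np.pi * 880 * t),
              np.sin(2 * np.pi * 1760 * t)])
A = np.array([[1.0, 1.0, 1.0],
              [0.5, 1.5, 3.0]])        # distinct spatial signatures
x = A @ s

_, _, X1 = stft(x[0], fs=fs, nperseg=256)
_, _, X2 = stft(x[1], fs=fs, nperseg=256)

# Keep only high-energy TF points, where one source typically dominates;
# the sensor ratio there approximates that source's spatial direction.
mask = np.abs(X1) > 0.1 * np.abs(X1).max()
ratio = np.real(X2[mask] / X1[mask])

centers, _ = kmeans2(ratio[:, None], 3, minit='++', seed=1)
print(np.sort(centers.ravel()).round(2))
```

The recovered cluster centers approximate the second row of A (0.5, 1.5, 3.0); masking each cluster's TF points and inverting the STFT would then separate the sources, which is the step the thesis develops for genuinely nonstationary FM signals.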
17

Hu, Junqi. "Estimation of Runway Throughput with Reduced Wake Vortex Separation, Technical Buffer and Runway Occupancy Time Considerations." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/85047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis evaluates the potential recovery of runway throughput under Wake Turbulence Re-categorization (RECAT) Phase II and Time-Based Separation (TBS) with a Runway Occupancy Time (ROT) constraint, compared with RECAT Phase I. This research uses aircraft performance parameters (runway occupancy time, approach speed, etc.) from the Airport Surface Detection Equipment, Model X (ASDE-X) data set. The analysis uses a modified version of the Quick Response Runway Capacity Model (RUNSIM). The main contributions of the study are: 1) identifying the technical buffer between in-trail arrivals and regenerating it in RUNSIM; 2) estimating the percentage of arrival pairs that have wake mitigation separation times in excess of ROT; 3) developing an additional in-trail arrival separation rule based on ROT; 4) measuring the risk of potential go-arounds with and without the additional 95th percentile ROT separation rule; 5) generating a sample equivalent time-based RECAT II. The study results show that the distributions of technical buffers have significant differences for different in-trail groups and a strong connection to airport elevation. This is critical for estimating runway capacities and safety issues, especially when advanced wake mitigation separation rules are applied. Also, with decreasing wake separations, ROT will become a limiting factor in runway throughput in the future. This study shows that by considering a 95th percentile ROT constraint, a single runway can still obtain 4 or 5 more arrivals per hour under RECAT II while keeping the same level of potential go-arounds compared with current operating rules (RECAT I). TBS rules seem to benefit more under strong wind conditions compared to RECAT I and RECAT II, and TBS rules need to be tailored to every airport.
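The interaction the thesis studies, wake-based separation plus a technical buffer, floored by the leader's runway occupancy time, can be sketched with a small Monte Carlo model. All distributions and numbers below are hypothetical placeholders, not RECAT I/II or ASDE-X values; the point is only that shrinking wake separation raises throughput while making ROT the binding constraint more often:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

def throughput(wake_mean):
    # Hypothetical illustrative distributions, not actual RECAT values:
    wake_sep = rng.normal(wake_mean, 10, n)   # wake-based in-trail separation (s)
    buffer = rng.gamma(4.0, 3.0, n)           # technical buffer added on top (s)
    rot = rng.normal(55, 8, n)                # leader runway occupancy time (s)
    # The trailer cannot cross the threshold before the leader vacates:
    gap = np.maximum(wake_sep + buffer, rot)
    return 3600 / gap.mean(), np.mean(rot > wake_sep + buffer)

cur, cur_bind = throughput(90)   # legacy-like wake separation
red, red_bind = throughput(60)   # reduced wake separation
print(round(cur, 1), round(red, 1), round(cur_bind, 3), round(red_bind, 3))
```

Under the reduced separation, hourly throughput rises, but the fraction of pairs for which ROT (not wake) limits the gap grows from essentially zero to a noticeable share, which is exactly why the thesis adds a ROT-based separation rule.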
Master of Science
18

Nguyen, Linh-Trung. "Estimation and separation of linear frequency- modulated signals in wireless communications using time - frequency signal processing." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15984/1/Nguyen_Linh-Trung_Thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Signal processing has been playing a key role in providing solutions to key problems encountered in communications, in general, and in wireless communications, in particular. Time-Frequency Signal Processing (TFSP) provides effective tools for analyzing nonstationary signals, where the frequency content of signals varies in time, as well as for analyzing linear time-varying systems. This research aimed at exploiting the advantages of TFSP, in dealing with nonstationary signals, in the fundamental issues of signal processing, namely signal estimation and signal separation. In particular, it has investigated the problems of (i) the Instantaneous Frequency (IF) estimation of Linear Frequency-Modulated (LFM) signals corrupted by complex-valued zero-mean Multiplicative Noise (MN), and (ii) the Underdetermined Blind Source Separation (UBSS) of LFM signals, while focusing on the fast-growing area of Wireless Communications (WCom). A common problem in the issue of signal estimation is the estimation of the frequency of Frequency-Modulated signals, which are seen in many engineering and real-life applications. Accurate frequency estimation leads to accurate recovery of the true information. In some applications, random amplitude modulation shows up when the medium is dispersive and/or when the assumption of a point target is not valid; the original signal is considered to be corrupted by an MN process, thus seriously affecting the recovery of the information-bearing frequency. The IF estimation of nonstationary signals corrupted by complex-valued zero-mean MN was investigated in this research. We have proposed a Second-Order Statistics approach, rather than a Higher-Order Statistics approach, for IF estimation using Time-Frequency Distributions (TFDs). The main assumption was that the autocorrelation function of the MN is real-valued but not necessarily positive (i.e. the spectrum of the MN is symmetric but does not necessarily have its highest peak at zero frequency).
The estimation performance was analyzed in terms of bias and variance, and compared between four different TFDs: the Wigner-Ville Distribution, the Spectrogram, the Choi-Williams Distribution and the Modified B Distribution. To further improve the estimation, we proposed to use the Multiple Signal Classification algorithm and showed its better performance. It was shown that the Modified B Distribution performance was the best for Signal-to-Noise Ratios less than 10 dB. In the issue of signal separation, a new research direction called Blind Source Separation (BSS) has emerged over the last decade. BSS is a fundamental technique in array signal processing aiming at recovering unobserved signals or sources from observed mixtures, exploiting only the assumption of mutual independence between the signals. The term "blind" indicates that neither the structure of the mixtures nor the source signals are known to the receivers. Applications of BSS are seen in, for example, radar and sonar, communications, speech processing, and biomedical signal processing. In the case of nonstationary signals, a TF structure forcing approach was introduced by Belouchrani and Amin by defining the Spatial Time-Frequency Distribution (STFD), which combines both TF diversity and spatial diversity. The benefit of the STFD in an environment of nonstationary signals is the direct exploitation of the information brought by the nonstationarity of the signals. A drawback of most BSS algorithms is that they fail to separate sources in situations where there are more sources than sensors, referred to as UBSS. The UBSS of nonstationary signals was investigated in this research. We have presented a new approach for blind separation of nonstationary sources using their TFDs. The separation algorithm is based on a vector clustering procedure that estimates the source TFDs by grouping together the TF points corresponding to "closely spaced" spatial directions.
Simulations illustrate the performance of the proposed method for the underdetermined blind separation of FM signals. The method developed in this research represents a new research direction for solving the UBSS problem. The successful results obtained in the research development of the above two problems have led to the conclusion that TFSP is useful for WCom. Future research directions were also proposed.
19

Brodin, Henrik. "CCASENSE: Canonical Correlation Analysis for Estimation of Sensitivity Maps for Fast MRI." Thesis, Linköping University, Department of Biomedical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7953.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

Magnetic Resonance Imaging is an established technology for both imaging and functional studies in clinical and research environments. The field is still very research intense. Two major research areas are acquisition time and signal quality. The last decade has provided tools for more efficient possibilities of trading these factors against each other through parallel imaging.

In this thesis one parallel imaging method, Sensitivity Encoding for fast MRI (SENSE), is examined. An alternative solution, CCASENSE, is developed. CCASENSE reduces the acquisition time by estimating the sensitivity maps required for SENSE to work instead of running a reference scan. The estimation process is done by Blind Source Separation through Canonical Correlation Analysis. It is shown that CCASENSE appears to estimate the sensitivity maps better than ICASENSE, which is a similar algorithm.
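The core numerical step behind CCASENSE-style estimation, extracting maximally correlated components from two data sets, can be sketched with a small canonical correlation routine. This is a generic QR/SVD implementation run on synthetic data with one shared latent signal, not the thesis's MRI pipeline:

```python
import numpy as np

def cca(X, Y):
    """Canonical correlations via SVD of the whitened cross-covariance.

    Minimal sketch (no regularization): column-center both data sets,
    orthonormalize with QR, and take singular values of Qx' Qy."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return s[:min(X.shape[1], Y.shape[1])]

rng = np.random.default_rng(0)
z = rng.normal(size=(2000, 1))   # shared latent signal
X = np.hstack([z + 0.1 * rng.normal(size=(2000, 1)),
               rng.normal(size=(2000, 2))])
Y = np.hstack([z + 0.1 * rng.normal(size=(2000, 1)),
               rng.normal(size=(2000, 2))])
s = cca(X, Y)
print(np.round(s, 2))
```

The first canonical correlation is close to 1 (the shared signal), while the remaining ones stay near 0; in the CCASENSE setting, the correlated components across coil images drive the sensitivity-map estimate.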

20

Oppizzi, Filippo. "Non-Gaussianity in CMB analysis: bispectrum estimation and foreground subtraction." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3425298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The focus of this work is the development of statistical and numerical methods for the study of non-Gaussian and/or anisotropic features in cosmological surveys of the microwave sky. We focus on two very different types of non-Gaussian (NG) signals. The former is primordial non-Gaussianity (PNG), generated in the very Early Universe during the inflationary expansion stage. In this case the aim of our study will be that of exploiting the NG component in order to extract useful cosmological information. The latter is non-Gaussianity generated by astrophysical foreground contamination. In this case, the goal is instead that of using non-Gaussianity as a tool to help in removing these spurious, non-cosmological components (of course foregrounds themselves contain relevant astrophysical information, but the focus in this thesis is on Cosmology, therefore foregrounds are regarded here only as a contaminant). Considerable efforts have been put so far in the search for deviations from Gaussianity in the CMB anisotropies, which are expected to provide invaluable information about the Inflationary epoch. Inflation is in fact expected to produce an isotropic and nearly-Gaussian fluctuation field. However, a large amount of models also predicts very small, but highly model-dependent NG signatures. This is the main reason behind the large interest in primordial NG studies. Of course, the pursuit of primordial non-Gaussianity must rely on beyond-power-spectrum statistics. It turns out that the most important higher-order correlator produced by interactions during Inflation is the three-point function or, more precisely, its Fourier-space counterpart, called the bispectrum. To overcome the issue of computing the full bispectrum of the observed field, which would require a prohibitive amount of computational time, the search for PNG features is carried out by fitting theoretically motivated bispectrum templates to the data.
Among those, one can find bispectrum templates with a scale-dependent (SD) bispectrum amplitude. Such templates have actually received little attention so far in the literature, especially as far as NG statistical estimation and data analysis are concerned. This is why a significant part of this thesis will be devoted to the development and application of efficient statistical pipelines for CMB scale-dependent bispectra estimation. We present here the results of the estimation of several primordial running bispectra obtained from the WMAP 9-year and Planck data sets. The second part of this thesis deals instead, as mentioned in the beginning, with the component separation problem, i.e. the identification of the different sources that contribute to the microwave sky brightness. Foreground emission produces several, potentially large, non-Gaussian signatures that can in principle be used to identify and remove the spurious components from the microwave sky maps. Our focus will be on the development of a foreground-cleaning technique relying on the hypothesis that, if the data are represented in a proper basis, the foreground signal is sparse. Sparseness implies that the majority of the signal is concentrated in a few basis elements, which can be used to fit the corresponding component with a thresholding algorithm. We verify that the spherical needlet frame has the right properties to disentangle the coherent foreground emission from the isotropic stochastic CMB signal. We will make clear in the following how sparseness in needlet space is in several ways linked to the coherence, anisotropy and non-Gaussianity of the foreground components. The main advantages of our needlet thresholding technique are that it does not require multi-frequency information and that it can be used in combination with other methods. It can therefore represent a valuable tool in experiments with limited frequency coverage, such as current ground-based CMB surveys.
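The sparsity-plus-thresholding idea can be illustrated in one dimension, with a DCT standing in for the spherical needlet frame (an assumption made purely for brevity; needlets are localized in both space and frequency, which a global DCT is not). A coherent "foreground" that is sparse in the transform domain is recovered by hard-thresholding the coefficients and subtracted from a white "CMB":

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
n = 4096
cmb = rng.normal(0.0, 1.0, n)        # isotropic stochastic component
t = np.arange(n) / n
# Coherent "foreground": sparse in the DCT domain (three cosine modes)
fg = 8 * np.cos(2 * np.pi * 3 * t) + 5 * np.cos(2 * np.pi * 7 * t) \
     + 4 * np.cos(2 * np.pi * 12 * t)
data = cmb + fg

c = dct(data, norm='ortho')
# Robust noise-level estimate from the median absolute coefficient (MAD)
sigma = np.median(np.abs(c)) / 0.6745
# Hard-threshold: keep only coefficients the Gaussian part cannot explain
fg_hat = idct(np.where(np.abs(c) > 5 * sigma, c, 0.0), norm='ortho')
cleaned = data - fg_hat

corr = np.corrcoef(cleaned, cmb)[0, 1]
print(round(corr, 3))
```

The cleaned map correlates almost perfectly with the underlying stochastic component, whereas the raw data do not, which is the single-map cleaning property that makes the needlet version attractive for limited-frequency-coverage experiments.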
The central theme of this thesis is the development of statistical and numerical methods for the study of non-Gaussian and/or anisotropic features in experiments aimed at measuring the Cosmic Microwave Background (CMB). We focus on two very different types of non-Gaussian signals. The first is primordial non-Gaussianity, which is hypothesized to be generated in the early Universe during the inflationary epoch; the study of this type of non-Gaussianity makes it possible to obtain valuable cosmological information. The second is the non-Gaussianity generated by astrophysical foreground contamination. In this case our objective is instead to use non-Gaussianity as a tracer to identify and remove the spurious, non-cosmological components (foreground emission obviously contains relevant astrophysical information, but the subject of this thesis is cosmology, so it will be considered only in view of its contaminating effect on experiments that aim to reconstruct the CMB). Considerable effort has been spent so far in the attempt to measure small deviations from Gaussianity in the CMB anisotropies, which would provide invaluable information on the epoch of Inflation. Theory predicts that Inflation produces an isotropic and nearly Gaussian fluctuation field. Nevertheless, a large number of models also predict the emergence of small non-Gaussian components, whose characteristics depend strongly on the underlying inflationary model. This is the main reason for the great interest of the cosmological community in measuring non-Gaussianity. Naturally, the search for primordial non-Gaussianity requires statistics of higher order than the power spectrum.
Most of the non-Gaussian signal produced during Inflation is expected to appear in the form of three-point correlations, which can be measured in harmonic space by the bispectrum. Unfortunately, owing to the high computational time required, it is not possible to compute the bispectrum directly from the data. The search for non-Gaussian signals therefore consists in measuring the correlation between the bispectrum of the data and given theoretical templates that reproduce the signal predicted by specific inflationary models. Many inflationary theories produce higher-order correlations whose bispectrum has a scale-dependent amplitude. This is why a significant part of this thesis is devoted to the development of statistical techniques for estimating bispectra with an explicit scale dependence in CMB observations. The results presented in this thesis are obtained from the observations of the WMAP and Planck satellites. The second part of this work concerns instead the problem of identifying the different sources that contribute to the brightness of the sky at microwave frequencies. Foreground emission potentially produces large deviations from Gaussianity which, in principle, can be used to identify and remove the spurious components from microwave sky maps. Our objective is the development of a foreground-cleaning technique based on the hypothesis that, if the data are represented in the appropriate basis, the foreground emission signal appears sparse. Sparsity implies that most of the signal is concentrated in a few basis elements, which can be used to reconstruct the corresponding component using a technique called thresholding. We have verified that the spherical needlet frame has the ideal properties to separate the coherent foreground signal from the isotropic, stochastic CMB signal.
The main advantages of our needlet thresholding technique are, first, that it does not require observations at multiple frequencies and, moreover, that it can be used in combination with other methods. It can therefore be a valuable tool in experiments that observe the sky over a limited frequency range, such as current ground-based CMB experiments.
21

Nauclér, Peter. "Estimation and Control of Resonant Systems with Stochastic Disturbances." Doctoral thesis, Uppsala University, Department of Information Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-8688.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

The presence of vibration is an important problem in many engineering applications. Various passive techniques have traditionally been used in order to reduce waves and vibrations, and their harmful effects. Passive techniques are, however, difficult to apply in the low frequency region. In addition, the use of passive techniques often involve adding mass to the system, which is undesirable in many applications.

As an alternative, active techniques can be used to manipulate system dynamics and to control the propagation of waves and vibrations. This thesis deals with modeling, estimation and active control of systems that have resonant dynamics. The systems are exposed to stochastic disturbances. Some of them excite the system and generate vibrational responses and other corrupt measured signals.

Feedback control of a beam with attached piezoelectrical elements is studied. A detailed modeling approach is described and system identification techniques are employed for model order reduction. Disturbance attenuation of a non-measured variable shows to be difficult. This issue is further analyzed and the problems are shown to depend on fundamental design limitations.

Feedforward control of traveling waves is also considered. A device with properties analogous to those of an electrical diode is introduced. An 'ideal' feedforward controller based on the mechanical properties of the system is derived. It has, however, poor noise rejection properties and therefore needs to be modified. A number of feedforward controllers that treat the measurement noise in a statistically sound way are derived.

Separation of overlapping traveling waves is another topic under investigation. This operation is also sensitive to measurement noise. The problem is thoroughly analyzed and Kalman filtering techniques are employed to derive wave estimators with high statistical performance.

Finally, a nonlinear regression problem with close connections to unbalance estimation of rotating machinery is treated. Different estimation techniques are derived and analyzed with respect to their statistical accuracy. The estimators are evaluated using the example of separator balancing.
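The Kalman-filtering machinery invoked above can be illustrated in miniature. The sketch below is not taken from the thesis: it runs the scalar Kalman recursion (predict/update) on noisy measurements of a constant state, with made-up noise variances and signal values, purely to show the structure of such an estimator.

```python
import random

def kalman_filter(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a (nearly) constant state observed in noise.

    q: process-noise variance, r: measurement-noise variance (assumed known).
    Returns the sequence of state estimates.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: random-walk state model inflates the error covariance.
        p = p + q
        # Update: blend prediction and measurement using the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(0)
true_value = 1.5
noisy = [true_value + random.gauss(0.0, 0.5) for _ in range(500)]
est = kalman_filter(noisy)
```

With these (arbitrary) variances the filter behaves like a long-memory average, so the final estimate lands close to the true value while the per-sample noise is suppressed.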

22

Oprisan, Ana. "Fluctuations, Phase Separation and Wetting Films near Liquid-Gas Critical Point." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Gravity on Earth limits the study of the properties of pure fluids near the critical point because they become stratified under their own weight. Near the critical point, all thermodynamic properties either diverge or converge, and heating and cooling cause instabilities of the convective flow as a consequence of the divergence of the expansibility. In order to study boiling, fluctuation and phase separation processes near the critical point of pure fluids without the influence of the Earth's gravity, a number of experiments were performed in the weightlessness of the Mir space station. The experimental setup, called the ALICE II instrument, was designed to suppress sedimentation and buoyancy-driven flow. Another set of experiments was carried out on Earth using a carefully density-matched system of deuterated methanol-cyclohexane to observe critical fluctuations directly. The experiments performed on board the Mir space station studied boiling and wetting film dynamics during evaporation near the critical point of two pure fluids (sulfur hexafluoride and carbon dioxide) using a defocused grid method. The specially designed cell containing the pure fluid was heated and, as a result, a low contrast line appeared on the wetting film that corresponded to a sharp change in the thickness of the film. A large mechanical response was observed in response to the cell heating, and we present quantitative results about the receding contact lines. It is found that the vapor recoil force is responsible for the receding contact line. Local density fluctuations were observed by illuminating a cylindrical cell filled with the pure fluid near its liquid-gas critical point and recorded using a microscope and a video recorder. Microscopic fluctuations were analyzed both in sulfur hexafluoride and in a binary mixture of methanol-cyclohexane.
Using image processing techniques, we were able to estimate the properties of the fluid from the recorded images showing fluctuations of the transmitted and scattered light. We found that the histogram of an image can be fitted with a Gaussian and, by determining its width, we were able to estimate the position of the critical point. The characteristic length of the fluctuations, corresponding to the maximum of the radial average of the power spectrum, was also estimated. The power-law growth for the early stage of the phase separation was determined for two different temperature quenches in the pure fluid, and these results are in agreement with other experimental results and computational simulations.
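The histogram-width idea above can be sketched in a few lines. This is a hypothetical illustration, not the authors' processing pipeline: a method-of-moments fit (sample mean and standard deviation) gives the width of the best-fitting Gaussian, and synthetic pixel intensities with weak versus strong fluctuations stand in for the recorded images.

```python
import random
import statistics

def histogram_width(pixels):
    """Width of the Gaussian fitted to the intensity histogram by the
    method of moments: the best-fit Gaussian has the sample std deviation."""
    return statistics.pstdev(pixels)

random.seed(1)
# Synthetic 'images': transmitted-light intensities with weak and strong
# local density fluctuations (made-up numbers, not experimental data).
far_from_tc = [128 + random.gauss(0, 4) for _ in range(10_000)]
near_tc = [128 + random.gauss(0, 12) for _ in range(10_000)]

w_far, w_near = histogram_width(far_from_tc), histogram_width(near_tc)
```

The growth of the histogram width as fluctuations strengthen is the signature used to locate the critical point.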
23

CHAUHAN, SHASHANK. "Parameter Estimation and Signal Processing Techniques for Operational Modal Analysis." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1204829186.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Degottex, Gilles. "Glottal source and vocal-tract separation : estimation of glottal parameters, voice transformation and synthesis using a glottal model." Paris 6, 2010. http://www.theses.fr/2010PA066399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study addresses the problem of inverting a voice production model, given a speech recording, to obtain a representation of the sound source generated at the level of the glottis, the glottal source, as well as a representation of the resonances and anti-resonances created by the cavities of the vocal tract. This separation of the elements composing the voice makes it possible to manipulate independently the characteristics of the source and the timbre of the resonances. We assume that the glottal source is a mixed-phase signal and that the impulse response of the vocal-tract filter is a minimum-phase signal. Then, considering these properties, different methods are proposed to estimate the parameters of a glottal model by minimizing the mean squared phase of the convolutive residual between an observed speech spectrum and its model. A final method is described in which a single shape parameter is the solution of a quasi closed-form expression of the observed spectrum. These methods are evaluated and compared with state-of-the-art methods using synthetic and electroglottographic signals. We also propose an analysis/synthesis procedure that estimates the vocal-tract filter using an observed spectrum and its estimated source. Preference tests have been conducted, and their results are presented in this study to compare the described procedure with other existing methods.
25

Kempe, Henrik. "Advances in Separation Science: Molecular Imprinting: Development of Spherical Beads and Optimization of the Formulation by Chemometrics." Doctoral thesis, Stockholm University, Department of Analytical Chemistry, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-6582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

An intrinsic mathematical model for simulation of fixed bed chromatography was demonstrated and compared to more simplified models. The former model was shown to describe variations in the physical, kinetic, and operating parameters better than the latter ones. This resulted in a more reliable prediction of the chromatography process as well as a better understanding of the underlying mechanisms responsible for the separation. A procedure based on frontal liquid chromatography and a detailed mathematical model was developed to determine effective diffusion coefficients of proteins in chromatographic gels. The procedure was applied to lysozyme, bovine serum albumin, and immunoglobulin γ in Sepharose™ CL-4B. The effective diffusion coefficients were comparable to those determined by other methods.

Molecularly imprinted polymers (MIPs) are traditionally prepared as irregular particles by grinding monoliths. In this thesis, a suspension polymerization providing spherical MIP beads is presented. Droplets of pre-polymerization solution were formed in mineral oil by vigorous stirring, with no need for stabilizers. The droplets were transformed into solid spherical beads by free-radical polymerization. The method is fast, and the performance of the beads is comparable to that of irregular particles. Optimizing a MIP formulation requires a large number of experiments, since the number of possible combinations of the components is huge. To facilitate the optimization, chemometrics was applied. The amounts of monomer, cross-linker, and porogen were chosen as the factors in the model. Multivariate data analysis indicated the influence of the factors on the binding, and an optimized MIP composition was identified. The combined use of the suspension polymerization method to produce spherical beads with the application of chemometrics was shown in this thesis to drastically reduce the number of experiments and the time needed to design and optimize a new MIP.

26

Nessel, James Aaron. "Estimation of Atmospheric Phase Scintillation Via Decorrelation of Water Vapor Radiometer Signals." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1447701180.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Hu, Ke. "Speech Segregation in Background Noise and Competing Speech." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339018952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Grosfils, Valérie. "Modelling and parametric estimation of simulated moving bed chromatographic processes (SMB)." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Simulated moving bed chromatography (the SMB process) is a well-mastered separation technique in certain traditional sectors such as the separation of sugars and hydrocarbons. However, its application to the separation of high added-value compounds in the pharmaceutical industry raises new problems related to the nature of the products to be separated, as well as to stricter requirements in terms of purity and production quantities. The main open problems include the determination of optimal operating conditions, the design of robust control structures, and the development of a supervision system allowing the detection and localization of operating degradations. These tasks require a mathematical model of the process as well as a parameter estimation methodology. The study and development of mathematical models for this type of plant, together with the estimation of the model parameters from experimental measurements, are precisely the subject of this doctoral work.

The mathematical models describing SMB processes consist of the mass balances of the compounds to be separated. They are distributed-parameter models (described by partial differential equations). Some exhibit hybrid dynamic behavior (i.e., involving continuous-time dynamics and discrete events). A few models have been developed in the literature. The task is to select those that appear most interesting in terms of computation time, efficiency, and the number of parameters to be determined. In addition, new model structures are also proposed in order to improve the accuracy/computation-time trade-off.

These models generally contain some unknown parameters. They consist either of physical quantities that are poorly defined from the basic data, or of fictitious parameters introduced as a result of simplifying assumptions and lumping together a set of phenomena. The goal is to develop a systematic procedure for estimating these parameters that requires as few experiments as possible and a low computation time. The parameter values are estimated from real measurements by minimizing a cost function that measures the discrepancy between the quantities predicted by the model and the measurements. The sensitivity of the model to parameter deviations, as well as the identifiability of the model (the possibility of determining the model parameters uniquely) from measurements under normal operation, are studied. This provides an additional comparison criterion between the different models and also makes it possible to determine the optimal experimental conditions (choice of the type of experiment, choice of the input signals, choice of the number and position of the measurement points, etc.) under which the measurements used for parameter estimation should be collected. In addition, the parameter estimation errors and the simulation errors are assessed. The chosen procedure is then validated on experimental data collected on a pilot process at the Max-Planck-Institut für Dynamik komplexer technischer Systeme (Magdeburg, Germany).
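The cost-minimization principle behind such parameter estimation can be sketched on a toy example. The Langmuir isotherm, the parameter values, and the grid search below are illustrative assumptions only, not the chromatographic models or algorithms used in the thesis:

```python
import random

def langmuir(c, q_sat, k):
    """Langmuir adsorption isotherm: q = q_sat * k * c / (1 + k * c)."""
    return q_sat * k * c / (1.0 + k * c)

def fit_k(conc, measured, q_sat, k_grid):
    """Estimate k by minimizing the sum-of-squares cost over a grid,
    mimicking the discrepancy-minimization step of parameter estimation."""
    def cost(k):
        return sum((q - langmuir(c, q_sat, k)) ** 2
                   for c, q in zip(conc, measured))
    return min(k_grid, key=cost)

random.seed(2)
q_sat, k_true = 10.0, 0.8          # hypothetical 'true' plant parameters
conc = [0.1 * i for i in range(1, 50)]
measured = [langmuir(c, q_sat, k_true) + random.gauss(0, 0.05) for c in conc]

k_grid = [0.01 * i for i in range(1, 200)]
k_hat = fit_k(conc, measured, q_sat, k_grid)
```

A real procedure would replace the grid search with a proper optimizer and the isotherm with the full distributed-parameter SMB model, but the structure of the cost function is the same.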


Doctorat en Sciences de l'ingénieur
info:eu-repo/semantics/nonPublished

29

EL, FALLAH MOHAMMED ZOUBAIR. "Effet du recouvrement de pics sur la separation chromatographique : estimation de la complexite des melanges et de la limite de determination quantitative." Paris 6, 1989. http://www.theses.fr/1989PA066171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The effect of peak overlap in the chromatograms of complex mixtures on the resolution of chromatographic systems and on the accuracy of quantitative analyses has been studied. Several statistical models of peak overlap have recently been developed. To remedy their limitations at relatively high saturations, a new separation criterion between two adjacent peaks, the discrimination factor, has been introduced. The effect of saturation on the degree of peak overlap has been studied by means of this factor, using computer-simulated chromatograms. A new procedure for estimating the number of parent peaks in a chromatogram has been developed. It is based on an original representation of the saturation-related information contained in the chromatogram, called the saturogram, which gives direct access to the saturation. We were thus able to extend the usable saturation range fivefold compared with existing procedures. The problem of quantitative analysis in the chromatograms of complex mixtures has been studied in terms of probability. The limit of determination of a mixture constituent can thus be estimated. The study of its variation as a function of saturation makes it possible to quantify matrix effects. Examples are shown of the application of our procedure to estimating the complexity of chromatograms of industrial samples and to estimating the limit of determination of their constituents.
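For background, the classical Davis-Giddings statistical-overlap model (a standard reference point for such peak-counting problems, not the saturogram procedure developed in this thesis) relates the expected number of observed peaks p to the number of parent components m through p = m * exp(-m / n_c), where n_c is the peak capacity; for saturations m / n_c below 1 the relation is monotone and can be inverted by fixed-point iteration:

```python
import math

def expected_peaks(m, n_c):
    """Davis-Giddings model: expected observed peak count for m randomly
    placed components on a chromatogram with peak capacity n_c."""
    return m * math.exp(-m / n_c)

def estimate_components(p_observed, n_c, iters=100):
    """Invert p = m * exp(-m / n_c) by fixed-point iteration (valid while
    the saturation m / n_c stays below 1, where the map is a contraction)."""
    m = p_observed
    for _ in range(iters):
        m = p_observed * math.exp(m / n_c)
    return m

n_c, m_true = 500, 100
p = expected_peaks(m_true, n_c)   # about 81.9 peaks visible out of 100
m_hat = estimate_components(p, n_c)
```

The round trip recovers m exactly here because p was generated from the model; with a real chromatogram the gap between m_hat and the counted peaks quantifies the hidden overlap.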
30

Stöter, Fabian-Robert [Verfasser], Bernd [Akademischer Betreuer] Edler, Bernd [Gutachter] Edler, and Gael [Gutachter] Richard. "Separation and Count Estimation for Audio Sources Overlapping in Time and Frequency / Fabian-Robert Stöter ; Gutachter: Bernd Edler, Gael Richard ; Betreuer: Bernd Edler." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2020. http://d-nb.info/1203879490/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Anand, K. "Methods for Blind Separation of Co-Channel BPSK Signals Arriving at an Antenna Array and Their Performance Analysis." Thesis, Indian Institute of Science, 1995. https://etd.iisc.ac.in/handle/2005/123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Capacity improvement of Wireless Communication Systems is a very important area of current research. The goal is to increase the number of users supported by the system per unit bandwidth allotted. One important way of achieving this improvement is to use multiple antennas backed by intelligent signal processing. In this thesis, we present methods for blind separation of co-channel BPSK signals arriving at an antenna array. These methods consist of two parts, Constellation Estimation and Assignment. We give two methods for constellation estimation, the Smallest Distance Clustering and the Maximum Likelihood Estimation. While the latter is theoretically sound, the former is computationally simple and intuitively appealing. We show that the Maximum Likelihood Constellation Estimation is well approximated by the Smallest Distance Clustering Algorithm at high SNR. The Assignment Algorithm exploits the structure of the BPSK signals. We observe that both methods for estimating the constellation vectors perform very well at high SNR and nearly attain the Cramér-Rao bounds. Using this fact, and noting that the Assignment Algorithm causes negligible error at high SNR, we derive an upper bound on the probability of bit error for the above methods at high SNR. This upper bound falls very rapidly with increasing SNR, showing that our constellation estimation-assignment approach is very efficient. Simulation results are given to demonstrate the usefulness of the bounds.
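The smallest-distance clustering step can be sketched as follows. The mixing matrix, noise level, and greedy running-mean clustering below are illustrative assumptions rather than the thesis's exact algorithm; the point is that at high SNR the received samples form four well-separated clusters, one per pair of BPSK signs, whose centroids estimate the constellation vectors:

```python
import random

def smallest_distance_clustering(samples, radius):
    """Greedy clustering: assign each sample to the nearest centroid if it
    lies within `radius`, otherwise open a new cluster; centroids are
    updated as running means of their members."""
    centroids, counts = [], []
    for x, y in samples:
        best, best_d = -1, float("inf")
        for i, (cx, cy) in enumerate(centroids):
            d = (x - cx) ** 2 + (y - cy) ** 2
            if d < best_d:
                best, best_d = i, d
        if best >= 0 and best_d <= radius ** 2:
            n = counts[best]
            cx, cy = centroids[best]
            centroids[best] = ((cx * n + x) / (n + 1), (cy * n + y) / (n + 1))
            counts[best] += 1
        else:
            centroids.append((x, y))
            counts.append(1)
    return centroids

random.seed(3)
# Two BPSK sources at a two-element array: y = A s, s in {-1,+1}^2.
A = [(1.0, 0.3), (0.4, 1.1)]  # hypothetical array responses (columns)
symbols = [(random.choice([-1, 1]), random.choice([-1, 1])) for _ in range(2000)]
rx = [(A[0][0] * s1 + A[0][1] * s2 + random.gauss(0, 0.05),
       A[1][0] * s1 + A[1][1] * s2 + random.gauss(0, 0.05))
      for s1, s2 in symbols]

constellation = smallest_distance_clustering(rx, radius=0.5)
```

The four recovered centroids come in symmetric pairs, which is exactly the BPSK structure the Assignment step then exploits.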
32

Anand, K. "Methods for Blind Separation of Co-Channel BPSK Signals Arriving at an Antenna Array and Their Performance Analysis." Thesis, Indian Institute of Science, 1995. http://hdl.handle.net/2005/123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Capacity improvement of Wireless Communication Systems is a very important area of current research. The goal is to increase the number of users supported by the system per unit bandwidth allotted. One important way of achieving this improvement is to use multiple antennas backed by intelligent signal processing. In this thesis, we present methods for blind separation of co-channel BPSK signals arriving at an antenna array. These methods consist of two parts, Constellation Estimation and Assignment. We give two methods for constellation estimation, the Smallest Distance Clustering and the Maximum Likelihood Estimation. While the latter is theoretically sound, the former is computationally simple and intuitively appealing. We show that the Maximum Likelihood Constellation Estimation is well approximated by the Smallest Distance Clustering Algorithm at high SNR. The Assignment Algorithm exploits the structure of the BPSK signals. We observe that both methods for estimating the constellation vectors perform very well at high SNR and nearly attain the Cramér-Rao bounds. Using this fact, and noting that the Assignment Algorithm causes negligible error at high SNR, we derive an upper bound on the probability of bit error for the above methods at high SNR. This upper bound falls very rapidly with increasing SNR, showing that our constellation estimation-assignment approach is very efficient. Simulation results are given to demonstrate the usefulness of the bounds.
33

Rouvière, Clémentine. "Experimental parameter estimation in incoherent images via spatial-mode demultiplexing." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Historically, the resolution of optical imaging systems was dictated by diffraction, and the Rayleigh criterion was long considered an unsurpassable limit. In super-resolution microscopy, this limit is overcome by manipulating the emission properties of the object. However, in passive imaging, when sources are uncontrolled, reaching sub-Rayleigh resolution remains a challenge. Here, we implement a quantum-metrology-inspired approach for estimating the separation between two incoherent sources, achieving a sensitivity five orders of magnitude beyond the Rayleigh limit. Using a spatial-mode demultiplexer, we examine scenes with bright and faint sources, through intensity measurements in the Hermite-Gauss basis. Analysing sensitivity and accuracy over an extensive range of separations, we demonstrate the remarkable effectiveness of demultiplexing for sub-Rayleigh separation estimation. These results effectively render the Rayleigh limit obsolete for passive imaging.
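The separation information carried by the Hermite-Gauss modes can be checked numerically. Assuming an idealized one-dimensional Gaussian point-spread function of width sigma (a textbook simplification, not the experimental apparatus), a point source displaced by d/2 sends a fraction (d/4σ)² · exp(-(d/4σ)²) of its photons into the HG1 mode, and this is the quantity an intensity measurement after demultiplexing records:

```python
import math

SIGMA = 1.0  # Gaussian PSF width parameter, arbitrary units

def hg0(x):
    # Normalized fundamental Gaussian mode of width SIGMA.
    return (2 * math.pi * SIGMA ** 2) ** -0.25 * math.exp(-x ** 2 / (4 * SIGMA ** 2))

def hg1(x):
    # Normalized first Hermite-Gauss mode.
    return (x / SIGMA) * hg0(x)

def p_hg1(d, xs):
    """Photon fraction in the HG1 mode for a point source displaced by d/2,
    computed as the squared overlap integral on the grid xs."""
    h = xs[1] - xs[0]
    c1 = sum(hg1(x) * hg0(x - d / 2) for x in xs) * h
    return c1 ** 2

xs = [-12.0 + 0.001 * i for i in range(24001)]
d = 0.5  # sub-Rayleigh separation, in units of the PSF width
alpha2 = (d / (4 * SIGMA)) ** 2
numeric = p_hg1(d, xs)
analytic = alpha2 * math.exp(-alpha2)
```

Because this fraction scales as d² for small d, photon counting in the HG1 channel estimates separations far below the Rayleigh scale, which is the principle the experiment exploits.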
34

Cohen-Hadria, Alice. "Estimation de descriptions musicales et sonores par apprentissage profond." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In Music Information Retrieval (MIR) and voice processing, the use of machine learning tools has become more and more standard over the last few years. In particular, many state-of-the-art systems now rely on neural networks. In this thesis, we present work on four different MIR and voice processing tasks, using systems built with neural networks. More precisely, we use convolutional neural networks, a class of neural networks originally designed for image processing. The first task presented is music structure estimation. For this task, we show how critical the choice of input representation can be when using convolutional neural networks. The second task is singing voice detection. We present how to use a voice detection system to automatically align lyrics and audio tracks. With this alignment mechanism, we have created the largest synchronized audio and speech data set, called DALI. Singing voice separation is the third task. For this task, we present a data augmentation strategy, a way to significantly increase the size of a training set. Finally, we tackle voice anonymization in urban recordings. We present an anonymization method that both obfuscates content and masks the speaker identity, while preserving the rest of the acoustic scene.
35

Gan, Tong. "Study to improve measurement accuracy and resolution of atmospheric radars." 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/202819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Gloaguen, Jean-Rémy. "Estimation du niveau sonore de sources d'intérêt au sein de mixtures sonores urbaines : application au trafic routier." Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0023/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Acoustic sensor networks are being set up in several major cities in order to obtain a more detailed description of the urban sound environment. One challenge is to estimate useful indicators, such as the road traffic noise level, on the basis of sound recordings. This task is by no means trivial because of the multitude of sound sources that compose this environment. For this, Non-negative Matrix Factorization (NMF) is considered and applied to two corpora of simulated urban sound mixtures. The interest of simulating such mixtures is the possibility of knowing all the characteristics of each sound class, including the exact road traffic noise level. The first corpus consists of 750 30-second scenes mixing a road traffic component with a calibrated sound level and a more generic sound class. The various results have notably made it possible to propose a new approach, called 'Thresholded Initialized NMF', which proves to be the most effective. The second corpus makes it possible to simulate sound mixtures more representative of recordings made in cities, whose realism has been validated by a perceptual test. With an average noise level estimation error of less than 1.3 dB, the Thresholded Initialized NMF remains the most suitable method for the different urban sound environments. These results open the way to the use of this method for other sound sources, such as birds' whistling and voices, which could eventually lead to the creation of multi-source noise maps.
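The NMF decomposition underlying such methods can be sketched with the basic Lee-Seung multiplicative updates. This is plain unsupervised NMF in pure Python on made-up data, not the 'Thresholded Initialized NMF' variant proposed in the thesis:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, rank, iters=200, eps=1e-9, seed=0):
    """Basic NMF with Lee-Seung multiplicative updates: V ~ W @ H,
    all factor entries kept non-negative throughout."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(rank)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H

def frob_error(V, W, H):
    R = matmul(W, H)
    return sum((V[i][j] - R[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

# Toy 'spectrogram': two non-negative sources mixed over time (made-up data).
V = [[1.0, 2.0, 0.5, 0.1], [2.0, 4.0, 1.0, 0.2], [0.1, 0.2, 1.5, 3.0]]
W, H = nmf(V, rank=2)
W1, H1 = nmf(V, rank=2, iters=1)
```

In a traffic-monitoring setting, V would be a magnitude spectrogram and one column of W would be constrained toward a road-traffic spectral template, so that its activation row in H yields the source's level over time.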
37

Mamouni, Nezha. "Utilisation des Copules en Séparation Aveugle de Sources Indépendantes/Dépendantes." Thesis, Reims, 2020. http://www.theses.fr/2020REIMS007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The problem of Blind Source Separation (BSS) consists in retrieving unobserved signals from unknown mixtures of them, where there is no, or very limited, information about the source signals and/or the mixing system. In this thesis, we present algorithms to separate instantaneous and convolutive linear mixtures of sources with independent or dependent components. The principle of these algorithms is to minimize well-defined separation criteria based on copula densities, using gradient-descent-type algorithms. We show that the proposed methods can separate mixtures of sources with dependent components even when the copula model is unknown.
38

Fourer, Dominique. "Approche informée pour l’analyse du son et de la musique." Thesis, Bordeaux 1, 2013. http://www.theses.fr/2013BOR14973/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the field of audio signal processing, analysis is an essential step that allows us to understand and interact with existing signals. Indeed, the quality of signals obtained by transformation or by synthesis of estimated parameters depends on the accuracy of the estimators used. However, theoretical limits exist and show that the best accuracy reachable by a classical estimator can be insufficient for the most demanding applications (e.g. active listening of music). The work developed in this thesis revisits well-known audio analysis problems, such as spectral analysis, automatic music transcription and audio source separation, using a novel 'informed' approach. This approach takes advantage of the configuration of today's music studios, which control the processing chain before the mixing stage. In the proposed solutions, minimal computed side information is transmitted along with the mixture signal, allowing certain transformations of the mixture while guaranteeing a level of quality. When compatibility with existing audio formats is required, this information is hidden inaudibly inside the mixture itself using audio watermarking. This thesis covers several theoretical and practical aspects, and we show that combining an estimator with side information improves the performance of the usual approaches, such as non-informed estimation or pure coding.
39

Bentley, Jason A. "Systematic process development by simultaneous modeling and optimization of simulated moving bed chromatography." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Adsorption separation processes are extremely important to the chemical industry, especially in the manufacturing of food, pharmaceutical, and fine chemical products. This work addresses three main topics: first, systematic decision-making between rival gas-phase adsorption processes for the same separation problem; second, process development for liquid-phase simulated moving bed chromatography (SMB); third, accelerated startup for SMB units. All of the work in this thesis uses model-based optimization to answer complicated questions about process selection, process development, and control of transient operation. It is shown that there is a trade-off between productivity and product recovery in the gaseous separation of enantiomers using SMB and pressure swing adsorption (PSA). These processes are considered as rivals for the same separation problem, and it is found that each has a particular advantage that may be exploited depending on the production goals and economics. The processes are compared on a fair basis of equal capital investment, and the same multi-objective optimization problem is solved with equal constraints on the operating parameters. Secondly, this thesis demonstrates by experiment a systematic algorithm for SMB process development that uses dynamic optimization, transient experimental data, and parameter estimation to arrive at optimal operating conditions for a new separation problem in a matter of hours. By comparison, conventional SMB process development relies on careful system characterization using single-column experiments and manual tuning of operating parameters, which may take days or weeks. The optimal operating conditions found by this new method ensure that both high-purity constraints and optimal productivity are satisfied. The proposed algorithm proceeds until the SMB process is optimized, without manual tuning.
In some case studies, it is shown with both linear and nonlinear isotherm systems that the optimal performance can be reached in only two changes of operating conditions following the proposed algorithm. Finally, it is shown experimentally that the startup time for a real SMB unit is significantly reduced by solving model-based startup optimization problems using the SMB model developed from the proposed algorithm. The startup acceleration with purity constraints is shown to be successful at reducing the startup time by about 44%, and it is confirmed that the product purities are maintained during the operation. Significant cost savings in terms of decreased processing time and increased average product concentration can be attained using a relatively simple startup acceleration strategy.
40

Marxer, Piñón Ricard. "Audio source separation for music in low-latency and high-latency scenarios." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/123808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Aquesta tesi proposa mètodes per tractar les limitacions de les tècniques existents de separació de fonts musicals en condicions de baixa i alta latència. En primer lloc, ens centrem en els mètodes amb un baix cost computacional i baixa latència. Proposem l'ús de la regularització de Tikhonov com a mètode de descomposició de l'espectre en el context de baixa latència. El comparem amb les tècniques existents en tasques d'estimació i seguiment dels tons, que són passos crucials en molts mètodes de separació. A continuació utilitzem i avaluem el mètode de descomposició de l'espectre en tasques de separació de veu cantada, baix i percussió. En segon lloc, proposem diversos mètodes d'alta latència que milloren la separació de la veu cantada, gràcies al modelatge de components específics, com la respiració i les consonants. Finalment, explorem l'ús de correlacions temporals i anotacions manuals per millorar la separació dels instruments de percussió i dels senyals musicals polifònics complexes.
Esta tesis propone métodos para tratar las limitaciones de las técnicas existentes de separación de fuentes musicales en condiciones de baja y alta latencia. En primer lugar, nos centramos en los métodos con un bajo coste computacional y baja latencia. Proponemos el uso de la regularización de Tikhonov como método de descomposición del espectro en el contexto de baja latencia. Lo comparamos con las técnicas existentes en tareas de estimación y seguimiento de los tonos, que son pasos cruciales en muchos métodos de separación. A continuación utilizamos y evaluamos el método de descomposición del espectro en tareas de separación de voz cantada, bajo y percusión. En segundo lugar, proponemos varios métodos de alta latencia que mejoran la separación de la voz cantada, gracias al modelado de componentes que a menudo no se toman en cuenta, como la respiración y las consonantes. Finalmente, exploramos el uso de correlaciones temporales y anotaciones manuales para mejorar la separación de los instrumentos de percusión y señales musicales polifónicas complejas.
This thesis proposes specific methods to address the limitations of current music source separation methods in low-latency and high-latency scenarios. First, we focus on methods with low computational cost and low latency. We propose the use of Tikhonov regularization as a method for spectrum decomposition in the low-latency context. We compare it to existing techniques in pitch estimation and tracking tasks, crucial steps in many separation methods. We then use the proposed spectrum decomposition method in low-latency separation tasks targeting singing voice, bass and drums. Second, we propose several high-latency methods that improve the separation of singing voice by modeling components that are often not accounted for, such as breathiness and consonants. Finally, we explore using temporal correlations and human annotations to enhance the separation of drums and complex polyphonic music signals.
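To make the low-latency idea above concrete: Tikhonov-regularized spectrum decomposition has a closed-form solution, which is what makes it cheap enough for real-time use. A minimal numpy sketch under illustrative assumptions (the template spectra, activations and regularization weight are toy values, not the thesis's actual model):

```python
import numpy as np

def tikhonov_decompose(V, s, lam=1e-6):
    """Closed-form solution of argmin_x ||V x - s||^2 + lam * ||x||^2."""
    k = V.shape[1]
    return np.linalg.solve(V.T @ V + lam * np.eye(k), V.T @ s)

# Toy data: a magnitude spectrum built from two known spectral templates.
rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(64, 2)))   # template (basis) spectra
x_true = np.array([0.7, 0.3])          # true activations
s = V @ x_true                         # observed mixture spectrum
x_hat = tikhonov_decompose(V, s)
```

Because the normal equations are solved directly rather than iteratively (as in NMF-style decompositions), the per-frame cost is a single small linear solve.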
41

Cavalcante, Charles Casimiro. "Sobre separação cega de fontes : proposições e analise de estrategias para processamento multi-usuario." [s.n.], 2004. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Advisors: João Marcos Travassos Romano, Francisco Rodrigo Porto Cavalcanti
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo: Esta tese é dedicada ao estudo de tecnicas de separação cega de fontes aplicadas ao contexto de processamento multiusuario em comunicações digitais. Utilizando estrategias de estimação da função de densidade de probabilidade (fdp), são propostos dois metodos de processamento multiusuario que permitem recuperar os sinais transmitidos pela medida de similaridade de Kullback-Leibler entre a fdp dos sinais a saida do dispositivo de separação e um modelo parametrico que contem as caracteristicas dos sinais transmitidos. Alem desta medida de similaridade, são empregados diferentes metodos que garantem a descorrelação entre as estimativas das fontes de tal forma que os sinais recuperados sejam provenientes de diferentes fontes. E ainda realizada a analise de convergencia dos metodos e suas equivalencias com tecnicas classicas resultando em algumas importantes relações entre criterios cegos e supervisionados, tais como o criterio proposto e o criterio de maxima a posteriori. Estes novos metodos aliam a capacidade de recuperação da informação uma baixa complexidade computacional. A proposição de metodos baseados na estimativa da fdp permitiu a realização de um estudo sobre o impacto das estatisticas de ordem superior em algoritmos adaptativos para separação cega de fontes. A utilização da expansão da fdp em series ortonormais permite avaliar atraves dos cumulantes a dinamica de um processo de separação de fontes. Para tratar com problemas de comunicação digital e proposta uma nova serie ortonormal, desenvolvida em torno de uma função de densidade de probabilidade dada por um somatorio de gaussianas. Esta serie e utilizada para evidenciar as diferenças em relação ao desempenho em tempo real ao se reter mais estatisticas de ordem superior. Simulações computacionais são realizadas para evidenciar o desempenho das propostas frente a tecnicas conhecidas da literatura em varias situações de necessidade de alguma estrategia de recuperação de sinais
Abstract: This thesis is devoted to the study of blind source separation techniques applied to multiuser processing in digital communications. Using probability density function (pdf) estimation strategies, two multiuser processing methods are proposed. They aim at recovering the transmitted signals by means of the Kullback-Leibler similarity measure between the pdf of the signals at the separator output and a parametric model that contains the characteristics of the transmitted signals. Besides this similarity measure, different methods are employed to guarantee the decorrelation of the source estimates, ensuring that the recovered signals originate from different sources. The convergence analysis of the methods, as well as their equivalences with classical techniques, is presented, resulting in important relationships between blind and supervised criteria, such as that between the proposed criterion and the maximum a posteriori one. These new methods combine good recovery ability with low computational complexity. The proposal of pdf-estimation-based methods allowed an investigation of the impact of higher-order statistics on adaptive algorithms for blind source separation. Using an orthonormal series expansion of the pdf, we are able to evaluate, through cumulants, the dynamics of a source separation process. To deal with digital communication signals, a new orthonormal series expansion is proposed, developed around a Gaussian-mixture pdf. This new expansion is used to highlight the differences in real-time performance when more higher-order statistics are retained. Computational simulations are carried out to assess the performance of the proposals against well-known techniques from the literature, in various situations requiring a signal recovery strategy.
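As an illustration of the kind of pdf-matching criterion the abstract describes (not the thesis's actual algorithm), a separator output can be scored by the Kullback-Leibler divergence between its histogram-estimated pdf and a parametric Gaussian-mixture model of the transmitted constellation; a hypothetical BPSK example:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence between two normalized histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def gm_pdf(x, sigma=0.3):
    """Parametric model: BPSK symbols (+/-1) observed in Gaussian noise."""
    g = lambda m: np.exp(-(x - m) ** 2 / (2 * sigma ** 2))
    return (g(-1.0) + g(1.0)) / (2 * sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=5000)
outputs = {
    "matched": symbols + 0.3 * rng.normal(size=5000),  # fits the model
    "mismatched": rng.normal(size=5000),               # does not
}
edges = np.linspace(-3.0, 3.0, 61)
centers = (edges[:-1] + edges[1:]) / 2
kls = {}
for name, y in outputs.items():
    hist, _ = np.histogram(y, bins=edges, density=True)
    kls[name] = kl_divergence(hist, gm_pdf(centers))
```

A separator that drives this divergence down pushes its output toward the known constellation statistics, which is the intuition behind the criterion.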
Doutorado
Telecomunicações e Telemática
Doutor em Engenharia Elétrica
42

Carlo, Diego Di. "Echo-aware signal processing for audio scene analysis." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
La plupart des méthodes de traitement du signal audio considèrent la réverbération et en particulier les échos acoustiques comme une nuisance. Cependant, ceux-ci transmettent des informations spatiales et sémantiques importantes sur les sources sonores et des méthodes essayant de les prendre en compte ont donc récemment émergé.. Dans ce travail, nous nous concentrons sur deux directions. Tout d’abord, nous étudions la manière d’estimer les échos acoustiques à l’aveugle à partir d’enregistrements microphoniques. Deux approches sont proposées, l’une s’appuyant sur le cadre des dictionnaires continus, l’autre sur des techniques récentes d’apprentissage profond. Ensuite, nous nous concentrons sur l’extension de méthodes existantes d’analyse de scènes audio à leurs formes sensibles à l’écho. Le cadre NMF multicanal pour la séparation de sources audio, la méthode de localisation SRP-PHAT et le formateur de voies MVDR pour l’amélioration de la parole sont tous étendus pour prendre en compte les échos. Ces applications montrent comment un simple modèle d’écho peut conduire à une amélioration des performances
Most audio signal processing methods regard reverberation, and in particular acoustic echoes, as a nuisance. However, echoes convey important spatial and semantic information about sound sources, and echo-aware methods exploiting this have recently been proposed. In this work we focus on two directions. First, we study how to estimate acoustic echoes blindly from microphone recordings. Two approaches are proposed, one leveraging continuous dictionaries, the other using recent deep learning techniques. Then, we focus on extending existing methods in audio scene analysis to their echo-aware forms. The Multichannel NMF framework for audio source separation, the SRP-PHAT localization method, and the MVDR beamformer for speech enhancement are all extended to echo-aware versions. These applications show how a simple echo model can lead to improved performance.
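On the localization side, SRP-PHAT builds on the GCC-PHAT cross-correlation, whose peak gives the inter-microphone delay; an early echo adds a secondary peak, which is what echo-aware extensions exploit. A self-contained sketch of plain GCC-PHAT delay estimation (signals and delay are synthetic, not from the thesis):

```python
import numpy as np

def gcc_phat(ref, sig):
    """Estimate the delay (in samples) of `sig` relative to `ref` via GCC-PHAT."""
    n = len(ref) + len(sig)
    R = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    R /= np.maximum(np.abs(R), 1e-12)        # phase transform: keep phase only
    cc = np.fft.irfft(R, n)
    lag = int(np.argmax(cc))
    return lag if lag < n // 2 else lag - n  # map to a signed lag

rng = np.random.default_rng(0)
ref = rng.normal(size=512)
sig = np.concatenate([np.zeros(7), ref])[:512]   # ref delayed by 7 samples
d = gcc_phat(ref, sig)
```

The magnitude normalization is what makes the estimate robust to coloration; without it, the correlation peak is smeared by the signal's own spectrum.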
43

Madrolle, Stéphanie. "Méthodes de traitement du signal pour l'analyse quantitative de gaz respiratoires à partir d’un unique capteur MOX." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT065/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Prélevés de manière non invasive, les gaz respiratoires sont constitués de nombreux composés organiques volatils (VOCs) dont la quantité dépend de l’état de santé du sujet. L’analyse quantitative de l’air expiré présente alors un fort intérêt médical, que ce soit pour le diagnostic ou le suivi de traitement. Dans le cadre de ma thèse, nous proposons d’étudier un dispositif d’analyse des gaz respiratoires, et notamment de ces VOCs. Cette thèse multidisciplinaire aborde différents aspects, tels que le choix des capteurs, du matériel et des modes d’acquisition, l’acquisition des données à l’aide d’un banc gaz, et ensuite le traitement des signaux obtenus de manière à quantifier un mélange de gaz. Nous étudions la réponse d’un capteur à oxyde métallique (MOX) à des mélanges de deux gaz (acétone et éthanol) dilués dans de l’air synthétique (oxygène et azote). Ensuite, nous utilisons des méthodes de séparation de sources de manière à distinguer les deux gaz, et déterminer leur concentration. Pour donner des résultats satisfaisants, ces méthodes nécessitent d’utiliser plusieurs capteurs dont on connait la forme mathématique du modèle décrivant l’interaction du mélange avec le capteur, et qui présentent une diversité suffisante dans les mesures d’étalonnage pour estimer les coefficients de ce modèle. Dans cette thèse, nous montrons que les capteurs MOX peuvent être décrits par un modèle de mélange linéaire quadratique, et qu’un mode d’acquisition fonctionnant en double température permet de générer deux capteurs virtuels à partir d’un unique capteur physique. Pour quantifier précisément les composants du mélange à partir des mesures sur ces capteurs (virtuels), nous avons conçu des méthodes de séparation de sources, supervisées et non supervisées appliquées à ce modèle non-linéaire : l’analyse en composantes indépendantes, des méthodes de moindres carrés (algorithme de Levenberg-Marquardt), et une méthode bayésienne ont été étudiées. 
Les résultats expérimentaux montrent que ces méthodes permettent d’estimer les concentrations de VOCs contenus dans un mélange de gaz, de façon précise, en ne nécessitant que très peu de points de calibration
Collected non-invasively, exhaled breath contains many volatile organic compounds (VOCs) whose amounts depend on the health of the subject. Quantitative analysis of exhaled air is therefore of great medical interest, whether for diagnosis or treatment follow-up. In this thesis, we propose to study a device to analyze exhaled breath, including these VOCs. This multidisciplinary work addresses various aspects, such as the choice of sensors, materials and acquisition modes, the acquisition of data using a gas bench, and the processing of the obtained signals to quantify a gas mixture. We study the response of a metal-oxide (MOX) sensor to mixtures of two gases (acetone and ethanol) diluted in synthetic air (oxygen and nitrogen). We then use source separation methods to distinguish the two gases and determine their concentrations. To give satisfactory results, these methods require several sensors whose interaction with the mixture is described by a mathematical model of known form, and which present sufficient diversity in the calibration measurements to estimate the coefficients of this model. In this thesis, we show that MOX sensors can be described by a linear-quadratic mixing model, and that a dual-temperature acquisition mode can generate two virtual sensors from a single physical sensor. To accurately quantify the components of the mixture from measurements on these (virtual) sensors, we developed supervised and unsupervised source separation methods applied to this nonlinear model: independent component analysis, least-squares methods (Levenberg-Marquardt algorithm), and a Bayesian method were studied. The experimental results show that these methods make it possible to estimate the VOC concentrations of a gas mixture accurately while requiring only a few calibration points.
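To make the linear-quadratic mixing model concrete: each (virtual) sensor responds as y = a·c1 + b·c2 + d·c1·c2, so two virtual sensors give a 2×2 nonlinear system in the two concentrations. A toy numpy sketch with made-up calibration coefficients (a plain Newton iteration stands in for the Levenberg-Marquardt solver mentioned in the abstract):

```python
import numpy as np

# Hypothetical calibrated coefficients (a, b, d) of the two virtual sensors
# obtained from the dual-temperature acquisition mode.
A = np.array([[1.0, 0.5, 0.2],
              [0.4, 1.2, -0.3]])

def response(coeffs, c):
    """Linear-quadratic response y = a*c1 + b*c2 + d*c1*c2."""
    a, b, d = coeffs
    return a * c[0] + b * c[1] + d * c[0] * c[1]

def invert(y, iters=50):
    """Recover (c1, c2) from the two virtual-sensor readings by Newton's method."""
    c = np.array([0.5, 0.5])
    for _ in range(iters):
        r = np.array([response(A[i], c) - y[i] for i in range(2)])
        # Jacobian of the residuals with respect to (c1, c2)
        J = np.array([[A[i, 0] + A[i, 2] * c[1],
                       A[i, 1] + A[i, 2] * c[0]] for i in range(2)])
        c = c - np.linalg.solve(J, r)
    return c

c_true = np.array([2.0, 1.0])   # e.g. acetone and ethanol concentrations
y = np.array([response(A[i], c_true) for i in range(2)])
c_hat = invert(y)
```

The cross term d·c1·c2 is what makes the inversion nonlinear; with d = 0 the system would reduce to ordinary linear unmixing.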
44

Boudjellal, Abdelouahab. "Contributions à la localisation et à la séparation de sources." Thesis, Orléans, 2015. http://www.theses.fr/2015ORLE2063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les premières recherches en détection, localisation et séparation de signaux remontent au début du 20ème siècle. Ces recherches sont d’actualité encore aujourd’hui, notamment du fait de la croissance rapide des systèmes de communications constatée ces deux dernières décennies. Par ailleurs, la littérature du domaine consacre très peu d’études relatives à certains contextes jugés difficiles dont certains sont traités dans cette thèse. Ce travail porte sur la localisation de signaux par détection des temps d’arrivée ou estimation des directions d’arrivée et sur la séparation de sources dépendantes ou à module constant. L’idée principale est de tirer profit de certaines informations a priori disponibles sur les signaux sources telles que la parcimonie, la cyclostationarité, la non-circularité, le module constant, la structure autoregressive et les séquences pilote dans un contexte coopératif. Une première partie détaille trois contributions : (i) un nouveau détecteur pour l’estimation des temps d’arrivée basé sur la minimisation de la probabilité d’erreur ; (ii) une estimation améliorée de la puissance du bruit, basée sur les statistiques d’ordre ; (iii) une quantification de la précision et de la résolution de l’estimation des directions d’arrivée au regard de certains a priori considérés sur les sources. Une deuxième partie est consacrée à la séparation de sources exploitant différentes informations sur celles-ci : (i) la séparation de signaux de communication à module constant ; (ii) la séparation de sources dépendantes connaissant la nature de la dépendance et (iii) la séparation de sources autorégressives dépendantes connaissant la structure autorégressive
Signal detection, localization, and separation problems date back to the beginning of the twentieth century. The subject is still a hot topic receiving growing attention, notably with the rapid growth of wireless communication systems over the last two decades, and many challenging aspects remain poorly addressed in the available literature. This thesis deals with signal detection, localization using temporal or directional measurements, and separation of dependent source signals. The main objective is to make use of available priors about the source signals, such as sparsity, cyclostationarity, non-circularity, constant modulus, autoregressive structure or training sequences in a cooperative framework. The first part is devoted to the analysis of (i) time-of-arrival estimation using a new minimum-error-rate detector, (ii) noise power estimation using an improved order-statistics estimator and (iii) the impact of side information on direction-of-arrival estimation accuracy and resolution. In the second part, the source separation problem is investigated in the light of different priors about the original sources. Three kinds of prior are considered: (i) separation of constant-modulus communication signals, (ii) separation of dependent source signals knowing their dependency structure and (iii) separation of dependent autoregressive sources knowing their autoregressive structure.
45

Rafi, Selwa. "Chaînes de Markov cachées et séparation non supervisée de sources." Thesis, Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0020/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Le problème de la restauration est rencontré dans domaines très variés notamment en traitement de signal et de l'image. Il correspond à la récupération des données originales à partir de données observées. Dans le cas de données multidimensionnelles, la résolution de ce problème peut se faire par différentes approches selon la nature des données, l'opérateur de transformation et la présence ou non de bruit. Dans ce travail, nous avons traité ce problème, d'une part, dans le cas des données discrètes en présence de bruit. Dans ce cas, le problème de restauration est analogue à celui de la segmentation. Nous avons alors exploité les modélisations dites chaînes de Markov couples et triplets qui généralisent les chaînes de Markov cachées. L'intérêt de ces modèles réside en la possibilité de généraliser la méthode de calcul de la probabilité à posteriori, ce qui permet une segmentation bayésienne. Nous avons considéré ces méthodes pour des observations bi-dimensionnelles et nous avons appliqué les algorithmes pour une séparation sur des documents issus de manuscrits scannés dans lesquels les textes des deux faces d'une feuille se mélangeaient. D'autre part, nous avons attaqué le problème de la restauration dans un contexte de séparation aveugle de sources. Une méthode classique en séparation aveugle de sources, connue sous l'appellation "Analyse en Composantes Indépendantes" (ACI), nécessite l'hypothèse d'indépendance statistique des sources. Dans des situations réelles, cette hypothèse n'est pas toujours vérifiée. Par conséquent, nous avons étudié une extension du modèle ACI dans le cas où les sources peuvent être statistiquement dépendantes. Pour ce faire, nous avons introduit un processus latent qui gouverne la dépendance et/ou l'indépendance des sources. Le modèle que nous proposons combine un modèle de mélange linéaire instantané tel que celui donné par ACI et un modèle probabiliste sur les sources avec variables cachées. 
Dans ce cadre, nous montrons comment la technique d'Estimation Conditionnelle Itérative permet d'affaiblir l'hypothèse usuelle d'indépendance en une hypothèse d'indépendance conditionnelle
The restoration problem is encountered in various domains, in particular in signal and image processing. It consists in retrieving original data from a set of observed ones. For multidimensional data, the problem can be solved using different approaches depending on the data structure, the transformation system and the noise. In this work, we first tackled the problem in the case of discrete data and a noisy model. In this context, the problem is similar to a segmentation problem. We exploited Pairwise and Triplet Markov chain models, which generalize Hidden Markov chain models. The interest of these models lies in the possibility of generalizing the computation procedure of the posterior probability, allowing one to perform Bayesian segmentation. We considered these methods for two-dimensional signals and applied the algorithms to the restoration of old hand-written documents which have been scanned and suffer from show-through. In the second part of this work, we considered the restoration problem as a blind source separation problem. The well-known Independent Component Analysis (ICA) method requires the assumption that the sources be statistically independent. In practice, this condition is not always verified. Consequently, we studied an extension of the ICA model to the case where the sources are not necessarily independent. We introduced a latent process which controls the dependence and/or independence of the sources. The model that we propose combines a linear instantaneous mixing model similar to that of ICA and a probabilistic model on the sources with hidden variables. In this context, we show how the usual independence assumption can be weakened to a conditional independence assumption using the technique of Iterative Conditional Estimation.
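For reference, the independence-based baseline that this thesis relaxes can be sketched as a one-unit, kurtosis-based FastICA iteration on a whitened two-source mixture (toy data; this is standard ICA, not the latent-variable extension proposed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
S = rng.uniform(-1.0, 1.0, size=(2, n))    # independent sub-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # instantaneous mixing matrix
X = A @ S

# Whiten the observed mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# One-unit, kurtosis-based FastICA fixed-point iteration.
w = np.array([1.0, 0.0])
for _ in range(30):
    w = (Z * (w @ Z) ** 3).mean(axis=1) - 3.0 * w
    w = w / np.linalg.norm(w)
y = w @ Z                                  # recovered source, up to sign/scale
```

The fixed point maximizes the magnitude of the kurtosis of wᵀz, which is exactly the kind of independence-driven contrast that fails when the sources are statistically dependent, motivating the latent-process model described above.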
46

Lassami, Nacerredine. "Représentations parcimonieuses et analyse multidimensionnelle : méthodes aveugles et adaptatives." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Au cours de la dernière décennie, l’étude mathématique et statistique des représentations parcimonieuses de signaux et de leurs applications en traitement du signal audio, en traitement d’image, en vidéo et en séparation de sources a connu une activité intensive. Cependant, l'exploitation de la parcimonie dans des contextes de traitement multidimensionnel comme les communications numériques reste largement ouverte. Au même temps, les méthodes aveugles semblent être la réponse à énormément de problèmes rencontrés récemment par la communauté du traitement du signal et des communications numériques tels que l'efficacité spectrale. Aussi, dans un contexte de mobilité et de non-stationnarité, il est important de pouvoir mettre en oeuvre des solutions de traitement adaptatives de faible complexité algorithmique en vue d'assurer une consommation réduite des appareils. L'objectif de cette thèse est d'aborder ces challenges de traitement multidimensionnel en proposant des solutions aveugles de faible coût de calcul en utilisant l'à priori de parcimonie. Notre travail s'articule autour de trois axes principaux : la poursuite de sous-espace principal parcimonieux, la séparation adaptative aveugle de sources parcimonieuses et l'identification aveugle des systèmes parcimonieux. Dans chaque problème, nous avons proposé de nouvelles solutions adaptatives en intégrant l'information de parcimonie aux méthodes classiques de manière à améliorer leurs performances. Des simulations numériques ont été effectuées pour confirmer l’intérêt des méthodes proposées par rapport à l'état de l'art en termes de qualité d’estimation et de complexité calculatoire
During the last decade, the mathematical and statistical study of sparse signal representations and their applications in audio, image and video processing and in source separation has been intensively active. However, exploiting sparsity in multidimensional processing contexts such as digital communications remains a largely open problem. At the same time, blind methods appear to answer many problems recently encountered by the signal processing and communications communities, such as spectral efficiency. Furthermore, in a context of mobility and non-stationarity, it is important to be able to implement adaptive processing solutions of low algorithmic complexity to ensure reduced power consumption of devices. The objective of this thesis is to address these challenges of multidimensional processing by proposing blind solutions of low computational cost using a sparsity prior. Our work revolves around three main axes: sparse principal subspace tracking, adaptive sparse source separation and identification of sparse systems. For each problem, we propose new adaptive solutions by integrating the sparsity information into classical methods in order to improve their performance. Numerical simulations have been conducted to confirm the advantage of the proposed methods over the state of the art in terms of estimation quality and computational complexity.
47

BABINSKI, MARLY. "Idades isocronicas Pb/Pb e geoquimica isotopica de Pb das rochas carbonaticas do grupo Bambui na porcao sul da bacia do Sao Francisco." reponame:Repositório Institucional do IPEN, 1993. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Doctorate)
IPEN/T
Instituto de Pesquisas Energéticas e Nucleares, São Paulo
48

Raguet, Hugo. "A Signal Processing Approach to Voltage-Sensitive Dye Optical Imaging." Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090031/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
L’imagerie optique par colorant potentiométrique est une méthode d’enregistrement de l’activité corticale prometteuse, mais dont le potentiel réel est limité par la présence d’artefacts et d’interférences dans les acquisitions. À partir de modèles existant dans la littérature, nous proposons un modèle génératif du signal basé sur un mélange additif de composantes, chacune contrainte dans une union d’espaces linéaires déterminés par son origine biophysique. Motivés par le problème de séparation de composantes qui en découle, qui est un problème inverse linéaire sous-déterminé, nous développons : (1) des régularisations convexes structurées spatialement, favorisant en particulier des solutions parcimonieuses ; (2) un nouvel algorithme proximal de premier ordre pour minimiser efficacement la fonctionnelle qui en résulte ; (3) des méthodes statistiques de sélection de paramètre basées sur l’estimateur non biaisé du risque de Stein. Nous étudions ces outils dans un cadre général, et discutons leur utilité pour de nombreux domaines des mathématiques appliqués, en particulier pour les problèmes inverses ou de régression en grande dimension. Nous développons par la suite un logiciel de séparation de composantes en présence de bruit, dans un environnement intégré adapté à l’imagerie optique par colorant potentiométrique. Finalement, nous évaluons ce logiciel sur différentes données, synthétiques et réelles, montrant des résultats encourageants quant à la possibilité d’observer des dynamiques corticales complexes
Voltage-sensitive dye optical imaging is a promising recording modality for cortical activity, but its practical potential is limited by many artefacts and interferences in the acquisitions. Inspired by existing models in the literature, we propose a generative model of the signal, based on an additive mixture of components, each one constrained within a union of linear spaces determined by its biophysical origin. Motivated by the resulting component separation problem, which is an underdetermined linear inverse problem, we develop: (1) convex, spatially structured regularizations, enforcing in particular sparsity on the solutions; (2) a new first-order proximal algorithm for minimizing the resulting functional efficiently; (3) statistical methods for automatic parameter selection, based on Stein's unbiased risk estimate. We study those methods in a general framework, and discuss their potential applications in various fields of applied mathematics, in particular for large-scale inverse problems or regressions. We subsequently develop a software for noisy component separation, in an integrated environment adapted to voltage-sensitive dye optical imaging. Finally, we evaluate this software on different data sets, including synthetic and real data, showing encouraging perspectives for the observation of complex cortical dynamics.
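The Stein-based parameter selection mentioned in the abstract can be illustrated on the simplest case the theory covers: choosing a soft-threshold level for denoising data corrupted by i.i.d. Gaussian noise, where SURE(t) = -nσ² + Σ min(yᵢ², t²) + 2σ²·#{|yᵢ| > t} is an unbiased estimate of the risk. A sketch on a synthetic sparse signal (not the thesis's component-separation functional):

```python
import numpy as np

def soft(y, t):
    """Soft-thresholding estimator."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft(y, t, sigma):
    """Stein's unbiased risk estimate for soft thresholding at level t."""
    n = y.size
    return (-n * sigma ** 2
            + np.sum(np.minimum(y ** 2, t ** 2))
            + 2 * sigma ** 2 * np.sum(np.abs(y) > t))

rng = np.random.default_rng(2)
n, sigma = 4096, 1.0
x = np.zeros(n)
x[: n // 16] = 5.0                       # sparse ground-truth signal
y = x + sigma * rng.normal(size=n)       # noisy observation
grid = np.linspace(0.0, 4.0, 81)
sure = np.array([sure_soft(y, t, sigma) for t in grid])
mse = np.array([np.sum((soft(y, t) - x) ** 2) for t in grid])
t_sure = grid[int(np.argmin(sure))]      # threshold chosen without seeing x
```

The point of SURE is that `t_sure` is computed from the noisy data alone, yet its risk tracks the oracle risk curve computed from the hidden ground truth.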
49

Patanchon, Guillaume. "Analyse multi-composantes d'observations du fond diffus cosmologique." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2003. http://tel.archives-ouvertes.fr/tel-00004512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The anisotropies of the cosmic microwave background (CMB) are a goldmine of information about the physics of the primordial Universe. Their spatial power spectrum depends on the cosmological parameters that characterize the content and evolution of the Universe. A precise measurement of this power spectrum strongly constrains the models, but it is made difficult by the presence of astrophysical foreground emissions superimposed on the primordial signal. After a brief presentation of the standard cosmological model and of the various emission sources in the millimetre domain, we review the component separation methods commonly used within the community. In this thesis we develop improved methods for the separation and characterization of the signal and noise components present in the data. First, the classical methods, based on an extremely simplified model, cannot efficiently separate certain instrumental systematic components that are correlated between detectors. We present a component separation method adapted to the treatment of such systematic effects. This technique is tested on simulated observations of the Planck satellite. Moreover, the "standard" component separation methods are not entirely satisfactory, since they are not optimized for power spectrum estimation and they require a priori knowledge of the electromagnetic spectra of the components. We describe a new method for estimating the power spectrum of the CMB (and of the other components) that performs the component separation blindly. This approach is validated with numerous simulations. In a final part, we apply this method to the data obtained with the Archeops experiment.
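The abstract above exploits multi-detector redundancy to estimate a power spectrum in the presence of instrumental noise. As a toy illustration (not the thesis method, which works blindly on multi-component sky maps), a one-dimensional sketch of why cross-correlating two detectors removes the noise bias that affects a single detector's auto-spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
# Common "sky" signal (unit variance) plus independent noise in two channels.
signal = rng.standard_normal(n)
d1 = signal + 0.8 * rng.standard_normal(n)
d2 = signal + 0.8 * rng.standard_normal(n)

f1, f2 = np.fft.fft(d1), np.fft.fft(d2)
auto = (np.abs(f1) ** 2).mean() / n            # estimates signal + noise power (~1.64)
cross = np.real(f1 * np.conj(f2)).mean() / n   # noise-free on average (~1.0)
```

The auto-spectrum of one detector is biased upward by its noise power, while the cross-spectrum averages it away because the two noise realizations are uncorrelated.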
50

Essebbar, Abderrahman. "Séparation paramétrique des ondes en sismique." Phd thesis, Grenoble INPG, 1992. http://tel.archives-ouvertes.fr/tel-00785644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this study we address the separation of seismic waves, the main processing step conditioning any physical interpretation of the data. In the first part, non-parametric separation methods (spectral matrix, Karhunen-Loève transform and FK filtering) are studied. The limitations of these methods led us to adopt a parametric approach. This approach introduces a model that accounts for the various types of seismic waves. Parametric wave separation uses the maximum likelihood estimator. It is carried out by estimating the various parameters (apparent slowness, amplitude, waveform, phase and angle of incidence) characterizing each wave. The different estimation methods, as well as the limits and performance of parametric separation, are studied in the second part. In a final part, the results of this study are applied to the processing of borehole seismic signals from a field experiment, as well as to dispersive surface-wave seismic data.
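The abstract above estimates an apparent slowness per wave by maximum likelihood. For a single plane wave in white Gaussian noise this reduces to delay-and-sum beamforming: align the traces for each trial slowness and keep the value that maximizes the stacked energy. A toy sketch (not the thesis code; all names and the synthetic gather are hypothetical):

```python
import numpy as np

def beamform_slowness(data, dt, offsets, p_grid):
    """Grid-search ML slowness estimate for one plane wave in white noise:
    maximize the energy of the delay-and-sum stack over trial slowness p."""
    n_t = data.shape[1]
    freqs = np.fft.rfftfreq(n_t, dt)
    spectra = np.fft.rfft(data, axis=1)
    best_p, best_e = None, -np.inf
    for p in p_grid:
        # Phase shifts undoing the moveout p * offset on each trace.
        shifts = np.exp(2j * np.pi * freqs[None, :] * p * offsets[:, None])
        stack = (spectra * shifts).mean(axis=0)
        e = np.sum(np.abs(stack) ** 2)
        if e > best_e:
            best_p, best_e = p, e
    return best_p

# Synthetic gather: a Gaussian wavelet travelling at slowness 0.25 s/km.
dt, n_t = 0.004, 256
t = np.arange(n_t) * dt
offsets = np.linspace(0.0, 1.0, 12)            # receiver offsets in km
p_true = 0.25                                   # s/km
data = np.array([np.exp(-((t - 0.3 - p_true * x) / 0.02) ** 2)
                 for x in offsets])
p_hat = beamform_slowness(data, dt, offsets, np.linspace(0.0, 0.5, 101))
```

Once a wave's parameters are estimated, its contribution can be synthesized and subtracted from the gather, and the procedure repeated for the next wave.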

To the bibliography