Doctoral dissertations on the topic "Blind source separation"


Consult the 50 best doctoral dissertations on the topic "Blind source separation".


1

Gao, Bin. "Single channel blind source separation". Thesis, University of Newcastle Upon Tyne, 2011. http://hdl.handle.net/10443/1300.

Full text source
Abstract:
Single channel blind source separation (SCBSS) is an intensively researched field with numerous important applications. This research sets out to investigate the separation of monaural mixed audio recordings without relying on training knowledge. It proposes a novel method based on variable regularised sparse nonnegative matrix factorization which decomposes an information-bearing matrix into a two-dimensional convolution of factor matrices that represent the spectral basis and temporal code of the sources. In this work, a variational Bayesian approach has been developed for computing the sparsity parameters of the matrix factorization. To further improve on the previous work, this research proposes a new method based on decomposing the mixture into a series of oscillatory components termed the intrinsic mode functions (IMFs). It is shown that IMFs have several desirable properties unique to the SCBSS problem and that these properties can be exploited to relax the constraints posed by the problem. In addition, this research develops a novel method for feature extraction using a psycho-acoustic model. The monaural mixed signal is transformed to a cochleagram using the gammatone filterbank, whose bandwidths increase incrementally as the center frequency increases, resulting in non-uniform time-frequency (TF) resolution in the analysis of the audio signal. Within this domain, a family of novel two-dimensional matrix factorizations based on the Itakura-Saito (IS) divergence has been developed. The proposed matrix factorizations have the property of scale invariance, which enables lower-energy components in the cochleagram to be treated with equal importance to the high-energy ones. Results show that all the algorithms developed in this thesis outperform conventional methods.
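As background to the Itakura-Saito-based factorizations described in this abstract, a generic sparse IS-NMF objective can be sketched as follows (illustrative notation only, not taken from the thesis):

\[
d_{\mathrm{IS}}(x \,\|\, y) = \frac{x}{y} - \log\frac{x}{y} - 1,
\qquad
\min_{W \ge 0,\; H \ge 0} \; \sum_{f,t} d_{\mathrm{IS}}\!\big(V_{ft} \,\|\, [WH]_{ft}\big) \;+\; \lambda \sum_{k,t} H_{kt},
\]

where V is the nonnegative time-frequency representation (here, the cochleagram), W holds the spectral basis, H the temporal code, and λ is a sparsity weight analogous to the variable regularisation mentioned above. The scale invariance noted in the abstract follows from d_IS(cx ‖ cy) = d_IS(x ‖ y) for any c > 0.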
2

Abrar, Shafayat. "Blind channel equalization and instantaneous blind source separation". Thesis, University of Liverpool, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.540044.

Full text source
3

Latif, Mohamed Amin. "Localization of brain signal sources using blind source separation". Thesis, Cardiff University, 2006. http://orca.cf.ac.uk/54567/.

Full text source
Abstract:
Reliable localization of brain signal sources by using convenient, easy, and hazardless data acquisition techniques can potentially play a key role in the understanding, analysis, and tracking of brain activities for the determination of physiological, pathological, and functional abnormalities. The sources can be due to normal brain activities, mental disorders, stimulation of the brain, or movement-related tasks. The focus of this thesis is therefore the development of novel source localization techniques based upon EEG measurements. Independent component analysis is used in blind source separation (BSS) of the EEG sources to yield three different approaches for source localization. In the first method, the sources are localized over the scalp pattern using BSS in various subbands, and by investigating the number of components which are likely to be the true sources. In the second method, the sources are separated and their corresponding topographical information is used within a least-squares algorithm to localize the sources within the brain region. The locations of the known sources, such as some normal brain rhythms, are also utilized to help in determining the unknown sources. The final approach is an effective BSS algorithm partially constrained by information related to the known sources. In addition, some investigation has been undertaken to incorporate non-homogeneity of the head layers in terms of the changes in electrical and magnetic characteristics and also with respect to the noise level within the processing methods. Experimental studies with real and synthetic data sets are undertaken using MATLAB, and the efficacy of each method is discussed.
4

Naqvi, Syed Mohsen Raza. "Multimodal methods for blind source separation of audio sources". Thesis, Loughborough University, 2009. https://dspace.lboro.ac.uk/2134/36117.

Full text source
Abstract:
The enhancement of the performance of frequency domain convolutive blind source separation (FDCBSS) techniques when applied to the problem of separating audio sources recorded in a room environment is the focus of this thesis. This challenging application is termed the cocktail party problem and the ultimate aim would be to build a machine which matches the ability of a human being to solve this task. Human beings exploit both their eyes and their ears in solving this task and hence they adopt a multimodal approach, i.e. they exploit both audio and video modalities. New multimodal methods for blind source separation of audio sources are therefore proposed in this work as a step towards realizing such a machine. The geometry of the room environment is initially exploited to improve the separation performance of a FDCBSS algorithm. The positions of the human speakers are monitored by video cameras and this information is incorporated within the FDCBSS algorithm in the form of constraints added to the underlying cross-power spectral density matrix-based cost function which measures separation performance.
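As background to the cross-power spectral density matrix-based cost mentioned in this abstract, a common FDCBSS criterion (sketched here in generic notation; the thesis adds geometric constraints derived from the video-estimated speaker positions) penalises the off-diagonal energy of the separated covariance matrices in each frequency bin:

\[
J\big(\mathbf{W}(\omega)\big) = \sum_{\omega}\sum_{t} \Big\| \mathbf{W}(\omega)\,\mathbf{R}_x(\omega,t)\,\mathbf{W}^{H}(\omega) - \mathrm{diag}\big(\mathbf{W}(\omega)\,\mathbf{R}_x(\omega,t)\,\mathbf{W}^{H}(\omega)\big) \Big\|_F^2,
\]

where R_x(ω,t) is the observed cross-power spectral density matrix and W(ω) the unmixing matrix for bin ω; minimising J jointly diagonalises the output statistics of non-stationary sources.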
5

Khor, Li Chin. "Blind source separation under model misfits". Thesis, University of Newcastle upon Tyne, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.490154.

Full text source
Abstract:
Blind Signal Separation (BSS) is a statistical signal processing-based technique and has recently been developed for many potential applications. This thesis aims to investigate model misfits in BSS problems as well as identify and develop efficient solutions for enhancing the performance of signal separation. This research sets out to investigate model misfits associated with finite signal sample size, the mixing model, and the source signal and noise models. The effects of finite signal sample size on several well-known cost functions have been studied, and this thesis has identified the optimal cost function for separating signals with and without the presence of noise. A set of statistical tests is further developed to measure the performance in terms of speed, accuracy and convergence of the tested BSS algorithms. This work further explores the limitations of the conventional assumptions of the noiseless and square mixing model, which are often violated in practice and result in poor performance in signal separation. The separation of underdetermined mixing models as well as the assumptions on the source signals and noise are also addressed. This thesis presents the development of a Bayesian framework for underdetermined mixtures that produces accurate results in the estimation of the mixing matrix and of signals corrupted by noise. The proposed algorithm for underdetermined mixtures is capable of modelling a wide variety of signals ranging from unimodal to multimodal and symmetric to nonsymmetric signals. An integrated noise reduction procedure provides robustness against Gaussian noise and the commonly neglected non-Gaussian noise. Results justify the customisation of an algorithm for underdetermined mixtures and demonstrate the efficacy of the proposed algorithm, which is three to five times better than existing algorithms. Finally, the work investigates another model misfit in the form of nonlinearly mixed signals and the difficulty of the problem. An algorithm that accurately separates nonlinear mixtures in the presence of noise is proposed. This algorithm features a system that maintains an efficient convergence rate while minimising the risk of divergence regardless of the initialised parameters. There is also a mechanism that promotes global convergence. Results show that the proposed algorithm outperforms existing algorithms by at least a factor of three, with features that simultaneously address the two crucial issues in the blind separation of nonlinear mixtures.
6

Klajman, Maurice. "Mixed statistics in blind source separation". Thesis, Imperial College London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.406683.

Full text source
7

Zhou, Lihong. "Blind source separation systems for hearing aids". Thesis, University of Ottawa (Canada), 2010. http://hdl.handle.net/10393/28395.

Full text source
Abstract:
For many real-life situations, there is more than one speaker at a given time and people need to concentrate on a target sound signal to extract it. This process happens naturally for people with a normal hearing ability, but it is very difficult for hearing impaired persons. In this thesis, we present a system for enhancing the quality of the signal produced by a hearing aid. The proposed system combines spatial information with blind source separation (BSS) to extract the target signal. Results show that the proposed system can locate a target signal in different environments, with a good learning ability. The problem of locating and extracting a target source signal is first investigated. By applying a time-frequency masking method, it is then shown that the performance can be improved. Finally, the problem of underdetermined BSS is investigated and solved by combining a MVDR beamformer with a determined BSS system.
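For reference, the MVDR beamformer combined with determined BSS in the final step of this abstract has the standard closed-form weights (generic notation, not specific to this thesis):

\[
\mathbf{w}_{\mathrm{MVDR}}(\omega) = \frac{\mathbf{R}_n^{-1}(\omega)\,\mathbf{d}(\omega)}{\mathbf{d}^{H}(\omega)\,\mathbf{R}_n^{-1}(\omega)\,\mathbf{d}(\omega)},
\]

where d(ω) is the steering vector towards the target direction and R_n(ω) the noise-plus-interference covariance; the beamformer minimises output power subject to the distortionless constraint w^H d = 1, which is what allows it to pass the target while suppressing the remaining (underdetermined) interferers.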
8

Smith, Paul Carson. "Broadband analog opto-electronic blind source separation". Diss., Connect to online resource, 2005. http://wwwlib.umi.com/dissertations/fullcit/3178354.

Full text source
9

Badran, Salah Al-Din Ibrahim. "Efficient multiband algorithms for blind source separation". Thesis, De Montfort University, 2016. http://hdl.handle.net/2086/16089.

Full text source
Abstract:
The problem of blind separation refers to recovering original signals, called source signals, from mixed signals, called observation signals, in a reverberant environment. The mixture is a function of a sequence of original speech signals mixed in a reverberant room. The objective is to separate the mixed signals to obtain the original signals without degradation and without prior information about the features of the sources. The strategy used to achieve this objective is to use multiple bands that work at a lower rate, have less computational cost and converge more quickly than the conventional scheme. Our motivation is the competitive results, in terms of convergence speed, obtained by unequal-passbands schemes. The objective of this research is to improve unequal-passbands schemes by improving the speed of convergence and reducing the computational cost. The first proposed work is a novel maximally decimated unequal-passbands scheme. This scheme uses multiple bands that allow it to work at a reduced sampling rate and with low computational cost. An adaptation approach is derived with an adaptation step that improves the convergence speed. The performance of the proposed scheme was measured in different ways. First, the mean square errors of the various bands are measured and the results are compared to a maximally decimated equal-passbands scheme, which is currently the best performing method. The results show that the proposed scheme has a faster convergence rate than the maximally decimated equal-passbands scheme. Second, when the scheme is tested for white and coloured inputs using a low number of bands, it does not yield good results; but when the number of bands is increased, the speed of convergence is enhanced. Third, the scheme is tested for quick changes. It is shown that the performance of the proposed scheme is similar to that of the equal-passbands scheme. Fourth, the scheme is also tested in a stationary state. The experimental results confirm the theoretical work. For more challenging scenarios, an unequal-passbands scheme with over-sampled decimation is proposed; the greater the number of bands, the more efficient the separation. The results are compared to the currently best performing method. Finally, an experimental comparison is made between the proposed multiband scheme and the conventional scheme. The results show that the convergence speed and the signal-to-interference ratio of the proposed scheme are higher than those of the conventional scheme, and the computational cost is lower than that of the conventional scheme.
10

Anemüller, Jörn. "Across-frequency processing in convolutive blind source separation". [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=962819247.

Full text source
11

Zhang, Jingyi. "Statistical blind source separation of post-nonlinear mixture". Thesis, University of Newcastle upon Tyne, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.485858.

Full text source
Abstract:
Blind Source Separation (BSS) is a statistical signal processing technique and has recently been developed for many applications. The aim of this thesis is to investigate the blind signal separation problem in environments where noise, reverberation and nonlinear distortion exist in the mixture, and to develop novel solutions to the problem. The success and efficacy of the proposed algorithms are analysed in terms of robustness to noise, accuracy of the recovered signals and speed of convergence. Linear BSS algorithms for instantaneous and convolutive mixtures are investigated and tested by a set of specially designed simulated experiments under various conditions. In addition, the post-nonlinear instantaneous mixture model has been critically researched and the theory of signal separability has been established. To overcome the limitations and drawbacks of existing work on post-nonlinear mixtures, a novel solution has been developed to separate noisy post-nonlinear instantaneous mixtures of non-stationary and temporally correlated sources, and this work is further extended to the case of noisy convolutive mixtures. The proposed models allow source non-stationarity and temporal correlation to be incorporated into the new solutions. The Maximum Likelihood (ML) approach has been developed for both of the proposed algorithms to estimate the model parameters by the Expectation Maximisation (EM) algorithm, and the post-nonlinearity is estimated by a set of self-updating polynomials whose coefficients are updated as part of the model parameters. The theoretical foundation of the proposed solutions has been rigorously developed and discussed in detail. The new algorithms have been tested by simulations using both synthetically generated and recorded speech signals to verify their accuracy and efficacy. The results show that the proposed algorithms outperform existing algorithms in separation performance, with significant improvements obtained.
12

Liu, Xianhua (Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW). "Blind source separation methods and their mechanical applications". Awarded by: University of New South Wales, School of Mechanical and Manufacturing Engineering, 2006. http://handle.unsw.edu.au/1959.4/24961.

Full text source
Abstract:
Blind Source Separation is a modern signal processing technique which recovers both the unknown sources and the unknown mixing systems from only measured mixtures of signals. It has applications in diverse fields such as communications, image processing, geological exploration and biomedical signal processing. This project studies the BSS problem, develops separation methods and reveals the potential for mechanical engineering applications. There are two models for blind source separation corresponding to the two ways that the sources are mixed: the instantaneous mixing model and the convolved mixing model. The author carried out a theoretical study of the first model by proposing an idea called Redundant Data Elimination (RDE), which leads to a geometric interpretation of the model, explains that the circular distribution property is the reason why Gaussian signal mixtures cannot be separated, and shows that this idea can improve separation accuracy for unsymmetrically distributed sources. This new idea enabled the evaluation and comparison of two well-known algorithms and the proposal of a simplified algorithm based on Joint Approximate Diagonalization of fourth-order cumulant matrices, which is further developed by determining an optimized parameter value for separation convergence. Also based on the understanding gained from the RDE, an outlier spherical projection method is proposed to improve separation accuracy against outlier errors. Mechanical vibration or acoustic problems belong to the second model. After some theoretical study of the problem and the model, the Blind Least Mean Square algorithm, using Gray's variable norm as the cost function, is applied in a novel way to engine vibration data to separate piston slap, fuel injection noise and cylinder pressure effects. Further, the algorithm is combined with a deflation algorithm for successive subtraction of recovered source responses from the measured mixture, to enable the recovery of more sources. The algorithms are verified to be successful by simulation, and the separated engine sources are shown to be reasonable by analysing the engine operation and the physical properties of the sources. The author also studies the relationship between these two models and the problems of different approaches for solving them, such as the frequency-domain approach and the Bussgang approach, and sets out future research interests.
13

Sansrimahachai, Puttachad. "Blind source separation algorithms for MIMO communication systems". Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419916.

Full text source
14

Addison, W. D. "Blind source separation using spatial and temporal priors". Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.525254.

Full text source
15

Parathai, Phetcharat. "Blind source separation using statistical nonnegative matrix factorization". Thesis, University of Newcastle upon Tyne, 2015. http://hdl.handle.net/10443/2830.

Full text source
Abstract:
Blind Source Separation (BSS) attempts to automatically extract and track a signal of interest in real-world scenarios with other signals present. BSS addresses the problem of recovering the original signals from an observed mixture without relying on training knowledge. This research studied three novel approaches for solving the BSS problem based on extensions of the non-negative matrix factorization model and sparsity regularization methods. 1) A framework amalgamating pruning and Bayesian-regularized cluster nonnegative tensor factorization with the Itakura-Saito divergence for separating sources mixed in a stereo channel format: the sparse regularization term was adaptively tuned using a hierarchical Bayesian approach to yield the desired sparse decomposition, and a modified Gaussian prior was formulated to express the correlation between different basis vectors. This algorithm automatically detected the optimal number of latent components of the individual source. 2) A factorization for single-channel BSS which decomposes an information-bearing matrix into a set of complex factor matrices that represent the spectral dictionary and temporal codes: a variational Bayesian approach was developed for computing the sparsity parameters that optimize the matrix factorization. This approach combined the advantages of both complex matrix factorization (CMF) and variational sparse analysis. 3) An imitated-stereo mixture model developed by weighting and time-shifting the original single-channel mixture, where the source signals can be modelled by AR processes. The proposed mixture is analogous to a stereo signal created by two microphones, one being real and the other virtual. The imitated-stereo mixture model employed nonnegative tensor factorization for separating the observed mixture, and the separability analysis of the imitated-stereo mixture was derived using Wiener masking. All algorithms were tested with real audio signals. The performance of source separation was assessed by measuring the distortion between the original source and the estimated one according to the signal-to-distortion ratio (SDR). The experimental results demonstrate that the proposed uninformed audio separation algorithms surpass the conventional BSS methods, i.e. the IS-cNTF, SNMF and CMF methods, with average SDR improvements ranging from 2.6 dB to 6.4 dB per source.
16

Leong, Wai Yie. "Implementing blind source separation in signal processing and telecommunications /". [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe19158.pdf.

Full text source
17

Lösch, Benedikt [Verfasser]. "Complex Blind Source Separation with Audio Applications / Benedikt Lösch". München : Verlag Dr. Hut, 2013. http://d-nb.info/1042307806/34.

Full text source
18

Abadi, Bahador Makki. "New tensor factorization based approaches for blind source separation". Thesis, University of Surrey, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543925.

Full text source
19

E, Okwelume Gozie, and Ezeude Anayo Kingsley. "Blind Source Separation Using Frequency Domain Independent Component Analysis". Thesis, Blekinge Tekniska Högskola, Avdelningen för signalbehandling, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1312.

Full text source
Abstract:
Our thesis work focuses on frequency-domain Blind Source Separation (BSS), in which the received mixed signals are converted into the frequency domain and Independent Component Analysis (ICA) is applied to instantaneous mixtures at each frequency bin. Computational complexity is also reduced by using this method. We also investigate the well-known problem associated with frequency-domain BSS using ICA, referred to as the permutation and scaling ambiguities, using methods proposed by other researchers. This is our main target in this project: to solve the permutation and scaling ambiguities in real-time applications.
Gozie: modebelu2001@yahoo.com Anayo: ezeudea@yahoo.com
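As a brief reminder of the model treated in this entry: the STFT turns the convolutive mixture into approximately instantaneous mixtures per frequency bin, and the scaling ambiguity is commonly resolved by the minimal-distortion (projection-back) step shown below (standard formulation, not necessarily the authors' exact one):

\[
\mathbf{X}(f,t) \approx \mathbf{A}(f)\,\mathbf{S}(f,t), \qquad
\mathbf{Y}(f,t) = \mathbf{W}(f)\,\mathbf{X}(f,t), \qquad
\mathbf{W}(f) \leftarrow \mathrm{diag}\!\big(\mathbf{W}(f)^{-1}\big)\,\mathbf{W}(f),
\]

while the permutation ambiguity (the per-bin ordering of the rows of W(f)) must still be aligned across bins, e.g. by correlating envelope activity or using direction-of-arrival information, before the time-domain sources are reconstructed.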
20

Herrmann, Frank. "Independent component analysis with applications to blind source separation". Thesis, University of Liverpool, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399147.

Full text source
21

Alphey, Marcus J. T. "Blind source separation : the effects of signal non-stationarity". Thesis, University of Edinburgh, 2002. http://hdl.handle.net/1842/11220.

Full text source
Abstract:
This thesis investigates the effect of non-stationarity reduction, in the form of silence removal, on the performance of blind separation and deconvolution techniques for speech signals. An information-maximisation-based system is used for the separation of instantaneously mixed signals, and a decorrelating system for convolutively mixed signals. An introduction to the concepts of adaptive signal processing, blind signal processing and artificial neural networks is presented. A review of approaches to solving the blind signal separation and deconvolution problems is provided. The susceptibility of the information-maximisation approach to signal non-stationarity is discussed, and two methods of silence identification and removal are compared and used to pre-process data before blind separation. The "infomax" approach is used to separate instantaneous mixtures, and is also modified to incorporate silence assessment and removal techniques to form an on-line system. Further modifications are made to the algorithm to investigate the effect of alternative update strategies, and these are compared with experimental results from identical modifications to diverse separating algorithms. A performance metric is used to assess the quality of separation achieved. The application of these techniques to convolutively mixed speech signals is also investigated, using the CoBliSS algorithm. The effectiveness of the application of the silence removal techniques to both the time-domain and frequency-domain representations of the outputs is tested. While this form of non-stationarity reduction improves the rate of convergence for instantaneous mixtures, it does not cause any significant improvement in separation performance under most of the experimental conditions tested. No significant difference in performance was noted for the separation of convolutive mixtures in either the time or frequency domain.
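For reference, the information-maximisation ("infomax") separation examined in this abstract is usually implemented with the natural-gradient update (standard form for super-Gaussian sources such as speech; the thesis's modified update strategies are variations on this):

\[
\Delta \mathbf{W} \;\propto\; \big(\mathbf{I} - \mathbb{E}\big[\varphi(\mathbf{y})\,\mathbf{y}^{T}\big]\big)\,\mathbf{W},
\qquad \mathbf{y} = \mathbf{W}\mathbf{x}, \quad \varphi(y_i) = \tanh(y_i),
\]

which makes clear why silence removal matters: it changes the signal statistics over which the expectation is taken and hence the convergence behaviour of the update.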
22

Guddeti, Ram Mohana Reddy. "Perceptually motivated blind source separation of convolutive audio mixtures". Thesis, University of Edinburgh, 2005. http://hdl.handle.net/1842/12073.

Full text source
Abstract:
The first objective of this thesis is to apply psycho-acoustic principles to the spatial processing of speech signals in noisy and reverberant environments. The key assumption that will be adopted is that modern signal processing has failed to mimic the cocktail party effect because there has been no attempt to adequately incorporate the psycho-acoustical phenomenon of audio masking to aid source separation. A quasi-linear mechanism for mimicking simultaneous frequency masking and temporal masking (post-masking) is developed. This framework is used to construct blind source separation algorithms that exploit audio masking prior to source separation (preprocessor) and after source separation (postprocessor). The final objective of this thesis is to exploit the perceptual irrelevancy of parts of the input speech spectrum using the perceptual masking techniques before utilizing the subspace method as a preprocessor for frequency-domain ICA (FDICA); the subspace method reduces the effect of room reflections in advance, and the remaining direct sounds are then separated by ICA. Incorporating the perceptual masking techniques prior to the application of FDICA with the subspace method as a preprocessor not only reduces the computational complexity of the similarity measure for solving the permutations but also avoids the so-called permutation problem by targeting a specific speech signal that is more intelligible than the available microphone signals.
23

Babaiezadeh, Malmiri Massoud. "On blind source separation in convolutive and nonlinear mixtures". Grenoble INPG, 2002. http://www.theses.fr/2002INPG0065.

Full text source
Abstract:
In this thesis, blind source separation in Convolutive Post-Nonlinear (CPNL) mixtures is studied. To separate this type of mixture, we first developed new methods for separating convolutive mixtures and Post-Nonlinear (PNL) mixtures. These methods are all based on the minimisation of the mutual information of the outputs. To minimise the mutual information, we first compute its "differential", that is, its variation with respect to a small variation of its argument. This differential is then used to design gradient-type approaches for minimising the mutual information of the outputs. These approaches can be applied to the blind separation of instantaneous linear, convolutive, PNL and CPNL mixtures.
24

Riaz, Areeb. "Adaptive blind source separation based on intensity vector statistics". Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/810208/.

Full text source
Abstract:
The human brain has the ability to focus on a desired acoustic source when several sources are active. In the domain of digital electronics this problem is termed the cocktail party problem. Over the past few decades many algorithms have been proposed which attempt to solve this problem; they are generally termed acoustic source separation algorithms. The proposed algorithms achieve separation of individual source components from observed acoustic mixtures. The source separation system may be capable of estimating the number of sources, their physical locations, the room impulse response and/or any target source signal information. A system that approximates this information is termed blind. Source separation systems which require any such information beforehand are termed semi-blind. Most of the proposed source separation algorithms deal with acoustic sources that are stationary in space. A more challenging task is to approximate unmixing filters while the sources are constantly moving. To maintain output performance in such a scenario, the source separation system has to swiftly and accurately detect the time-variant mixing parameters, and update the unmixing filters accordingly. The area of moving sources has still not been heavily investigated by researchers. The aim of this thesis is to further the field of acoustic source separation. An investigation of the intensity vector direction (IVD) based source separation algorithm was carried out to analyse and improve the system, both in terms of applicability and output sound quality. The algorithm under investigation provides a robust and nearly closed-form solution to the source separation problem with a low processing time. However, the algorithm initially required unmixing filter coefficients as input for dealing with practical acoustic scenarios. Analysis performed with the microphone array response, the microphone array geometry and the room response yielded three different modifications to the baseline system, improving system applicability and output sound quality. The IVD-based system was also investigated to deal with more challenging acoustic scenarios, such as a time-variant number of sources. Likewise, the IVD statistics were analysed to propose solutions for the moving-sources scenario. The system exhibited the potential to swiftly, accurately and reliably detect changes in the time-varying mixing parameters. As a result of these investigations, a novel system pipeline is proposed, capable of detecting, tracking and separating moving sources in a blind manner. The proposed algorithms were evaluated for processing time and separation performance. Optimisation of output sound quality was carried out through objective performance measures, while speaker tracking was evaluated subjectively. Finally, a demonstration was developed in Matlab based on the proposed algorithms to facilitate user interaction with the surrounding acoustic environment.
25

Kervazo, Christophe. "Optimization framework for large-scale sparse blind source separation". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS354/document.

Full text source
Abstract:
During the last decades, Blind Source Separation (BSS) has become a key analysis tool to study multi-valued data. The objective of this thesis is however to focus on large-scale settings, for which most classical algorithms fail. More specifically, it is subdivided into four sub-problems rooted in the large-scale sparse BSS issue: i) introduce a mathematically sound, robust sparse BSS algorithm which does not require any relaunch (despite a difficult hyper-parameter choice); ii) introduce a method able to maintain high-quality separations even when a large number of sources needs to be estimated; iii) make a classical sparse BSS algorithm scalable to large-scale datasets; and iv) an extension to the non-linear sparse BSS problem. The methods we propose are extensively tested in both simulated and realistic experiments to demonstrate their quality. In-depth interpretations of the results are provided.
26

Zou, Liang. "Underdetermined joint blind source separation with application to physiological data". Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/63013.

Full text source
Abstract:
Blind Source Separation (BSS) methods have been attracting increasing attention for their promising applications in signal processing. Despite recent progress in BSS research, challenges remain. Specifically, this dissertation focuses on developing novel Underdetermined Blind Source Separation (UBSS) methods that can deal with several specific challenges in real applications, including a limited number of observations, self/cross-dependence information and source inference in the underdetermined case. First, by taking advantage of Noise-Assisted Multivariate Empirical Mode Decomposition (NAMEMD) and Multiset Canonical Correlation Analysis (MCCA), we propose a novel BSS framework and apply it to extract the heartbeat signal from noisy nano-sensor signals. Furthermore, we generalize the idea of (over)determined joint BSS to the underdetermined case. We explore the dependence information between two datasets and propose an underdetermined joint BSS method for two datasets, termed UJBSS-2. In addition, by exploiting the cross-correlation between each pair of datasets, we develop a novel and effective method to jointly estimate the mixing matrices from multiple datasets, referred to as Underdetermined Joint Blind Source Separation for Multiple Datasets (UJBSS-M). In order to improve the time efficiency and relax the sparsity constraint, we recover the latent sources based on a subspace representation once the mixing matrices are estimated. As an example application of noise-enhanced signal processing, the proposed UJBSS-M method can also be utilized to solve the single-set UBSS problem when suitable noise is added to the observations. Finally, considering the recent increasing need for biomedical signal processing in the ambulatory environment, we propose a novel UBSS method for removing electromyogram (EMG) artifacts from electroencephalography (EEG) signals. The proposed method for recovering the underlying sources is also applicable to other artifact removal problems. Simulation results demonstrate that the proposed methods yield superior performance over conventional approaches. We also evaluate the proposed methods on real physiological data, and the proposed methods are shown to effectively and efficiently recover the underlying sources.
Faculty of Applied Science, Department of Electrical and Computer Engineering, Graduate.
27

Kvernelv, Vegard Berg. "Optimization on Matrix Manifolds with Applications to Blind Source Separation". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for fysikk, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-22688.

Full text source
Abstract:
Study how concepts from optimisation theory generalise to manifolds, more specifically matrix manifolds, and assess how this can be applied to blind source separation problems.
28

Wehr, Stefan [Verfasser]. "Robust Binaural Blind Source Separation in Hearing Aids / Stefan Wehr". München : Verlag Dr. Hut, 2013. http://d-nb.info/1031844627/34.

Full text source
29

Jafari, Maria Grazia. "Novel sequential algorithms for blind source separation of instantaneous mixtures". Thesis, King's College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397682.

Full text source
30

Remaggi, Luca. "Acoustic reflector localisation for blind source separation and spatial audio". Thesis, University of Surrey, 2017. http://epubs.surrey.ac.uk/842217/.

Full text source
Abstract:
From a physical point of view, sound is classically defined by wave functions. As with every other physical model based on waves, during its propagation it interacts with the obstacles it encounters. These interactions result in reflections of the main signal that can be classified as either supportive or interfering. In the signal processing research field it is therefore important to identify these reflections, in order to either exploit or avoid them, respectively. The main contribution of this thesis focuses on acoustic reflector localisation. Four novel methods are proposed: a method localising the image source before finding the reflector position; two variants of this method, which utilise information from multiple loudspeakers; and a method directly localising the reflector without any pre-processing. Finally, utilising both simulated and measured data, a comparative evaluation is conducted among different acoustic reflector localisation methods. The results show the last proposed method outperforming the state-of-the-art. The second contribution of this thesis is the application of the acoustic reflector localisation solution to spatial audio, with the main objective of providing listeners with the sensation of being in the recorded environment. A novel way of encoding and decoding the room acoustic information is proposed, by parametrising sounds and defining them as reverberant spatial audio objects (RSAOs). A set of subjective assessments is performed. The results demonstrate both the high quality of the sound produced by the proposed parametrisation and the reliability of manually modifying the acoustics of recorded environments. The third contribution is in the field of speech source separation. A modified version of a state-of-the-art method is presented, in which the direct sound and first-reflection information is utilised to model binaural cues. Experiments were performed to separate speech sources in different environments. The results show the new method outperforming the state-of-the-art when one interferer is present in the recordings. The simulation and experimental results presented in this thesis represent a significant addition to the literature and will influence future choices of acoustic reflector localisation systems, 3D rendering, and source separation techniques. Future work may focus on the fusion of acoustic and visual cues to enhance acoustic scene analysis.
31

Sudhakara, Murthy Prasad. "Sparse models and convex optimisation for convolutive blind source separation". Rennes 1, 2011. https://tel.archives-ouvertes.fr/tel-00586610.

Full text source
Abstract:
Blind source separation from underdetermined mixtures is usually a two-step process: the estimation of the mixing filters, followed by that of the sources. An enabling assumption is that the sources are sparse and disjoint in the time-frequency domain. For convolutive mixtures, the solution is not straightforward due to the permutation and scaling ambiguities. The sparsity of the filters in the time domain is also an enabling factor for blind filter estimation approaches that are based on the cross-relation. However, such approaches are restricted to the single-source setting. In this thesis, we jointly exploit the sparsity of the sources and the mixing filters for blind estimation of sparse filters from stereo convolutive mixtures of several sources. First, we show why the sparsity of the filters can help solve the permutation problem in convolutive source separation, in the absence of scaling. Then, we propose a two-stage estimation framework, which is primarily based on the time-frequency domain cross-relation and an ℓ1 minimisation formulation: a) a clustering step to group, for each source, the time-frequency points where only one source is active; b) a convex optimisation step which estimates the filters. The resulting algorithms are assessed on audio source separation and filter estimation problems.
32

Roussos, Evangelos. "Bayesian methods for sparse data decomposition and blind source separation". Thesis, University of Oxford, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.589766.

Full text source
Abstract:
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or 'sources' via a generally unknown mapping. Reconstructing sources from their mixtures is an extremely ill-posed problem in general. However, solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. This Thesis proposes the use of sparse statistical decomposition methods for exploratory analysis of datasets. We make use of the fact that many natural signals have a sparse representation in appropriate signal dictionaries. The work described in this Thesis is mainly driven by problems in the analysis of large datasets, such as those from functional magnetic resonance imaging of the brain for the neuro-scientific goal of extracting relevant 'maps' from the data. We first propose Bayesian Iterative Thresholding, a general method for solving blind linear inverse problems under sparsity constraints, and we apply it to the problem of blind source separation. The algorithm is derived by maximizing a variational lower-bound on the likelihood. The algorithm generalizes the recently proposed method of Iterative Thresholding. The probabilistic view enables us to automatically estimate various hyperparameters, such as those that control the shape of the prior and the threshold, in a principled manner. We then derive an efficient fully Bayesian sparse matrix factorization model for exploratory analysis and modelling of spatio-temporal data such as fMRI. We view sparse representation as a problem in Bayesian inference, following a machine learning approach, and construct a structured generative latent-variable model employing adaptive sparsity-inducing priors. The construction allows for automatic complexity control and regularization as well as denoising. The performance and utility of the proposed algorithms is demonstrated on a variety of experiments using both simulated and real datasets. Experimental results with benchmark datasets show that the proposed algorithms outperform state-of-the-art tools for model-free decompositions such as independent component analysis.
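The Iterative Thresholding method that this abstract says is generalised has a compact standard form (ISTA, shown here for orientation in generic notation rather than the thesis's Bayesian variant):

\[
\mathbf{x}^{(k+1)} = \mathcal{S}_{\lambda/L}\!\Big(\mathbf{x}^{(k)} + \tfrac{1}{L}\,\mathbf{A}^{T}\big(\mathbf{y} - \mathbf{A}\mathbf{x}^{(k)}\big)\Big),
\qquad
\mathcal{S}_{\tau}(u) = \operatorname{sign}(u)\,\max(|u| - \tau,\, 0),
\]

where A is the dictionary or mixing matrix, L a Lipschitz constant of the data-fidelity gradient and λ the sparsity weight; in the Bayesian treatment described above, quantities such as λ and the threshold shape are estimated from the data rather than fixed by hand.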
33

Vikram, Anil Babu. "Tracking in wireless sensor network using blind source separation algorithms". Cleveland, Ohio : Cleveland State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=csu1259959597.

Full text source
Abstract:
Thesis (M.S.)--Cleveland State University, 2009.
Abstract. Title from PDF t.p. (viewed on Dec. 2, 2009). Includes bibliographical references (p. 65-72). Available online via the OhioLINK ETD Center and also available in print.
34

Qi, Huan. "Video-based cardiac physiological measurements using joint blind source separation approaches". Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54005.

Full text source
Abstract:
Non-contact measurements of human cardiopulmonary physiological parameters based on photoplethysmography (PPG) can lead to efficient and comfortable medical assessment. It was shown that human facial blood volume variation during the cardiac cycle can be indirectly captured by regular Red-Green-Blue (RGB) cameras. However, few attempts have been made to incorporate data from different facial sub-regions to improve remote measurement performance. In this thesis, we propose a novel framework for non-contact video-based human heart rate (HR) measurement by exploring correlations among facial sub-regions via joint blind source separation (J-BSS). In an experiment involving video data collected from 16 subjects, we compare the non-contact HR measurement results obtained from a commercial digital camera to results from a Health Canada and Food and Drug Administration (FDA) licensed contact blood volume pulse (BVP) sensor. We further test our framework on a large public database, which provides the subjects' left-thumb plethysmograph signal as ground truth. Experimental results show that the proposed framework outperforms the state-of-the-art independent component analysis (ICA)-based methodologies. In-vehicle driver physiological monitoring is of great importance for providing a comfortable driving environment and preventing road accidents. Contact sensors can be placed on the driver's body to measure various physiological parameters. However, such sensors may cause discomfort or distraction. The development of non-contact techniques can provide a promising solution. In this thesis, we employ our proposed non-contact video-based HR measurement framework to monitor the driver's heart rate and perform heart rate variability analysis using a simple consumer-level webcam. Experiments in real-world road driving demonstrate that the proposed non-contact framework is promising even in the presence of unstable illumination variation and head movement.
Faculty of Applied Science, Department of Electrical and Computer Engineering, Graduate.
35

Choi, Hyung Keun. "Blind source separation of the audio signals in a real world". Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/14986.

Full text source
36

Abolghasemi, Vahid. "Advances in compressive sensing and its application in blind source separation". Thesis, University of Surrey, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543283.

Full text source
37

Kiani, Saeed. "Blind source separation in dynamic contrast enhanced magnetic resonance imaging renography". Thesis, University of Surrey, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.616917.

Full text source
Abstract:
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) renography is a desirable kidney assessment methodology owing to the lack of ionizing radiation in MRI and its capability of producing high-resolution anatomical image data as well as physiological data. DCE-MRI renography emerged with the view to provide a minimally invasive framework to quickly and accurately assess kidney function, for example, to measure the glomerular filtration rate (GFR). However, despite considerable developments, it is not yet considered a robust technique for renal assessment. This is due to a number of confounding factors ranging from the optimization of data acquisition parameters to data post-processing challenges such as organ motion (mainly due to breathing), segmentation, the partial volume (PV) effect (a signal mixing phenomenon) and tracer kinetic modelling. Prior work, including registration-based motion correction techniques, semi-automatic segmentation based on similarity measures and a template-based PV correction method, has not provided a complete and practical solution. In this work, a blind source separation (BSS) approach based on time-delayed decorrelation and temporal independent component analysis (ICA) was proposed to unmix physiological signals and remove the undesired motion artefacts. To evaluate the technique, test data were constructed using kidney, liver and non-specific tissue dynamic MR signals. The source signals were correctly identified with small errors and coefficient of determination r² values of 0.85-0.99 between the independent components (ICs) and the source signals.
38

Stokes, Tobias W. "Improving the perceptual quality of single-channel blind audio source separation". Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/807786/.

Full text source
Abstract:
Given a mixture of audio sources, a blind audio source separation (BASS) tool is required to extract audio relating to one specific source whilst attenuating that related to all others. This thesis answers the question "How can the perceptual quality of BASS be improved for broadcasting applications?" The most common source separation scenario, particularly in the field of broadcasting, is single channel, and this is particularly challenging as a limited set of cues are available. Broadcasting also requires that a source separator is automated, capable of handling non-stationary, reverberant mixtures and able to separate an unknown number of sources. In the single-channel case, the time-frequency mask is common as a method of separation. However, this process produces artefacts in the separated audio. The perceptual evaluation for audio source separation (PEASS) toolkit represents an efficient way to generate a multi-dimensional measure of perceptual quality. Initial experimental work, using ideal target and interferer estimates, uses PEASS to test variations on the ideal binary mask and shows continuous masks are perceptually better than binary while identifying a trade-off between artefacts and interferer suppression. To explore the optimisation of this trade-off, a series of sigmoidal functions are used to map target-to-mixture ratios to mask coefficients. This leads to a mask, with less target-to-mixture based discrimination than those typically found in the literature, being identified as the optimum. Further experiments applying offsets, hysteresis, smoothing and frequency-dependency to the mask do not show any benefit in audio quality. The optimal sigmoidal mask is demonstrated to also be superior under non-ideal conditions using a non-negative matrix factorisation algorithm to produce the estimates. A final listening test compares the outputs of binary, ratio and optimal sigmoidal masks, concluding that listeners prefer the ratio mask to the sigmoidal mask and both continuous masks to the binary mask.
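For context, the masks compared in this abstract can be written in their standard forms (the thesis's optimised sigmoid parameters are not reproduced here):

\[
M_{\mathrm{IBM}}(f,t) = \begin{cases} 1, & \mathrm{TMR}(f,t) > \theta \\ 0, & \text{otherwise,} \end{cases}
\qquad
M_{\mathrm{ratio}}(f,t) = \frac{|S(f,t)|^2}{|S(f,t)|^2 + |N(f,t)|^2},
\qquad
M_{\sigma}(f,t) = \frac{1}{1 + e^{-\alpha(\mathrm{TMR}(f,t) - \beta)}},
\]

where S and N are the target and interferer spectrograms, TMR(f,t) the target-to-mixture ratio in dB, θ a threshold, and α, β the slope and offset of the sigmoidal mapping; the separated signal is obtained by applying the chosen mask to the mixture spectrogram and inverting the transform. (The ratio mask is given here in its Wiener-like squared-magnitude form; other variants exist.)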
39

Lee, In Tae. "Machine learning algorithms for independent vector analysis and blind source separation". Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3373454.

Full text source
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed October 22, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 59-63) and index.
40

Domingo, Almenara Xavier. "Automated mass spectrometry-based metabolomics data processing by blind source separation methods". Doctoral thesis, Universitat Rovira i Virgili, 2016. http://hdl.handle.net/10803/397799.

Full text source
Abstract:
One of the major bottlenecks in metabolomics is converting raw data into biologically interpretable information. Moreover, mass spectrometry-based metabolomics generates large and complex datasets characterized by co-eluting compounds and experimental artifacts. The main objective of this thesis is to develop automated strategies based on blind source separation that improve on the capabilities of current methods at each step of the metabolomics data processing workflow. A further objective is to develop tools capable of performing the entire metabolomics workflow for GC-MS, including pre-processing, spectral deconvolution, alignment and identification. As a result, three new automated methods for spectral deconvolution based on blind source separation were developed. These methods were embedded into two computational tools able to automatically convert raw data into biologically interpretable information and thus help answer biological questions and discover new biological insights.
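The thesis' own deconvolution algorithms are not reproduced here, but the following hedged sketch illustrates the underlying idea of blind spectral deconvolution of a GC-MS chromatographic window into elution profiles and pure spectra, using a generic non-negative matrix factorisation as a stand-in method; the synthetic data and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical GC-MS chromatographic window: rows = scans (retention time),
# columns = m/z channels; two co-eluting compounds plus noise.
rng = np.random.default_rng(1)
scans, channels, n_compounds = 200, 120, 2
elution = np.exp(-0.5 * ((np.arange(scans)[:, None] - [80, 110]) / [8, 12]) ** 2)
spectra = rng.random((n_compounds, channels)) ** 4        # sparse-ish pure spectra
X = elution @ spectra + 0.01 * rng.random((scans, channels))

# Blind deconvolution of the window into elution profiles (W) and spectra (H).
model = NMF(n_components=n_compounds, init="nndsvda", max_iter=500)
W = model.fit_transform(X)     # estimated elution profiles, scans x compounds
H = model.components_          # estimated pure spectra, compounds x m/z
print(W.shape, H.shape)
```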
Style APA, Harvard, Vancouver, ISO itp.
41

Kokkinakis, Konstantinos. "Multichannel blind deconvolution methods for source separation in convolutive mixtures of speech". Thesis, University of Liverpool, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426119.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
42

Wedekind, Daniel, Alexander Trumpp, Frederik Gaetjen, Stefan Rasche, Klaus Matschke, Hagen Malberg i Sebastian Zaunseder. "Assessment of blind source separation techniques for video-based cardiac pulse extraction". SPIE, 2017. https://tud.qucosa.de/id/qucosa%3A35267.

Pełny tekst źródła
Streszczenie:
Blind source separation (BSS) aims at separating useful signal content from distortions. In the contactless acquisition of vital signs by means of the camera-based photoplethysmogram (cbPPG), BSS has evolved into the most widely used approach to extract the cardiac pulse. Despite its frequent application, there is no consensus about the optimal usage of BSS and its general benefit. This contribution investigates the performance of BSS in enhancing the cardiac pulse from cbPPGs in dependence on varying input data characteristics. The BSS input conditions are controlled by an automated spatial preselection routine over regions of interest. Input data of different characteristics (wavelength, dominant frequency and signal quality) from 18 postoperative cardiovascular patients are processed with standard BSS techniques, namely principal component analysis (PCA) and independent component analysis (ICA). The effect of BSS is assessed by the spectral signal-to-noise ratio (SNR) of the cardiac pulse. The preselection of cbPPGs appears beneficial, providing higher SNR compared to standard cbPPGs. Both PCA and ICA yielded better outcomes using monochrome inputs (green wavelength) instead of inputs of different wavelengths. PCA outperforms ICA for more homogeneous input signals. Moreover, for high input SNR, the application of ICA using the standard contrast is likely to decrease the SNR.
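A minimal sketch of the kind of comparison described above: PCA and ICA applied to synthetic multi-channel cbPPG-like traces, scored by a spectral SNR around a known pulse frequency. The SNR definition, the signal model and all parameter values are assumptions for illustration, not those of the paper.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def spectral_snr(signal, fs, f_pulse, band=0.2):
    """Ratio (dB) of spectral power near the pulse frequency and its first
    harmonic to the power of the remaining 0.5-5 Hz band."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    in_band = ((np.abs(freqs - f_pulse) < band) |
               (np.abs(freqs - 2 * f_pulse) < band))
    total = (freqs > 0.5) & (freqs < 5.0)
    return 10 * np.log10(spectrum[in_band & total].sum()
                         / spectrum[total & ~in_band].sum())

# Hypothetical cbPPG traces (e.g. R, G, B channels of one region of interest).
fs, seconds, f_pulse = 30.0, 30, 1.2
t = np.arange(int(fs * seconds)) / fs
rng = np.random.default_rng(2)
pulse = 0.5 * np.sin(2 * np.pi * f_pulse * t)
X = np.stack([a * pulse + rng.normal(scale=s, size=t.size)
              for a, s in [(0.3, 1.0), (1.0, 0.8), (0.5, 1.2)]], axis=1)

for name, model in [("PCA", PCA(n_components=3)),
                    ("ICA", FastICA(n_components=3, random_state=0))]:
    comps = model.fit_transform(X)
    best = max(spectral_snr(comps[:, k], fs, f_pulse) for k in range(comps.shape[1]))
    print(name, "best component SNR [dB]:", round(best, 1))
```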
Style APA, Harvard, Vancouver, ISO itp.
43

Chao, Jih-Cheng. "On the design of robust criteria and algorithms for blind source separation". Ann Arbor, Mich. : ProQuest, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3284333.

Pełny tekst źródła
Streszczenie:
Thesis (Ph.D. in Electrical Engineering)--S.M.U., 2007.
Title from PDF title page (viewed Nov. 19, 2009). Source: Dissertation Abstracts International, Volume: 68-11, Section: B, page: 7530. Adviser: Scott C. Douglas. Includes bibliographical references.
Style APA, Harvard, Vancouver, ISO itp.
44

Gascon-Pelegri, Vicente Zarzoso. "Closed-form higher-order estimators for blind separation of independent source signals in instantaneous linear mixtures". Thesis, University of Liverpool, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343754.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
45

El Halabi, Ramzi. "Blind source separation of single-sensor recordings : Application to ground reaction force signals". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSES031/document.

Pełny tekst źródła
Streszczenie:
Multichannel signals are signals captured through several channels or sensors, each carrying a mixture of sources, some of which are known while the rest remain unknown. The methods by which sources are isolated or separated are known as source separation methods in general and, when the degree of unknowns is large, as blind source separation (BSS). BSS applied to multichannel signals is, however, mathematically easier than BSS applied to single-channel signals, where a single sensor exists and all signals arrive at the same point, producing one mixture of unknown sources; the latter is the subject of this thesis. We developed a new BSS technique, a combination of several separation and optimisation methods based on non-negative matrix factorisation (NMF). Such a method can be used in many domains, such as the analysis of sound and speech, stock market variations and seismograms. Here, however, single-channel vertical ground reaction force (VGRF) signals from a group of ultra-marathon runners are analysed and separated in order to extract the passive peak from the active peak, in a new way adapted to the nature of these signals. VGRF signals are cyclostationary signals characterised by double peaks, each very fast and sparse, indicating the athlete's running phases. Analysing these peaks is extremely important for determining and predicting the runner's condition: physiological problems, anatomical problems, fatigue, etc. Moreover, many researchers have shown that analysing the abrupt impact of the rearfoot with the ground can lead to the prediction of internal injury; some even advocate a running technique, non-heel-strike (NHS) running, in which runners run on the forefoot only. To study this phenomenon, separating the impact peak from the VGRF isolates the source carrying the patho-physiological information and the degree of fatigue. We introduced new pre-processing and processing methods for VGRF signals to replace the traditional noise filtering used elsewhere, which can destroy the impact peaks that are the very sources to be separated; the filtering is based on the concept of spectral subtraction used with speech signals, applied after an intelligent, adaptive sampling algorithm that decomposes the signals into isolated steps. A time-dependent analysis of the VGRF signals was carried out to detect and quantify the runners' fatigue over the 24 hours of the race. This analysis was performed in the frequency/spectral domain, where we detected a clear shift of the frequency content as the race progressed, indicating the progression of fatigue. We defined cyclosparse signals in the time domain, then translated this definition into its time-frequency equivalent using the short-time Fourier transform (STFT). This representation was decomposed by a new method that we called Cyclosparse Non-negative Matrix Factorisation (Cyclosparse-NMF), based on minimising the Kullback-Leibler (KL) divergence with penalties related to the periodicity and sparsity of the sources, with the final goal of extracting the cyclosparse sources from the single-channel mixture, applied to single-channel VGRF signals. The method was tested on synthetic signals to prove the effectiveness of the algorithm. The results proved satisfactory, and the impact peak was separated from the single-channel VGRF mixture.
The purpose of the presented work is to develop a customized single-channel blind source separation technique that aims to separate cyclostationary and transient pulse-like patterns/sources from a linear instantaneous mixture of unknown sources. To that end, synthetic signals with the mentioned characteristics were created to confirm the success of the separation, in addition to real-life signals acquired in an experiment in which experienced athletes participated in a 24-hour ultra-marathon in a lab environment on an instrumented treadmill, through which their VGRF, which carries a cyclosparse impact peak, was continuously recorded with very short discontinuities, during which blood was drawn for in-run testing, short enough not to provide rest to the athletes. The synthetic and VGRF signals were then pre-processed, processed for impact pattern extraction via a customized single-channel blind source separation technique that we termed Cyclosparse Non-negative Matrix Factorization, and analyzed for fatigue assessment. As a result, the impact patterns for all of the participating athletes were extracted at 10 different time intervals covering the progression of the 24-hour ultra-marathon, and further analysis and comparison of the resulting signals proved to be of major significance in the field of fatigue assessment: the impact pattern power increased monotonically for 90% of the subjects, by an average of 24.4 ± 15%, as the ultra-marathon progressed over the 24-hour period. From the output of the impact pattern separation algorithm, fatigue progression was manifested by an increased reliance on heel-strike impact to push the bodyweight, compensating for the decrease in muscle power during propulsion at toe-off. This study, among other work presented in the field of VGRF processing, provides methods that could be implemented in wearable devices to assess and track runners' gait as part of sports performance analysis, rehabilitation tracking and the classification of healthy versus unhealthy gait.
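As a rough illustration of the factorisation idea named in the abstract, the sketch below runs a KL-divergence NMF with an L1 sparsity penalty on the magnitude STFT of a synthetic single-channel signal containing brief periodic impacts. It is not the thesis' Cyclosparse-NMF (in particular, the periodicity penalty is omitted), and the function and parameter names are illustrative.

```python
import numpy as np
from scipy.signal import stft

def kl_nmf_sparse(V, rank=2, sparsity=0.1, n_iter=200, eps=1e-9):
    """Multiplicative-update NMF minimising KL(V || WH) with an L1 penalty on H."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + sparsity + eps)
    return W, H

# Toy single-channel mixture: slow oscillation plus sparse periodic impacts.
fs = 1000
t = np.arange(20 * fs) / fs
impacts = (np.mod(t, 1.0) < 0.02).astype(float)          # brief pulse once per second
x = np.sin(2 * np.pi * 3 * t) + 2.0 * impacts
f, tt, Z = stft(x, fs=fs, nperseg=256)
W, H = kl_nmf_sparse(np.abs(Z), rank=2, sparsity=0.5)
print(W.shape, H.shape)   # spectral bases and (sparse) temporal activations
```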
Style APA, Harvard, Vancouver, ISO itp.
46

Picquenot, Adrien. "Introduction and application of a new blind source separation method for extended sources in X-ray astronomy". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP028.

Pełny tekst źródła
Streszczenie:
Some extended sources, such as supernova remnants, display in X-rays a remarkable diversity of morphologies that current spectro-imaging telescopes can detect with an exceptional level of precision. However, the analysis tools currently used in the study of high-energy astrophysical phenomena struggle to exploit the full potential of these data: standard analysis methods focus on the spectral information without exploiting the multiplicity of morphologies or the correlations between the spatial and spectral dimensions; for this reason, their capabilities are often limited, and measurements of physical parameters can be heavily contaminated by other components. In this thesis, we explore a new source separation method that fully exploits the spatial and spectral information contained in X-ray data, and their correlation. We begin by presenting how it works and the mathematical principles on which it relies, then study its performance on models of supernova remnants. We then turn to the broad question of error quantification, an area still largely unexplored in the fast-moving field of data analysis. Finally, we apply our method to the study of three physical problems: the asymmetries in the distribution of heavy elements in the remnant Cassiopeia A, the filamentary structures in the synchrotron emission of the same remnant, and the X-ray counterpart of the filamentary structures visible in the optical in the Perseus galaxy cluster.
Some extended sources, among which we find supernova remnants, present an outstanding diversity of morphologies that the current generation of spectro-imaging telescopes can detect with an unprecedented level of detail. However, the data analysis tools currently in use in the high-energy astrophysics community fail to take full advantage of these data: most of them focus only on the spectral information, without using the many spatial specificities or the correlation between the spectral and spatial dimensions. For that reason, the physical parameters that are retrieved are often heavily contaminated by other components. In this thesis, we explore a new blind source separation method that fully exploits both the spatial and spectral information in X-ray data, and their correlations. We begin with an exposition of the mathematical concepts on which the algorithm relies, and in particular the wavelet transforms. We then benchmark its performance on models of supernova remnants and investigate the vast question of error bars on non-linear estimators, still largely unanswered yet essential for data analysis and machine learning methods. Finally, we apply our method to the study of three physical problems: the asymmetries in the heavy-element distribution in the supernova remnant Cassiopeia A, the filamentary structures in the synchrotron emission of the same remnant, and the X-ray counterpart of optical filamentary structures in the Perseus galaxy cluster.
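The following heavily simplified sketch illustrates the general idea of sparsity-regularised blind source separation on spectro-imaging data unfolded into a channels-by-pixels matrix. It applies the sparsity constraint directly in the pixel domain with an alternating least-squares loop, whereas methods of the kind described above typically enforce sparsity in a wavelet domain (in the spirit of GMCA-type algorithms); all names and the synthetic data are assumptions.

```python
import numpy as np

def sparse_bss(X, n_sources=2, n_iter=50, thresh=0.1):
    """Toy alternating least-squares BSS with a soft-threshold (sparsity) prior
    on the spatial sources: X (channels x pixels) ~ A (channels x sources) @ S."""
    rng = np.random.default_rng(0)
    A = rng.random((X.shape[0], n_sources))
    for _ in range(n_iter):
        S = np.linalg.lstsq(A, X, rcond=None)[0]            # update source maps
        S = np.sign(S) * np.maximum(np.abs(S) - thresh, 0)  # sparsity via soft threshold
        A = np.linalg.lstsq(S.T, X.T, rcond=None)[0].T      # update mixing (spectra)
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12
    return A, S

# Hypothetical 3-channel "data cube" flattened to (channels x pixels).
rng = np.random.default_rng(3)
true_S = np.maximum(rng.normal(size=(2, 64 * 64)) - 1.5, 0)   # sparse source maps
true_A = np.array([[1.0, 0.2], [0.5, 1.0], [0.1, 0.8]])
X = true_A @ true_S + 0.01 * rng.normal(size=(3, 64 * 64))
A_hat, S_hat = sparse_bss(X, n_sources=2)
print(A_hat.shape, S_hat.shape)
```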
Style APA, Harvard, Vancouver, ISO itp.
47

Harris, Jack D. "Online source separation in reverberant environments exploiting known speaker locations". Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/19627.

Pełny tekst źródła
Streszczenie:
This thesis concerns blind source separation techniques using second-order and higher-order statistics for reverberant environments. A focus of the thesis is algorithmic simplicity, with a view to the algorithms being implemented in their online forms. The main challenge of blind source separation applications is to handle reverberant acoustic environments; a further complication is changes in the acoustic environment, such as when human speakers physically move. A novel time-domain method which utilises a pair of finite impulse response filters is proposed. The method of principal angles is defined, which exploits a singular value decomposition for their design. The pair of filters is implemented within a generalised sidelobe canceller structure; the method can thus be considered a beamforming method which cancels one source. An adaptive filtering stage is then employed to recover the remaining source, exploiting the output of the beamforming stage as a noise reference. A common approach to blind source separation is to use methods based on higher-order statistics, such as independent component analysis. When dealing with realistic convolutive audio and speech mixtures, processing in the frequency domain at each frequency bin is required; as a result this introduces the permutation problem, inherent in independent component analysis, across the frequency bins. Independent vector analysis directly addresses this issue by modelling the dependencies between frequency bins, namely by making use of a source vector prior. An alternative source prior for real-time (online) natural gradient independent vector analysis is proposed. A Student's t probability density function is known to be more suited to speech sources, due to its heavier tails, and is incorporated into a real-time version of natural gradient independent vector analysis. The final algorithm is realised as a real-time embedded application on a floating-point Texas Instruments digital signal processor platform. Moving sources, along with reverberant environments, cause significant problems in realistic source separation systems as the mixing filters become time-variant. A method which employs the pair of cancellation filters is proposed to cancel one source, coupled with an online natural gradient independent vector analysis technique, to improve average separation performance in the context of step-wise moving sources. This addresses 'dips' in performance when sources move; results show that the average convergence time of the performance parameters is improved. The online methods introduced in this thesis are tested using impulse responses measured in reverberant environments, demonstrating their robustness, and are shown to perform better than established methods in a variety of situations.
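A short sketch of the generic principal-angles computation (Björck-Golub, via the SVD) that underlies the filter-design idea mentioned above; it is not the thesis' filter design procedure itself, and the example subspaces are arbitrary.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B,
    computed from the singular values of Qa^T Qb."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))

# Toy usage: two subspaces of R^8 sharing one common direction.
rng = np.random.default_rng(4)
common = rng.normal(size=(8, 1))
A = np.hstack([common, rng.normal(size=(8, 1))])
B = np.hstack([common, rng.normal(size=(8, 1))])
print(np.degrees(principal_angles(A, B)))   # first angle close to 0 degrees
```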
Style APA, Harvard, Vancouver, ISO itp.
48

Cheong, Took Clive. "Blind source separation via independent and sparse component analysis with application to temporomandibular disorder". Thesis, Cardiff University, 2007. http://orca.cf.ac.uk/54590/.

Pełny tekst źródła
Streszczenie:
Blind source separation (BSS) addresses the problem of separating multichannel signals, observed by generally spatially separated sensors, into their constituent underlying sources. The passage of these sources through an unknown mixing medium results in the observed multichannel signals. This study focuses on BSS, with special emphasis on its application to temporomandibular disorder (TMD). TMD refers to all medical problems related to the temporomandibular joint (TMJ), which connects the lower jaw (mandible) and the temporal bone (skull). The overall objective of the work is to extract the two TMJ sound sources generated by the two TMJs from the bilateral recordings obtained from the auditory canals, so as to aid the clinician in diagnosis and in planning treatment policies. Firstly, the concept of 'variable tap length' is adopted in convolutive blind source separation. This relatively new concept has attracted attention in the field of adaptive signal processing, notably for the least mean square (LMS) algorithm, but has not yet been introduced in the context of blind signal separation. The flexibility of the tap length of the proposed approach allows the optimum tap length to be found, thereby mitigating computational complexity or catering for fractional delays arising in source separation. Secondly, a novel fixed-point BSS algorithm based on Ferrante's affine transformation is proposed. Ferrante's affine transformation provides the freedom to select the eigenvalues of the Jacobian matrix of the fixed-point function and thereby improves the convergence properties of the fixed-point iteration. Simulation studies demonstrate the improved convergence of the proposed approach compared to the well-known fixed-point FastICA algorithm. Thirdly, the underdetermined blind source separation problem is addressed using a filtering approach. An extension of the FastICA algorithm is devised which exploits the disparity in the kurtoses of the underlying sources to estimate the mixing matrix, and thereafter achieves source recovery by employing the l1-norm algorithm. Additionally, it is shown that FastICA can also be utilised to extract the sources, and it is illustrated how this scenario is particularly suitable for the separation of TMJ sounds. Finally, estimation of the fractional delays between the mixtures of the TMJ sources is proposed as a means for TMJ separation. The estimation of fractional delays is shown to simplify the source separation to a case of instantaneous BSS. The estimated delay then allows for an alignment of the TMJ mixtures, thereby overcoming a spacing constraint imposed by a well-known BSS technique, notably the DUET algorithm. The delay found from the TMJ bilateral recordings corroborates the range reported in the literature. Furthermore, TMJ source localisation is also addressed as an aid to the dental specialist.
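A minimal sketch of generic fractional delay estimation and alignment, of the kind that can reduce a delayed two-channel mixture to an instantaneous BSS problem as described above: cross-correlation with parabolic peak interpolation, followed by a Fourier-domain fractional shift. The estimator developed in the thesis is not reproduced; the toy signal and all names are illustrative.

```python
import numpy as np

def estimate_fractional_delay(x, y, fs):
    """Estimate the (possibly fractional) delay of y relative to x from the
    cross-correlation peak refined by parabolic interpolation."""
    n = len(x) + len(y) - 1
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    xcorr = np.fft.irfft(X.conj() * Y, n)
    xcorr = np.roll(xcorr, len(x) - 1)          # index len(x)-1 now holds lag 0
    k = np.argmax(xcorr)
    y0, y1, y2 = xcorr[k - 1], xcorr[k], xcorr[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # parabolic vertex offset
    return (k - (len(x) - 1) + frac) / fs

def align_by_delay(y, delay, fs):
    """Advance y by 'delay' seconds using a Fourier-domain phase shift."""
    n = len(y)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return np.fft.irfft(np.fft.rfft(y) * np.exp(2j * np.pi * freqs * delay), n)

# Toy usage: one source observed on two channels with a fractional delay.
fs = 8000
t = np.arange(fs) / fs
s = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)
true_delay = 2.4 / fs
x = s
y = align_by_delay(s, -true_delay, fs)           # delayed copy of the source
print(estimate_fractional_delay(x, y, fs) * fs)  # approximately 2.4 samples
```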
Style APA, Harvard, Vancouver, ISO itp.
49

Hattay, Jamel. "Wavelet-based lifting structures and blind source separation : applications to digital in-line holography". Rouen, 2016. http://www.theses.fr/2016ROUES016.

Pełny tekst źródła
Streszczenie:
This thesis develops processing methods, in the wavelet domain, to solve certain problems arising in the implementation of digital in-line holography. The development uses tools from information theory and various signal processing techniques, in particular blind source separation (BSS). This technique is exploited here to improve the effectiveness of digital holography, for example for twin-image suppression, refractive index estimation, and the coding and real-time transmission of holograms. First, we give a brief introduction to the in-line configuration of digital holography as implemented at UMR 6614 CORIA: an explanation of the recording step and of the different hologram reconstruction approaches used in this thesis. We then present a state of the art of methods for overcoming the two main obstacles encountered in the reconstruction of digital holograms: the focusing step and the removal of the twin image. Next, we describe in detail a tool based on the wavelet transform, providing a multiresolution decomposition of the image, which enables the blind separation of convolutively mixed images. Our proposal is to use the second-generation wavelet transform in an adaptive manner, also called the Adaptive Quincunx Lifting Scheme (AQLS). This decomposition is coupled with an appropriate separation algorithm in the following three steps: the convolutively mixed input images are decomposed by AQLS to form a wavelet tree; the separation algorithm is then applied to the sparsest node, generally at the highest resolution; and finally the separated images are reconstructed using the inverse AQLS. This tool is applied to solve several problems related to digital in-line holography applications. In this context, two methods are proposed. The first, based on global entropy, automatically searches for the best focus plane of holographic images. The second removes the twin image that accompanies the reconstructed image; it relies on the AQLS decomposition together with a statistical separation algorithm based on the well-known Independent Component Analysis (ICA) technique. Since a convolution-product formalism is retained in the hologram formation step, the AQLS and ICA tools handle the deconvolution task well. Experimental results confirm that both proposed methods can estimate the best focus plane and eliminate the effect of the twin image in the reconstructed image. We then propose estimating the thickness of a ring in an image reconstructed from a hologram containing the diffraction pattern of a stable vapour bubble inside a liquid droplet. The last part introduces the new concept of tele-holography: setting up an interactive exchange between the in-situ hologram recording room and a remote laboratory where the digital processing of these holograms is carried out. To achieve this objective, we propose lossless compression of digital holograms by wavelet transform. For the progressive transmission phase, depending on the capacity of the transmission channel, we propose an efficient way of coding the embedded zero-tree of coefficients obtained by the quincunx wavelet transform (AQLS). This coder allows a considerable reduction in bit rate when transmitting holograms. Initial tests on real holograms recorded at the CORIA laboratory show a significant improvement in the overall compression ratio and in the size of the compressed hologram.
The present thesis is meant to develop specific processes, in the realm of the wavelet domain, for certain digital holography applications. We mainly use so-called blind source separation (BSS) techniques to solve numerous digital holography problems, namely twin-image suppression and the real-time coding and transmission of holograms. Firstly, we give a brief introduction to the in-line configuration of digital holography in flow measurements: an explanation of the recording step and a study of the two reconstruction approaches that have been used during this thesis. We then emphasize the two well-known obstacles of digital hologram reconstruction, namely the determination of the best focus plane and the removal of the twin image. Secondly, we propose a meticulous scrutiny of a tool, based on blind source separation and enhanced by a multiscale decomposition algorithm, which enables the blind separation of convolutively mixed images. The suggested algorithm uses a wavelet-based transform, called the Adaptive Quincunx Lifting Scheme (AQLS), coupled with an appropriate unmixing algorithm. The resulting deconvolution process is made up of three steps. In the first step, the convolutively mixed images are decomposed by AQLS. Then, the separation algorithm is applied to the most relevant component to unmix the transformed images. The unmixed images are thereafter reconstructed using the inverse of the AQLS transform. In a subsequent part, we adopt the blind source separation technique in the wavelet domain to solve several problems related to digital holography. In this context, we present two main contributions for digital in-line hologram processing. The first contribution is an entropy-based method to retrieve the best focus plane, a crucial issue in digital hologram reconstruction. The second contribution is a new approach to remove a common unwanted artifact in holography called the twin image. The latter contribution is based on the blind source separation technique, and the resulting algorithm is made up of two steps: an Adaptive Quincunx Lifting Scheme (AQLS) based on the wavelet packet transform and a statistical unmixing algorithm based on the Independent Component Analysis (ICA) tool. The role of the AQLS is to maximize the sparseness of the input holograms. Since the convolutive formalism is retained in digital in-line holography, the BSS-based tool is extended and coupled with the wavelet-based AQLS to fulfill the deconvolution task. Experimental results confirm that convolutive blind source separation is able to discard the unwanted twin image from digital in-line holograms. The last contribution of this part consists in measuring the thickness of a ring obtained from an improved reconstructed image of a hologram containing a vapour bubble created by thermal coupling between a laser pulse and nanoparticles in a liquid droplet. The last part introduces the tele-holography concept. Once the image of the object is perfectly reconstructed, the next objective is to code and transmit the reconstructed image for an interactive flow of exchange between a given laboratory, where the holograms are recorded, and a distant research partner. We propose a tele-holography process that involves the wavelet transform tool for the lossless compression and transmission of digital holograms. The concept of tele-holography is motivated by the fact that digital holograms are considered as 2D images yielding the depth information of 3D objects. Besides, we propose a quincunx embedded zero-tree wavelet (QEZW) coder for scalable transmission. Owing to the transmission channel capacity, it drastically reduces the bit rate of the holography transmission flow. A flurry of experimental results carried out on real digital holograms shows that the proposed lossless compression process yields a significant improvement in compression ratio and total compressed size. These experiments reveal the capacities of the proposed coder in terms of real bit rate for progressive transmission.
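As a hedged illustration of the entropy-based autofocus idea mentioned in both abstracts, the sketch below propagates a hologram to candidate depths with the angular spectrum method and scores each reconstruction with a histogram entropy. The thesis' actual focus criterion and reconstruction operator are not reproduced (in particular, whether the optimum is a minimum or a maximum depends on the exact criterion; a minimum is used here), and the hologram is random stand-in data.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field over distance z with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # evanescent components discarded
    return np.fft.ifft2(np.fft.fft2(field) * H)

def focus_entropy(amplitude, bins=256):
    """Shannon entropy of the normalised amplitude histogram (used as a sharpness score)."""
    hist, _ = np.histogram(amplitude, bins=bins, density=True)
    p = hist / (hist.sum() + 1e-12)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy search: reconstruct a stand-in hologram at candidate depths and keep the
# depth that minimises the entropy criterion.
wavelength, dx = 532e-9, 4e-6
hologram = np.random.default_rng(5).random((256, 256))   # stand-in for a recorded hologram
depths = np.linspace(0.01, 0.05, 9)
scores = [focus_entropy(np.abs(angular_spectrum(hologram, wavelength, dx, -z)))
          for z in depths]
print("best focus depth [m]:", depths[int(np.argmin(scores))])
```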
Style APA, Harvard, Vancouver, ISO itp.
50

Wang, Niya. "Unsupervised Signal Deconvolution for Multiscale Characterization of Tissue Heterogeneity". Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/73772.

Pełny tekst źródła
Streszczenie:
Characterizing complex tissues requires precise identification of distinctive cell types, cell-specific signatures, and subpopulation proportions. Tissue heterogeneity, arising from multiple cell types, is a major confounding factor in studying individual subpopulations and repopulation dynamics, and it cannot be resolved directly by most global molecular and genomic profiling methods. While signal deconvolution has widespread applications in many real-world problems, existing methods have significant limitations, mainly unrealistic assumptions and heuristics, leading to inaccurate or incorrect results. In this study, we formulate the signal deconvolution task as a blind source separation problem and develop novel unsupervised deconvolution methods within the Convex Analysis of Mixtures (CAM) framework for characterizing multi-scale tissue heterogeneity. We also exploratively test the application of the Significant Intercellular Genomic Heterogeneity (SIGH) method. Unlike existing deconvolution methods, CAM can identify tissue-specific markers directly from mixed signals, a critical task, without relying on any prior knowledge. Fundamental to the success of our approach is a geometric exploitation of tissue-specific markers and signal non-negativity. Using a well-grounded mathematical framework, we have proved new theorems showing that the scatter simplex of mixed signals is a rotated and compressed version of the scatter simplex of pure signals, and that the markers resident at the vertices of the scatter simplex are the tissue-specific markers. The algorithm works by geometrically locating the vertices of the scatter simplex of measured signals and their resident markers. The minimum description length (MDL) criterion is applied to determine the number of tissue populations in the sample. Based on the CAM principle, we integrated non-negative independent component analysis (nICA) and convex matrix factorization (CMF) methods, developed the CAM-nICA/CMF algorithm, and applied it to multiple gene expression, methylation and protein datasets, achieving very promising results validated by ground truth or gene enrichment analysis. We integrated CAM with compartment modeling (CM) and developed the multi-tissue compartment modeling (MTCM) algorithm, tested on real DCE-MRI data derived from mouse models with consistent and plausible results. We also developed an open-source R-Java software package that implements various CAM-based algorithms, including an R package approved by Bioconductor specifically for tumor-stroma deconvolution. While intercellular heterogeneity is often manifested by multiple clones with distinct sequences, systematic efforts to characterize intercellular genomic heterogeneity must effectively distinguish significant genuine clonal sequences from probabilistic fake derivatives. Based on preliminary studies originally targeting immune T-cells, we tested and applied the SIGH algorithm to characterize intercellular heterogeneity directly from mixed sequencing reads. SIGH works by exploiting the statistical differences in both the sequencing error rates at different nucleobases and the read counts of fake sequences in relation to genuine clones of variable abundance.
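A toy illustration of CAM's geometric insight described above: after row normalisation the mixed-signal scatter forms a simplex whose vertices are occupied by tissue-specific markers. The sketch plants marker genes in synthetic data and checks that they dominate the convex-hull vertices of a 2-D projection; it is a simplification (CAM itself uses clustering, simplex fitting and MDL-based model selection rather than a 2-D convex hull), and all data and names are synthetic.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(6)

# Hypothetical mixed expression X = S @ A (genes x samples) with 3 hidden tissue
# types and a block of planted tissue-specific marker genes for each type.
n_genes, n_samples, n_tissues = 400, 15, 3
S = rng.gamma(2.0, 1.0, size=(n_genes, n_tissues))
for k in range(n_tissues):
    S[30 * k:30 * (k + 1), :] = 0.05
    S[30 * k:30 * (k + 1), k] = 20.0              # markers expressed in one tissue only
A = rng.dirichlet(np.ones(n_tissues), size=n_samples).T
X = S @ A + 0.01 * rng.random((n_genes, n_samples))

# Perspective projection: each gene row is normalised so all genes live on a simplex;
# tissue-specific markers should sit at its vertices.
R = X / X.sum(axis=1, keepdims=True)

# Simplified vertex search: convex hull of a 2-D PCA projection of the gene cloud.
C = R - R.mean(axis=0)
P = C @ np.linalg.svd(C, full_matrices=False)[2][:2].T
hull_vertices = np.unique(ConvexHull(P).vertices)
planted_markers = set(range(30 * n_tissues))
print("hull vertices found:", len(hull_vertices))
print("fraction of hull vertices that are planted markers:",
      np.mean([v in planted_markers for v in hull_vertices]))
```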
Ph. D.
Style APA, Harvard, Vancouver, ISO itp.