Academic literature on the topic 'Sparse Bayesian learning (SBL)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sparse Bayesian learning (SBL).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is available in the metadata.

Journal articles on the topic "Sparse Bayesian learning (SBL)"

1

Yuan, Cheng, and Mingjun Su. "Seismic spectral sparse reflectivity inversion based on SBL-EM: experimental analysis and application." Journal of Geophysics and Engineering 16, no. 6 (October 18, 2019): 1124–38. http://dx.doi.org/10.1093/jge/gxz082.

Full text
Abstract:
In this paper, we propose a new method of seismic spectral sparse reflectivity inversion that, for the first time, introduces Expectation-Maximization-based sparse Bayesian learning (SBL-EM) to enhance the accuracy of stratal reflectivity estimation based on the frequency spectrum of seismic reflection data. Compared with the widely applied sequential algorithm-based sparse Bayesian learning (SBL-SA), SBL-EM is more robust to data noise and, generally, can not only find a sparse solution with higher precision, but also yield a better lateral continuity along the final profile. To investigate the potential of SBL-EM in a seismic spectral sparse reflectivity inversion, we evaluate the inversion results by comparing them with those of a SBL-SA-based approach in multiple aspects, including the sensitivity to different frequency bands, the robustness to data noise, the lateral continuity of the final profiles and so on. Furthermore, we apply the mean square error (MSE), residual variance (RV) of seismograms and residual energy (RE) between the frequency spectra of the true and inverted reflectivity model to highlight the advantages of the proposed method over the SBL-SA-based approach in terms of spectral sparse reflectivity inversion within a sparse Bayesian learning framework. Multiple examples, including both numerical and field experiments, are carried out to validate the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
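For orientation, the core computation behind EM-flavored SBL methods such as the one in the entry above can be written in a few lines. The sketch below is a generic SBL-EM loop for a sparse linear inverse problem y ≈ Ax, not the authors' reflectivity-inversion code; the random dictionary, noise level and spike positions are illustrative assumptions.

```python
# Minimal EM-style sparse Bayesian learning (SBL) sketch for y ≈ A x + noise.
# Each coefficient x_i gets its own prior variance gamma_i; EM alternates between
# the Gaussian posterior of x and closed-form updates of gamma and sigma^2.
import numpy as np

def sbl_em(A, y, sigma2=1e-2, n_iter=200, tol=1e-6):
    M, N = A.shape
    gamma = np.ones(N)
    mu = np.zeros(N)
    for _ in range(n_iter):
        gamma_old = gamma.copy()
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))   # posterior covariance
        mu = Sigma @ A.T @ y / sigma2                                    # posterior mean
        gamma = np.maximum(mu ** 2 + np.diag(Sigma), 1e-12)              # EM update: gamma_i = E[x_i^2]
        resid = y - A @ mu
        sigma2 = (resid @ resid + sigma2 * np.sum(1.0 - np.diag(Sigma) / gamma_old)) / M  # EM noise update
        if np.max(np.abs(gamma - gamma_old)) < tol:
            break
    return mu, gamma, sigma2

# Toy demo: recover a 3-spike "reflectivity" series from 60 random projections.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120); x_true[[10, 47, 90]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.05 * rng.standard_normal(60)
x_hat, _, _ = sbl_em(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # expected: indices near 10, 47, 90
```

Most of the gamma values collapse toward zero during the iterations, which is what produces a sparse solution without an explicit sparsity penalty.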
2

Shin, Myoungin, Wooyoung Hong, Keunhwa Lee, and Youngmin Choo. "Frequency Analysis of Acoustic Data Using Multiple-Measurement Sparse Bayesian Learning." Sensors 21, no. 17 (August 30, 2021): 5827. http://dx.doi.org/10.3390/s21175827.

Full text
Abstract:
Passive sonar systems are used to detect the acoustic signals that are radiated from marine objects (e.g., surface ships, submarines, etc.), and an accurate estimation of the frequency components is crucial to the target detection. In this paper, we introduce sparse Bayesian learning (SBL) for the frequency analysis after the corresponding linear system is established. Many algorithms, such as fast Fourier transform (FFT), estimation of signal parameters via rotational invariance techniques (ESPRIT), and multiple signal classification (RMUSIC), have been proposed for frequency detection. However, these algorithms have limitations: low estimation resolution with insufficient signal length (FFT), required knowledge of the number of signal frequency components, and performance degradation at low signal-to-noise ratio (ESPRIT and RMUSIC). The SBL, which reconstructs a sparse solution from the linear system using the Bayesian framework, has an advantage in frequency detection owing to its high resolution from the solution sparsity. Furthermore, in order to improve the robustness of the SBL-based frequency analysis, we exploit multiple measurements over time and space domains that share common frequency components. We compare the estimation results from FFT, ESPRIT, RMUSIC, and SBL using synthetic data, which display the superior performance of the SBL, with lower estimation errors and a higher recovery ratio. We also apply the SBL to the in-situ data along with the other schemes, and the frequency components from the SBL are revealed as the most effective. In particular, the SBL estimation is remarkably enhanced by multiple measurements from both the space and time domains, owing to retaining consistent signal frequency components while diminishing random noise frequency components.
APA, Harvard, Vancouver, ISO, and other styles
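As a rough illustration of the multiple-measurement idea above, the sketch below runs a generic MMV-SBL update on a Fourier dictionary shared by several noisy snapshots; the grid spacing, tone frequencies and noise level are toy assumptions, not values from the paper.

```python
# Multiple-measurement-vector (MMV) SBL sketch: snapshots share the same sparse
# frequency support, so a single gamma per dictionary column is learned from all of them.
import numpy as np

def mmv_sbl(A, Y, sigma2, n_iter=100):
    K = A.shape[1]
    gamma = np.ones(K)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(A.conj().T @ A / sigma2 + np.diag(1.0 / gamma))
        Mu = Sigma @ A.conj().T @ Y / sigma2                       # K x L posterior means
        gamma = np.mean(np.abs(Mu) ** 2, axis=1) + np.real(np.diag(Sigma))
    return gamma

# Toy demo: two tones shared by L short, noisy snapshots.
fs, n, L = 1000.0, 64, 8
t = np.arange(n) / fs
freqs = np.arange(0.0, fs / 2, 2.0)                                # 2 Hz frequency grid
A = np.exp(2j * np.pi * t[:, None] * freqs[None, :])
rng = np.random.default_rng(1)
Y = sum(np.exp(2j * np.pi * f * t)[:, None] * rng.standard_normal((1, L)) for f in (110.0, 240.0))
Y = Y + 0.1 * (rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L)))
gamma = mmv_sbl(A, Y, sigma2=0.02)
print(np.sort(freqs[np.argsort(gamma)[-2:]]))                      # expect peaks near 110 and 240 Hz
```

Averaging the posterior power over snapshots suppresses noise-only frequency bins, mirroring the paper's observation that consistent signal components survive while random noise components are diminished.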
3

NYEO, SU-LONG, and RAFAT R. ANSARI. "EARLY CATARACT DETECTION BY DYNAMIC LIGHT SCATTERING WITH SPARSE BAYESIAN LEARNING." Journal of Innovative Optical Health Sciences 02, no. 03 (July 2009): 303–13. http://dx.doi.org/10.1142/s1793545809000632.

Full text
Abstract:
Dynamic light scattering (DLS) is a promising technique for early cataract detection and for studying cataractogenesis. A novel probabilistic analysis tool, the sparse Bayesian learning (SBL) algorithm, is described for reconstructing the most-probable size distribution of α-crystallin and their aggregates in an ocular lens from the DLS data. The performance of the algorithm is evaluated by analyzing simulated correlation data from known distributions and DLS data from the ocular lenses of a fetal calf, a Rhesus monkey, and a man, so as to establish the required efficiency of the SBL algorithm for clinical studies.
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Taiyong, Zhenda Hu, Yanchi Jia, Jiang Wu, and Yingrui Zhou. "Forecasting Crude Oil Prices Using Ensemble Empirical Mode Decomposition and Sparse Bayesian Learning." Energies 11, no. 7 (July 19, 2018): 1882. http://dx.doi.org/10.3390/en11071882.

Full text
Abstract:
Crude oil is one of the most important types of energy and its prices have a great impact on the global economy. Therefore, forecasting crude oil prices accurately is an essential task for investors, governments, enterprises and even researchers. However, due to the extreme nonlinearity and nonstationarity of crude oil prices, it is a challenging task for the traditional methodologies of time series forecasting to handle it. To address this issue, in this paper, we propose a novel approach that incorporates ensemble empirical mode decomposition (EEMD), sparse Bayesian learning (SBL), and addition, namely EEMD-SBL-ADD, for forecasting crude oil prices, following the “decomposition and ensemble” framework that is widely used in time series analysis. Specifically, EEMD is first used to decompose the raw crude oil price data into components, including several intrinsic mode functions (IMFs) and one residue. Then, we apply SBL to build an individual forecasting model for each component. Finally, the individual forecasting results are aggregated as the final forecasting price by simple addition. To validate the performance of the proposed EEMD-SBL-ADD, we use the publicly-available West Texas Intermediate (WTI) and Brent crude oil spot prices as experimental data. The experimental results demonstrate that the EEMD-SBL-ADD outperforms some state-of-the-art forecasting methodologies in terms of several evaluation criteria such as the mean absolute percent error (MAPE), the root mean squared error (RMSE), the directional statistic (Dstat), the Diebold–Mariano (DM) test, the model confidence set (MCS) test and running time, indicating that the proposed EEMD-SBL-ADD is promising for forecasting crude oil prices.
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Qi, Xianpeng Wang, Mengxing Huang, Xiang Lan, and Lu Sun. "DOA and Range Estimation for FDA-MIMO Radar with Sparse Bayesian Learning." Remote Sensing 13, no. 13 (June 29, 2021): 2553. http://dx.doi.org/10.3390/rs13132553.

Full text
Abstract:
Due to grid division, the existing target localization algorithms based on sparse signal recovery for the frequency diverse array multiple-input multiple-output (FDA-MIMO) radar not only suffer from high computational complexity but also encounter significant estimation performance degradation caused by off-grid gaps. To tackle the aforementioned problems, an effective off-grid Sparse Bayesian Learning (SBL) method is proposed in this paper, which enables the calculation of the direction of arrival (DOA) and range estimates. First of all, the angle-dependent component is split off by reconstructing the received data, which allows rough DOA estimates to be extracted immediately with the root-SBL algorithm; these are subsequently utilized to obtain the paired rough range estimates. Furthermore, a discrete grid is constructed from the rough DOA and range estimates, and the 2D-SBL model is proposed to optimize the rough DOA and range estimates. Moreover, the expectation-maximization (EM) algorithm is utilized to update the grid points iteratively to further eliminate the errors caused by the off-grid model. Finally, theoretical analyses and numerical simulations illustrate the effectiveness and superiority of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
6

Pan, Kaikai, Zheng Qian, and Niya Chen. "Probabilistic Short-Term Wind Power Forecasting Using Sparse Bayesian Learning and NWP." Mathematical Problems in Engineering 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/785215.

Full text
Abstract:
Probabilistic short-term wind power forecasting is of great significance for wind power scheduling and the reliability of the power system. In this paper, an approach based on Sparse Bayesian Learning (SBL) and Numerical Weather Prediction (NWP) for probabilistic wind power forecasting over a horizon of 1–24 hours was investigated. In the modeling process, first, the wind speed data from the NWP results were corrected, and then the SBL was used to build a relationship between the combined data and the power generation to produce probabilistic power forecasts. Furthermore, in each model, the application of SBL was improved by using a modified-Gaussian kernel function and parameter optimization through Particle Swarm Optimization (PSO). To validate the proposed approach, two real-world datasets were used for construction and testing. For deterministic evaluation, the simulation results showed that the proposed model achieves a greater improvement in forecasting accuracy compared with other wind power forecast models. For probabilistic evaluation, the results of the indicators also demonstrate that the proposed model has an outstanding performance.
APA, Harvard, Vancouver, ISO, and other styles
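The entry above couples SBL with kernel basis functions for forecasting. The sketch below shows only the generic kernelized SBL regression step (relevance-vector style) on a synthetic "power curve"; it omits the paper's NWP correction, modified Gaussian kernel and PSO tuning, and the kernel width, noise variance and data are assumed for illustration.

```python
# Kernelized SBL regression sketch: Gaussian kernels centred on the training inputs,
# one prior variance per kernel weight; irrelevant kernels are pruned automatically.
import numpy as np

def gaussian_design(x, centers, width):
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

def sbl_regression(Phi, y, sigma2=0.01, n_iter=300):
    gamma = np.ones(Phi.shape[1])
    for _ in range(n_iter):
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ Phi.T @ y / sigma2
        gamma = np.maximum(mu ** 2 + np.diag(Sigma), 1e-10)
    return mu, gamma

# Toy wind-speed-to-power regression: logistic "power curve" plus noise.
rng = np.random.default_rng(2)
speed = rng.uniform(0.0, 14.0, 150)                                # pseudo NWP wind speeds (m/s)
power = 1.0 / (1.0 + np.exp(-(speed - 7.0))) + 0.05 * rng.standard_normal(150)
train, test = np.arange(120), np.arange(120, 150)
width = 1.5                                                        # assumed kernel width (no PSO here)
w, gamma = sbl_regression(gaussian_design(speed[train], speed[train], width), power[train])
pred = gaussian_design(speed[test], speed[train], width) @ w
print("relevance vectors kept:", int(np.sum(gamma > 1e-3)))
print("test RMSE:", float(np.sqrt(np.mean((pred - power[test]) ** 2))))
```

A full probabilistic forecast would also propagate the posterior covariance to obtain predictive intervals, which is the part the paper evaluates with probabilistic indicators.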
7

Gerstoft, Peter, Christoph Mecklenbrauker, Santosh Nannuru, and Geert Leus. "DOA Estimation in Heteroscedastic Noise with sparse Bayesian Learning." Applied Computational Electromagnetics Society 35, no. 11 (February 5, 2021): 1439–40. http://dx.doi.org/10.47037/2020.aces.j.351188.

Full text
Abstract:
We consider direction of arrival (DOA) estimation from long-term observations in a noisy environment. In such an environment the noise source might evolve, causing the stationary models to fail. Therefore a heteroscedastic Gaussian noise model is introduced where the variance can vary across observations and sensors. The source amplitudes are assumed independent zero-mean complex Gaussian distributed with unknown variances (i.e., source powers), leading to stochastic maximum likelihood (ML) DOA estimation. The DOAs are estimated from multi-snapshot array data using sparse Bayesian learning (SBL) where the noise is estimated across both sensors and snapshots.
APA, Harvard, Vancouver, ISO, and other styles
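A minimal sketch of grid-based SBL DOA estimation with a heteroscedastic (per-snapshot) noise variance, in the spirit of the entry above but not the authors' implementation; the array geometry, angle grid, source angles and noise profile are toy assumptions.

```python
# SBL DOA sketch for a uniform linear array, with the noise variance allowed to
# differ from snapshot to snapshot (a simple form of heteroscedastic noise).
import numpy as np

def sbl_doa_hetero(Y, A, n_iter=50):
    M, K = A.shape
    L = Y.shape[1]
    gamma = np.ones(K)
    sigma2 = np.full(L, 0.1)                                   # one noise variance per snapshot
    for _ in range(n_iter):
        mus, diags = [], []
        for l in range(L):
            Sigma = np.linalg.inv(A.conj().T @ A / sigma2[l] + np.diag(1.0 / gamma))
            mu = Sigma @ A.conj().T @ Y[:, l] / sigma2[l]
            resid = Y[:, l] - A @ mu
            sigma2[l] = (np.real(resid.conj() @ resid)
                         + sigma2[l] * np.sum(1.0 - np.real(np.diag(Sigma)) / gamma)) / M
            mus.append(mu)
            diags.append(np.real(np.diag(Sigma)))
        Mu = np.stack(mus, axis=1)
        gamma = np.maximum(np.mean(np.abs(Mu) ** 2, axis=1) + np.mean(diags, axis=0), 1e-12)
    return gamma

# Toy demo: 8-element half-wavelength ULA, two sources, noise power growing over the snapshots.
M, L = 8, 30
grid = np.linspace(-90.0, 90.0, 181)                           # 1-degree DOA grid
A = np.exp(-1j * np.pi * np.arange(M)[:, None] * np.sin(np.deg2rad(grid))[None, :])
rng = np.random.default_rng(3)
S = rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))
Y = A[:, [80, 110]] @ S                                        # true DOAs: -10 and +20 degrees
Y = Y + np.sqrt(np.linspace(0.01, 0.5, L)) * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
print(np.sort(grid[np.argsort(sbl_doa_hetero(Y, A))[-2:]]))    # expect peaks near -10 and +20
```

Letting sigma2 vary per snapshot is only one way to model evolving noise; the paper also considers variation across sensors.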
8

Wang, Meiyue, and Shizhong Xu. "A coordinate descent approach for sparse Bayesian learning in high dimensional QTL mapping and genome-wide association studies." Bioinformatics 35, no. 21 (April 9, 2019): 4327–35. http://dx.doi.org/10.1093/bioinformatics/btz244.

Full text
Abstract:
Motivation: Genomic scanning approaches that detect one locus at a time are subject to many problems in genome-wide association studies and quantitative trait locus mapping. The problems include large matrix inversion, over-conservativeness for tests after Bonferroni correction and difficulty in evaluation of the total genetic contribution to a trait’s variance. Targeting these problems, we take a further step and investigate a multiple locus model that detects all markers simultaneously in a single model. Results: We developed a sparse Bayesian learning (SBL) method for quantitative trait locus mapping and genome-wide association studies. This new method adopts a coordinate descent algorithm to estimate parameters (marker effects) by updating one parameter at a time conditional on current values of all other parameters. It uses an L2 type of penalty that allows the method to handle extremely large sample sizes (>100 000). Simulation studies show that SBL often has higher statistical powers and the simulated true loci are often detected with extremely small P-values, indicating that SBL is insensitive to stringent thresholds in significance testing. Availability and implementation: An R package (sbl) is available on the comprehensive R archive network (CRAN) and https://github.com/MeiyueComputBio/sbl/tree/master/R%20packge. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
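To make the coordinate-wise idea above concrete, the sketch below updates one marker effect at a time, each with its own SBL-style prior variance. It is a simplified stand-in rather than the algorithm shipped in the sbl R package; the genotype coding, effect sizes and crude noise update are assumptions for the toy example.

```python
# Coordinate-descent SBL sketch for marker effects: each effect b_j is updated
# conditionally on all the others, together with its prior variance gamma_j.
import numpy as np

def sbl_coordinate_descent(X, y, n_sweeps=50, sigma2=1.0):
    n, p = X.shape
    b = np.zeros(p)
    gamma = np.ones(p)
    xtx = np.sum(X ** 2, axis=0)
    r = y - X @ b                                   # running residual
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]                     # remove marker j's current contribution
            v_j = 1.0 / (xtx[j] / sigma2 + 1.0 / gamma[j])
            b[j] = v_j * (X[:, j] @ r) / sigma2     # conditional posterior mean of effect j
            gamma[j] = b[j] ** 2 + v_j              # per-marker variance update
            r -= X[:, j] * b[j]                     # restore the updated contribution
        sigma2 = (r @ r) / n                        # crude residual-variance refresh
    return b, gamma

# Toy demo: 500 "individuals", 200 centred genotype markers (0/1/2), 3 causal loci.
rng = np.random.default_rng(4)
X = rng.integers(0, 3, size=(500, 200)).astype(float)
X -= X.mean(axis=0)
b_true = np.zeros(200); b_true[[15, 70, 160]] = [0.8, -0.6, 0.5]
y = X @ b_true + rng.standard_normal(500)
b_hat, _ = sbl_coordinate_descent(X, y)
print(np.flatnonzero(np.abs(b_hat) > 0.2))          # expect markers 15, 70, 160
```

Because each update touches only one column of X and a running residual, no p x p matrix is ever formed or inverted, which is what lets this style of SBL scale to very large sample sizes.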
9

Shekaramiz, Mohammad, Todd Moon, and Jacob Gunther. "Bayesian Compressive Sensing of Sparse Signals with Unknown Clustering Patterns." Entropy 21, no. 3 (March 5, 2019): 247. http://dx.doi.org/10.3390/e21030247.

Full text
Abstract:
We consider the sparse recovery problem of signals with an unknown clustering pattern in the context of multiple measurement vectors (MMVs) using the compressive sensing (CS) technique. For many MMVs in practice, the solution matrix exhibits some sort of clustered sparsity pattern, or clumpy behavior, along each column, as well as joint sparsity across the columns. In this paper, we propose a new sparse Bayesian learning (SBL) method that incorporates a total variation-like prior as a measure of the overall clustering pattern in the solution. We further incorporate a parameter in this prior to account for the emphasis on the amount of clumpiness in the supports of the solution to improve the recovery performance of sparse signals with an unknown clustering pattern. This parameter does not exist in the other existing algorithms and is learned via our hierarchical SBL algorithm. While the proposed algorithm is constructed for the MMVs, it can also be applied to the single measurement vector (SMV) problems. Simulation results show the effectiveness of our algorithm compared to other algorithms for both SMV and MMVs.
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Guo, and Wang. "Exploring the Laplace Prior in Radio Tomographic Imaging with Sparse Bayesian Learning towards the Robustness to Multipath Fading." Sensors 19, no. 23 (November 22, 2019): 5126. http://dx.doi.org/10.3390/s19235126.

Full text
Abstract:
Radio tomographic imaging (RTI) is a technology for target localization by using radio frequency (RF) sensors in a wireless network. The change of the attenuation field caused by the target is represented by a shadowing image, which is then used to estimate the target's position. The shadowing image can be reconstructed from the variation of the received signal strength (RSS) in the wireless network. However, due to the interference from multi-path fading, not all the RSS variations are reliable. If the unreliable RSS variations are used for image reconstruction, some artifacts will appear in the shadowing image, which may cause the target's position to be wrongly estimated. Due to the sparse property of the shadowing image, sparse Bayesian learning (SBL) can be employed for signal reconstruction. Aiming at enhancing the robustness to multipath fading, this paper explores the Laplace prior to characterize the shadowing image under the framework of SBL. Bayesian modeling, Bayesian inference and the fast algorithm are presented to achieve the maximum-a-posteriori (MAP) solution. Finally, imaging, localization and tracking experiments from three different scenarios are conducted to validate the robustness to multipath fading. Meanwhile, the improved computational efficiency of using the Laplace prior is validated in the localization-time experiment as well.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Sparse Bayesian learning (SBL)"

1

Chen, Cong. "High-Dimensional Generative Models for 3D Perception." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103948.

Full text
Abstract:
Modern robotics and automation systems require high-level reasoning capability in representing, identifying, and interpreting the three-dimensional data of the real world. Understanding the world's geometric structure by visual data is known as 3D perception. The necessity of analyzing irregular and complex 3D data has led to the development of high-dimensional frameworks for data learning. Here, we design several sparse learning-based approaches for high-dimensional data that effectively tackle multiple perception problems, including data filtering, data recovery, and data retrieval. The frameworks offer generative solutions for analyzing complex and irregular data structures without prior knowledge of data. The first part of the dissertation proposes a novel method that simultaneously filters point cloud noise and outliers as well as completing missing data by utilizing a unified framework consisting of a novel tensor data representation, an adaptive feature encoder, and a generative Bayesian network. In the next section, a novel multi-level generative chaotic Recurrent Neural Network (RNN) has been proposed using a sparse tensor structure for image restoration. In the last part of the dissertation, we discuss the detection followed by localization, where we discuss extracting features from sparse tensors for data retrieval.
Doctor of Philosophy
The development of automation systems and robotics brought the modern world unrivaled affluence and convenience. However, the current automated tasks are mainly simple repetitive motions. Tasks that require more artificial capability with advanced visual cognition are still an unsolved problem for automation. Many of the high-level cognition-based tasks require the accurate visual perception of the environment and dynamic objects from the data received from the optical sensor. The capability to represent, identify and interpret complex visual data for understanding the geometric structure of the world is 3D perception. To better tackle the existing 3D perception challenges, this dissertation proposed a set of generative learning-based frameworks on sparse tensor data for various high-dimensional robotics perception applications: underwater point cloud filtering, image restoration, deformation detection, and localization. Underwater point cloud data is relevant for many applications such as environmental monitoring or geological exploration. The data collected with sonar sensors are however subjected to different types of noise, including holes, noise measurements, and outliers. In the first chapter, we propose a generative model for point cloud data recovery using Variational Bayesian (VB) based sparse tensor factorization methods to tackle these three defects simultaneously. In the second part of the dissertation, we propose an image restoration technique to tackle missing data, which is essential for many perception applications. An efficient generative chaotic RNN framework has been introduced for recovering the sparse tensor from a single corrupted image for various types of missing data. In the last chapter, a multi-level CNN for high-dimension tensor feature extraction for underwater vehicle localization has been proposed.
APA, Harvard, Vancouver, ISO, and other styles
2

Higson, Edward John. "Bayesian methods and machine learning in astrophysics." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289728.

Full text
Abstract:
This thesis is concerned with methods for Bayesian inference and their applications in astrophysics. We principally discuss two related themes: advances in nested sampling (Chapters 3 to 5), and Bayesian sparse reconstruction of signals from noisy data (Chapters 6 and 7). Nested sampling is a popular method for Bayesian computation which is widely used in astrophysics. Following the introduction and background material in Chapters 1 and 2, Chapter 3 analyses the sampling errors in nested sampling parameter estimation and presents a method for estimating them numerically for a single nested sampling calculation. Chapter 4 introduces diagnostic tests for detecting when software has not performed the nested sampling algorithm accurately, for example due to missing a mode in a multimodal posterior. The uncertainty estimates and diagnostics in Chapters 3 and 4 are implemented in the $\texttt{nestcheck}$ software package, and both chapters describe an astronomical application of the techniques introduced. Chapter 5 describes dynamic nested sampling: a generalisation of the nested sampling algorithm which can produce large improvements in computational efficiency compared to standard nested sampling. We have implemented dynamic nested sampling in the $\texttt{dyPolyChord}$ and $\texttt{perfectns}$ software packages. Chapter 6 presents a principled Bayesian framework for signal reconstruction, in which the signal is modelled by basis functions whose number (and form, if required) is determined by the data themselves. This approach is based on a Bayesian interpretation of conventional sparse reconstruction and regularisation techniques, in which sparsity is imposed through priors via Bayesian model selection. We demonstrate our method for noisy 1- and 2-dimensional signals, including examples of processing astronomical images. The numerical implementation uses dynamic nested sampling, and uncertainties are calculated using the methods introduced in Chapters 3 and 4. Chapter 7 applies our Bayesian sparse reconstruction framework to artificial neural networks, where it allows the optimum network architecture to be determined by treating the number of nodes and hidden layers as parameters. We conclude by suggesting possible areas of future research in Chapter 8.
APA, Harvard, Vancouver, ISO, and other styles
3

Jin, Junyang. "Novel methods for biological network inference : an application to circadian Ca2+ signaling network." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/285323.

Full text
Abstract:
Biological processes involve complex biochemical interactions among a large number of species like cells, RNA, proteins and metabolites. Learning these interactions is essential to interfering artificially with biological processes in order to, for example, improve crop yield, develop new therapies, and predict new cell or organism behaviors to genetic or environmental perturbations. For a biological process, two pieces of information are of most interest. For a particular species, the first step is to learn which other species are regulating it. This reveals topology and causality. The second step involves learning the precise mechanisms of how this regulation occurs. This step reveals the dynamics of the system. Applying this process to all species leads to the complete dynamical network. Systems biology is making considerable efforts to learn biological networks at low experimental costs. The main goal of this thesis is to develop advanced methods to build models for biological networks, taking the circadian system of Arabidopsis thaliana as a case study. A variety of network inference approaches have been proposed in the literature to study dynamic biological networks. However, many successful methods either require prior knowledge of the system or focus more on topology. This thesis presents novel methods that identify both network topology and dynamics, and do not depend on prior knowledge. Hence, the proposed methods are applicable to general biological networks. These methods are initially developed for linear systems, and, at the cost of higher computational complexity, can also be applied to nonlinear systems. Overall, we propose four methods with increasing computational complexity: one-to-one, combined group and element sparse Bayesian learning (GESBL), the kernel method and reversible jump Markov chain Monte Carlo method (RJMCMC). All methods are tested with challenging dynamical network simulations (including feedback, random networks, different levels of noise and number of samples), and realistic models of circadian system of Arabidopsis thaliana. These simulations show that, while the one-to-one method scales to the whole genome, the kernel method and RJMCMC method are superior for smaller networks. They are robust to tuning variables and able to provide stable performance. The simulations also imply the advantage of GESBL and RJMCMC over the state-of-the-art method. We envision that the estimated models can benefit a wide range of research. For example, they can locate biological compounds responsible for human disease through mathematical analysis and help predict the effectiveness of new treatments.
APA, Harvard, Vancouver, ISO, and other styles
4

Subramanian, Harshavardhan. "Combining scientific computing and machine learning techniques to model longitudinal outcomes in clinical trials." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176427.

Full text
Abstract:
Scientific machine learning (SciML) is a new branch of AI research at the edge of scientific computing (Sci) and machine learning (ML). It deals with efficient amalgamation of data-driven algorithms along with scientific computing to discover the dynamics of the time-evolving process. The output of such algorithms is represented in the form of a governing equation(s) (e.g., ordinary differential equation(s), ODE(s)), which one can then solve for any time point and, thus, obtain a rigorous prediction. In this thesis, we present a methodology on how to incorporate the SciML approach in the context of clinical trials to predict IPF disease progression in the form of a governing equation. Our proposed methodology also quantifies the uncertainties associated with the model by fitting a 95% high density interval (HDI) for the ODE parameters and a 95% posterior prediction interval for posterior predicted samples. We have also investigated the possibility of predicting later outcomes by using the observations collected at an early phase of the study. We were successful in combining ML techniques, statistical methodologies and scientific computing tools such as bootstrap sampling, cubic spline interpolation, Bayesian inference and sparse identification of nonlinear dynamics (SINDy) to discover the dynamics behind the efficacy outcome as well as in quantifying the uncertainty of the parameters of the governing equation in the form of 95% HDI intervals. We compared the resulting model with the existing disease progression model described by the Weibull function. Based on the mean squared error (MSE) criterion between our ODE approximated values and population means of respective datasets, we achieved the least possible MSE of 0.133, 0.089, 0.213 and 0.057. After comparing these MSE values with the MSE values obtained after using the Weibull function, for the third dataset and pooled dataset, our ODE model performed better in reducing error than the Weibull baseline model by 7.5% and 8.1%, respectively. Whereas for the first and second datasets, the Weibull model performed better in reducing errors by 1.5% and 1.2%, respectively. Comparing the overall performance in terms of MSE, our proposed model approximates the population means better in all the cases except for the first and second datasets, assuming the latter case's error margin is very small. Also, in terms of interpretation, our dynamical system model contains the mechanistic elements that can explain the decay/acceleration rate of the efficacy endpoint, which is missing in the Weibull model. However, our approach had a limitation in predicting final outcomes with good accuracy using a model derived from 24, 36 and 48 weeks of observations, whereas, in contrast, the Weibull model does not possess this predictive capability at all. However, the extrapolated trend based on 60 weeks of data was found to be close to the population mean and the ODE model built on 72 weeks of data. Finally, we highlight potential questions for the future work.
APA, Harvard, Vancouver, ISO, and other styles
5

Francisco, André Biasin Segalla. "Esparsidade estruturada em reconstrução de fontes de EEG." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-13052018-112615/.

Full text
Abstract:
Functional Neuroimaging is an area of neuroscience which aims at developing several techniques to map the activity of the nervous system and has been under constant development in the last decades due to its high importance in clinical applications and research. Common applied techniques such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have great spatial resolution (~ mm), but a limited temporal resolution (~ s), which poses a great challenge on our understanding of the dynamics of higher cognitive functions, whose oscillations can occur in much finer temporal scales (~ ms). Such limitation occurs because these techniques rely on measurements of slow biological responses which are correlated in a complicated manner to the actual electric activity. The two major candidates that overcome this shortcoming are Electro- and Magnetoencephalography (EEG/MEG), which are non-invasive techniques that measure the electric and magnetic fields on the scalp, respectively, generated by the electrical brain sources. Both have millisecond temporal resolution, but typically low spatial resolution (~ cm) due to the highly ill-posed nature of the electromagnetic inverse problem. There has been a huge effort in the last decades to improve their spatial resolution by means of incorporating relevant information to the problem from either other imaging modalities and/or biologically inspired constraints allied with the development of sophisticated mathematical methods and algorithms. In this work we focus on EEG, although all techniques here presented can be equally applied to MEG because of their identical mathematical form. In particular, we explore sparsity as a useful mathematical constraint in a Bayesian framework called Sparse Bayesian Learning (SBL), which enables the achievement of meaningful unique solutions in the source reconstruction problem. Moreover, we investigate how to incorporate different structures as degrees of freedom into this framework, which is an application of structured sparsity and show that it is a promising way to improve the source reconstruction accuracy of electromagnetic imaging methods.
APA, Harvard, Vancouver, ISO, and other styles
6

Cherief-Abdellatif, Badr-Eddine. "Contributions to the theoretical study of variational inference and robustness." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAG001.

Full text
Abstract:
This PhD thesis deals with variational inference and robustness. More precisely, it focuses on the statistical properties of variational approximations and the design of efficient algorithms for computing them in an online fashion, and investigates Maximum Mean Discrepancy based estimators as learning rules that are robust to model misspecification.In recent years, variational inference has been extensively studied from the computational viewpoint, but only little attention has been put in the literature towards theoretical properties of variational approximations until very recently. In this thesis, we investigate the consistency of variational approximations in various statistical models and the conditions that ensure the consistency of variational approximations. In particular, we tackle the special case of mixture models and deep neural networks. We also justify in theory the use of the ELBO maximization strategy, a model selection criterion that is widely used in the Variational Bayes community and is known to work well in practice.Moreover, Bayesian inference provides an attractive online-learning framework to analyze sequential data, and offers generalization guarantees which hold even under model mismatch and with adversaries. Unfortunately, exact Bayesian inference is rarely feasible in practice and approximation methods are usually employed, but do such methods preserve the generalization properties of Bayesian inference? In this thesis, we show that this is indeed the case for some variational inference algorithms. We propose new online, tempered variational algorithms and derive their generalization bounds. Our theoretical result relies on the convexity of the variational objective, but we argue that our result should hold more generally and present empirical evidence in support of this. Our work presents theoretical justifications in favor of online algorithms that rely on approximate Bayesian methods. Another point that is addressed in this thesis is the design of a universal estimation procedure. This question is of major interest, in particular because it leads to robust estimators, a very hot topic in statistics and machine learning. We tackle the problem of universal estimation using a minimum distance estimator based on the Maximum Mean Discrepancy. We show that the estimator is robust to both dependence and to the presence of outliers in the dataset. We also highlight the connections that may exist with minimum distance estimators using L2-distance. Finally, we provide a theoretical study of the stochastic gradient descent algorithm used to compute the estimator, and we support our findings with numerical simulations. We also propose a Bayesian version of our estimator, that we study from both a theoretical and a computational points of view
APA, Harvard, Vancouver, ISO, and other styles
7

Le, Folgoc Loïc. "Apprentissage statistique pour la personnalisation de modèles cardiaques à partir de données d’imagerie." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4098/document.

Full text
Abstract:
This thesis focuses on the calibration of an electromechanical model of the heart from patient-specific, image-based data; and on the related task of extracting the cardiac motion from 4D images. Long-term perspectives for personalized computer simulation of the cardiac function include aid to the diagnosis, aid to the planning of therapy and prevention of risks. To this end, we explore tools and possibilities offered by statistical learning. To personalize cardiac mechanics, we introduce an efficient framework coupling machine learning and an original statistical representation of shape & motion based on 3D+t currents. The method relies on a reduced mapping between the space of mechanical parameters and the space of cardiac motion. The second focus of the thesis is on cardiac motion tracking, a key processing step in the calibration pipeline, with an emphasis on quantification of uncertainty. We develop a generic sparse Bayesian model of image registration with three main contributions: an extended image similarity term, the automated tuning of registration parameters and uncertainty quantification. We propose an approximate inference scheme that is tractable on 4D clinical data. Finally, we wish to evaluate the quality of uncertainty estimates returned by the approximate inference scheme. We compare the predictions of the approximate scheme with those of an inference scheme developed on the grounds of reversible jump MCMC. We provide more insight into the theoretical properties of the sparse structured Bayesian model and into the empirical behaviour of both inference schemes
APA, Harvard, Vancouver, ISO, and other styles
8

Dang, Hong-Phuong. "Approches bayésiennes non paramétriques et apprentissage de dictionnaire pour les problèmes inverses en traitement d'image." Thesis, Ecole centrale de Lille, 2016. http://www.theses.fr/2016ECLI0019/document.

Full text
Abstract:
Dictionary learning for sparse representation has been widely advocated for solving inverse problems. Optimization methods and parametric approaches towards dictionary learning have been particularly explored. These methods meet some limitations, particularly related to the choice of parameters. In general, the dictionary size is fixed in advance, and sparsity or noise level may also be needed. In this thesis, we show how to perform dictionary and parameter learning jointly, with an emphasis on image processing. We propose and study the Indian Buffet Process for Dictionary Learning (IBP-DL) method, using a Bayesian nonparametric approach. A primer on Bayesian nonparametrics is first presented. Dirichlet and Beta processes and their respective derivatives, the Chinese restaurant and Indian Buffet processes, are described. The proposed model for dictionary learning relies on an Indian Buffet prior, which permits learning a dictionary of adaptive size. The Monte-Carlo method for inference is detailed. Noise and sparsity levels are also inferred, so that in practice no parameter tuning is required. Numerical experiments illustrate the performance of the approach in different settings: image denoising, inpainting and compressed sensing. Results are compared with state-of-the-art methods. Matlab and C sources are available for the sake of reproducibility.
APA, Harvard, Vancouver, ISO, and other styles
9

Gerchinovitz, Sébastien. "Prédiction de suites individuelles et cadre statistique classique : étude de quelques liens autour de la régression parcimonieuse et des techniques d'agrégation." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00653550.

Full text
Abstract:
This thesis falls within the field of statistical learning. The main setting is the prediction of arbitrary deterministic sequences (or individual sequences), which covers sequential learning problems where one cannot, or does not want to, make stochasticity assumptions about the sequence of data to be predicted. This leads to very robust methods. In this work, we study several close links between the theory of individual-sequence prediction and the classical statistical setting, notably the regression model with random or fixed design, where the data are modeled stochastically. The contributions between these two settings are mutual: some statistical methods can be adapted to the sequential setting to benefit from deterministic guarantees; conversely, individual-sequence techniques make it possible to calibrate statistical methods automatically and to obtain bounds that adapt to the noise variance. We study such links on several related problems: high-dimensional sparse sequential linear regression (with an application to the stochastic setting), sequential linear regression on L1 balls, and the aggregation of nonlinear models in a model selection framework (regression with fixed design). Finally, stochastic techniques are used and developed to determine the minimax rates of various sequential performance criteria (in particular internal and swap regrets) in deterministic or stochastic environments.
APA, Harvard, Vancouver, ISO, and other styles
10

Shi, Minghui. "Bayesian Sparse Learning for High Dimensional Data." Diss., 2011. http://hdl.handle.net/10161/3869.

Full text
Abstract:

In this thesis, we develop some Bayesian sparse learning methods for high dimensional data analysis. There are two important topics that are related to the idea of sparse learning -- variable selection and factor analysis. We start with the Bayesian variable selection problem in regression models. One challenge in Bayesian variable selection is to search the huge model space adequately, while identifying high posterior probability regions. In the past decades, the main focus has been on the use of Markov chain Monte Carlo (MCMC) algorithms for these purposes. In the first part of this thesis, instead of using MCMC, we propose a new computational approach based on sequential Monte Carlo (SMC), which we refer to as particle stochastic search (PSS). We illustrate PSS through applications to linear regression and probit models.

Besides the Bayesian stochastic search algorithms, there is a rich literature on shrinkage and variable selection methods for high dimensional regression and classification with vector-valued parameters, such as lasso (Tibshirani, 1996) and the relevance vector machine (Tipping, 2001). Compared with the Bayesian stochastic search algorithms, these methods do not account for model uncertainty but are more computationally efficient. In the second part of this thesis, we generalize this type of ideas to matrix-valued parameters and focus on developing an efficient variable selection method for multivariate regression. We propose a Bayesian shrinkage model (BSM) and an efficient algorithm for learning the associated parameters.

In the third part of this thesis, we focus on the topic of factor analysis, which has been widely used in unsupervised learning. One central problem in factor analysis is related to the determination of the number of latent factors. We propose some Bayesian model selection criteria for selecting the number of latent factors based on a graphical factor model. As illustrated in Chapter 4, our proposed method achieves good performance in correctly selecting the number of factors in several different settings. As for applications, we implement the graphical factor model for several different purposes, such as covariance matrix estimation, latent factor regression and classification.


Dissertation
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Sparse Bayesian learning (SBL)"

1

Chatzis, Sotirios P. "Sparse Bayesian Recurrent Neural Networks." In Machine Learning and Knowledge Discovery in Databases, 359–72. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23525-7_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Yong, and James L. Beck. "Sparse Bayesian Learning and its Application in Bayesian System Identification." In Bayesian Inverse Problems, 79–111. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/b22018-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Guanghao, Dongshun Cui, Shangbo Mao, and Guang-Bin Huang. "Sparse Bayesian Learning for Extreme Learning Machine Auto-encoder." In Proceedings in Adaptation, Learning and Optimization, 319–27. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23307-5_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lei, Yun, Xiaoqing Ding, and Shengjin Wang. "Adaptive Sparse Vector Tracking Via Online Bayesian Learning." In Advances in Machine Vision, Image Processing, and Pattern Analysis, 35–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11821045_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Michel, Vincent, Evelyn Eger, Christine Keribin, and Bertrand Thirion. "Multi-Class Sparse Bayesian Regression for Neuroimaging Data Analysis." In Machine Learning in Medical Imaging, 50–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15948-0_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Lu, Lifan Zhao, Guoan Bi, and Xin Liu. "Alternative Extended Block Sparse Bayesian Learning for Cluster Structured Sparse Signal Recovery." In Wireless and Satellite Systems, 3–12. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-19153-5_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bo, Liefeng, Ling Wang, and Licheng Jiao. "Sparse Bayesian Learning Based on an Efficient Subset Selection." In Advances in Neural Networks – ISNN 2004, 264–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28647-9_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sun, Shuanzhu, Zhong Han, Xiaolong Qi, Chunlei Zhou, Tiancheng Zhang, Bei Song, and Yang Gao. "An Incremental Approach for Sparse Bayesian Network Structure Learning." In Big Data, 350–65. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2922-7_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Han, Min, and Dayun Mu. "Multi-reservoir Echo State Network with Sparse Bayesian Learning." In Advances in Neural Networks - ISNN 2010, 450–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13278-0_58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sabuncu, Mert R. "A Sparse Bayesian Learning Algorithm for Longitudinal Image Data." In Lecture Notes in Computer Science, 411–18. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24574-4_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Sparse Bayesian learning (SBL)"

1

Srivastava, Suraj, Ch Suraj Kumar Patro, Aditya K. Jagannatham, and Govind Sharma. "Sparse Bayesian Learning (SBL)-Based Frequency-Selective Channel Estimation for Millimeter Wave Hybrid MIMO Systems." In 2019 National Conference on Communications (NCC). IEEE, 2019. http://dx.doi.org/10.1109/ncc.2019.8732197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Srivastava, Suraj, and Aditya K. Jagannatham. "Sparse Bayesian Learning-Based Kalman Filtering (SBL-KF) for Group-Sparse Channel Estimation in Doubly Selective mmWave Hybrid MIMO Systems." In 2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). IEEE, 2019. http://dx.doi.org/10.1109/spawc.2019.8815509.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chang, Xiao, and Qinghua Zheng. "Sparse Bayesian learning for ranking." In 2009 IEEE International Conference on Granular Computing (GRC). IEEE, 2009. http://dx.doi.org/10.1109/grc.2009.5255164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Giri, Ritwik, and Bhaskar D. Rao. "Bootstrapped sparse Bayesian learning for sparse signal recovery." In 2014 48th Asilomar Conference on Signals, Systems and Computers. IEEE, 2014. http://dx.doi.org/10.1109/acssc.2014.7094748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nannuru, Santosh, Kay L. Gemba, and Peter Gerstoft. "Sparse Bayesian learning with multiple dictionaries." In 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2017. http://dx.doi.org/10.1109/globalsip.2017.8309149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Jing, Yacong Ding, and Bhaskar Rao. "Sparse Bayesian Learning for Robust PCA." In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8682785.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pal, Piya, and P. P. Vaidyanathan. "Parameter identifiability in Sparse Bayesian Learning." In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6853919.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

K.V., Aiswarya, Gundabattula Naga Rama Mangaiah Naidu, Kalaiarasi K., Kavya D., and Kirthiga S. "Spectrum Sensing using Sparse Bayesian Learning." In 2019 International Conference on Communication and Signal Processing (ICCSP). IEEE, 2019. http://dx.doi.org/10.1109/iccsp.2019.8698108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Park, Yongsung, Florian Meyer, and Peter Gerstoft. "Sequential Sparse Bayesian Learning For Doa." In 2020 54th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2020. http://dx.doi.org/10.1109/ieeeconf51394.2020.9443444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Koyama, Shoichi, Atsushi Matsubayashi, Naoki Murata, and Hiroshi Saruwatari. "Sparse sound field decomposition using group sparse Bayesian learning." In 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE, 2015. http://dx.doi.org/10.1109/apsipa.2015.7415391.

Full text
APA, Harvard, Vancouver, ISO, and other styles