
Doctoral dissertations on the topic "Bayesian recovery"



Consult the 18 best doctoral dissertations on the topic "Bayesian recovery".

An "Add to bibliography" button appears next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant parameters are available in the work's metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile an accurate bibliography.

1

Tan, Xing. "Bayesian sparse signal recovery". [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0041176.

2

Karseras, Evripidis. "Hierarchical Bayesian models for sparse signal recovery and sampling". Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/32102.

Abstract:
This thesis builds upon the problem of sparse signal recovery from the Bayesian standpoint. The advantages of employing Bayesian models are underscored, the most important being the ease with which a model can be expanded or altered, leading to a fresh class of algorithms. The thesis fills several gaps between sparse recovery algorithms and sparse Bayesian models: first, the lack of global performance guarantees for the latter, and second, the significant differences between the two. These questions are answered by providing a refined theoretical analysis and a new class of algorithms that combines the benefits of classic recovery algorithms and sparse Bayesian modelling. The said Bayesian techniques find application in tracking dynamic sparse signals, something impossible under the Kalman filter approach. Another innovation of this thesis is a family of Bayesian models for signals whose components are known a priori to exhibit a certain statistical trend. These situations require that the model enforce a given statistical bias on the solutions. Existing Bayesian models can cope with this input, but the algorithms to carry out the task are computationally expensive. Several ways are proposed to remedy the associated problems while still attaining some form of optimality. The proposed framework finds application in multipath channel estimation with some very promising results. Not far from the same area lies that of Approximate Message Passing, which offers extremely low-complexity algorithms for sparse recovery together with a powerful analysis framework. Some results are derived regarding the differences between these approximate methods and the aforementioned models; this can be seen as preliminary work for future research. Finally, the thesis presents a hardware implementation of a wideband spectrum analyser based on sparse recovery methods. The hardware consists of a Field-Programmable Gate Array coupled with an Analogue-to-Digital Converter. Some critical results are drawn regarding the gains and viability of such methods.
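As a concrete companion to this entry, the evidence-maximization idea behind sparse Bayesian models can be sketched in a few lines of NumPy. This is a generic EM-style sparse Bayesian learning (SBL) toy, not the thesis's code; the problem sizes, noise level, and fixed noise variance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: y = A x + noise, with x k-sparse (sizes are illustrative).
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.uniform(1.0, 3.0, k) * rng.choice([-1.0, 1.0], k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def sbl(A, y, sigma2=1e-4, n_iter=100):
    """EM-style sparse Bayesian learning with per-coefficient priors x_i ~ N(0, gamma_i)."""
    _, n = A.shape
    gamma = np.ones(n)
    for _ in range(n_iter):
        # Posterior covariance and mean of x given current hyperparameters gamma.
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / sigma2
        # EM update: gamma_i = E[x_i^2]; unused coefficients shrink toward zero.
        gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-10)
    return mu

x_hat = sbl(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Each EM pass re-estimates the prior variances; those of unused coefficients collapse toward zero, which is the pruning mechanism that distinguishes sparse Bayesian models from plain least squares.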
3

Echavarria, Gregory Maria Angelica. "Predictive Data-Derived Bayesian Statistic-Transport Model and Simulator of Sunken Oil Mass". Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/471.

Abstract:
Sunken oil is difficult to locate because remote sensing techniques cannot as yet provide views of sunken oil over large areas. Moreover, the oil may re-suspend and sink with changes in salinity, sediment load, and temperature, making deterministic fate models difficult to deploy and calibrate when even the presence of sunken oil is difficult to assess. For these reasons, together with the expense of field data collection, there is a need for a statistical technique integrating limited data collection with stochastic transport modeling. Predictive Bayesian modeling techniques have been developed and demonstrated for exploiting limited information for decision support in many other applications. These techniques are brought here to a multi-modal Lagrangian modeling framework, representing a near-real-time approach to locating and tracking sunken oil, driven by intrinsic physical properties of field data collected following a spill after oil has begun collecting on a relatively flat bay bottom. Methods include (1) development of the conceptual predictive Bayesian model and multi-modal Gaussian computational approach based on theory and literature review; (2) development of an object-oriented programming and combinatorial structure capable of managing data, integration and computation over an uncertain and highly dimensional parameter space; (3) creation of a new bi-dimensional approach of the method of images to account for curved shoreline boundaries; (4) confirmation of model capability for locating sunken oil patches using available (partial) real field data and capability for temporal projections near curved boundaries using simulated field data; and (5) development of a stand-alone open-source computer application with graphical user interface capable of calibrating instantaneous oil spill scenarios, obtaining sets of maps of relative probability profiles at different prediction times and user-selected geographic areas and resolutions, and capable of performing post-processing tasks typical of basic GIS software. The result is a predictive Bayesian multi-modal Gaussian model, SOSim (Sunken Oil Simulator) Version 1.0rc1, operational for use with limited, randomly-sampled, available subjective and numeric data on sunken oil concentrations and locations in relatively flat-bottomed bays. The SOSim model represents a new approach, coupling a Lagrangian modeling technique with predictive Bayesian capability for computing unconditional probabilities of mass as a function of space and time. The approach addresses the current need to rapidly deploy modeling capability without readily accessible information on ocean bottom currents. Contributions include (1) the development of the apparently first pollutant transport model for computing unconditional relative probabilities of pollutant location as a function of time based on limited available field data alone; (2) development of a numerical method of computing concentration profiles subject to curved, continuous or discontinuous boundary conditions; (3) development of combinatorial algorithms to compute unconditional multimodal Gaussian probabilities not amenable to analytical or Markov-Chain Monte Carlo integration due to high dimensionality; and (4) the development of software modules, including a core module containing the developed Bayesian functions, a wrapping graphical user interface, a processing and operating interface, and the necessary programming components that lead to an open-source, stand-alone, executable computer application (SOSim - Sunken Oil Simulator). Extensions and refinements are recommended, including the addition of capability for accepting available information on bathymetry and possibly bottom currents as Bayesian prior information, the creation of capability for modeling continuous oil releases, and the extension to tracking of suspended oil (3-D).
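The multi-modal Gaussian, predictive Bayesian idea can be reduced to a toy: weight candidate Gaussian patch locations by the likelihood of limited field observations, then read off a relative probability map as the weighted mixture. Everything below (the grid, the patch spread, the data) is a hypothetical illustration, not SOSim itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed sunken-oil sample locations (toy data clustered near (2, 3)).
obs = rng.normal(loc=[2.0, 3.0], scale=0.5, size=(20, 2))

# Candidate patch centres on a coarse grid; uniform prior over candidates.
centres = np.array([[x, y] for x in range(6) for y in range(6)], float)
sigma = 0.7  # assumed patch spread

def loglik(c):
    """Log-likelihood of all observations under a Gaussian patch centred at c."""
    d2 = np.sum((obs - c) ** 2, axis=1)
    return np.sum(-d2 / (2 * sigma**2) - np.log(2 * np.pi * sigma**2))

logw = np.array([loglik(c) for c in centres])
w = np.exp(logw - logw.max())
w /= w.sum()                      # posterior weights over candidate modes

def rel_prob(p):
    """Relative probability of oil at a query point: weighted Gaussian mixture."""
    d2 = np.sum((centres - p) ** 2, axis=1)
    return np.sum(w * np.exp(-d2 / (2 * sigma**2)))

print(centres[np.argmax(w)])  # → [2. 3.]
```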
4

Tang, Man. "Bayesian population dynamics modeling to guide population restoration and recovery of endangered mussels in the Clinch River, Tennessee and Virginia". Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/49598.

Abstract:
Freshwater mussels have played an important role in the history of human culture and also in ecosystem functioning. But during the past several decades, the abundance and diversity of mussel species have declined all over the world. To address the urgent need to maintain and restore populations of endangered freshwater mussels, quantitative population dynamics modeling is needed to evaluate population status and guide the management of endangered freshwater mussels. One endangered mussel species, the oyster mussel (Epioblasma capsaeformis), was selected to study its population dynamics for my research. The analysis was based on two datasets: length frequency data from annual surveys conducted at three sites in the Clinch River, Wallen Bend (Clinch River Mile 192) from 2004 to 2010, Frost Ford (CRM 182) from 2005 to 2010 and Swan Island (CRM 172) from 2005 to 2010, and age-length data based on shell thin-sections. Three hypothetical scenarios were assumed in model estimations: (1) constant natural mortality; (2) one constant natural mortality rate for young mussels and another for adult mussels; (3) age-specific natural mortality. A Bayesian approach was used to analyze the age-structured models and a Bayesian model averaging approach was applied to average the results by weighting each model using the deviance information criterion (DIC). A risk assessment was conducted to evaluate alternative restoration strategies for E. capsaeformis. The results indicated that releasing adult mussels was the quickest way to increase mussel population size and increasing survival and fertility of young mussels was a suitable way to restore mussel populations in the long term. The population of E. capsaeformis at Frost Ford had a lower risk of decline compared with the populations at Wallen Bend and Swan Island.
Passive integrated transponder (PIT) tags were applied in my fieldwork to monitor the translocation efficiency of E. capsaeformis and Actinonaias pectorosa at Cleveland Islands (CRM 270.8). Hierarchical Bayesian models were developed to address the individual variability and sex-related differences in growth. In model selection, the model considering individual variability and sex-related differences (if a species has sexual dimorphism) yielded the lowest DIC value. The results from the best model showed that the mean asymptotic length and mean growth rate of female E. capsaeformis were 45.34 mm and 0.279, which were higher than values estimated for males (42.09 mm and 0.216). The mean asymptotic length and mean growth rate for A. pectorosa were 104.2 mm and 0.063, respectively.
To test for the existence of individual and sex-related variability in survival and recapture rates, Bayesian models were developed to address the variability in the analysis of the mark-recapture data of E. capsaeformis and A. pectorosa. DIC was used to compare different models. The median survival rates of male E. capsaeformis, female E. capsaeformis and A. pectorosa were high (>87%, >74% and >91%), indicating that the habitat at Cleveland Islands was suitable for these two mussel species within this survey duration. In addition, the median recapture rates for E. capsaeformis and A. pectorosa were >93% and >96%, indicating that the PIT tag technique provided an efficient monitoring approach. According to model comparison results, the non-hierarchical model or the model with sex-related differences (if a species is sexually dimorphic) in survival rate was suggested for analyzing mark-recapture data when sample sizes are small.
Master of Science
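The Bayesian model averaging step described in this abstract is simple to reproduce: DIC values are turned into weights w_i ∝ exp(-ΔDIC_i/2) and the per-model estimates are averaged. The DIC values and survival estimates below are made-up numbers for illustration only.

```python
import numpy as np

# DIC values for three candidate mortality models (hypothetical numbers).
dic = np.array([412.6, 405.1, 407.9])
# Posterior estimates of, e.g., adult survival under each model (hypothetical).
estimates = np.array([0.71, 0.78, 0.75])

# DIC weights: w_i ∝ exp(-ΔDIC_i / 2), with ΔDIC_i = DIC_i - min(DIC).
delta = dic - dic.min()
w = np.exp(-delta / 2)
w /= w.sum()

# Model-averaged estimate: weighted sum across candidate models.
averaged = np.sum(w * estimates)
print(w, averaged)
```

The model with the lowest DIC dominates the average, but competing models still contribute in proportion to their support.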
5

Cave, Vanessa M. "Statistical models for the long-term monitoring of songbird populations : a Bayesian analysis of constant effort sites and ring-recovery data". Thesis, St Andrews, 2010. http://hdl.handle.net/10023/885.

6

Dine, James. "A habitat suitability model for Ricord's iguana in the Dominican Republic". Connect to resource online, 2009. http://hdl.handle.net/1805/1889.

Abstract:
Thesis (M.S.)--Indiana University, 2009.
Title from screen (viewed on August 27, 2009). Department of Geography, Indiana University-Purdue University Indianapolis (IUPUI). Advisor(s): Jan Ramer, Aniruddha Banergee, Jeffery Wilson. Includes vita. Includes bibliographical references (leaves 47-52).
7

Sugimoto, Tatsuhiro. "Anelastic Strain Recovery Method for In-situ Stress Measurements: A novel analysis procedure based on Bayesian statistical modeling and application to active fault drilling". Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263637.

8

Chen, Cong. "High-Dimensional Generative Models for 3D Perception". Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103948.

Abstract:
Modern robotics and automation systems require high-level reasoning capability in representing, identifying, and interpreting the three-dimensional data of the real world. Understanding the world's geometric structure by visual data is known as 3D perception. The necessity of analyzing irregular and complex 3D data has led to the development of high-dimensional frameworks for data learning. Here, we design several sparse learning-based approaches for high-dimensional data that effectively tackle multiple perception problems, including data filtering, data recovery, and data retrieval. The frameworks offer generative solutions for analyzing complex and irregular data structures without prior knowledge of data. The first part of the dissertation proposes a novel method that simultaneously filters point cloud noise and outliers as well as completing missing data by utilizing a unified framework consisting of a novel tensor data representation, an adaptive feature encoder, and a generative Bayesian network. In the next section, a novel multi-level generative chaotic Recurrent Neural Network (RNN) has been proposed using a sparse tensor structure for image restoration. In the last part of the dissertation, we discuss the detection followed by localization, where we discuss extracting features from sparse tensors for data retrieval.
Doctor of Philosophy
The development of automation systems and robotics has brought the modern world unrivaled affluence and convenience. However, the current automated tasks are mainly simple repetitive motions. Tasks that require more advanced visual cognition remain an unsolved problem for automation. Many high-level cognition-based tasks require accurate visual perception of the environment and of dynamic objects from the data received from the optical sensor. The capability to represent, identify and interpret complex visual data for understanding the geometric structure of the world is 3D perception. To better tackle the existing 3D perception challenges, this dissertation proposes a set of generative learning-based frameworks on sparse tensor data for various high-dimensional robotics perception applications: underwater point cloud filtering, image restoration, deformation detection, and localization. Underwater point cloud data is relevant for many applications such as environmental monitoring or geological exploration. The data collected with sonar sensors are, however, subject to different types of defects, including holes, noisy measurements, and outliers. In the first chapter, we propose a generative model for point cloud data recovery using Variational Bayesian (VB) based sparse tensor factorization methods to tackle these three defects simultaneously. In the second part of the dissertation, we propose an image restoration technique to tackle missing data, which is essential for many perception applications. An efficient generative chaotic RNN framework has been introduced for recovering the sparse tensor from a single corrupted image for various types of missing data. In the last chapter, a multi-level CNN for high-dimensional tensor feature extraction for underwater vehicle localization has been proposed.
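As a minimal stand-in for the Variational Bayesian low-rank factorization the dissertation uses for point cloud recovery, a hard-rank truncated-SVD denoiser shows the underlying low-rank principle on a toy matrix. The sizes, rank, and noise level are assumptions, and the hard-rank projection replaces the Bayesian inference of the rank.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground truth: a rank-3 "clean" data matrix (stand-in for a tensor slice).
U = rng.standard_normal((60, 3))
V = rng.standard_normal((3, 40))
clean = U @ V
noisy = clean + 0.5 * rng.standard_normal(clean.shape)

def lowrank_denoise(Y, rank):
    """Project Y onto its top-`rank` singular subspace (hard-rank surrogate
    for a Bayesian low-rank factorization)."""
    u, s, vt = np.linalg.svd(Y, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

den = lowrank_denoise(noisy, 3)
err_noisy = np.linalg.norm(noisy - clean)
err_den = np.linalg.norm(den - clean)
print(err_den < err_noisy)  # → True
```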
9

SEDDA, GIULIA. "The interplay between movement and perception: how interaction can influence sensorimotor performance and neuromotor recovery". Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1011732.

Abstract:
Movement and perception interact continuously in daily activities. Motor output changes the outside world and affects perceptual representations. Similarly, perception has consequences for movement. Nevertheless, how movement and perception influence each other and share information is still an open question. Mappings from movement to perceptual outcome and vice versa change continuously throughout life. For example, a cerebrovascular accident (stroke) elicits in the nervous system a complex series of reorganization processes at various levels and with different temporal scales. Functional recovery after a stroke seems to be mediated by use-dependent reorganization of the preserved neural circuitry. The goal of this thesis is to discuss how interaction with the environment can influence the progress of both sensorimotor performance and neuromotor recovery. I investigate how individuals develop an implicit knowledge of the ways motor outputs regularly correlate with changes in sensory inputs, by interacting with the environment and experiencing the perceptual consequences of self-generated movements. Further, I apply this paradigm to model exercise-based neurorehabilitation in stroke survivors, which aims at gradually improving both perceptual and motor performance through repeated exercise. The scientific findings of this thesis indicate that motor learning resolves visual perceptual uncertainty and contributes to persistent changes in visual and somatosensory perception. Moreover, computational neurorehabilitation may help to identify the underlying mechanisms of both motor and perceptual recovery, and may lead to more personalized therapies.
10

Quer, Giorgio. "Optimization of Cognitive Wireless Networks using Compressive Sensing and Probabilistic Graphical Models". Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3421992.

Abstract:
In-network data aggregation to increase the efficiency of data gathering solutions for Wireless Sensor Networks (WSNs) is a challenging task. In the first part of this thesis, we address the problem of accurately reconstructing distributed signals through the collection of a small number of samples at a Data Collection Point (DCP). We exploit Principal Component Analysis (PCA) to learn the relevant statistical characteristics of the signals of interest at the DCP. Then, at the DCP we use this knowledge to design a matrix required by the recovery techniques, which exploit convex optimization (Compressive Sensing, CS), in order to recover the whole signal sensed by the WSN from the small number of samples gathered. In order to integrate this monitoring model in a compression/recovery framework, we apply the logic of the cognition paradigm: we first observe the network, then we learn the relevant statistics of the signals, apply this knowledge to recover the signal and to make decisions, and finally effect those decisions through the control loop. This compression/recovery framework with a feedback control loop is named "Sensing, Compression and Recovery through ONline Estimation" (SCoRe1). The whole framework is designed for a WSN architecture, called WSN-control, that is accessible from the Internet. We also analyze the whole framework with a Bayesian approach to theoretically justify the choices made in our protocol design. The second part of the thesis deals with the application of the cognition paradigm to the optimization of a Wireless Local Area Network (WLAN). In this work, we propose an architecture for cognitive networking that can be integrated with the existing layered protocol stack. Specifically, we suggest the use of a probabilistic graphical model for modeling the layered protocol stack.
In particular, we use a Bayesian Network (BN), a graphical representation of statistical relationships between random variables, in order to describe the relationships among a set of stack-wide protocol parameters and to exploit this cross-layer approach to optimize the network. In doing so, we use the knowledge learned from the observation of the data to predict the TCP throughput in a single-hop wireless network and to infer the future occurrence of congestion at the TCP layer in a multi-hop wireless network. The approach followed in the two main topics of this thesis consists of the following phases: (i) we apply the cognition paradigm to learn the specific probabilistic characteristics of the network, (ii) we exploit this knowledge acquired in the first phase to design novel protocol techniques, (iii) we analyze theoretically and through extensive simulation such techniques, comparing them with other state of the art techniques, and (iv) we evaluate their performance in real networking scenarios.
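The learn-then-recover loop of SCoRe1 can be miniaturized: learn a PCA basis from past full snapshots at the DCP, then reconstruct the whole field from a few sampled node readings. This sketch substitutes a plain least-squares fit in the learned basis for the full CS machinery, and all signal details (sinusoidal field, sizes, sample count) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Training phase: the DCP learns a PCA basis from past full-signal snapshots.
n, k = 64, 4
t = np.linspace(0, 1, n)
patterns = np.array([np.sin(2 * np.pi * (f + 1) * t) for f in range(k)])
snapshots = rng.standard_normal((200, k)) @ patterns   # correlated sensor field
mean = snapshots.mean(axis=0)
_, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
B = Vt[:k].T                                           # n x k principal basis

# Monitoring phase: only m node readings reach the DCP.
coeff = rng.standard_normal(k)
signal = mean + B @ coeff                              # true full field
m = 12
idx = rng.choice(n, size=m, replace=False)
samples = signal[idx]

# Recovery: least-squares fit of the PCA coefficients to the sampled entries.
c_hat = np.linalg.lstsq(B[idx], samples - mean[idx], rcond=None)[0]
recovered = mean + B @ c_hat
print(np.linalg.norm(recovered - signal) / np.linalg.norm(signal))
```

Because the field lies (here exactly) in the learned low-dimensional subspace, a handful of samples suffices to reconstruct all n readings.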
11

Antelo, Junior Ernesto Willams Molina. "Estimação conjunta de atraso de tempo subamostral e eco de referência para sinais de ultrassom". Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2616.

Abstract:
CAPES
In non-destructive testing (NDT) with ultrasound, the signal obtained from a real data acquisition system may be contaminated by noise and the echoes may have sub-sample time delays. In some cases, these aspects may compromise the information obtained from a signal by an acquisition system. To deal with these situations, Time Delay Estimation (TDE) techniques and signal reconstruction techniques can be used to perform approximations and also to obtain more information about the data set. TDE techniques can be used for a number of purposes in defectoscopy, for example, for accurate location of defects in parts, monitoring the corrosion rate in pieces, measuring the thickness of a given material, and so on. Data reconstruction methods have a wide range of applications, such as NDT, medical imaging, telecommunications and so on. In general, most time delay estimation techniques require a high-precision signal model, otherwise the location estimate may have reduced quality. In this work, an alternative scheme is proposed that jointly estimates an echo model and time delays for several echoes from noisy measurements. In addition, by reinterpreting the techniques used from a probabilistic perspective, their functionality is extended through a joint application of a maximum likelihood estimator (MLE) and a maximum a posteriori (MAP) estimator. Finally, through simulations, results are presented to demonstrate the superiority of the proposed method over conventional methods.
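For readers unfamiliar with sub-sample TDE, the standard baseline, cross-correlation with parabolic peak interpolation, illustrates what a "sub-sample time delay" means. This is a simpler estimator than the joint MLE/MAP scheme proposed in the thesis, and the echo shape and delay are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

# Reference echo: a Gaussian-windowed sinusoid on a 256-sample grid.
n = 256
t = np.arange(n, dtype=float)

def echo(delay):
    tt = t - delay
    return np.exp(-((tt - 60.0) ** 2) / 50.0) * np.cos(0.6 * (tt - 60.0))

true_delay = 17.3                       # sub-sample delay, in samples
ref = echo(0.0)
sig = echo(true_delay) + 0.01 * rng.standard_normal(n)

# Integer part from the cross-correlation peak, fractional part from a
# parabola fitted through the peak and its two neighbours.
xc = np.correlate(sig, ref, mode="full")
lags = np.arange(-n + 1, n)
p = int(np.argmax(xc))
y0, y1, y2 = xc[p - 1], xc[p], xc[p + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # vertex of the fitted parabola
d_hat = lags[p] + frac
print(d_hat)
```

The sample grid alone can only resolve the delay to the nearest integer (17 here); the interpolation step recovers the missing fraction.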
12

Al-Rabah, Abdullatif R. "Bayesian Recovery of Clipped OFDM Signals: A Receiver-based Approach". Thesis, 2013. http://hdl.handle.net/10754/291094.

Abstract:
Recently, orthogonal frequency-division multiplexing (OFDM) has been adopted for high-speed wireless communications due to its robustness against multipath fading. However, one of the main fundamental drawbacks of OFDM systems is the high peak-to-average power ratio (PAPR). Several techniques have been proposed for PAPR reduction. Most of these techniques require transmitter-based (pre-compensated) processing. On the other hand, receiver-based alternatives would save power and reduce transmitter complexity. With this in mind, a possible approach is to limit the amplitude of the OFDM signal to a predetermined threshold, which is equivalent to adding a sparse clipping signal, and then to estimate this clipping signal at the receiver to recover the original signal. In this work, we propose a Bayesian receiver-based low-complexity clipping signal recovery method for PAPR reduction. The method is able to i) effectively reduce the PAPR via a simple clipping scheme at the transmitter side, ii) use a Bayesian recovery algorithm to reconstruct the clipping signal at the receiver side by measuring part of the subcarriers, iii) perform well in the absence of statistical information about the signal (e.g. clipping level) and the noise (e.g. noise variance), and at the same time iv) remain energy efficient due to its low complexity. Specifically, the proposed recovery technique is implemented in a data-aided fashion: it collects clipping information by measuring reliable data subcarriers, and thus makes full use of the spectrum for data transmission without the need for tone reservation. The study is extended further to discuss how to improve the recovery of the clipping signal by utilizing features of practical OFDM systems, i.e., oversampling and the presence of multiple receivers. Simulation results demonstrate the superiority of the proposed technique over other recovery algorithms. The overall objective is to show that the receiver-based Bayesian technique is an effective and practical alternative to state-of-the-art PAPR reduction techniques.
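The transmitter side of this scheme, clipping plus the implied sparse clipping signal, is easy to demonstrate; the Bayesian receiver-side reconstruction of that signal from a subset of subcarriers is the thesis's contribution and is not reproduced here. The modulation, symbol length, and clipping threshold below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# One OFDM symbol: N subcarriers carrying random QPSK data, time-domain
# signal via the IDFT, scaled to unit average power.
N = 256
X = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)

def papr_db(sig):
    return 10 * np.log10(np.max(np.abs(sig) ** 2) / np.mean(np.abs(sig) ** 2))

# Amplitude clipping at a threshold: x_clipped = x - c, where c holds the
# peak excess and is therefore sparse in the time domain.
thr = 1.5 * np.sqrt(np.mean(np.abs(x) ** 2))
over = np.abs(x) > thr
c = np.zeros_like(x)
c[over] = x[over] - thr * x[over] / np.abs(x[over])   # phase-preserving excess
x_clipped = x - c

print(papr_db(x), papr_db(x_clipped), np.count_nonzero(c))
```

Only a small fraction of samples exceed the threshold, so c is sparse, which is exactly what makes the receiver-side sparse-recovery formulation possible.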
13

Khanna, Saurabh. "Bayesian Techniques for Joint Sparse Signal Recovery: Theory and Algorithms". Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5292.

Abstract:
This thesis contributes new theoretical results, solution concepts, and algorithms concerning the Bayesian recovery of multiple joint sparse vectors from noisy and underdetermined linear measurements. The thesis is written in two parts. The first part focuses on the recovery of nonzero support of multiple joint sparse vectors from their linear compressive measurements, an important canonical problem in multisensor signal processing. The support recovery performance of a well known Bayesian inference technique called Multiple Sparse Bayesian Learning (MSBL) is analyzed using tools from large deviation theory. New improved sufficient conditions are derived for perfect support recovery in MSBL with arbitrarily high probability. We show that the support error probability in MSBL decays exponentially fast with the number of joint sparse vectors and the rate of decay depends on the restricted eigenvalues and null space structure of the self Khatri-Rao product of the sensing matrix used to generate the measurements. New insights into MSBL’s objective are developed which enhance our understanding of MSBL’s ability to recover supports of size greater than the number of measurements available per joint sparse vector. These new insights are formalized into a novel covariance matching framework for sparsity pattern recovery. Next, we characterize the restricted isometry property of a generic Khatri-Rao product matrix in terms of its restricted isometry constants (RICs). Upper bounds for the RICs of Khatri-Rao product matrices are of independent interest as they feature in the sample complexity analysis of several linear inverse problems of fundamental importance, including the above support recovery problem. We derive deterministic and probabilistic upper bounds for the RICs of Khatri-Rao product between two matrices. The newly obtained RIC bounds are then used to derive performance bounds for MSBL based support recovery. 
Building upon the new insights about MSBL, a novel covariance matching based support recovery algorithm is conceived. It uses a Rényi divergence objective which reduces to MSBL’s objective in a special case. We show that the Rényi divergence objective can be expressed as a difference of two submodular set functions, and hence it can be optimized via an iterative majorization-minimization procedure to generate the support estimate. The resulting algorithm is empirically shown to be several times faster than existing support recovery methods with comparable performance. The second part of the thesis focuses on developing decentralized extensions of MSBL for in-network estimation of multiple joint sparse vectors from linear compressive measurements using a network of nodes. A common issue in implementing decentralized algorithms is the high cost associated with the exchange of information between network nodes. To mitigate this problem, we examine two different approaches to reduce the amount of inter-node communication. In the first decentralized extension of MSBL, the network nodes exchange information only via a small set of predesignated bridge nodes. For this bridge-node-based network topology, the MSBL optimization is performed using the decentralized Alternating Direction Method of Multipliers (ADMM). The convergence of decentralized ADMM in a bridge-node-based topology for generic consensus optimization is analyzed separately, and a linear rate of convergence is established. Our second decentralized extension of MSBL reduces the communication complexity by adaptively censoring the information exchanged between the nodes, exploiting its inherent sparse nature. The performance of the proposed decentralized schemes is evaluated using both simulated and real-world data.
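As context for the MSBL algorithm analyzed in this abstract, here is a minimal sketch of its standard expectation-maximization updates on a toy joint sparse recovery problem. This is not the thesis's own code; the problem sizes, variable names, and iteration count are illustrative assumptions.

```python
import numpy as np

def msbl(A, Y, sigma2, n_iter=200):
    """Multiple Sparse Bayesian Learning: EM updates for the row-sparsity
    hyperparameters gamma in Y = A X + noise, where the columns of X
    (the joint sparse vectors) share a common support."""
    m, n = A.shape
    gamma = np.ones(n)
    for _ in range(n_iter):
        # Measurement covariance under the current hyperparameters
        Sigma_y = sigma2 * np.eye(m) + (A * gamma) @ A.T
        Sinv_A = np.linalg.solve(Sigma_y, A)            # Sigma_y^{-1} A
        mu = gamma[:, None] * (Sinv_A.T @ Y)            # posterior mean of X
        # Diagonal of the posterior covariance (same for every column of X)
        diag_Sigma = gamma - gamma**2 * np.einsum('mi,mi->i', A, Sinv_A)
        # EM update: average second moment across the joint sparse vectors
        gamma = np.mean(mu**2, axis=1) + diag_Sigma
    return mu, gamma

# Toy problem: 4 active rows, 10 joint sparse vectors, 20 measurements each
rng = np.random.default_rng(0)
m, n, L, k = 20, 40, 10, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)
support = np.array([3, 11, 25, 37])
X = np.zeros((n, L))
X[support] = rng.standard_normal((k, L))
Y = A @ X + 0.01 * rng.standard_normal((m, L))

mu, gamma = msbl(A, Y, sigma2=1e-4)
est_support = np.sort(np.argsort(gamma)[-k:])  # rows with largest hyperparameters
```

Averaging the update over the L columns is what lets MSBL benefit from multiple joint sparse vectors, consistent with the exponential decay of the support error probability discussed above.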
Styles: APA, Harvard, Vancouver, ISO, etc.
14

"Bayesian Framework for Sparse Vector Recovery and Parameter Bounds with Application to Compressive Sensing". Master's thesis, 2019. http://hdl.handle.net/2286/R.I.55639.

Full text of the source
Abstract:
Signals compressed using classical compression methods can be recovered by brute force (i.e., searching component-wise for non-zero entries); however, finding sparse solutions this way requires combinatorial searches with high computational cost. In this thesis, two Bayesian approaches are instead considered to recover a sparse vector from underdetermined noisy measurements. The first is constructed using a Bernoulli-Gaussian (BG) prior distribution and is assumed to be the true generative model. The second is constructed using a Gamma-Normal (GN) prior distribution and is therefore a different (i.e., misspecified) model. To estimate the posterior distribution in the correctly specified scenario, an algorithm based on generalized approximate message passing (GAMP) is constructed, while an algorithm based on sparse Bayesian learning (SBL) is used for the misspecified scenario. Bayesian recovery of sparse signals is one class of algorithms for the sparse problem; all such classes aim to avoid the high computational cost of combinatorial searches. Compressive sensing (CS) is the widely used term for this sparse optimization problem and its applications, which include magnetic resonance imaging (MRI), radar image acquisition, and facial recognition. In the CS literature, the target vector can be recovered either by optimizing an objective function via point estimation or by recovering a distribution over the sparse vector via Bayesian estimation. Although the Bayesian framework offers the freedom to assume a distribution tailored to the problem of interest, theoretical convergence guarantees are hard to obtain. This limitation has shifted some research toward non-Bayesian frameworks. This thesis aims to close this gap by proposing a Bayesian framework with a theoretical bound for the assumed, not necessarily correct, distribution.
In the simulation study, a general lower Bayesian Cramér-Rao bound (BCRB) is derived along with the misspecified Bayesian Cramér-Rao bound (MBCRB) for the GN model. Both bounds are validated against the mean square error (MSE) performance of the aforementioned algorithms. A quantification of the performance in terms of gains versus losses is also presented as one of the main findings of this report.
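For intuition about the Bernoulli-Gaussian model used as the true generative model above, here is a hedged sketch (not from the thesis) of the scalar MMSE denoiser that a BG-prior GAMP algorithm applies at each iteration; all parameter values are illustrative.

```python
import math

def bg_mmse_denoiser(r, v_r, lam, v_x):
    """Scalar MMSE estimate of x from r = x + N(0, v_r), where
    x ~ (1 - lam) * delta_0 + lam * N(0, v_x)   (Bernoulli-Gaussian prior).
    Returns (posterior activity probability, MMSE estimate)."""
    def gauss(t, v):
        return math.exp(-t * t / (2 * v)) / math.sqrt(2 * math.pi * v)
    # Bayes' rule: posterior probability that x is non-zero
    p1 = lam * gauss(r, v_x + v_r)        # evidence under the active hypothesis
    p0 = (1 - lam) * gauss(r, v_r)        # evidence under x = 0
    pi = p1 / (p1 + p0)
    # Wiener estimate under the active hypothesis, scaled by pi
    x_hat = pi * (v_x / (v_x + v_r)) * r
    return pi, x_hat

pi, x_hat = bg_mmse_denoiser(r=1.5, v_r=0.1, lam=0.1, v_x=1.0)
```

With lam = 1 the prior is purely Gaussian and the estimator reduces to the plain Wiener filter, illustrating how the sparsity level lam shrinks small observations toward zero.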
Dissertation/Thesis
Masters Thesis Computer Engineering 2019
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Masood, Mudassir. "Distribution Agnostic Structured Sparsity Recovery: Algorithms and Applications". Diss., 2015. http://hdl.handle.net/10754/553050.

Full text of the source
Abstract:
Compressed sensing has been a very active area of research, and several elegant algorithms have been developed for the recovery of sparse signals in the past few years. However, most of these algorithms are either computationally expensive or make assumptions that are not suitable for all real-world problems. Recently, focus has shifted to Bayesian approaches that are able to perform sparse signal recovery at much lower complexity while invoking constraints and/or a priori information about the data. While Bayesian approaches have their advantages, these methods must have access to a priori statistics. Usually, these statistics are unknown and are often difficult or even impossible to predict. An effective workaround is to assume a distribution, typically Gaussian, as it makes many signal processing problems mathematically tractable. Though seemingly attractive, this assumption necessitates the estimation of the associated parameters, which can be hard, if not impossible. In this thesis, we focus on this aspect of Bayesian recovery and present a framework to address the challenges mentioned above. The proposed framework allows Bayesian recovery of sparse signals while remaining agnostic to the distribution of the unknown sparse signal components. The algorithms based on this framework are agnostic to the signal statistics; they utilize the a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. We propose several algorithms based on this framework which utilize the structure present in signals for improved recovery. In addition to an algorithm that considers just the sparsity structure of sparse signals, tools that target additional structure in the sparse recovery problem are proposed.
These include algorithms for a) block-sparse signal estimation, b) joint reconstruction of several sparse signals with common support, and c) distributed estimation of sparse signals. Extensive experiments are conducted to demonstrate the power and robustness of the proposed sparse signal estimation algorithms. Specifically, we target the problems of a) channel estimation in massive MIMO and b) narrowband interference mitigation in SC-FDMA. We model these as sparse recovery problems and demonstrate how they can be solved naturally using the proposed algorithms.
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Sana, Furrukh. "Efficient Techniques of Sparse Signal Analysis for Enhanced Recovery of Information in Biomedical Engineering and Geosciences". Diss., 2016. http://hdl.handle.net/10754/621865.

Full text of the source
Abstract:
Sparse signals are abundant among both natural and man-made signals. Sparsity implies that the signal essentially resides in a small-dimensional subspace, and it can be exploited to improve the signal's recovery from limited and noisy observations. Traditional estimation algorithms generally lack the ability to take advantage of signal sparsity. This dissertation considers several problems in the areas of biomedical engineering and geosciences with the aim of enhancing the recovery of information by exploiting the underlying sparsity in the problem. The objective is to overcome the fundamental bottlenecks, both in terms of estimation accuracy and required computational resources. In the first part of the dissertation, we present a high-precision technique for the monitoring of human respiratory movements that exploits the sparsity of wireless ultra-wideband signals. The proposed technique provides a novel methodology for overcoming the Nyquist sampling constraint and enables robust performance in the presence of noise and interference. We also present a comprehensive framework for the important problem of extracting fetal electrocardiogram (ECG) signals from abdominal ECG recordings of pregnant women. The multiple measurement vectors approach utilized for this purpose provides an efficient mechanism for exploiting the common structure of ECG signals, when represented in sparse transform domains, and allows leveraging information from multiple ECG electrodes under a joint estimation formulation. In the second part of the dissertation, we adopt sparse signal processing principles for improved information recovery in large-scale subsurface reservoir characterization problems. We propose multiple new algorithms for sparse representation of subsurface geological structures, incorporation of useful prior information in the estimation process, and reduction of the computational complexity of the problem.
The techniques presented here enable significantly enhanced imaging of the subsurface and result in substantial savings in convergence time, leading to optimized placement of oil wells. This dissertation demonstrates through detailed experimental analysis that the sparse estimation approach not only enables enhanced information recovery in a variety of application areas, but also greatly helps in reducing the computational complexity associated with these problems.
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Prasanna, Dheeraj. "Structured Sparse Signal Recovery for mmWave Channel Estimation: Intra-vector Correlation and Modulo Compressed Sensing". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5215.

Full text of the source
Abstract:
This thesis contributes new theoretical results and recovery algorithms for the area of sparse signal recovery, motivated by the problem of channel estimation in mmWave communication systems. The presentation is in two parts. The first part focuses on the recovery of sparse vectors with correlated non-zero entries from their noisy low-dimensional projections. Such structured sparse signals can be recovered using the technique of covariance matching. Here, we first estimate the covariance of the signal from the compressed measurements, and then use the obtained covariance matrix estimate as a plug-in to the linear minimum mean squared error (LMMSE) estimator to obtain an estimate of the sparse vector. We present a novel parametric Gaussian prior model, inspired by sparse Bayesian learning (SBL), which captures the underlying correlation in addition to the sparsity. Based on this prior, we develop a novel Bayesian learning algorithm called Corr-SBL, using the expectation-maximization procedure. This algorithm learns the parameters of the prior and updates the posterior estimates in an iterative fashion, thereby yielding a sparse vector estimate upon convergence. We present a closed-form solution for the hyperparameter update based on fixed-point iterations. In the case of imperfect correlation information, we present a pragmatic approach to learn the parameters of the correlation matrix in a data-driven fashion. Next, we apply Corr-SBL to the channel estimation problem in mmWave multiple-input multiple-output systems employing a hybrid analog-digital architecture. We use noisy low-dimensional projections of the channel obtained in the pilot transmission phase to estimate the channel across multiple coherence blocks. We show the efficacy of the Corr-SBL prior by analyzing the error in the channel estimates.
Our results show that, compared to a genie-aided estimator and other existing sparse recovery algorithms, exploiting both sparsity and correlation results in significant performance gains, even under imperfect covariance estimates obtained using a limited number of samples. In the second part of the presentation, we consider the sparse signal recovery problem when finite-resolution analog-to-digital converters (ADCs) are used in the measurement acquisition process. To counter the effect of signal clipping in these systems, we use modulo arithmetic to fold measurements crossing the range back into the dynamic range of the system. For this setup, termed modulo-CS, we answer the fundamental question of signal identifiability by deriving conditions on the measurement matrix and the minimal number of measurements required for unique recovery of sparse vectors. We also show that recovery using the minimum required number of measurements is possible when the entries of the measurement matrix are drawn independently from any continuous distribution. Finally, we present an algorithm based on convex relaxation and formulate a mixed integer linear program (MILP) for recovery of sparse vectors under modulo-CS. Our empirical results show that the minimum number of measurements required for the MILP is close to the theoretical result for signals with low variance.
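The modulo folding at the heart of the modulo-CS setup described above can be sketched as a centered modulo operation. This is a generic illustration, not the thesis's exact formulation; the dynamic range parameter `delta` is an assumption.

```python
import numpy as np

def fold(y, delta):
    """Centered modulo: map y into [-delta, delta) by folding multiples
    of the dynamic range 2*delta back into range, as a finite-range ADC
    front end would after modulo wrapping."""
    return y - 2 * delta * np.floor((y + delta) / (2 * delta))

# Measurements that would clip a [-1, 1) ADC are folded back into range
y = np.array([0.5, 3.7, -2.2])
z = fold(y, delta=1.0)  # every entry lands in [-1, 1)
```

The folded value differs from the original by an integer multiple of 2*delta, which is exactly the ambiguity that the identifiability conditions and the MILP-based recovery must resolve.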
Styles: APA, Harvard, Vancouver, ISO, etc.
18

Dine, James. "A Habitat Suitability Model for Ricord’s Iguana in the Dominican Republic". Thesis, 2009. http://hdl.handle.net/1805/1889.

Full text of the source
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
The West Indian iguanas of the genus Cyclura are the most endangered group of lizards in the world (Burton & Bloxam, 2002). The Ricord’s iguana, Cyclura ricordii, is listed as critically endangered by the International Union for Conservation of Nature (IUCN) (Ramer, 2004). This species is endemic to the island of Hispaniola (Figure 1) and can only be found in limited geographic areas (Burton & Bloxam, 2002). The range of this species is estimated to be only 60% of historical levels, with most areas affected by some level of disturbance (Ottenwalder, 1996). The most recent population estimate is between 2,000 and 4,000 individuals (Burton & Bloxam, 2002). Information on potentially suitable habitat can help conservation efforts for the Ricord’s iguana. However, intensive ground surveys are not always feasible or cost-effective, and cannot easily provide continuous coverage over a large area. This paper presents results from a pilot study that evaluated variables extracted from satellite imagery and digitally mapped data layers to map the probability of suitable Ricord’s iguana habitat. Bayesian methods were used to determine the probability that each pixel in the study areas is suitable habitat for Ricord’s iguanas by evaluating relevant environmental attributes. The model predicts the probability that an area is suitable habitat based on the values of environmental attributes including landscape biophysical characteristics, terrain data, and bioclimatic variables.
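The per-pixel Bayesian classification described above can be sketched as a direct application of Bayes' rule under a naive independence assumption across environmental attributes. The function name, likelihood values, and uniform prior below are illustrative assumptions, not details from the study.

```python
import numpy as np

def suitability_posterior(lik_suitable, lik_unsuitable, prior=0.5):
    """P(suitable | attributes) per pixel via Bayes' rule, assuming the
    attribute likelihoods are independent given the class.
    lik_* have shape (n_pixels, n_attributes)."""
    p1 = prior * np.prod(lik_suitable, axis=1)        # evidence for suitable
    p0 = (1 - prior) * np.prod(lik_unsuitable, axis=1)  # evidence for unsuitable
    return p1 / (p1 + p0)

# Two pixels, two attributes: the first pixel's attributes favor
# suitability, the second's are uninformative
lik_s = np.array([[0.8, 0.9], [0.5, 0.5]])
lik_u = np.array([[0.2, 0.3], [0.5, 0.5]])
post = suitability_posterior(lik_s, lik_u)
```

Applying such a rule to every pixel of a raster of biophysical, terrain, and bioclimatic layers yields the kind of continuous habitat-probability map the study describes.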
Styles: APA, Harvard, Vancouver, ISO, etc.
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography