A selection of scientific literature on the topic „Bernoulli-Gaussian“
Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Bernoulli-Gaussian".
Journal articles on the topic "Bernoulli-Gaussian"
Hrabovets, Anastasiia. „Feynman diagrams and their limits for Bernoulli noise“. Theory of Stochastic Processes 27(43), no. 1 (16.11.2023): 11–30. http://dx.doi.org/10.3842/tsp-4311781209-33.
Törő, Olivér, Tamás Bécsi, Szilárd Aradi and Péter Gáspár. „IMM Bernoulli Gaussian Particle Filter“. IFAC-PapersOnLine 51, no. 22 (2018): 274–79. http://dx.doi.org/10.1016/j.ifacol.2018.11.554.
Xie, Shaohao, Shaohua Zhuang and Yusong Du. „Improved Bernoulli Sampling for Discrete Gaussian Distributions over the Integers“. Mathematics 9, no. 4 (13.02.2021): 378. http://dx.doi.org/10.3390/math9040378.
Finamore, Weiler, Marcelo Pinho, Manish Sharma and Moises Ribeiro. „Modeling Noise as a Bernoulli-Gaussian Process“. Journal of Communication and Information Systems 38 (2023): 175–86. http://dx.doi.org/10.14209/jcis.2023.20.
Bobkov, Sergey G., Friedrich Götze and Christian Houdré. „On Gaussian and Bernoulli Covariance Representations“. Bernoulli 7, no. 3 (June 2001): 439. http://dx.doi.org/10.2307/3318495.
Lavielle, Marc. „Bayesian deconvolution of Bernoulli-Gaussian processes“. Signal Processing 33, no. 1 (July 1993): 67–79. http://dx.doi.org/10.1016/0165-1684(93)90079-p.
De La Rue, Thierry. „Systèmes dynamiques gaussiens d'entropie nulle, lâchement et non lâchement Bernoulli“. Ergodic Theory and Dynamical Systems 16, no. 2 (April 1996): 379–404. http://dx.doi.org/10.1017/s0143385700008865.
Al-Zuhairi, Dheyaa T., and Abbas Salman Hameed. „DOA estimation under Bernoulli-Gaussian impulsive noise“. IOP Conference Series: Materials Science and Engineering 1090, no. 1 (01.03.2021): 012096. http://dx.doi.org/10.1088/1757-899x/1090/1/012096.
Chi, Chong-Yung, and J. Mendel. „Viterbi algorithm detector for Bernoulli-Gaussian processes“. IEEE Transactions on Acoustics, Speech, and Signal Processing 33, no. 3 (June 1985): 511–19. http://dx.doi.org/10.1109/tassp.1985.1164580.
Talagrand, Michel. „Gaussian averages, Bernoulli averages, and Gibbs' measures“. Random Structures and Algorithms 21, no. 3-4 (October 2002): 197–204. http://dx.doi.org/10.1002/rsa.10059.
Der volle Inhalt der QuelleDissertationen zum Thema "Bernoulli-Gaussian"
Gaerke, Tiffani M. „Characteristic Functions and Bernoulli-Gaussian Impulsive Noise Channels“. University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1408040080.
Vu, Hung Van. „Capacities of Bernoulli-Gaussian Impulsive Noise Channels in Rayleigh Fading“. University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1407370145.
Boudineau, Mégane. „Vers la résolution "optimale" de problèmes inverses non linéaires parcimonieux grâce à l'exploitation de variables binaires sur dictionnaires continus : applications en astrophysique“. Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30020/document.
Der volle Inhalt der QuelleThis thesis deals with solutions of nonlinear inverse problems using a sparsity prior; more specifically when the data can be modelled as a linear combination of a few functions, which depend non-linearly on a "location" parameter, i.e. frequencies for spectral analysis or time-delay for spike train deconvolution. These problems are generally reformulated as linear sparse approximation problems, thanks to an evaluation of the nonlinear functions at location parameters discretised on a thin grid, building a "discrete dictionary". However, such an approach has two major drawbacks. On the one hand, the discrete dictionary is highly correlated; classical sub-optimal methods such as L1- penalisation or greedy algorithms can then fail. On the other hand, the estimated location parameter, which belongs to the discretisation grid, is necessarily discrete and that leads to model errors. To deal with these issues, we propose in this work an exact sparsity model, thanks to the introduction of binary variables, and an optimal solution of the problem with a "continuous dictionary" allowing a continuous estimation of the location parameter. We focus on two research axes, which we illustrate with problems such as spike train deconvolution and spectral analysis of unevenly sampled data. The first axis focusses on the "dictionary interpolation" principle, which consists in a linearisation of the continuous dictionary in order to get a constrained linear sparse approximation problem. The introduction of binary variables allows us to reformulate this problem as a "Mixed Integer Program" (MIP) and to exactly model the sparsity thanks to the "pseudo-norm L0". We study different kinds of dictionary interpolation and constraints relaxation, in order to solve the problem optimally thanks to MIP classical algorithms. For the second axis, in a Bayesian framework, the binary variables are supposed random with a Bernoulli distribution and we model the sparsity through a Bernoulli-Gaussian prior. This model is extended to take into account continuous location parameters (BGE model). We then estimate the parameters from samples drawn using Markov chain Monte Carlo algorithms. In particular, we show that marginalising the amplitudes allows us to improve the sampling of a Gibbs algorithm in a supervised case (when the model's hyperparameters are known). In an unsupervised case, we propose to take advantage of such a marginalisation through a "Partially Collapsed Gibbs Sampler." Finally, we adapt the BGE model and associated samplers to a topical science case in Astrophysics: the detection of exoplanets from radial velocity measurements. The efficiency of our method will be illustrated with simulated data, as well as actual astrophysical data
Barbault, Pierre. „Un ticket pour le sparse : de l'estimation des signaux et des paramètres en problèmes inverses bernoulli-gaussiens“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG049.
Magneto/electroencephalography (M/EEG) imaging can be used to reconstruct focal points of cerebral activity by measuring the electromagnetic field they produce. Even if the characteristic time of the recorded signals is low enough to justify a linear acquisition model, the number of possible sources remains very large compared to the number of sensors, so the problem is ill-posed and, moreover, large-scale. A common assumption that makes it well-posed, and which makes sense for neurons, is that the sources are sparse, i.e. that the number of non-zero values is very small. The problem is then modelled from a probabilistic point of view using a Bernoulli-Gaussian (BG) prior for the sources. Many methods can solve such a problem, but most of them require knowledge of the parameters of the BG law. The objective of this thesis is to propose a completely unsupervised approach that estimates the parameters of the BG law and, where possible, the sources as well. To do this, Expectation-Maximization (EM) algorithms are explored. First, the simplest case is treated: denoising, where the linear operator is the identity. In this framework, three algorithms are proposed: a method of moments based on data statistics, an EM algorithm, and a joint estimation algorithm for sources and parameters. The results show that the EM algorithm initialized by the method of moments is the best candidate for parameter estimation. Secondly, these results are extended to the general case of an arbitrary linear operator through the introduction of a latent variable. This variable decouples the sources from the observations and makes it possible to derive so-called 'latent' algorithms, which alternate between a gradient descent step and a denoising step that corresponds exactly to the problem treated previously. The results then show that the most effective strategy is to use the 'latent' joint estimate to initialize the 'latent' EM algorithm. Finally, the last part of this work is devoted to theoretical considerations on the choice of joint or marginal estimators in the supervised case, in particular with regard to the sources and their supports. It shows that marginal problems can be framed by joint problems thanks to a reparameterization, which leads to a general estimation strategy based on initializing marginal estimation algorithms with joint estimation algorithms.
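For the denoising case described above (identity operator), a textbook-style EM iteration for the Bernoulli-Gaussian model can be written in a few lines. The sketch below is not the algorithm from the thesis; it is a generic illustration of how an E-step computes posterior spike probabilities and an M-step re-estimates the Bernoulli probability and amplitude variance, under the additional assumption (mine) that the noise variance is known.

```python
# Generic EM sketch for the Bernoulli-Gaussian denoising model y = x + n,
# with x_i = b_i * a_i, b_i ~ Bern(p), a_i ~ N(0, sx2), n_i ~ N(0, sn2).
# Textbook-style illustration, not the cited thesis's algorithm; the noise
# variance sn2 is assumed known here.
import numpy as np

def normal_pdf(y, var):
    return np.exp(-0.5 * y**2 / var) / np.sqrt(2.0 * np.pi * var)

def em_bernoulli_gaussian(y, sn2, n_iter=50):
    p, sx2 = 0.5, np.var(y)                  # crude initialisation (assumed)
    for _ in range(n_iter):
        # E-step: posterior probability that a spike is present at each sample
        like1 = normal_pdf(y, sx2 + sn2)     # marginal density if b_i = 1
        like0 = normal_pdf(y, sn2)           # density if b_i = 0
        gamma = p * like1 / (p * like1 + (1.0 - p) * like0)
        # Posterior moments of the amplitude a_i given b_i = 1
        post_mean = (sx2 / (sx2 + sn2)) * y
        post_var = sx2 * sn2 / (sx2 + sn2)
        # M-step: update the Bernoulli probability and the amplitude variance
        p = gamma.mean()
        sx2 = (gamma * (post_mean**2 + post_var)).sum() / gamma.sum()
    return p, sx2, gamma

# Tiny usage example on synthetic data (true p = 0.1, sx2 = 4.0, sn2 = 0.25)
rng = np.random.default_rng(1)
b = rng.random(5000) < 0.1
x = b * rng.normal(0.0, 2.0, size=5000)
y = x + rng.normal(0.0, 0.5, size=5000)
p_hat, sx2_hat, _ = em_bernoulli_gaussian(y, sn2=0.25)
print(f"estimated p = {p_hat:.3f}, sx2 = {sx2_hat:.3f}")
```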
Yu, Jia. „Distributed parameter and state estimation for wireless sensor networks“. Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28929.
Der volle Inhalt der QuelleDe, la Rue Thierry. „Quelques résultats sur les systèmes dynamiques gaussiens réels“. Rouen, 1994. https://tel.archives-ouvertes.fr/tel-01546012.
Der volle Inhalt der QuelleGuan, Feng-Bo, und 馮博冠. „Parameter Estimations of Condition Gaussian Distribution Given Bernoulli Distribution“. Thesis, 2002. http://ndltd.ncl.edu.tw/handle/87455161476925419316.
Der volle Inhalt der Quelle國立中央大學
數學研究所
90
Abstract: The sample mean and sample variance of a Gaussian distribution have the following nice statistical properties: (1) both are sufficient; (2) they are independent; (3) the sample mean is the m.l.e., the UMVUE, and the method-of-moments estimator; (4) the sample variance is the m.l.e. and the method-of-moments estimator, and is the UMVUE if multiplied by a constant; (5) the variances of both estimators achieve the Cramér-Rao lower bound; (6) both estimators are asymptotically efficient. Based on a sample obtained from the conditional Gaussian distribution given a Bernoulli distribution, we study the conditional sample mean and conditional sample variance and check whether they also have the above statistical properties.
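As a rough illustration of the setting described in this abstract, the sketch below draws (B, X) pairs in which X given B = b is Gaussian with a group-dependent mean, then computes the conditional sample mean and (unbiased) conditional sample variance within each Bernoulli group. All parameter values are assumptions made for the example, and the model is only one plausible reading of the thesis, not its code.

```python
# Conditional sample statistics for a Gaussian observation given a Bernoulli
# indicator. Parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, q = 10_000, 0.3             # sample size and Bernoulli success probability (assumed)
mu = {0: -1.0, 1: 2.0}         # conditional means (assumed)
sigma = 1.5                    # common conditional standard deviation (assumed)

B = (rng.random(n) < q).astype(int)
X = np.array([rng.normal(mu[b], sigma) for b in B])

for b in (0, 1):
    xb = X[B == b]
    # Conditional sample mean and unbiased conditional sample variance per group
    print(f"B={b}: n={xb.size}, mean={xb.mean():.3f}, var={xb.var(ddof=1):.3f}")
```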
„Bayesian Inference Frameworks for Fluorescence Microscopy Data Analysis“. Master's thesis, 2019. http://hdl.handle.net/2286/R.I.53545.
Master's thesis, Applied Mathematics, 2019.
Fantacci, Claudio. „Distributed multi-object tracking over sensor networks: a random finite set approach“. Doctoral thesis, 2015. http://hdl.handle.net/2158/1003256.
Book chapters on the topic "Bernoulli-Gaussian"
Cho, KyungHyun, Alexander Ilin and Tapani Raiko. „Improved Learning of Gaussian-Bernoulli Restricted Boltzmann Machines“. In Lecture Notes in Computer Science, 10–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21735-7_2.
Ershov, M. A., and A. S. Voroshilov. „UCB Strategy for Gaussian and Bernoulli Multi-armed Bandits“. In Communications in Computer and Information Science, 67–78. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43257-6_6.
Li, Ziqiang, Xun Cai and Ti Liang. „Gaussian-Bernoulli Based Convolutional Restricted Boltzmann Machine for Images Feature Extraction“. In Neural Information Processing, 593–602. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_66.
Veziroğlu, Merve, Erkan Eziroğlu and İhsan Ömür Bucak. „Performance Comparison between Naive Bayes and Machine Learning Algorithms for News Classification“. In Bayesian Inference - Recent Trends. IntechOpen, 2024. http://dx.doi.org/10.5772/intechopen.1002778.
Sivia, D. S. „Assigning probabilities“. In Data Analysis, 103–26. Oxford: Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780198568315.003.0005.
Conference papers on the topic "Bernoulli-Gaussian"
Cho, Kyung Hyun, Tapani Raiko and Alexander Ilin. „Gaussian-Bernoulli deep Boltzmann machine“. In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6706831.
Jones, George, Ángel F. García-Fernández and Prudence W. H. Wong. „GOSPA-Driven Gaussian Bernoulli Sensor Management“. In 2023 26th International Conference on Information Fusion (FUSION). IEEE, 2023. http://dx.doi.org/10.23919/fusion52260.2023.10224220.
Liu, Lei, Chongwen Huang, Yuhao Chi, Chau Yuen, Yong Liang Guan and Ying Li. „Sparse Vector Recovery: Bernoulli-Gaussian Message Passing“. In 2017 IEEE Global Communications Conference (GLOBECOM 2017). IEEE, 2017. http://dx.doi.org/10.1109/glocom.2017.8254836.
Vila, Jeremy, and Philip Schniter. „Expectation-maximization Bernoulli-Gaussian approximate message passing“. In 2011 45th Asilomar Conference on Signals, Systems and Computers. IEEE, 2011. http://dx.doi.org/10.1109/acssc.2011.6190117.
Bazot, Cecile, Nicolas Dobigeon, Jean-Yves Tourneret and Alfred O. Hero. „A Bernoulli-Gaussian model for gene factor analysis“. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947728.
Bassi, Francesca, Michel Kieffer and Cagatay Dikici. „Multiterminal source coding of Bernoulli-Gaussian correlated sources“. In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4960130.
Yin, Jianjun, Jianqiu Zhang and Jin Zhao. „The Gaussian Particle multi-target multi-Bernoulli filter“. In 2010 2nd International Conference on Advanced Computer Control. IEEE, 2010. http://dx.doi.org/10.1109/icacc.2010.5486859.
García-Fernández, Ángel F., Yuxuan Xia, Karl Granström, Lennart Svensson and Jason L. Williams. „Gaussian implementation of the multi-Bernoulli mixture filter“. In 2019 22nd International Conference on Information Fusion (FUSION). IEEE, 2019. http://dx.doi.org/10.23919/fusion43075.2019.9011346.
Chaari, Lotfi, Jean-Yves Tourneret and Caroline Chaux. „Sparse signal recovery using a Bernoulli generalized Gaussian prior“. In 2015 23rd European Signal Processing Conference (EUSIPCO). IEEE, 2015. http://dx.doi.org/10.1109/eusipco.2015.7362676.
Yildirim, Sinan, A. Taylan Cemgil and Aysin B. Ertuzun. „A hybrid method for deconvolution of Bernoulli-Gaussian processes“. In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4960359.