A ready-made bibliography on the topic "Bernoulli-Gaussian"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Bernoulli-Gaussian".
An "Add to bibliography" button appears next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the metadata.
Journal articles on the topic "Bernoulli-Gaussian"
Hrabovets, Anastasiia. "Feynman diagrams and their limits for Bernoulli noise". Theory of Stochastic Processes 27(43), no. 1 (November 16, 2023): 11–30. http://dx.doi.org/10.3842/tsp-4311781209-33.
Törő, Olivér, Tamás Bécsi, Szilárd Aradi, and Péter Gáspár. "IMM Bernoulli Gaussian Particle Filter". IFAC-PapersOnLine 51, no. 22 (2018): 274–79. http://dx.doi.org/10.1016/j.ifacol.2018.11.554.
Xie, Shaohao, Shaohua Zhuang, and Yusong Du. "Improved Bernoulli Sampling for Discrete Gaussian Distributions over the Integers". Mathematics 9, no. 4 (February 13, 2021): 378. http://dx.doi.org/10.3390/math9040378.
Finamore, Weiler, Marcelo Pinho, Manish Sharma, and Moises Ribeiro. "Modeling Noise as a Bernoulli-Gaussian Process". Journal of Communication and Information Systems 38 (2023): 175–86. http://dx.doi.org/10.14209/jcis.2023.20.
Bobkov, Sergey G., Friedrich Götze, and Christian Houdré. "On Gaussian and Bernoulli Covariance Representations". Bernoulli 7, no. 3 (June 2001): 439. http://dx.doi.org/10.2307/3318495.
Lavielle, Marc. "Bayesian deconvolution of Bernoulli-Gaussian processes". Signal Processing 33, no. 1 (July 1993): 67–79. http://dx.doi.org/10.1016/0165-1684(93)90079-p.
De La Rue, Thierry. "Systèmes dynamiques gaussiens d'entropie nulle, lâchement et non lâchement Bernoulli". Ergodic Theory and Dynamical Systems 16, no. 2 (April 1996): 379–404. http://dx.doi.org/10.1017/s0143385700008865.
Al-Zuhairi, Dheyaa T., and Abbas Salman Hameed. "DOA estimation under Bernoulli-Gaussian impulsive noise". IOP Conference Series: Materials Science and Engineering 1090, no. 1 (March 1, 2021): 012096. http://dx.doi.org/10.1088/1757-899x/1090/1/012096.
Chi, Chong-Yung, and J. Mendel. "Viterbi algorithm detector for Bernoulli-Gaussian processes". IEEE Transactions on Acoustics, Speech, and Signal Processing 33, no. 3 (June 1985): 511–19. http://dx.doi.org/10.1109/tassp.1985.1164580.
Talagrand, Michel. "Gaussian averages, Bernoulli averages, and Gibbs' measures". Random Structures and Algorithms 21, no. 3-4 (October 2002): 197–204. http://dx.doi.org/10.1002/rsa.10059.
Doctoral dissertations on the topic "Bernoulli-Gaussian"
Gaerke, Tiffani M. "Characteristic Functions and Bernoulli-Gaussian Impulsive Noise Channels". University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1408040080.
Vu, Hung Van. "Capacities of Bernoulli-Gaussian Impulsive Noise Channels in Rayleigh Fading". University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1407370145.
Boudineau, Mégane. "Vers la résolution "optimale" de problèmes inverses non linéaires parcimonieux grâce à l'exploitation de variables binaires sur dictionnaires continus : applications en astrophysique". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30020/document.
This thesis deals with the solution of nonlinear inverse problems using a sparsity prior; more specifically, when the data can be modelled as a linear combination of a few functions which depend non-linearly on a "location" parameter, i.e. frequencies for spectral analysis or time delays for spike train deconvolution. These problems are generally reformulated as linear sparse approximation problems by evaluating the nonlinear functions at location parameters discretised on a thin grid, building a "discrete dictionary". Such an approach, however, has two major drawbacks. On the one hand, the discrete dictionary is highly correlated, so classical sub-optimal methods such as L1-penalisation or greedy algorithms can fail. On the other hand, the estimated location parameter, which belongs to the discretisation grid, is necessarily discrete, and that leads to model errors. To deal with these issues, we propose in this work an exact sparsity model, thanks to the introduction of binary variables, and an optimal solution of the problem with a "continuous dictionary" allowing a continuous estimation of the location parameter. We focus on two research axes, which we illustrate with problems such as spike train deconvolution and spectral analysis of unevenly sampled data. The first axis concerns the "dictionary interpolation" principle, which consists in linearising the continuous dictionary in order to obtain a constrained linear sparse approximation problem. The introduction of binary variables allows us to reformulate this problem as a "Mixed Integer Program" (MIP) and to model the sparsity exactly through the L0 pseudo-norm. We study different kinds of dictionary interpolation and constraint relaxation in order to solve the problem optimally with classical MIP algorithms.
For the second axis, in a Bayesian framework, the binary variables are assumed to be random with a Bernoulli distribution, and we model the sparsity through a Bernoulli-Gaussian prior. This model is extended to take continuous location parameters into account (the BGE model). We then estimate the parameters from samples drawn using Markov chain Monte Carlo algorithms. In particular, we show that marginalising the amplitudes improves the sampling of a Gibbs algorithm in the supervised case (when the model's hyperparameters are known). In the unsupervised case, we propose to take advantage of such a marginalisation through a "Partially Collapsed Gibbs Sampler". Finally, we adapt the BGE model and the associated samplers to a topical science case in astrophysics: the detection of exoplanets from radial velocity measurements. The efficiency of our method is illustrated on simulated data as well as on actual astrophysical data.
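As an illustration of the Bernoulli-Gaussian model used in this abstract, the Python sketch below (with made-up parameters; the thesis itself treats general operators and MCMC samplers) simulates a spike train under a BG prior and computes the posterior spike probabilities with the amplitudes marginalised, the quantity that the Gibbs samplers mentioned above repeatedly evaluate:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(y, var):
    # density of N(0, var) evaluated at y
    return np.exp(-0.5 * y**2 / var) / np.sqrt(2 * np.pi * var)

# Simulate a Bernoulli-Gaussian spike train (illustrative parameters):
# x_i = s_i * a_i with s_i ~ Bern(p) and a_i ~ N(0, sx2); y = x + noise.
n, p, sx2, sn2 = 200, 0.1, 4.0, 0.25
s = rng.random(n) < p                                # Bernoulli support variables
x = np.where(s, rng.normal(0.0, np.sqrt(sx2), n), 0.0)
y = x + rng.normal(0.0, np.sqrt(sn2), n)             # noisy observation (identity operator)

# Posterior support probabilities with amplitudes marginalised:
# P(s_i=1 | y_i) prop. to p * N(y_i; 0, sx2 + sn2)
# P(s_i=0 | y_i) prop. to (1 - p) * N(y_i; 0, sn2)
num = p * gauss_pdf(y, sx2 + sn2)
w = num / (num + (1 - p) * gauss_pdf(y, sn2))        # posterior spike probability

detected = w > 0.5
print("true spikes:", int(s.sum()), "detected:", int(detected.sum()))
```

With the identity operator the support variables decouple, so these posteriors are exact; for a general dictionary they become the conditional distributions sampled one at a time inside a Gibbs sweep.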
Barbault, Pierre. "Un ticket pour le sparse : de l'estimation des signaux et des paramètres en problèmes inverses bernoulli-gaussiens". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG049.
Magneto/Electro-Encephalography (M/EEG) imaging can be used to reconstruct focal points of cerebral activity by measuring the electromagnetic field they produce. Even if the characteristic time of the recorded signals is low enough to justify a linear acquisition model, the number of possible sources remains very large compared to the number of sensors. The problem is thus ill-posed and, moreover, large-scale. In order to reduce it to a well-posed problem, a common assumption, which makes sense for neurons, is that the sources are sparse, i.e. that the number of non-zero values is very small. We then model the problem from a probabilistic point of view using a Bernoulli-Gaussian (BG) prior for the sources. Many methods can solve such a problem, but most of them require knowledge of the parameters of the BG law. The objective of this thesis is to propose a completely unsupervised approach which estimates the parameters of the BG law as well as, if possible, the sources. To do this, Expectation-Maximization (EM) algorithms are explored. First, the simplest case is treated: denoising, where the linear operator is the identity. In this framework, three algorithms are proposed: a method of moments based on data statistics, an EM algorithm, and a joint estimation algorithm for sources and parameters. The results show that the EM algorithm initialized by the method of moments is the best candidate for parameter estimation. Second, the previous results are extended to the general case of any linear operator thanks to the introduction of a latent variable. This variable, by decoupling the sources from the observations, makes it possible to derive so-called 'latent' algorithms which alternate between a gradient descent step and a denoising step that corresponds exactly to the problem treated previously.
The results then show that the most effective strategy is to use the 'latent' joint estimate to initialize the 'latent' EM. Finally, the last part of this work is devoted to theoretical considerations concerning the choice of joint or marginal estimators in the supervised case, in particular with regard to the sources and their supports. This work shows that it is possible to frame marginal problems by joint problems thanks to a reparameterization of the problem, which makes it possible to propose a general estimation strategy based on initializing marginal estimation algorithms with joint estimation algorithms.
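A minimal sketch of the denoising building block described above, under illustrative assumptions (identity operator, made-up parameters; this is not the author's code): an EM iteration for the Bernoulli-Gaussian law that alternates a posterior ("denoising") step with parameter updates.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_pdf(y, var):
    return np.exp(-0.5 * y**2 / var) / np.sqrt(2 * np.pi * var)

# Simulate denoising data y = x + noise, x Bernoulli-Gaussian (illustrative truth):
n, p_true, sx2_true, sn2_true = 5000, 0.2, 9.0, 1.0
s = rng.random(n) < p_true
y = np.where(s, rng.normal(0, np.sqrt(sx2_true), n), 0.0) \
    + rng.normal(0, np.sqrt(sn2_true), n)

# EM for (p, sx2, sn2), with supports and amplitudes as latent variables
p, sx2, sn2 = 0.5, np.var(y), np.var(y) / 2          # crude initialisation
for _ in range(200):
    # E-step: posterior spike probability and conditional amplitude moments
    num = p * gauss_pdf(y, sx2 + sn2)
    w = num / (num + (1 - p) * gauss_pdf(y, sn2))
    k = sx2 / (sx2 + sn2)                            # Wiener gain
    m1 = k * y                                       # E[a | y, s=1]
    m2 = m1**2 + k * sn2                             # E[a^2 | y, s=1]
    # M-step: update the Bernoulli-Gaussian parameters
    p = w.mean()
    sx2 = (w * m2).sum() / w.sum()
    sn2 = np.mean((1 - w) * y**2 + w * ((y - m1) ** 2 + k * sn2))

print(p, sx2, sn2)   # estimates; the true values here are (0.2, 9.0, 1.0)
```

In the thesis this denoising step is the inner problem that the 'latent' algorithms solve at each iteration for a general linear operator, and the initialisation would come from the method of moments rather than the crude guess used here.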
Yu, Jia. "Distributed parameter and state estimation for wireless sensor networks". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28929.
De la Rue, Thierry. "Quelques résultats sur les systèmes dynamiques gaussiens réels". Rouen, 1994. https://tel.archives-ouvertes.fr/tel-01546012.
Guan, Feng-Bo (馮博冠). "Parameter Estimations of Condition Gaussian Distribution Given Bernoulli Distribution". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/87455161476925419316.
National Central University (國立中央大學)
Graduate Institute of Mathematics (數學研究所)
90
Abstract: The sample mean and sample variance of a Gaussian distribution have the following nice statistical properties: (1) both are sufficient; (2) they are independent; (3) the sample mean is the m.l.e., the UMVUE, and the method-of-moments estimator; (4) the sample variance is the m.l.e. and the method-of-moments estimator, and is the UMVUE if multiplied by a constant; (5) the variances of both estimators achieve the Cramér-Rao lower bound; (6) both estimators are asymptotically efficient. Based on a sample obtained from the conditional Gaussian distribution given a Bernoulli distribution, we study the conditional sample mean and conditional sample variance and check whether they also have the above statistical properties.
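Two of the properties this abstract lists for the ordinary (unconditional) Gaussian case can be checked with a short Monte Carlo simulation; the sample sizes and parameters below are illustrative choices, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Check numerically, for an ordinary Gaussian sample, that:
#  - the variance of the sample mean attains the Cramér-Rao bound sigma^2 / n
#  - sample mean and sample variance are uncorrelated (they are in fact independent)
mu, sigma2, n, reps = 1.0, 4.0, 25, 200_000
x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
xbar = x.mean(axis=1)                 # sample means, one per replication
s2 = x.var(axis=1, ddof=1)            # unbiased sample variances

print(xbar.var())                     # close to sigma2 / n = 0.16
print(np.corrcoef(xbar, s2)[0, 1])    # close to 0
```

The thesis asks whether the analogous conditional statistics retain these properties; the same simulation scheme would apply with draws from the conditional model instead.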
"Bayesian Inference Frameworks for Fluorescence Microscopy Data Analysis". Master's thesis, Applied Mathematics, 2019. http://hdl.handle.net/2286/R.I.53545.
Fantacci, Claudio. "Distributed multi-object tracking over sensor networks: a random finite set approach". Doctoral thesis, 2015. http://hdl.handle.net/2158/1003256.
Book chapters on the topic "Bernoulli-Gaussian"
Cho, KyungHyun, Alexander Ilin, and Tapani Raiko. "Improved Learning of Gaussian-Bernoulli Restricted Boltzmann Machines". In Lecture Notes in Computer Science, 10–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21735-7_2.
Ershov, M. A., and A. S. Voroshilov. "UCB Strategy for Gaussian and Bernoulli Multi-armed Bandits". In Communications in Computer and Information Science, 67–78. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43257-6_6.
Li, Ziqiang, Xun Cai, and Ti Liang. "Gaussian-Bernoulli Based Convolutional Restricted Boltzmann Machine for Images Feature Extraction". In Neural Information Processing, 593–602. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_66.
Veziroğlu, Merve, Erkan Eziroğlu, and İhsan Ömür Bucak. "Performance Comparison between Naive Bayes and Machine Learning Algorithms for News Classification". In Bayesian Inference - Recent Trends. IntechOpen, 2024. http://dx.doi.org/10.5772/intechopen.1002778.
Sivia, D. S. "Assigning probabilities". In Data Analysis, 103–26. Oxford: Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780198568315.003.0005.
Conference abstracts on the topic "Bernoulli-Gaussian"
Cho, Kyung Hyun, Tapani Raiko, and Alexander Ilin. "Gaussian-Bernoulli deep Boltzmann machine". In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6706831.
Jones, George, Ángel F. García-Fernández, and Prudence W. H. Wong. "GOSPA-Driven Gaussian Bernoulli Sensor Management". In 2023 26th International Conference on Information Fusion (FUSION). IEEE, 2023. http://dx.doi.org/10.23919/fusion52260.2023.10224220.
Liu, Lei, Chongwen Huang, Yuhao Chi, Chau Yuen, Yong Liang Guan, and Ying Li. "Sparse Vector Recovery: Bernoulli-Gaussian Message Passing". In 2017 IEEE Global Communications Conference (GLOBECOM 2017). IEEE, 2017. http://dx.doi.org/10.1109/glocom.2017.8254836.
Vila, Jeremy, and Philip Schniter. "Expectation-maximization Bernoulli-Gaussian approximate message passing". In 2011 45th Asilomar Conference on Signals, Systems and Computers. IEEE, 2011. http://dx.doi.org/10.1109/acssc.2011.6190117.
Bazot, Cecile, Nicolas Dobigeon, Jean-Yves Tourneret, and Alfred O. Hero. "A Bernoulli-Gaussian model for gene factor analysis". In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947728.
Bassi, Francesca, Michel Kieffer, and Cagatay Dikici. "Multiterminal source coding of Bernoulli-Gaussian correlated sources". In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4960130.
Yin, Jianjun, Jianqiu Zhang, and Jin Zhao. "The Gaussian Particle multi-target multi-Bernoulli filter". In 2010 2nd International Conference on Advanced Computer Control. IEEE, 2010. http://dx.doi.org/10.1109/icacc.2010.5486859.
García-Fernández, Ángel F., Yuxuan Xia, Karl Granström, Lennart Svensson, and Jason L. Williams. "Gaussian implementation of the multi-Bernoulli mixture filter". In 2019 22nd International Conference on Information Fusion (FUSION). IEEE, 2019. http://dx.doi.org/10.23919/fusion43075.2019.9011346.
Chaari, Lotfi, Jean-Yves Tourneret, and Caroline Chaux. "Sparse signal recovery using a Bernoulli generalized Gaussian prior". In 2015 23rd European Signal Processing Conference (EUSIPCO). IEEE, 2015. http://dx.doi.org/10.1109/eusipco.2015.7362676.
Yildirim, Sinan, A. Taylan Cemgil, and Aysin B. Ertuzun. "A hybrid method for deconvolution of Bernoulli-Gaussian processes". In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4960359.