Ready-made bibliography on the topic "Probabilistic deep models"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles
Table of contents
See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Probabilistic deep models".
Next to every work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, when such details are provided in the source's metadata.
Journal articles on the topic "Probabilistic deep models"
Masegosa, Andrés R., Rafael Cabañas, Helge Langseth, Thomas D. Nielsen, and Antonio Salmerón. "Probabilistic Models with Deep Neural Networks". Entropy 23, no. 1 (January 18, 2021): 117. http://dx.doi.org/10.3390/e23010117.
Villanueva Llerena, Julissa, and Denis Deratani Maua. "Efficient Predictive Uncertainty Estimators for Deep Probabilistic Models". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13740–41. http://dx.doi.org/10.1609/aaai.v34i10.7142.
Karami, Mahdi, and Dale Schuurmans. "Deep Probabilistic Canonical Correlation Analysis". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8055–63. http://dx.doi.org/10.1609/aaai.v35i9.16982.
Lu, Ming, Zhihao Duan, Fengqing Zhu, and Zhan Ma. "Deep Hierarchical Video Compression". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8859–67. http://dx.doi.org/10.1609/aaai.v38i8.28733.
Maroñas, Juan, Roberto Paredes, and Daniel Ramos. "Calibration of deep probabilistic models with decoupled bayesian neural networks". Neurocomputing 407 (September 2020): 194–205. http://dx.doi.org/10.1016/j.neucom.2020.04.103.
Li, Zhenjun, Xi Liu, Dawei Kou, Yi Hu, Qingrui Zhang, and Qingxi Yuan. "Probabilistic Models for the Shear Strength of RC Deep Beams". Applied Sciences 13, no. 8 (April 12, 2023): 4853. http://dx.doi.org/10.3390/app13084853.
Serpell, Cristián, Ignacio A. Araya, Carlos Valle, and Héctor Allende. "Addressing model uncertainty in probabilistic forecasting using Monte Carlo dropout". Intelligent Data Analysis 24 (December 4, 2020): 185–205. http://dx.doi.org/10.3233/ida-200015.
Boursin, Nicolas, Carl Remlinger, and Joseph Mikael. "Deep Generators on Commodity Markets Application to Deep Hedging". Risks 11, no. 1 (December 23, 2022): 7. http://dx.doi.org/10.3390/risks11010007.
Zuidberg Dos Martires, Pedro. "Probabilistic Neural Circuits". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 17280–89. http://dx.doi.org/10.1609/aaai.v38i15.29675.
Ravuri, Suman, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons, et al. "Skilful precipitation nowcasting using deep generative models of radar". Nature 597, no. 7878 (September 29, 2021): 672–77. http://dx.doi.org/10.1038/s41586-021-03854-z.
Dissertations and theses on the topic "Probabilistic deep models"
Misino, Eleonora. "Deep Generative Models with Probabilistic Logic Priors". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24058/.
Zhai, Menghua. "Deep Probabilistic Models for Camera Geo-Calibration". UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/74.
Wu, Di. "Human action recognition using deep probabilistic graphical models". Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/6603/.
Rossi, Simone. "Improving Scalability and Inference in Probabilistic Deep Models". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS042.
Throughout the last decade, deep learning has reached a sufficient level of maturity to become the preferred choice for solving machine-learning problems and aiding decision-making processes. At the same time, deep learning is generally not equipped to accurately quantify the uncertainty of its predictions, which makes these models less suitable for risk-critical applications. A possible solution is to employ a Bayesian formulation; however, while this offers an elegant treatment, it is analytically intractable and requires approximations. Despite the huge advancements of the last few years, there is still a long way to go before these approaches become widely applicable. In this thesis, we address some of the challenges of modern Bayesian deep learning by proposing and studying solutions to improve the scalability and inference of these models. The first part of the thesis is dedicated to deep models where inference is carried out using variational inference (VI). Specifically, we study the role of the initialization of the variational parameters, and we show how careful initialization strategies can make VI deliver good performance even in large-scale models. In this part of the thesis we also study the over-regularization effect of the variational objective on over-parametrized models. To tackle this problem, we propose a novel parameterization based on the Walsh-Hadamard transform; not only does this solve the over-regularization effect of VI, but it also allows us to model non-factorized posteriors while keeping time and space complexity under control. The second part of the thesis is dedicated to a study of the role of priors. While priors are an essential building block of Bayes' rule, picking good priors for deep learning models is generally hard. For this reason, we propose two different strategies based (i) on the functional interpretation of neural networks and (ii) on a scalable procedure for performing model selection on the prior hyper-parameters, akin to maximization of the marginal likelihood. To conclude this part, we analyze a different kind of Bayesian model (the Gaussian process) and study the effect of placing a prior on all the hyper-parameters of these models, including the additional variables required by inducing-point approximations. We also show how it is possible to infer free-form posteriors on these variables, which conventionally would otherwise have been point-estimated.
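As a toy illustration of the mean-field variational inference setting discussed in the abstract above (this is a generic sketch, not code from the thesis; the model, data sizes, and learning rate are invented for the example): for a conjugate Bayesian linear model the exact posterior is Gaussian, so gradient-ascent VI on the ELBO should recover it, which makes the behaviour easy to check.

```python
import numpy as np

# Toy Bayesian linear model: y = w * x + noise, prior w ~ N(0, 1),
# known noise variance s2. Mean-field VI fits q(w) = N(mu, sigma^2)
# by gradient ascent on the (here fully analytic) ELBO:
#   ELBO = -sum((y - mu*x)^2 + x^2 * sigma^2) / (2 * s2)
#          - KL( N(mu, sigma^2) || N(0, 1) ) + const.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
s2 = 0.25
y = 2.0 * x + rng.normal(scale=np.sqrt(s2), size=50)

mu, log_sigma = 0.0, 0.0   # initialization of the variational parameters
lr = 1e-3
for _ in range(5000):
    sigma = np.exp(log_sigma)
    grad_mu = np.sum(x * (y - mu * x)) / s2 - mu
    grad_sigma = -sigma * np.sum(x**2) / s2 - sigma + 1.0 / sigma
    mu += lr * grad_mu
    log_sigma += lr * grad_sigma * sigma   # chain rule for the log-sigma parameter

# Conjugate model: the exact Gaussian posterior is available in closed
# form, so VI can be checked against it.
post_var = 1.0 / (1.0 + np.sum(x**2) / s2)
post_mean = post_var * np.sum(x * y) / s2
print(mu, post_mean, np.exp(log_sigma), np.sqrt(post_var))
```

The abstract's point about initialization can be probed directly here: starting `log_sigma` far from its optimum noticeably slows convergence even in this one-parameter model.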
Hager, Paul Andrew. "Investigation of connection between deep learning and probabilistic graphical models". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119552.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 21).
The field of machine learning (ML) has benefitted greatly from its relationship with the field of classical statistics. In support of that continued expansion, the following proposes an alternative perspective on the link between these fields. The link focuses on probabilistic graphical models in the context of reinforcement learning. Viewing certain algorithms as reinforcement learning gives one the ability to map ML concepts to statistics problems. Training a multi-layer nonlinear perceptron algorithm is equivalent to structure-learning problems in probabilistic graphical models (PGMs). The technique of boosting weak rules into an ensemble is weighted sampling. Finally, regularizing neural networks using the dropout technique is conditioning on certain observations in PGMs.
by Paul Andrew Hager.
M. Eng.
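The dropout-as-conditioning observation in the abstract above also underlies Monte Carlo dropout (cf. the Serpell et al. article listed earlier): each stochastic forward pass samples a sub-network, and keeping the mask random at prediction time yields samples from a crude predictive distribution. A minimal numpy sketch, with a two-layer network whose sizes and weights are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy two-layer network with dropout on the hidden layer. Sampling a
# dropout mask corresponds to "switching off" hidden units, i.e. fixing
# certain variables in the graphical-model view of the network.
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p_drop=0.5):
    h = np.maximum(0.0, x @ W1)           # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop  # sample a sub-network
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return (h @ W2).item()

x = np.array([0.5, -1.0, 2.0])
# Monte Carlo dropout: repeated stochastic passes at prediction time.
samples = np.array([forward(x) for _ in range(1000)])
# The sample mean approximates the ensemble prediction; the spread is a
# rough measure of model uncertainty for this input.
print(samples.mean(), samples.std())
```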
Farouni, Tarek. "An Overview of Probabilistic Latent Variable Models with an Application to the Deep Unsupervised Learning of Chromatin States". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492189894812539.
Pełny tekst źródłaQian, Weizhu. "Discovering human mobility from mobile data : probabilistic models and learning algorithms". Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCA025.
Pełny tekst źródłaSmartphone usage data can be used to study human indoor and outdoor mobility. In our work, we investigate both aspects in proposing machine learning-based algorithms adapted to the different information sources that can be collected.In terms of outdoor mobility, we use the collected GPS coordinate data to discover the daily mobility patterns of the users. To this end, we propose an automatic clustering algorithm using the Dirichlet process Gaussian mixture model (DPGMM) so as to cluster the daily GPS trajectories. This clustering method is based on estimating probability densities of the trajectories, which alleviate the problems caused by the data noise.By contrast, we utilize the collected WiFi fingerprint data to study indoor human mobility. In order to predict the indoor user location at the next time points, we devise a hybrid deep learning model, called the convolutional mixture density recurrent neural network (CMDRNN), which combines the advantages of different multiple deep neural networks. Moreover, as for accurate indoor location recognition, we presume that there exists a latent distribution governing the input and output at the same time. Based on this assumption, we develop a variational auto-encoder (VAE)-based semi-supervised learning model. In the unsupervised learning procedure, we employ a VAE model to learn a latent distribution of the input, the WiFi fingerprint data. In the supervised learning procedure, we use a neural network to compute the target, the user coordinates. Furthermore, based on the same assumption used in the VAE-based semi-supervised learning model, we leverage the information bottleneck theory to devise a variational information bottleneck (VIB)-based model. 
This is an end-to-end deep learning model which is easier to train and has better performance.Finally, we validate thees proposed methods on several public real-world datasets providing thus results that verify the efficiencies of our methods as compared to other existing methods generally used
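The DPGMM clustering step described above can be sketched with scikit-learn's `BayesianGaussianMixture`, whose truncated Dirichlet-process prior prunes unneeded components, so the number of clusters is inferred rather than fixed in advance. The two-dimensional synthetic "trajectory features" below are invented stand-ins for the thesis's GPS data:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Two well-separated synthetic mobility patterns standing in for
# per-day summary features of GPS trajectories.
pattern_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2))
pattern_b = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(100, 2))
X = np.vstack([pattern_a, pattern_b])

# n_components is only an upper bound: the stick-breaking (Dirichlet
# process) prior drives the weights of superfluous components to ~0.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    max_iter=500,
    random_state=0,
).fit(X)

labels = dpgmm.predict(X)
# Components that survived pruning (non-negligible mixture weight):
effective = int(np.sum(dpgmm.weights_ > 0.01))
print(effective)
```

Because the mixture is fit to densities rather than raw point assignments, mildly noisy trajectories simply lower a component's likelihood instead of spawning spurious clusters, which is the robustness property the abstract alludes to.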
Syed, Muhammad Farrukh Shahid. "Data-Driven Approach based on Deep Learning and Probabilistic Models for PHY-Layer Security in AI-enabled Cognitive Radio IoT". Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1048543.
El-Shaer, Mennat Allah. "An Experimental Evaluation of Probabilistic Deep Networks for Real-time Traffic Scene Representation using Graphical Processing Units". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1546539166677894.
Hu, Xu. "Towards efficient learning of graphical models and neural networks with variational techniques". Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC1037.
In this thesis, I focus mainly on variational inference and probabilistic models. In particular, I cover several projects I worked on during my PhD on improving the efficiency of AI/ML systems with variational techniques. The thesis consists of two parts. In the first part, the computational efficiency of probabilistic graphical models is studied. In the second part, several problems in learning deep neural networks are investigated, related to either energy efficiency or sample efficiency.
Books on the topic "Probabilistic deep models"
Oaksford, Mike, and Nick Chater. Causal Models and Conditional Reasoning. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.5.
Tutino, Stefania. Uncertainty in Post-Reformation Catholicism. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190694098.001.0001.
Trappenberg, Thomas P. Fundamentals of Machine Learning. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198828044.001.0001.
Levinson, Stephen C. Speech Acts. Edited by Yan Huang. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199697960.013.22.
Book chapters on the topic "Probabilistic deep models"
Sucar, Luis Enrique. "Deep Learning and Graphical Models". In Probabilistic Graphical Models, 327–46. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61943-5_16.
Bahadir, Cagla Deniz, Benjamin Liechty, David J. Pisapia, and Mert R. Sabuncu. "Characterizing the Features of Mitotic Figures Using a Conditional Diffusion Probabilistic Model". In Deep Generative Models, 121–31. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53767-7_12.
Gustafsson, Fredrik K., Martin Danelljan, Goutam Bhat, and Thomas B. Schön. "Energy-Based Models for Deep Probabilistic Regression". In Computer Vision – ECCV 2020, 325–43. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58565-5_20.
Hung, Alex Ling Yu, Zhiqing Sun, Wanwen Chen, and John Galeotti. "Hierarchical Probabilistic Ultrasound Image Inpainting via Variational Inference". In Deep Generative Models, and Data Augmentation, Labelling, and Imperfections, 83–92. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88210-5_7.
Nkambule, Tshepo, and Ritesh Ajoodha. "Classification of Music by Genre Using Probabilistic Models and Deep Learning Models". In Proceedings of Sixth International Congress on Information and Communication Technology, 185–93. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-2102-4_17.
Völcker, Claas, Alejandro Molina, Johannes Neumann, Dirk Westermann, and Kristian Kersting. "DeepNotebooks: Deep Probabilistic Models Construct Python Notebooks for Reporting Datasets". In Machine Learning and Knowledge Discovery in Databases, 28–43. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43823-4_3.
Dinh, Xuan Truong, and Hai Van Pham. "Social Network Analysis Based on Combining Probabilistic Models with Graph Deep Learning". In Communication and Intelligent Systems, 975–86. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1089-9_76.
Linghu, Yuan, Xiangxue Li, and Zhenlong Zhang. "Deep Learning vs. Traditional Probabilistic Models: Case Study on Short Inputs for Password Guessing". In Algorithms and Architectures for Parallel Processing, 468–83. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38991-8_31.
Liu, Zheng, and Hao Wang. "Research on Process Diagnosis of Severe Accidents Based on Deep Learning and Probabilistic Safety Analysis". In Springer Proceedings in Physics, 624–34. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1023-6_54.
Stojanovski, David, Uxio Hermida, Pablo Lamata, Arian Beqiri, and Alberto Gomez. "Echo from Noise: Synthetic Ultrasound Image Generation Using Diffusion Models for Real Image Segmentation". In Simplifying Medical Ultrasound, 34–43. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44521-7_4.
Conference papers on the topic "Probabilistic deep models"
Sidheekh, Sahil, Saurabh Mathur, Athresh Karanam, and Sriraam Natarajan. "Deep Tractable Probabilistic Models". In CODS-COMAD 2024: 7th Joint International Conference on Data Science & Management of Data (11th ACM IKDD CODS and 29th COMAD). New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3632410.3633295.
Liu, Xixi, Che-Tsung Lin, and Christopher Zach. "Energy-based Models for Deep Probabilistic Regression". In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9955636.
Asgariandehkordi, Hojat, Sobhan Goudarzi, Adrian Basarab, and Hassan Rivaz. "Deep Ultrasound Denoising Using Diffusion Probabilistic Models". In 2023 IEEE International Ultrasonics Symposium (IUS). IEEE, 2023. http://dx.doi.org/10.1109/ius51837.2023.10306544.
Villanueva Llerena, Julissa. "Predictive Uncertainty Estimation for Tractable Deep Probabilistic Models". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/745.
Cotterell, Ryan, and Jason Eisner. "Probabilistic Typology: Deep Generative Models of Vowel Inventories". In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/p17-1109.
Zhang, Yang, You-Wu Wang, and Yi-Qing Ni. "Hybrid Probabilistic Deep Learning for Damage Identification". In Structural Health Monitoring 2023. Destech Publications, Inc., 2023. http://dx.doi.org/10.12783/shm2023/37014.
Li, Xiucheng, Gao Cong, and Yun Cheng. "Spatial Transition Learning on Road Networks with Deep Probabilistic Models". In 2020 IEEE 36th International Conference on Data Engineering (ICDE). IEEE, 2020. http://dx.doi.org/10.1109/icde48307.2020.00037.
Zhu, Jun. "Probabilistic Machine Learning: Models, Algorithms and a Programming Library". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/823.
Saleem, Rabia, Bo Yuan, Fatih Kurugollu, and Ashiq Anjum. "Explaining probabilistic Artificial Intelligence (AI) models by discretizing Deep Neural Networks". In 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC). IEEE, 2020. http://dx.doi.org/10.1109/ucc48980.2020.00070.
Bejarano, Gissella. "PhD Forum: Deep Learning and Probabilistic Models Applied to Sequential Data". In 2018 IEEE International Conference on Smart Computing (SMARTCOMP). IEEE, 2018. http://dx.doi.org/10.1109/smartcomp.2018.00066.