Selection of scientific literature on the topic "Probabilistic deep models"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a type of source:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Probabilistic deep models".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Probabilistic deep models"

1

Masegosa, Andrés R., Rafael Cabañas, Helge Langseth, Thomas D. Nielsen und Antonio Salmerón. „Probabilistic Models with Deep Neural Networks“. Entropy 23, Nr. 1 (18.01.2021): 117. http://dx.doi.org/10.3390/e23010117.

Abstract:
Recent advances in statistical inference have significantly expanded the toolbox of probabilistic modeling. Historically, probabilistic modeling has been constrained to very restricted model classes, where exact or approximate probabilistic inference is feasible. However, developments in variational inference, a general form of approximate probabilistic inference that originated in statistical physics, have enabled probabilistic modeling to overcome these limitations: (i) Approximate probabilistic inference is now possible over a broad class of probabilistic models containing a large number of parameters, and (ii) scalable inference methods based on stochastic gradient descent and distributed computing engines allow probabilistic modeling to be applied to massive data sets. One important practical consequence of these advances is the possibility to include deep neural networks within probabilistic models, thereby capturing complex non-linear stochastic relationships between the random variables. These advances, in conjunction with the release of novel probabilistic modeling toolboxes, have greatly expanded the scope of applications of probabilistic models, and allowed the models to take advantage of the recent strides made by the deep learning community. In this paper, we provide an overview of the main concepts, methods, and tools needed to use deep neural networks within a probabilistic modeling framework.
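As a concrete, minimal illustration of the idea surveyed above (a deep neural network used to parameterize a probability distribution and fitted with stochastic gradients), the following sketch assumes PyTorch; the architecture, layer sizes, and synthetic data are illustrative assumptions, and the code is not taken from the cited paper.

# A neural network parameterizes a Gaussian observation model p(y | x) = N(mu(x), sigma(x)^2)
# and is trained by stochastic-gradient maximum likelihood. Illustrative sketch only.
import torch
import torch.nn as nn

class DeepGaussianRegressor(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, 1)       # predicted mean
        self.log_var_head = nn.Linear(hidden, 1)  # predicted log-variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mu_head(h), self.log_var_head(h)

def gaussian_nll(mu, log_var, y):
    # Negative log-likelihood of y under N(mu, exp(log_var)), up to an additive constant.
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()

x = torch.randn(256, 3)                               # synthetic inputs
y = x.sum(dim=1, keepdim=True) + 0.1 * torch.randn(256, 1)
model = DeepGaussianRegressor(in_dim=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(200):                               # stochastic-gradient training loop
    mu, log_var = model(x)
    loss = gaussian_nll(mu, log_var, y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()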
2

Villanueva Llerena, Julissa, und Denis Deratani Maua. „Efficient Predictive Uncertainty Estimators for Deep Probabilistic Models“. Proceedings of the AAAI Conference on Artificial Intelligence 34, Nr. 10 (03.04.2020): 13740–41. http://dx.doi.org/10.1609/aaai.v34i10.7142.

Abstract:
Deep Probabilistic Models (DPM) based on arithmetic circuit representations, such as Sum-Product Networks (SPN) and Probabilistic Sentential Decision Diagrams (PSDD), have shown competitive performance in several machine learning tasks with interesting properties (Poon and Domingos 2011; Kisa et al. 2014). Due to the high number of parameters and scarce data, DPMs can produce unreliable and overconfident inference. This research aims at increasing the robustness of predictive inference with DPMs by obtaining new estimators of the predictive uncertainty. The problem is not new, and the literature on deep models contains many solutions; however, the probabilistic nature of DPMs offers new possibilities for achieving accurate estimates at low computational cost, but also poses new challenges, as the range of different types of predictions is much larger than with traditional deep models. To cope with such issues, we plan on investigating two different approaches. The first approach is to perform a global sensitivity analysis on the parameters, measuring the variability of the output under perturbations of the model weights. The second approach is to capture the variability of the prediction with respect to changes in the model architecture. Our approaches will be evaluated on challenging tasks such as image completion and multi-label classification.
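As a rough, hedged illustration of the first approach mentioned above (sensitivity of the output to weight perturbations), the sketch below assumes PyTorch and an arbitrary trained model; the noise level and sample count are made-up parameters, and the code is not from the cited work.

import copy
import torch

def perturbation_uncertainty(model, x, num_samples=20, noise_std=0.01):
    # Predictive variability under small random perturbations of the model weights.
    outputs = []
    with torch.no_grad():
        for _ in range(num_samples):
            noisy = copy.deepcopy(model)
            for p in noisy.parameters():
                p.add_(noise_std * torch.randn_like(p))
            outputs.append(noisy(x))
    stacked = torch.stack(outputs)                    # (num_samples, batch, ...)
    return stacked.mean(dim=0), stacked.std(dim=0)    # predictive mean and spread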
3

Karami, Mahdi, und Dale Schuurmans. „Deep Probabilistic Canonical Correlation Analysis“. Proceedings of the AAAI Conference on Artificial Intelligence 35, Nr. 9 (18.05.2021): 8055–63. http://dx.doi.org/10.1609/aaai.v35i9.16982.

Abstract:
We propose a deep generative framework for multi-view learning based on a probabilistic interpretation of canonical correlation analysis (CCA). The model combines a linear multi-view layer in the latent space with deep generative networks as observation models, to decompose the variability in multiple views into a shared latent representation that describes the common underlying sources of variation and a set of view-specific components. To approximate the posterior distribution of the latent multi-view layer, an efficient variational inference procedure is developed based on the solution of probabilistic CCA. The model is then generalized to an arbitrary number of views. An empirical analysis confirms that the proposed deep multi-view model can discover subtle relationships between multiple views and recover rich representations.
4

Lu, Ming, Zhihao Duan, Fengqing Zhu und Zhan Ma. „Deep Hierarchical Video Compression“. Proceedings of the AAAI Conference on Artificial Intelligence 38, Nr. 8 (24.03.2024): 8859–67. http://dx.doi.org/10.1609/aaai.v38i8.28733.

Abstract:
Recently, probabilistic predictive coding that directly models the conditional distribution of latent features across successive frames for temporal redundancy removal has yielded promising results. Existing methods using a single-scale Variational AutoEncoder (VAE) must devise complex networks for conditional probability estimation in latent space, neglecting the multiscale characteristics of video frames. Instead, this work proposes hierarchical probabilistic predictive coding, for which hierarchical VAEs are carefully designed to characterize multiscale latent features as a family of flexible priors and posteriors to predict the probabilities of future frames. Under such a hierarchical structure, lightweight networks are sufficient for prediction. The proposed method outperforms representative learned video compression models on common testing videos and demonstrates computational friendliness with a much smaller memory footprint and faster encoding/decoding. Extensive experiments on adaptation to temporal patterns also indicate the better generalization of our hierarchical predictive mechanism. Furthermore, our solution is the first to enable progressive decoding, which is favored in networked video applications with packet loss.
5

Maroñas, Juan, Roberto Paredes und Daniel Ramos. „Calibration of deep probabilistic models with decoupled bayesian neural networks“. Neurocomputing 407 (September 2020): 194–205. http://dx.doi.org/10.1016/j.neucom.2020.04.103.

6

Li, Zhenjun, Xi Liu, Dawei Kou, Yi Hu, Qingrui Zhang und Qingxi Yuan. „Probabilistic Models for the Shear Strength of RC Deep Beams“. Applied Sciences 13, Nr. 8 (12.04.2023): 4853. http://dx.doi.org/10.3390/app13084853.

Abstract:
A new shear strength determination for reinforced concrete (RC) deep beams was proposed using a statistical approach. The Bayesian–MCMC (Markov Chain Monte Carlo) method was introduced to establish a new shear prediction model and to improve seven existing deterministic models with a database of 645 experimental data points. The bias-correction terms of the deterministic models were described by key explanatory terms identified by a systematic removal process. To handle the multiple parameters, Gibbs sampling was used to solve the high-dimensional integration problem and to determine optimal and reliable model parameters with 50,000 iterations for the probabilistic models. The model continuity and the uncertainty of key parameters were quantified by the partial factor, which was investigated by comparing test and model results. The partial factor for the proposed model was 1.25. The proposed model showed improved accuracy and continuity, with the mean and coefficient of variation (CoV) of the ratio of experimental to predicted results equal to 1.0357 and 0.2312, respectively.
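To make the Bayesian–MCMC idea concrete, here is a minimal random-walk Metropolis sketch in NumPy; it is a stand-in under simplifying assumptions (a single lognormal bias factor, synthetic ratios, a flat prior) and is not the Gibbs sampler or bias-correction model of the cited study.

import numpy as np

rng = np.random.default_rng(0)
ratios = rng.lognormal(mean=0.05, sigma=0.2, size=100)    # hypothetical test-to-prediction ratios
log_r = np.log(ratios)

def log_posterior(theta, sigma=0.2):
    # Flat prior on theta; lognormal likelihood for the observed ratios.
    return -0.5 * np.sum((log_r - theta) ** 2) / sigma ** 2

samples, theta = [], 0.0
for _ in range(50_000):
    proposal = theta + 0.05 * rng.standard_normal()
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal                                   # accept the proposal
    samples.append(theta)

posterior = np.array(samples[10_000:])                     # discard burn-in
print(posterior.mean(), posterior.std())                   # posterior of the model bias factor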
7

Serpell, Cristián, Ignacio A. Araya, Carlos Valle und Héctor Allende. „Addressing model uncertainty in probabilistic forecasting using Monte Carlo dropout“. Intelligent Data Analysis 24 (04.12.2020): 185–205. http://dx.doi.org/10.3233/ida-200015.

Abstract:
In recent years, deep learning models have been developed to address probabilistic forecasting tasks, assuming an implicit stochastic process that relates past observed values to uncertain future values. These models are capable of capturing the inherent uncertainty of the underlying process, but they ignore the model uncertainty that comes from not having infinite data. This work proposes addressing the model uncertainty problem using Monte Carlo dropout, a variational approach that assigns distributions to the weights of a neural network instead of simply using fixed values. This makes it easy to adapt common deep learning models currently in use so that they produce better probabilistic forecasting estimates in terms of how they account for uncertainty. The proposal is validated for prediction-interval estimation on seven energy time series, using a popular probabilistic model, Mean Variance Estimation (MVE), as the deep model adapted with the technique.
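The mechanism is straightforward to reproduce: keep dropout active at prediction time and aggregate several stochastic forward passes. The sketch below assumes PyTorch and a generic network with dropout layers; it is an illustration, not the MVE model used in the cited work.

import torch
import torch.nn as nn

def mc_dropout_predict(model, x, num_samples=100):
    model.train()                     # keep dropout layers stochastic at prediction time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(num_samples)])
    model.eval()
    return preds.mean(dim=0), preds.std(dim=0)   # predictive mean and uncertainty

# Example: a small regressor with dropout between layers (illustrative architecture).
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))
mean, std = mc_dropout_predict(model, torch.randn(32, 8))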
8

Boursin, Nicolas, Carl Remlinger und Joseph Mikael. „Deep Generators on Commodity Markets Application to Deep Hedging“. Risks 11, Nr. 1 (23.12.2022): 7. http://dx.doi.org/10.3390/risks11010007.

Abstract:
Four deep generative methods for time series are studied on commodity markets and compared with classical probabilistic models. The lack of data in the case of deep hedgers is a common flaw, which deep generative methods seek to address. In the specific case of commodities, it turns out that these generators can also be used to refine the price models by tackling the high-dimensional challenges. In this work, the synthetic time series of commodity prices produced by such generators are studied and then used to train deep hedgers on various options. A fully data-driven approach to commodity risk management is thus proposed, from synthetic price generation to learning risk hedging policies.
9

Zuidberg Dos Martires, Pedro. „Probabilistic Neural Circuits“. Proceedings of the AAAI Conference on Artificial Intelligence 38, Nr. 15 (24.03.2024): 17280–89. http://dx.doi.org/10.1609/aaai.v38i15.29675.

Abstract:
Probabilistic circuits (PCs) have gained prominence in recent years as a versatile framework for discussing probabilistic models that support tractable queries and are yet expressive enough to model complex probability distributions. Nevertheless, tractability comes at a cost: PCs are less expressive than neural networks. In this paper we introduce probabilistic neural circuits (PNCs), which strike a balance between PCs and neural nets in terms of tractability and expressive power. Theoretically, we show that PNCs can be interpreted as deep mixtures of Bayesian networks. Experimentally, we demonstrate that PNCs constitute powerful function approximators.
10

Ravuri, Suman, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons et al. „Skilful precipitation nowcasting using deep generative models of radar“. Nature 597, Nr. 7878 (29.09.2021): 672–77. http://dx.doi.org/10.1038/s41586-021-03854-z.

Abstract:
Precipitation nowcasting, the high-resolution forecasting of precipitation up to two hours ahead, supports the real-world socioeconomic needs of many sectors reliant on weather-dependent decision-making. State-of-the-art operational nowcasting methods typically advect precipitation fields with radar-based wind estimates, and struggle to capture important non-linear events such as convective initiations. Recently introduced deep learning methods use radar to directly predict future rain rates, free of physical constraints. While they accurately predict low-intensity rainfall, their operational utility is limited because their lack of constraints produces blurry nowcasts at longer lead times, yielding poor performance on rarer medium-to-heavy rain events. Here we present a deep generative model for the probabilistic nowcasting of precipitation from radar that addresses these challenges. Using statistical, economic and cognitive measures, we show that our method provides improved forecast quality, forecast consistency and forecast value. Our model produces realistic and spatiotemporally consistent predictions over regions up to 1,536 km × 1,280 km and with lead times from 5–90 min ahead. Using a systematic evaluation by more than 50 expert meteorologists, we show that our generative model ranked first for its accuracy and usefulness in 89% of cases against two competitive methods. When verified quantitatively, these nowcasts are skillful without resorting to blurring. We show that generative nowcasting can provide probabilistic predictions that improve forecast value and support operational utility, and at resolutions and lead times where alternative methods struggle.

Dissertations on the topic "Probabilistic deep models"

1

Misino, Eleonora. „Deep Generative Models with Probabilistic Logic Priors“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24058/.

Abstract:
Many different extensions of the VAE framework have been introduced in the past. However, the vast majority of them focused on pure sub-symbolic approaches that are not sufficient for solving generative tasks that require a form of reasoning. In this thesis, we propose the probabilistic logic VAE (PLVAE), a neuro-symbolic deep generative model that combines the representational power of VAEs with the reasoning ability of probabilistic logic programming. The strength of PLVAE resides in its probabilistic logic prior, which provides an interpretable structure to the latent space that can be easily changed in order to apply the model to different scenarios. We provide empirical results of our approach by training PLVAE on a base task and then using the same model to generalize to novel tasks that involve reasoning with the same set of symbols.
2

Zhai, Menghua. „Deep Probabilistic Models for Camera Geo-Calibration“. UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/74.

Abstract:
The ultimate goal of image understanding is to transfer visual images into numerical or symbolic descriptions of the scene that are helpful for decision making. Knowing when, where, and in which direction a picture was taken, the task of geo-calibration makes it possible to use imagery to understand the world and how it changes in time. Current models for geo-calibration are mostly deterministic, which in many cases fails to model the inherent uncertainties when the image content is ambiguous. Furthermore, without a proper modeling of the uncertainty, subsequent processing can yield overly confident predictions. To address these limitations, we propose a probabilistic model for camera geo-calibration using deep neural networks. While our primary contribution is geo-calibration, we also show that learning to geo-calibrate a camera allows us to implicitly learn to understand the content of the scene.
3

Wu, Di. „Human action recognition using deep probabilistic graphical models“. Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/6603/.

Abstract:
Building intelligent systems that are capable of representing or extracting high-level representations from high-dimensional sensory data lies at the core of solving many A.I. related tasks. Human action recognition is an important topic in computer vision that lies in high-dimensional space. Its applications include robotics, video surveillance, human-computer interaction, user interface design, and multi-media video retrieval amongst others. A number of approaches have been proposed to extract representative features from high-dimensional temporal data, most commonly hard-wired geometric or bio-inspired shape context features. This thesis first demonstrates some ad hoc hand-crafted rules for effectively encoding motion features, and later elicits a more generic approach for incorporating structured feature learning and reasoning, i.e. deep probabilistic graphical models. The hierarchical dynamic framework first extracts high-level features and then uses the learned representation for estimating emission probability to infer action sequences. We show that better action recognition can be achieved by replacing Gaussian mixture models with Deep Neural Networks that contain many layers of features to predict probability distributions over states of Markov Models. The framework can be easily extended to include an ergodic state to segment and recognise actions simultaneously. The first part of the thesis focuses on analysis and applications of hand-crafted features for human action representation and classification. We show that the "hard-coded" concept of correlogram can incorporate correlations between time-domain sequences, and we further investigate multi-modal inputs, e.g. depth sensor input and its unique traits for action recognition. The second part of this thesis focuses on marrying probabilistic graphical models with Deep Neural Networks (both Deep Belief Networks and Deep 3D Convolutional Neural Networks) for structured sequence prediction. The proposed Deep Dynamic Neural Network exhibits its general framework for structured 2D data representation and classification. This inspires us to further investigate applying various graphical models to time-variant video sequences.
4

Rossi, Simone. „Improving Scalability and Inference in Probabilistic Deep Models“. Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS042.

Abstract:
Throughout the last decade, deep learning has reached a sufficient level of maturity to become the preferred choice to solve machine learning-related problems or to aid decision-making processes. At the same time, deep learning is generally not equipped with the ability to accurately quantify the uncertainty of its predictions, thus making these models less suitable for risk-critical applications. A possible solution to address this problem is to employ a Bayesian formulation; however, while this offers an elegant treatment, it is analytically intractable and it requires approximations. Despite the huge advancements in the last few years, there is still a long way to go to make these approaches widely applicable. In this thesis, we address some of the challenges of modern Bayesian deep learning by proposing and studying solutions to improve the scalability and inference of these models. The first part of the thesis is dedicated to deep models where inference is carried out using variational inference (VI). Specifically, we study the role of the initialization of the variational parameters, and we show how careful initialization strategies can make VI deliver good performance even in large-scale models. In this part of the thesis we also study the over-regularization effect of the variational objective on over-parametrized models. To tackle this problem, we propose a novel parameterization based on the Walsh-Hadamard transform; not only does this solve the over-regularization effect of VI, but it also allows us to model non-factorized posteriors while keeping time and space complexity under control. The second part of the thesis is dedicated to a study of the role of priors. While being an essential building block of Bayes' rule, picking good priors for deep learning models is generally hard. For this reason, we propose two different strategies based (i) on the functional interpretation of neural networks and (ii) on a scalable procedure to perform model selection on the prior hyper-parameters, akin to maximization of the marginal likelihood. To conclude this part, we analyze a different kind of Bayesian model (the Gaussian process) and we study the effect of placing a prior on all the hyper-parameters of these models, including the additional variables required by the inducing-point approximations. We also show how it is possible to infer free-form posteriors on these variables, which conventionally would otherwise have been point-estimated.
5

Hager, Paul Andrew. „Investigation of connection between deep learning and probabilistic graphical models“. Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119552.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
The field of machine learning (ML) has benefitted greatly from its relationship with the field of classical statistics. In support of that continued expansion, the following proposes an alternative perspective on the link between these fields. The link focuses on probabilistic graphical models in the context of reinforcement learning. Viewing certain algorithms as reinforcement learning gives one the ability to map ML concepts to statistics problems. Training a multi-layer nonlinear perceptron algorithm is equivalent to structure learning problems in probabilistic graphical models (PGMs). The technique of boosting weak rules into an ensemble is weighted sampling. Finally, regularizing neural networks using the dropout technique is conditioning on certain observations in PGMs.
6

Farouni, Tarek. „An Overview of Probabilistic Latent Variable Models with anApplication to the Deep Unsupervised Learning of ChromatinStates“. The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492189894812539.

7

Qian, Weizhu. „Discovering human mobility from mobile data : probabilistic models and learning algorithms“. Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCA025.

Abstract:
Smartphone usage data can be used to study human indoor and outdoor mobility. In our work, we investigate both aspects by proposing machine learning-based algorithms adapted to the different information sources that can be collected. In terms of outdoor mobility, we use the collected GPS coordinate data to discover the daily mobility patterns of the users. To this end, we propose an automatic clustering algorithm using the Dirichlet process Gaussian mixture model (DPGMM) to cluster the daily GPS trajectories. This clustering method is based on estimating the probability densities of the trajectories, which alleviates the problems caused by data noise. By contrast, we utilize the collected WiFi fingerprint data to study indoor human mobility. In order to predict the indoor user location at the next time points, we devise a hybrid deep learning model, called the convolutional mixture density recurrent neural network (CMDRNN), which combines the advantages of multiple deep neural networks. Moreover, for accurate indoor location recognition, we presume that there exists a latent distribution governing the input and the output at the same time. Based on this assumption, we develop a variational autoencoder (VAE)-based semi-supervised learning model. In the unsupervised learning procedure, we employ a VAE model to learn a latent distribution of the input, the WiFi fingerprint data. In the supervised learning procedure, we use a neural network to compute the target, the user coordinates. Furthermore, based on the same assumption used in the VAE-based semi-supervised learning model, we leverage information bottleneck theory to devise a variational information bottleneck (VIB)-based model. This is an end-to-end deep learning model that is easier to train and has better performance. Finally, we validate the proposed methods on several public real-world datasets, with results that verify the effectiveness of our methods compared to other existing methods.
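As a concrete illustration of the Dirichlet-process clustering step described above, the sketch below uses scikit-learn's BayesianGaussianMixture on toy 2D points standing in for trajectory features; the data and settings are assumptions, and this is not the author's implementation.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2))
                    for c in ([0.0, 0.0], [3.0, 3.0], [0.0, 4.0])])  # three synthetic "places"

dpgmm = BayesianGaussianMixture(
    n_components=10,                                     # upper bound; unused components shrink
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(points)

labels = dpgmm.predict(points)
print("effective number of clusters:", np.unique(labels).size)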
8

SYED, MUHAMMAD FARRUKH SHAHID. „Data-Driven Approach based on Deep Learning and Probabilistic Models for PHY-Layer Security in AI-enabled Cognitive Radio IoT“. Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1048543.

Abstract:
Cognitive Radio Internet of Things (CR-IoT) has revolutionized almost every field of life and reshaped the technological world. Several tiny devices are seamlessly connected in a CR-IoT network to perform various tasks in many applications. Nevertheless, CR-IoT suffers from malicious attacks that disrupt communication and degrade network performance. Therefore, it has recently been envisaged to introduce higher-level Artificial Intelligence (AI) by incorporating Self-Awareness (SA) capabilities into CR-IoT objects, enabling CR-IoT networks to establish secure transmission against malicious attacks autonomously. In this context, sub-band information from the Orthogonal Frequency Division Multiplexing (OFDM) modulated transmission in the spectrum is extracted at the radio device receiver terminal, and a generalized state vector (GS) is formed containing low-dimensional in-phase and quadrature components. Accordingly, a probabilistic method based on learning a switching Dynamic Bayesian Network (DBN) from OFDM transmission with no abnormalities is proposed to statistically model signal behaviors inside the CR-IoT spectrum. A Bayesian filter, the Markov Jump Particle Filter (MJPF), is implemented to perform state estimation and capture malicious attacks. Subsequently, a GS containing a higher number of subcarriers is investigated. In this connection, a Variational Autoencoder (VAE) is used as a deep learning technique to extract features from high-dimensional radio signals into a low-dimensional latent space z, and a DBN is learned based on a GS containing latent-space data. Afterwards, to perform state estimation and capture abnormalities in the spectrum, an Adapted Markov Jump Particle Filter (A-MJPF) is deployed. The proposed method can capture anomalies that appear due either to jammer attacks on the transmission or to cognitive devices in the network experiencing transmission sources that have not been observed previously. The performance is assessed using receiver operating characteristic (ROC) curves and the area under the curve (AUC) metric.
9

El-Shaer, Mennat Allah. „An Experimental Evaluation of Probabilistic Deep Networks for Real-time Traffic Scene Representation using Graphical Processing Units“. The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1546539166677894.

10

Hu, Xu. „Towards efficient learning of graphical models and neural networks with variational techniques“. Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC1037.

Abstract:
In this thesis, I will mainly focus on variational inference and probabilistic models. In particular, I will cover several projects I have been working on during my PhD about improving the efficiency of AI/ML systems with variational techniques. The thesis consists of two parts. In the first part, the computational efficiency of probabilistic graphical models is studied. In the second part, several problems of learning deep neural networks are investigated, which are related to either energy efficiency or sample efficiency.

Books on the topic "Probabilistic deep models"

1

Oaksford, Mike, und Nick Chater. Causal Models and Conditional Reasoning. Herausgegeben von Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.5.

Abstract:
There are deep intuitions that the meaning of conditional statements relate to probabilistic law-like dependencies. In this chapter it is argued that these intuitions can be captured by representing conditionals in causal Bayes nets (CBNs) and that this conjecture is theoretically productive. This proposal is borne out in a variety of results. First, causal considerations can provide a unified account of abstract and causal conditional reasoning. Second, a recent model (Fernbach & Erb, 2013) can be extended to the explicit causal conditional reasoning paradigm (Byrne, 1989), making some novel predictions on the way. Third, when embedded in the broader cognitive system involved in reasoning, causal model theory can provide a novel explanation for apparent violations of the Markov condition in causal conditional reasoning (Ali et al, 2011). Alternative explanations are also considered (see, Rehder, 2014a) with respect to this evidence. While further work is required, the chapter concludes that the conjecture that conditional reasoning is underpinned by representations and processes similar to CBNs is indeed a productive line of research.
2

Tutino, Stefania. Uncertainty in Post-Reformation Catholicism. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190694098.001.0001.

Abstract:
This book provides a historical account of the development and implications of early modern probabilism. First elaborated in the sixteenth century, probabilism represented a significant and controversial novelty in Catholic moral theology. Against a deep-seated tradition defending the strict application of moral rules, probabilist theologians maintained that in situations of uncertainty, the agent can legitimately follow any course of action supported by a probable opinion, no matter how disputable. By the second half of the seventeenth century, and thanks in part to Pascal’s influential antiprobabilist stances, probabilism had become inextricably linked to the Society of Jesus and to a lax and excessively forgiving moral system. To this day, most historians either ignore probabilism, or they associate it with moral duplicity and intellectual and cultural decadence. By contrast, this book argues that probabilism was instrumental for addressing the challenges created by a geographically and intellectually expanding world. Early modern probabilist theologians saw that these challenges provoked an exponential growth of uncertainties, doubts, and dilemmas of conscience, and they realized that traditional theology was not equipped to deal with them. Therefore, they used probabilism to integrate changes and novelties within the post-Reformation Catholic theological and intellectual system. Seen in this light, probabilism represented the result of their attempts to appreciate, come to terms with, and manage uncertainty. Uncertainty continues to play a central role even today. Thus, learning how early modern probabilists engaged with uncertainty might be useful for us as we try to cope with our own moral and epistemological doubts.
3

Trappenberg, Thomas P. Fundamentals of Machine Learning. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198828044.001.0001.

Abstract:
Machine learning is exploding, both in research and in industrial applications. This book aims to be a brief introduction to this area, given the importance of the topic in many disciplines, from the sciences to engineering, and even its broader impact on our society. The book tries to keep a balance between brevity of explanation, rigor of mathematical argument, and outlining principal ideas. At the same time, it tries to give a comprehensive overview of a variety of methods and of how they relate to one another within this area. This includes an introduction to Bayesian approaches to modeling as well as deep learning. Writing small programs to apply machine learning techniques is made easy today by the availability of high-level programming systems, and this book offers examples in Python with the machine learning libraries sklearn and Keras. The first four chapters concentrate largely on the practical side of applying machine learning techniques. The book then discusses more fundamental concepts and includes their formulation in a probabilistic context. This is followed by chapters on advanced models: recurrent neural networks and reinforcement learning. The book closes with a brief discussion of the impact of machine learning and AI on our society.
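In the spirit of the Python examples the book is said to provide, here is a minimal, hedged scikit-learn illustration (not taken from the book) of fitting a classifier and reading off class probabilities rather than hard labels.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data; the dataset and model choice are assumptions.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probabilities = clf.predict_proba(X_test)    # per-class probabilities, shape (n_samples, 2)
print(probabilities[:3], clf.score(X_test, y_test))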
4

Levinson, Stephen C. Speech Acts. Herausgegeben von Yan Huang. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199697960.013.22.

Abstract:
The essential insight of speech act theory was that when we use language, we perform actions—in a more modern parlance, core language use in interaction is a form of joint action. Over the last thirty years, speech acts have been relatively neglected in linguistic pragmatics, although important work has been done especially in conversation analysis. Here we review the core issues—the identifying characteristics, the degree of universality, the problem of multiple functions, and the puzzle of speech act recognition. Special attention is drawn to the role of conversation structure, probabilistic linguistic cues, and plan or sequence inference in speech act recognition, and to the centrality of deep recursive structures in sequences of speech acts in conversation.

Book chapters on the topic "Probabilistic deep models"

1

Sucar, Luis Enrique. „Deep Learning and Graphical Models“. In Probabilistic Graphical Models, 327–46. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61943-5_16.

2

Bahadir, Cagla Deniz, Benjamin Liechty, David J. Pisapia und Mert R. Sabuncu. „Characterizing the Features of Mitotic Figures Using a Conditional Diffusion Probabilistic Model“. In Deep Generative Models, 121–31. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53767-7_12.

3

Gustafsson, Fredrik K., Martin Danelljan, Goutam Bhat und Thomas B. Schön. „Energy-Based Models for Deep Probabilistic Regression“. In Computer Vision – ECCV 2020, 325–43. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58565-5_20.

4

Hung, Alex Ling Yu, Zhiqing Sun, Wanwen Chen und John Galeotti. „Hierarchical Probabilistic Ultrasound Image Inpainting via Variational Inference“. In Deep Generative Models, and Data Augmentation, Labelling, and Imperfections, 83–92. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88210-5_7.

5

Nkambule, Tshepo, und Ritesh Ajoodha. „Classification of Music by Genre Using Probabilistic Models and Deep Learning Models“. In Proceedings of Sixth International Congress on Information and Communication Technology, 185–93. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-2102-4_17.

6

Völcker, Claas, Alejandro Molina, Johannes Neumann, Dirk Westermann und Kristian Kersting. „DeepNotebooks: Deep Probabilistic Models Construct Python Notebooks for Reporting Datasets“. In Machine Learning and Knowledge Discovery in Databases, 28–43. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43823-4_3.

7

Dinh, Xuan Truong, und Hai Van Pham. „Social Network Analysis Based on Combining Probabilistic Models with Graph Deep Learning“. In Communication and Intelligent Systems, 975–86. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1089-9_76.

8

Linghu, Yuan, Xiangxue Li und Zhenlong Zhang. „Deep Learning vs. Traditional Probabilistic Models: Case Study on Short Inputs for Password Guessing“. In Algorithms and Architectures for Parallel Processing, 468–83. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38991-8_31.

9

Liu, Zheng, und Hao Wang. „Research on Process Diagnosis of Severe Accidents Based on Deep Learning and Probabilistic Safety Analysis“. In Springer Proceedings in Physics, 624–34. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1023-6_54.

Abstract:
Severe accident process diagnosis provides the data basis for severe accident prognosis and for evaluating the positive and negative effects of Severe Accident Management Guidelines (SAMGs); in particular, it allows operators in the main control room or personnel in the Technical Support Center (TSC) to quickly diagnose the Plant Damage State (PDS) from the historical data of a limited number of instruments during the operational transition from Emergency Operating Procedures (EOPs) to SAMGs. The diagnosis methodology is based on tens of thousands of severe accident simulations using the integrated analysis program MAAP. The simulation process is organized with reference to Level 1 Probabilistic Safety Analysis (L1 PSA) and EOPs. Following L1 PSA, the initiating event of an accident and the scenarios from the initiating event to core damage are represented in Event Trees (ETs), which include operator actions following EOPs. During simulation, the time uncertainty of operations in the scenarios is considered. Besides the large collection of simulation data, a deep learning algorithm, the Convolutional Neural Network (CNN), is used in this severe accident diagnosis methodology to diagnose the type of severe accident initiating event, the breach size, breach location, and occurrence time of the initiating LOCA event, and the times of operator actions following EOPs intended to bring the Nuclear Power Plant (NPP) back to a safe state. These algorithms train classification and regression models on the ET-based numerical simulations, such as a classification model for the sequence number and breach location, and a regression model for the breach size and occurrence time of the initiating MBLOCA event. The trained models then take advantage of historical data from instruments in the NPP to generate a diagnosis conclusion, which is automatically written into a MAAP input deck file. This input deck originates from the preceding traceback effort and provides a numerical-analysis basis for predicting the subsequent course of the severe accident, which is conducive to severe accident management. The results of this paper show the theoretical possibility that, with limited available instruments, this traceback and diagnosis method can automatically and quickly diagnose the PDS when operation transitions from EOPs to SAMGs and provide a numerical-analysis basis for severe accident process prognosis.
10

Stojanovski, David, Uxio Hermida, Pablo Lamata, Arian Beqiri und Alberto Gomez. „Echo from Noise: Synthetic Ultrasound Image Generation Using Diffusion Models for Real Image Segmentation“. In Simplifying Medical Ultrasound, 34–43. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44521-7_4.

Abstract:
We propose a novel pipeline for the generation of synthetic ultrasound images via Denoising Diffusion Probabilistic Models (DDPMs) guided by cardiac semantic label maps. We show that these synthetic images can serve as a viable substitute for real data in the training of deep-learning models for ultrasound image analysis tasks such as cardiac segmentation. To demonstrate the effectiveness of this approach, we generated synthetic 2D echocardiograms and trained a neural network for segmenting the left ventricle and left atrium. The performance of the network trained on exclusively synthetic images was evaluated on an unseen dataset of real images and yielded mean Dice scores of 88.6 ± 4.91%, 91.9 ± 4.22%, and 85.2 ± 4.83% for left ventricular endocardium, epicardium, and left atrial segmentation, respectively. This represents a relative increase of 9.2%, 3.3%, and 13.9% in Dice scores compared to the previous state of the art. The proposed pipeline has potential for application to a wide range of other tasks across various medical imaging modalities.
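For orientation, the training objective that DDPM-based pipelines such as this one build on can be sketched in a few lines. The code below assumes PyTorch; the linear beta schedule, 4D image shapes, and the eps_model noise-prediction network are illustrative assumptions, not the chapter's pipeline.

import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)             # linear noise schedule (assumption)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(eps_model, x0):
    t = torch.randint(0, T, (x0.shape[0],))                   # random timestep per image
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps      # forward noising q(x_t | x_0)
    return ((eps_model(x_t, t) - eps) ** 2).mean()            # predict the added noise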

Conference papers on the topic "Probabilistic deep models"

1

Sidheekh, Sahil, Saurabh Mathur, Athresh Karanam und Sriraam Natarajan. „Deep Tractable Probabilistic Models“. In CODS-COMAD 2024: 7th Joint International Conference on Data Science & Management of Data (11th ACM IKDD CODS and 29th COMAD). New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3632410.3633295.

2

Liu, Xixi, Che-Tsung Lin und Christopher Zach. „Energy-based Models for Deep Probabilistic Regression“. In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9955636.

3

Asgariandehkordi, Hojat, Sobhan Goudarzi, Adrian Basarab und Hassan Rivaz. „Deep Ultrasound Denoising Using Diffusion Probabilistic Models“. In 2023 IEEE International Ultrasonics Symposium (IUS). IEEE, 2023. http://dx.doi.org/10.1109/ius51837.2023.10306544.

4

Villanueva Llerena, Julissa. „Predictive Uncertainty Estimation for Tractable Deep Probabilistic Models“. In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/745.

Abstract:
Tractable Deep Probabilistic Models (TPMs) are generative models based on arithmetic circuits that allow for exact marginal inference in linear time. These models have obtained promising results in several machine learning tasks. Like many other models, TPMs can produce over-confident incorrect inferences, especially on regions with small statistical support. In this work, we will develop efficient estimators of the predictive uncertainty that are robust to data scarcity and outliers. We investigate two approaches. The first approach measures the variability of the output to perturbations of the model weights. The second approach captures the variability of the prediction to changes in the model architecture. We will evaluate the approaches on challenging tasks such as image completion and multilabel classification.
5

Cotterell, Ryan, und Jason Eisner. „Probabilistic Typology: Deep Generative Models of Vowel Inventories“. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/p17-1109.

6

ZHANG, YANG, YOU-WU WANG und YI-QING NI. „HYBRID PROBABILISTIC DEEP LEARNING FOR DAMAGE IDENTIFICATION“. In Structural Health Monitoring 2023. Destech Publications, Inc., 2023. http://dx.doi.org/10.12783/shm2023/37014.

Abstract:
In structural health monitoring, various types of sensors collect large amounts of data for structural defect detection. These data provide critical support for applying machine learning to structural damage identification. However, machine learning relies heavily on training data, whose quality and distribution affect the effectiveness of detection models in real-world damage identification. In addition, machine learning models contain a large number of parameters that are highly uncertain, which means their outputs are not always reliable; such deterministic deep networks often make overconfident decisions on some data. The ability of deep learning to provide safe and reliable decisions is very important in engineering applications. In order to ensure the decision safety of machine learning models, this paper proposes a hybrid probabilistic deep network for structural damage identification. The proposed method converts deterministic weights into Gaussian distributions, which in turn quantifies the uncertainty in machine learning; variational inference is used for uncertainty modeling of the probabilistic deep network. These uncertainty metrics can be used to determine whether the output of the machine learning model is reliable. Nevertheless, the introduction of uncertainty weakens the learning ability of deep networks, and the number of parameters in a probabilistic layer is twice that of a deterministic layer for the same architecture, so probabilistic deep learning is more difficult to train than deterministic deep learning. To address these issues, deep learning with hybrid probabilistic and non-probabilistic layers is investigated. This paper analyzes and discusses the effect of different numbers of probabilistic layers on the effectiveness of structural damage identification. Finally, a series of experimental results shows that the proposed method is able to accurately identify structural damage while quantifying the decision uncertainty.
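To make the conversion of deterministic weights into Gaussian distributions concrete, here is a minimal mean-field Bayesian linear layer trained with the reparameterization trick. It assumes PyTorch, uses an illustrative standard-normal prior, and is a sketch rather than the hybrid network proposed in the paper.

import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_log_std = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Reparameterization trick: sample weights as mu + sigma * epsilon.
        w = self.w_mu + self.w_log_std.exp() * torch.randn_like(self.w_mu)
        return nn.functional.linear(x, w, self.bias)

    def kl_to_standard_normal(self):
        # KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights; added to the training loss.
        var = (2.0 * self.w_log_std).exp()
        return 0.5 * (var + self.w_mu ** 2 - 1.0 - 2.0 * self.w_log_std).sum()

# A hybrid model: deterministic feature extractor followed by one probabilistic layer.
features = nn.Sequential(nn.Linear(16, 64), nn.ReLU())
head = BayesianLinear(64, 1)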
7

Li, Xiucheng, Gao Cong und Yun Cheng. „Spatial Transition Learning on Road Networks with Deep Probabilistic Models“. In 2020 IEEE 36th International Conference on Data Engineering (ICDE). IEEE, 2020. http://dx.doi.org/10.1109/icde48307.2020.00037.

8

Zhu, Jun. „Probabilistic Machine Learning: Models, Algorithms and a Programming Library“. In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/823.

Abstract:
Probabilistic machine learning provides a suite of powerful tools for modeling uncertainty, performing probabilistic inference, and making predictions or decisions in uncertain environments. In this paper, we present an overview of our recent work on probabilistic machine learning, including the theory of regularized Bayesian inference, Bayesian deep learning, scalable inference algorithms, a probabilistic programming library named ZhuSuan, and applications in representation learning as well as learning from crowds.
9

Saleem, Rabia, Bo Yuan, Fatih Kurugollu und Ashiq Anjum. „Explaining probabilistic Artificial Intelligence (AI) models by discretizing Deep Neural Networks“. In 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC). IEEE, 2020. http://dx.doi.org/10.1109/ucc48980.2020.00070.

10

Bejarano, Gissella. „PhD Forum: Deep Learning and Probabilistic Models Applied to Sequential Data“. In 2018 IEEE International Conference on Smart Computing (SMARTCOMP). IEEE, 2018. http://dx.doi.org/10.1109/smartcomp.2018.00066.
