
Journal articles on the topic "Deep Discriminative Probabilistic Models"


Consult the 50 best scholarly journal articles on the topic "Deep Discriminative Probabilistic Models".


1

Kamran, Fahad, and Jenna Wiens. "Estimating Calibrated Individualized Survival Curves with Deep Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (18.05.2021): 240–48. http://dx.doi.org/10.1609/aaai.v35i1.16098.

Abstract:
In survival analysis, deep learning approaches have been proposed for estimating an individual's probability of survival over some time horizon. Such approaches can capture complex non-linear relationships, without relying on restrictive assumptions regarding the relationship between an individual's characteristics and their underlying survival process. To date, however, these methods have focused primarily on optimizing discriminative performance and have ignored model calibration. Well-calibrated survival curves present realistic and meaningful probabilistic estimates of the true underlying survival process for an individual. However, due to the lack of ground-truth regarding the underlying stochastic process of survival for an individual, optimizing and measuring calibration in survival analysis is an inherently difficult task. In this work, we i) highlight the shortcomings of existing approaches in terms of calibration and ii) propose a new training scheme for optimizing deep survival analysis models that maximizes discriminative performance, subject to good calibration. Compared to state-of-the-art approaches across two publicly available datasets, our proposed training scheme leads to significant improvements in calibration, while maintaining good discriminative performance.
2

Al Moubayed, Noura, Stephen McGough, and Bashar Awwad Shiekh Hasan. "Beyond the topics: how deep learning can improve the discriminability of probabilistic topic modelling". PeerJ Computer Science 6 (27.01.2020): e252. http://dx.doi.org/10.7717/peerj-cs.252.

Abstract:
The article presents a discriminative approach to complement the unsupervised probabilistic nature of topic modelling. The framework transforms the probabilities of the topics per document into class-dependent deep learning models that extract highly discriminatory features suitable for classification. The framework is then used for sentiment analysis with minimum feature engineering. The approach transforms the sentiment analysis problem from the word/document domain to the topics domain making it more robust to noise and incorporating complex contextual information that are not represented otherwise. A stacked denoising autoencoder (SDA) is then used to model the complex relationship among the topics per sentiment with minimum assumptions. To achieve this, a distinct topic model and SDA per sentiment polarity is built with an additional decision layer for classification. The framework is tested on a comprehensive collection of benchmark datasets that vary in sample size, class bias and classification task. A significant improvement to the state of the art is achieved without the need for a sentiment lexica or over-engineered features. A further analysis is carried out to explain the observed improvement in accuracy.
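The pipeline the abstract describes, per-document topic probabilities feeding a discriminative classifier, can be sketched with off-the-shelf components. The sketch below is a simplified stand-in, not the paper's method: a single LDA model and a logistic-regression classifier replace the per-polarity topic models and stacked denoising autoencoders, and the toy corpus is invented for illustration.

```python
# Sketch: topic proportions as discriminative features (simplified stand-in for the
# paper's per-sentiment topic models + stacked denoising autoencoders).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["great plot and acting", "terrible pacing, boring plot",
        "loved the soundtrack", "waste of time, awful"]   # toy corpus (invented)
labels = [1, 0, 1, 0]                                     # 1 = positive, 0 = negative

pipeline = make_pipeline(
    CountVectorizer(),                                           # word counts
    LatentDirichletAllocation(n_components=2, random_state=0),   # per-document topic probabilities
    LogisticRegression(),                                        # discriminative layer on the topic space
)
pipeline.fit(docs, labels)
print(pipeline.predict(["boring and awful plot"]))
```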
3

Bhattacharya, Debswapna. "refineD: improved protein structure refinement using machine learning based restrained relaxation". Bioinformatics 35, no. 18 (13.02.2019): 3320–28. http://dx.doi.org/10.1093/bioinformatics/btz101.

Abstract:
Motivation: Protein structure refinement aims to bring moderately accurate template-based protein models closer to the native state through conformational sampling. However, guiding the sampling towards the native state by effectively using restraints remains a major issue in structure refinement. Results: Here, we develop a machine learning based restrained relaxation protocol that uses deep discriminative learning based binary classifiers to predict multi-resolution probabilistic restraints from the starting structure and subsequently converts these restraints to be integrated into the Rosetta all-atom energy function as additional scoring terms during structure refinement. We use four restraint resolutions as adopted in GDT-HA (0.5, 1, 2 and 4 Å), centered on the Cα atom of each residue, that are predicted by an ensemble of four deep discriminative classifiers trained using combinations of sequence and structure-derived features as well as several energy terms from the Rosetta centroid scoring function. The proposed method, refineD, has been found to produce consistent and substantial structural refinement through the use of cumulative and non-cumulative restraints on 150 benchmarking targets. refineD outperforms the unrestrained relaxation strategy or relaxation that is restrained to starting structures using the FastRelax application of Rosetta or the atomic-level energy minimization based ModRefiner method, as well as the molecular dynamics (MD) simulation based FG-MD protocol. Furthermore, by adjusting restraint resolutions, the method addresses the tradeoff that exists between degree and consistency of refinement. These results demonstrate a promising new avenue for improving the accuracy of template-based protein models by effectively guiding conformational sampling during structure refinement through the use of machine learning based restraints. Availability and implementation: http://watson.cse.eng.auburn.edu/refineD/. Supplementary information: Supplementary data are available at Bioinformatics online.
4

Wu, Boxi, Jie Jiang, Haidong Ren, Zifan Du, Wenxiao Wang, Zhifeng Li, Deng Cai, Xiaofei He, Binbin Lin, and Wei Liu. "Towards In-Distribution Compatible Out-of-Distribution Detection". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (26.06.2023): 10333–41. http://dx.doi.org/10.1609/aaai.v37i9.26230.

Abstract:
Deep neural network, despite its remarkable capability of discriminating targeted in-distribution samples, shows poor performance on detecting anomalous out-of-distribution data. To address this defect, state-of-the-art solutions choose to train deep networks on an auxiliary dataset of outliers. Various training criteria for these auxiliary outliers are proposed based on heuristic intuitions. However, we find that these intuitively designed outlier training criteria can hurt in-distribution learning and eventually lead to inferior performance. To this end, we identify three causes of the in-distribution incompatibility: contradictory gradient, false likelihood, and distribution shift. Based on our new understandings, we propose a new out-of-distribution detection method by adapting both the top-design of deep models and the loss function. Our method achieves in-distribution compatibility by pursuing less interference with the probabilistic characteristic of in-distribution features. On several benchmarks, our method not only achieves the state-of-the-art out-of-distribution detection performance but also improves the in-distribution accuracy.
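The paper's own detector adapts the model's top design and loss; as a hedged illustration of how an out-of-distribution score is typically thresholded, the sketch below uses the standard maximum-softmax-probability baseline, which is not the method proposed in the abstract. The logits and threshold are invented.

```python
# Sketch of a common baseline OOD score (maximum softmax probability), not the paper's
# in-distribution-compatible method: a lower max probability suggests an OOD input.
import numpy as np

def max_softmax_score(logits: np.ndarray) -> np.ndarray:
    """logits: (N, C) array of classifier outputs; returns one score per sample."""
    z = logits - logits.max(axis=1, keepdims=True)            # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)                                   # high = in-distribution-like

logits = np.array([[4.0, 0.1, -1.0],    # confident prediction
                   [0.2, 0.1,  0.0]])   # flat prediction, likely OOD
scores = max_softmax_score(logits)
is_ood = scores < 0.7                   # threshold is illustrative only
print(scores, is_ood)
```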
5

Roy, Debaditya, Sarunas Girdzijauskas, and Serghei Socolovschi. "Confidence-Calibrated Human Activity Recognition". Sensors 21, no. 19 (30.09.2021): 6566. http://dx.doi.org/10.3390/s21196566.

Abstract:
Wearable sensors are widely used in activity recognition (AR) tasks with broad applicability in health and well-being, sports, geriatric care, etc. Deep learning (DL) has been at the forefront of progress in activity classification with wearable sensors. However, most state-of-the-art DL models used for AR are trained to discriminate different activity classes at high accuracy, not considering the confidence calibration of predictive output of those models. This results in probabilistic estimates that might not capture the true likelihood and are thus unreliable. In practice, it tends to produce overconfident estimates. In this paper, the problem is addressed by proposing deep time ensembles, a novel ensembling method capable of producing calibrated confidence estimates from neural network architectures. In particular, the method trains an ensemble of network models with temporal sequences extracted by varying the window size over the input time series and averaging the predictive output. The method is evaluated on four different benchmark HAR datasets and three different neural network architectures. Across all the datasets and architectures, our method shows an improvement in calibration by reducing the expected calibration error (ECE) by at least 40%, thereby providing superior likelihood estimates. In addition to providing reliable predictions, our method also outperforms the state-of-the-art classification results in the WISDM, UCI HAR, and PAMAP2 datasets and performs as well as the state-of-the-art in the Skoda dataset.
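A minimal sketch of the expected calibration error (ECE) metric named in the abstract, assuming equal-width confidence bins; the bin count and toy predictions are illustrative only.

```python
# Sketch of expected calibration error (ECE): bin predictions by confidence and average
# |accuracy - confidence| over bins, weighted by bin occupancy.
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    confidences, predictions, labels = map(np.asarray, (confidences, predictions, labels))
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = (predictions[mask] == labels[mask]).mean()   # accuracy in this bin
            conf = confidences[mask].mean()                    # mean confidence in this bin
            ece += mask.mean() * abs(acc - conf)
    return ece

print(expected_calibration_error([0.9, 0.8, 0.6, 0.55], [1, 0, 1, 1], [1, 0, 0, 1]))
```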
6

Tsuda, Koji, Motoaki Kawanabe, Gunnar Rätsch, Sören Sonnenburg, and Klaus-Robert Müller. "A New Discriminative Kernel from Probabilistic Models". Neural Computation 14, no. 10 (1.10.2002): 2397–414. http://dx.doi.org/10.1162/08997660260293274.

Abstract:
Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
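The Fisher-kernel construction referenced in the abstract can be illustrated on a toy model: each sample is mapped to the gradient of the model log-likelihood with respect to the parameters, and the kernel is an inner product of these score vectors. The sketch below uses a univariate Gaussian as the probabilistic model and omits the Fisher information normalization; the TOP kernel would instead use tangent vectors of posterior log-odds.

```python
# Hedged sketch of the Fisher-kernel idea: features are gradients of the log-likelihood
# with respect to the model parameters (here a 1-D Gaussian with parameters mu, sigma).
import numpy as np

def fisher_score(x, mu, sigma):
    """Gradient of log N(x | mu, sigma^2) with respect to (mu, sigma)."""
    d_mu = (x - mu) / sigma**2
    d_sigma = (x - mu) ** 2 / sigma**3 - 1.0 / sigma
    return np.array([d_mu, d_sigma])

def fisher_kernel(x1, x2, mu=0.0, sigma=1.0):
    # Inner product of score vectors; the information-matrix normalization is omitted.
    return float(fisher_score(x1, mu, sigma) @ fisher_score(x2, mu, sigma))

print(fisher_kernel(0.5, 1.2))   # kernel value between two 1-D samples
```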
7

Ahmed, Nisar, and Mark Campbell. "On estimating simple probabilistic discriminative models with subclasses". Expert Systems with Applications 39, no. 7 (June 2012): 6659–64. http://dx.doi.org/10.1016/j.eswa.2011.12.042.
8

Du, Fang, Jiangshe Zhang, Junying Hu, and Rongrong Fei. "Discriminative multi-modal deep generative models". Knowledge-Based Systems 173 (June 2019): 74–82. http://dx.doi.org/10.1016/j.knosys.2019.02.023.
9

Che, Tong, Xiaofeng Liu, Site Li, Yubin Ge, Ruixiang Zhang, Caiming Xiong, and Yoshua Bengio. "Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (18.05.2021): 7002–10. http://dx.doi.org/10.1609/aaai.v35i8.16862.

Abstract:
AI Safety is a major concern in many deep learning applications such as autonomous driving. Given a trained deep learning model, an important natural problem is how to reliably verify the model's prediction. In this paper, we propose a novel framework --- deep verifier networks (DVN) to detect unreliable inputs or predictions of deep discriminative models, using separately trained deep generative models. Our proposed model is based on conditional variational auto-encoders with disentanglement constraints to separate the label information from the latent representation. We give both intuitive and theoretical justifications for the model. Our verifier network is trained independently with the prediction model, which eliminates the need of retraining the verifier network for a new model. We test the verifier network on both out-of-distribution detection and adversarial example detection problems, as well as anomaly detection problems in structured prediction tasks such as image caption generation. We achieve state-of-the-art results in all of these problems.
10

Masegosa, Andrés R., Rafael Cabañas, Helge Langseth, Thomas D. Nielsen, and Antonio Salmerón. "Probabilistic Models with Deep Neural Networks". Entropy 23, no. 1 (18.01.2021): 117. http://dx.doi.org/10.3390/e23010117.

Abstract:
Recent advances in statistical inference have significantly expanded the toolbox of probabilistic modeling. Historically, probabilistic modeling has been constrained to very restricted model classes, where exact or approximate probabilistic inference is feasible. However, developments in variational inference, a general form of approximate probabilistic inference that originated in statistical physics, have enabled probabilistic modeling to overcome these limitations: (i) Approximate probabilistic inference is now possible over a broad class of probabilistic models containing a large number of parameters, and (ii) scalable inference methods based on stochastic gradient descent and distributed computing engines allow probabilistic modeling to be applied to massive data sets. One important practical consequence of these advances is the possibility to include deep neural networks within probabilistic models, thereby capturing complex non-linear stochastic relationships between the random variables. These advances, in conjunction with the release of novel probabilistic modeling toolboxes, have greatly expanded the scope of applications of probabilistic models, and allowed the models to take advantage of the recent strides made by the deep learning community. In this paper, we provide an overview of the main concepts, methods, and tools needed to use deep neural networks within a probabilistic modeling framework.
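A minimal sketch of the general idea surveyed here, a neural network parameterizing the likelihood of a probabilistic model and fitted by stochastic-gradient maximum likelihood. The architecture, synthetic data, and training loop are illustrative assumptions, not an implementation from the paper.

```python
# Sketch: a small network outputs the mean and log-variance of a Gaussian likelihood,
# and the parameters are fit by minimizing the negative log-likelihood with SGD-style updates.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))  # outputs (mean, log-variance)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(-2, 2, 200).unsqueeze(1)
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)                    # synthetic data (invented)

for _ in range(500):
    mean, log_var = net(x).chunk(2, dim=1)
    nll = 0.5 * (log_var + (y - mean) ** 2 / log_var.exp()).mean()  # Gaussian negative log-likelihood
    opt.zero_grad()
    nll.backward()
    opt.step()
```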
11

Jong Kyoung Kim and Seungjin Choi. "Probabilistic Models for Semisupervised Discriminative Motif Discovery in DNA Sequences". IEEE/ACM Transactions on Computational Biology and Bioinformatics 8, no. 5 (September 2011): 1309–17. http://dx.doi.org/10.1109/tcbb.2010.84.
12

Fang, Yi, Luo Si, and Aditya P. Mathur. "Discriminative probabilistic models for expert search in heterogeneous information sources". Information Retrieval 14, no. 2 (21.08.2010): 158–77. http://dx.doi.org/10.1007/s10791-010-9139-3.
13

Ahmed, Nisar, and Mark Campbell. "Variational Bayesian Learning of Probabilistic Discriminative Models With Latent Softmax Variables". IEEE Transactions on Signal Processing 59, no. 7 (July 2011): 3143–54. http://dx.doi.org/10.1109/tsp.2011.2144587.
14

Wu, Ying Nian, Ruiqi Gao, Tian Han, and Song-Chun Zhu. "A tale of three probabilistic families: Discriminative, descriptive, and generative models". Quarterly of Applied Mathematics 77, no. 2 (31.12.2018): 423–65. http://dx.doi.org/10.1090/qam/1528.
15

Qin, Huafeng, and Peng Wang. "Finger-Vein Verification Based on LSTM Recurrent Neural Networks". Applied Sciences 9, no. 8 (24.04.2019): 1687. http://dx.doi.org/10.3390/app9081687.

Abstract:
Finger-vein biometrics has been extensively investigated for personal verification. A challenge is that the finger-vein acquisition is affected by many factors, which results in many ambiguous regions in the finger-vein image. Generally, the separability between vein and background is poor in such regions. Despite recent advances in finger-vein pattern segmentation, current solutions still lack the robustness to extract finger-vein features from raw images because they do not take into account the complex spatial dependencies of vein pattern. This paper proposes a deep learning model to extract vein features by combining the Convolutional Neural Networks (CNN) model and Long Short-Term Memory (LSTM) model. Firstly, we automatically assign the label based on a combination of known state of the art handcrafted finger-vein image segmentation techniques, and generate various sequences for each labeled pixel along different directions. Secondly, several Stacked Convolutional Neural Networks and Long Short-Term Memory (SCNN-LSTM) models are independently trained on the resulting sequences. The outputs of various SCNN-LSTMs form a complementary and over-complete representation and are conjointly put into Probabilistic Support Vector Machine (P-SVM) to predict the probability of each pixel of being foreground (i.e., vein pixel) given several sequences centered on it. Thirdly, we propose a supervised encoding scheme to extract the binary vein texture. A threshold is automatically computed by taking into account the maximal separation between the inter-class distance and the intra-class distance. In our approach, the CNN learns robust features for vein texture pattern representation and LSTM stores the complex spatial dependencies of vein patterns. So, the pixels in any region of a test image can then be classified effectively. In addition, the supervised information is employed to encode the vein patterns, so the resulting encoding images contain more discriminating features. The experimental results on one public finger-vein database show that the proposed approach significantly improves the finger-vein verification accuracy.
16

Villanueva Llerena, Julissa, and Denis Deratani Maua. "Efficient Predictive Uncertainty Estimators for Deep Probabilistic Models". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (3.04.2020): 13740–41. http://dx.doi.org/10.1609/aaai.v34i10.7142.

Abstract:
Deep Probabilistic Models (DPM) based on arithmetic circuits representation, such as Sum-Product Networks (SPN) and Probabilistic Sentential Decision Diagrams (PSDD), have shown competitive performance in several machine learning tasks with interesting properties (Poon and Domingos 2011; Kisa et al. 2014). Due to the high number of parameters and scarce data, DPMs can produce unreliable and overconfident inference. This research aims at increasing the robustness of predictive inference with DPMs by obtaining new estimators of the predictive uncertainty. This problem is not new and the literature on deep models contains many solutions. However the probabilistic nature of DPMs offer new possibilities to achieve accurate estimates at low computational costs, but also new challenges, as the range of different types of predictions is much larger than with traditional deep models. To cope with such issues, we plan on investigating two different approaches. The first approach is to perform a global sensitivity analysis on the parameters, measuring the variability of the output to perturbations of the model weights. The second approach is to capture the variability of the prediction with respect to changes in the model architecture. Our approaches shall be evaluated on challenging tasks such as image completion, multilabel classification.
17

Chu, Joseph Lin, and Adam Krzyźak. "The Recognition Of Partially Occluded Objects with Support Vector Machines, Convolutional Neural Networks and Deep Belief Networks". Journal of Artificial Intelligence and Soft Computing Research 4, no. 1 (1.01.2014): 5–19. http://dx.doi.org/10.2478/jaiscr-2014-0021.

Abstract:
Biologically inspired artificial neural networks have been widely used for machine learning tasks such as object recognition. Deep architectures, such as the Convolutional Neural Network and the Deep Belief Network, have recently been implemented successfully for object recognition tasks. We conduct experiments to test the hypothesis that certain primarily generative models such as the Deep Belief Network should perform better on the occluded object recognition task than purely discriminative models such as Convolutional Neural Networks and Support Vector Machines. When the generative models are run in a partially discriminative manner, the data does not support the hypothesis. It is also found that the implementation of Gaussian visible units in a Deep Belief Network trained on occluded image data allows it to also learn to effectively classify non-occluded images.
18

Wang, Liwei, Xiong Li, Zhuowen Tu, and Jiaya Jia. "Discriminative Clustering via Generative Feature Mapping". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (20.09.2021): 1162–68. http://dx.doi.org/10.1609/aaai.v26i1.8305.

Abstract:
Existing clustering methods can be roughly classified into two categories: generative and discriminative approaches. Generative clustering aims to explain the data and thus is adaptive to the underlying data distribution; discriminative clustering, on the other hand, emphasizes finding partition boundaries. In this paper, we take advantage of both models by coupling the two paradigms through feature mapping derived from linearizing Bayesian classifiers. Such a feature mapping strategy maps nonlinear boundaries of generative clustering to linear ones in the feature space, where we explicitly impose the maximum entropy principle. We also propose a unified probabilistic framework, enabling solvers using standard techniques. Experiments on a variety of datasets bear out the notable benefit of our method in terms of adaptiveness and robustness.
19

Buscombe, Daniel, and Paul Grams. "Probabilistic Substrate Classification with Multispectral Acoustic Backscatter: A Comparison of Discriminative and Generative Models". Geosciences 8, no. 11 (30.10.2018): 395. http://dx.doi.org/10.3390/geosciences8110395.

Abstract:
We propose a probabilistic graphical model for discriminative substrate characterization, to support geological and biological habitat mapping in aquatic environments. The model, called a fully-connected conditional random field (CRF), is demonstrated using multispectral and monospectral acoustic backscatter from heterogeneous seafloors in Patricia Bay, British Columbia, and Bedford Basin, Nova Scotia. Unlike previously proposed discriminative algorithms, the CRF model considers both the relative backscatter magnitudes of different substrates and their relative proximities. The model therefore combines the statistical flexibility of a machine learning algorithm with an inherently spatial treatment of the substrate. The CRF model predicts substrates such that nearby locations with similar backscattering characteristics are likely to be in the same substrate class. The degree of allowable proximity and backscatter similarity are controlled by parameters that are learned from the data. CRF model results were evaluated against a popular generative model known as a Gaussian Mixture model (GMM) that doesn’t include spatial dependencies, only covariance between substrate backscattering response over different frequencies. Both models are used in conjunction with sparse bed observations/samples in a supervised classification. A detailed accuracy assessment, including a leave-one-out cross-validation analysis, was performed using both models. Using multispectral backscatter, the GMM model trained on 50% of the bed observations resulted in a 75% and 89% average accuracies in Patricia Bay and Bedford Basin, respectively. The same metrics for the CRF model were 78% and 95%. Further, the CRF model resulted in a 91% mean cross-validation accuracy across four substrate classes at Patricia Bay, and a 99.5% mean accuracy across three substrate classes at Bedford Basin, which suggest that the CRF model generalizes extremely well to new data. This analysis also showed that the CRF model was much less sensitive to the specific number and locations of bed observations than the generative model, owing to its ability to incorporate spatial autocorrelation in substrates. The CRF therefore may prove to be a powerful ‘spatially aware’ alternative to other discriminative classifiers.
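The generative baseline described in the abstract can be sketched as one Gaussian mixture per substrate class, with classification by the largest class-conditional log-likelihood plus log prior; the CRF itself is not reproduced here. The feature dimension and toy data are invented.

```python
# Hedged sketch of a per-class Gaussian mixture classifier for backscatter features.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_per_class_gmms(X, y, n_components=2):
    models, log_priors = {}, {}
    for c in np.unique(y):
        models[c] = GaussianMixture(n_components=n_components, random_state=0).fit(X[y == c])
        log_priors[c] = np.log((y == c).mean())
    return models, log_priors

def predict(models, log_priors, X):
    classes = sorted(models)
    scores = np.column_stack([models[c].score_samples(X) + log_priors[c] for c in classes])
    return np.array(classes)[scores.argmax(axis=1)]   # highest posterior score wins

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))                                   # 3 backscatter frequencies (toy)
y = (X[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)      # 2 substrate classes (toy)
models, log_priors = fit_per_class_gmms(X, y)
print((predict(models, log_priors, X) == y).mean())             # training accuracy on toy data
```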
20

Luo, You-Wei, Chuan-Xian Ren, Pengfei Ge, Ke-Kun Huang, and Yu-Feng Yu. "Unsupervised Domain Adaptation via Discriminative Manifold Embedding and Alignment". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (3.04.2020): 5029–36. http://dx.doi.org/10.1609/aaai.v34i04.5943.

Abstract:
Unsupervised domain adaptation is effective in leveraging the rich information from the source domain to the unsupervised target domain. Though deep learning and adversarial strategy make an important breakthrough in the adaptability of features, there are two issues to be further explored. First, the hard-assigned pseudo labels on the target domain are risky to the intrinsic data structure. Second, the batch-wise training manner in deep learning limits the description of the global structure. In this paper, a Riemannian manifold learning framework is proposed to achieve transferability and discriminability consistently. As to the first problem, this method establishes a probabilistic discriminant criterion on the target domain via soft labels. Further, this criterion is extended to a global approximation scheme for the second issue; such approximation is also memory-saving. The manifold metric alignment is exploited to be compatible with the embedding space. A theoretical error bound is derived to facilitate the alignment. Extensive experiments have been conducted to investigate the proposal and results of the comparison study manifest the superiority of consistent manifold learning framework.
21

Karami, Mahdi, and Dale Schuurmans. "Deep Probabilistic Canonical Correlation Analysis". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (18.05.2021): 8055–63. http://dx.doi.org/10.1609/aaai.v35i9.16982.

Abstract:
We propose a deep generative framework for multi-view learning based on a probabilistic interpretation of canonical correlation analysis (CCA). The model combines a linear multi-view layer in the latent space with deep generative networks as observation models, to decompose the variability in multiple views into a shared latent representation that describes the common underlying sources of variation and a set of viewspecific components. To approximate the posterior distribution of the latent multi-view layer, an efficient variational inference procedure is developed based on the solution of probabilistic CCA. The model is then generalized to an arbitrary number of views. An empirical analysis confirms that the proposed deep multi-view model can discover subtle relationships between multiple views and recover rich representations.
22

Cui, Bo, Guyue Hu, and Shan Yu. "DeepCollaboration: Collaborative Generative and Discriminative Models for Class Incremental Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (18.05.2021): 1175–83. http://dx.doi.org/10.1609/aaai.v35i2.16204.

Abstract:
An important challenge for neural networks is to learn incrementally, i.e., learn new classes without catastrophic forgetting. To overcome this problem, generative replay technique has been suggested, which can generate samples belonging to learned classes while learning new ones. However, such generative models usually suffer from increased distribution mismatch between the generated and original samples along the learning process. In this work, we propose DeepCollaboration (D-Collab), a collaborative framework of deep generative and discriminative models to solve this problem effectively. We develop a discriminative learning model to incrementally update the latent feature space for continual classification. At the same time, a generative model is introduced to achieve conditional generation using the latent feature distribution produced by the discriminative model. Importantly, the generative and discriminative models are connected through bidirectional training to enforce cycle-consistency of mappings between feature and image domains. Furthermore, a domain alignment module is used to eliminate the divergence between the feature distributions of generated images and real ones. This module together with the discriminative model can perform effective sample mining to facilitate incremental learning. Extensive experiments on several visual recognition datasets show that our system can achieve state-of-the-art performance.
23

Gordon, Jonathan, and José Miguel Hernández-Lobato. "Combining deep generative and discriminative models for Bayesian semi-supervised learning". Pattern Recognition 100 (April 2020): 107156. http://dx.doi.org/10.1016/j.patcog.2019.107156.
24

Bai, Wenjun, Changqin Quan, and Zhi-Wei Luo. "Improving Generative and Discriminative Modelling Performance by Implementing Learning Constraints in Encapsulated Variational Autoencoders". Applied Sciences 9, no. 12 (21.06.2019): 2551. http://dx.doi.org/10.3390/app9122551.

Abstract:
Learning latent representations of observed data that can favour both discriminative and generative tasks remains a challenging task in artificial-intelligence (AI) research. Previous attempts that ranged from the convex binding of discriminative and generative models to the semisupervised learning paradigm could hardly yield optimal performance on both generative and discriminative tasks. To this end, in this research, we harness the power of two neuroscience-inspired learning constraints, that is, dependence minimisation and regularisation constraints, to improve generative and discriminative modelling performance of a deep generative model. To demonstrate the usage of these learning constraints, we introduce a novel deep generative model: encapsulated variational autoencoders (EVAEs) to stack two different variational autoencoders together with their learning algorithm. Using the MNIST digits dataset as a demonstration, the generative modelling performance of EVAEs was improved with the imposed dependence-minimisation constraint, encouraging our derived deep generative model to produce various patterns of MNIST-like digits. Using CIFAR-10(4K) as an example, a semisupervised EVAE with an imposed regularisation learning constraint was able to achieve competitive discriminative performance on the classification benchmark, even in the face of state-of-the-art semisupervised learning approaches.
25

Li, Fuqiang, Tongzhuang Zhang, Yong Liu, and Feiqi Long. "Deep Residual Vector Encoding for Vein Recognition". Electronics 11, no. 20 (13.10.2022): 3300. http://dx.doi.org/10.3390/electronics11203300.

Abstract:
Vein recognition has been drawing more attention recently because it is highly secure and reliable for practical biometric applications. However, underlying issues such as uneven illumination, low contrast, and sparse patterns with high inter-class similarities make the traditional vein recognition systems based on hand-engineered features unreliable. Recent successes of convolutional neural networks (CNNs) for large-scale image recognition tasks motivate us to replace the traditional hand-engineered features with the superior CNN to design a robust and discriminative vein recognition system. To address the difficulty of direct training or fine-tuning of a CNN with existing small-scale vein databases, a new knowledge transfer approach is formulated using pre-trained CNN models together with a training dataset (e.g., ImageNet) as a robust descriptor generation machine. With the generated deep residual descriptors, a very discriminative model, namely deep residual vector encoding (DRVE), is proposed by a hierarchical design of dictionary learning, coding, and classifier training procedures. Rigorous experiments are conducted with a high-quality hand-dorsa vein database, and superior recognition results compared with state-of-the-art models fully demonstrate the effectiveness of the proposed models. An additional experiment with the PolyU multispectral palmprint database is designed to illustrate the generalization ability.
26

Hu, Gang, Chahna Dixit, and Guanqiu Qi. "Discriminative Shape Feature Pooling in Deep Neural Networks". Journal of Imaging 8, no. 5 (20.04.2022): 118. http://dx.doi.org/10.3390/jimaging8050118.

Abstract:
Although deep learning approaches are able to generate generic image features from massive labeled data, discriminative handcrafted features still have advantages in providing explicit domain knowledge and reflecting intuitive visual understanding. Much of the existing research focuses on integrating both handcrafted features and deep networks to leverage the benefits. However, the issues of parameter quality have not been effectively solved in existing applications of handcrafted features in deep networks. In this research, we propose a method that enriches deep network features by utilizing the injected discriminative shape features (generic edge tokens and curve partitioning points) to adjust the network’s internal parameter update process. Thus, the modified neural networks are trained under the guidance of specific domain knowledge, and they are able to generate image representations that incorporate the benefits from both handcrafted and deep learned features. The comparative experiments were performed on several benchmark datasets. The experimental results confirmed our method works well on both large and small training datasets. Additionally, compared with existing models using either handcrafted features or deep network representations, our method not only improves the corresponding performance, but also reduces the computational costs.
27

Coto-Jiménez, Marvin. "Discriminative Multi-Stream Postfilters Based on Deep Learning for Enhancing Statistical Parametric Speech Synthesis". Biomimetics 6, no. 1 (7.02.2021): 12. http://dx.doi.org/10.3390/biomimetics6010012.

Abstract:
Statistical parametric speech synthesis based on Hidden Markov Models has been an important technique for the production of artificial voices, due to its ability to produce results with high intelligibility and sophisticated features such as voice conversion and accent modification with a small footprint, particularly for low-resource languages where deep learning-based techniques remain unexplored. Despite the progress, the quality of the results, mainly based on Hidden Markov Models (HMM), does not reach those of the predominant approaches based on unit selection of speech segments or deep learning. One of the proposals to improve the quality of HMM-based speech has been incorporating postfiltering stages, which intend to increase the quality while preserving the advantages of the process. In this paper, we present a new approach to postfiltering synthesized voices with the application of discriminative postfilters, with several long short-term memory (LSTM) deep neural networks. Our motivation stems from modeling specific mapping from synthesized to natural speech on those segments corresponding to voiced or unvoiced sounds, due to the different qualities of those sounds and how HMM-based voices can present distinct degradation on each one. The paper analyses the discriminative postfilters obtained using five voices, evaluated using three objective measures, Mel cepstral distance and subjective tests. The results indicate the advantages of the discriminative postfilters in comparison with the HTS voice and the non-discriminative postfilters.
28

Adedigba, Adeyinka P., Steve A. Adeshina, and Abiodun M. Aibinu. "Performance Evaluation of Deep Learning Models on Mammogram Classification Using Small Dataset". Bioengineering 9, no. 4 (6.04.2022): 161. http://dx.doi.org/10.3390/bioengineering9040161.

Abstract:
Cancer is the second leading cause of death globally, and breast cancer (BC) is the second most reported cancer. Although the incidence rate is reducing in developed countries, the reverse is the case in low- and middle-income countries. Early detection has been found to contain cancer growth, prevent metastasis, ease treatment, and reduce mortality by 25%. The digital mammogram is one of the most common, cheapest, and most effective BC screening techniques capable of early detection of up to 90% BC incidence. However, the mammogram is one of the most difficult medical images to analyze. In this paper, we present a method of training a deep learning model for BC diagnosis. We developed a discriminative fine-tuning method which dynamically assigns different learning rates to each layer of the deep CNN. In addition, the model was trained using mixed-precision training to ease the computational demand of training deep learning models. Lastly, we present data augmentation methods for mammograms. The discriminative fine-tuning algorithm enables rapid convergence of the model loss; hence, the models were trained to attain their best performance within 50 epochs. Comparing the results, DenseNet achieved the highest accuracy of 0.998, while AlexNet obtained 0.988.
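Discriminative fine-tuning as described here amounts to giving each block of the pretrained network its own learning rate. A hedged PyTorch sketch follows; the DenseNet backbone, the two-class head, and the geometric rate schedule are illustrative assumptions (weights="DEFAULT" assumes torchvision 0.13 or later).

```python
# Sketch of discriminative fine-tuning: per-block learning rates that grow from early
# layers to the classifier head, implemented with optimizer parameter groups.
import torch
from torchvision import models

model = models.densenet121(weights="DEFAULT")                          # pretrained backbone (assumed API)
model.classifier = torch.nn.Linear(model.classifier.in_features, 2)    # e.g., benign vs. malignant head

blocks = list(model.features.children()) + [model.classifier]
base_lr, factor = 1e-5, 2.0
param_groups = [
    {"params": block.parameters(), "lr": base_lr * factor**i}          # deeper blocks learn faster
    for i, block in enumerate(blocks)
]
optimizer = torch.optim.Adam(param_groups)
```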
29

Alshazly, Hammam, Christoph Linse, Erhardt Barth, and Thomas Martinetz. "Ensembles of Deep Learning Models and Transfer Learning for Ear Recognition". Sensors 19, no. 19 (24.09.2019): 4139. http://dx.doi.org/10.3390/s19194139.

Abstract:
The recognition performance of visual recognition systems is highly dependent on extracting and representing the discriminative characteristics of image data. Convolutional neural networks (CNNs) have shown unprecedented success in a variety of visual recognition tasks due to their capability to provide in-depth representations exploiting visual image features of appearance, color, and texture. This paper presents a novel system for ear recognition based on ensembles of deep CNN-based models and more specifically the Visual Geometry Group (VGG)-like network architectures for extracting discriminative deep features from ear images. We began by training different networks of increasing depth on ear images with random weight initialization. Then, we examined pretrained models as feature extractors as well as fine-tuning them on ear images. After that, we built ensembles of the best models to further improve the recognition performance. We evaluated the proposed ensembles through identification experiments using ear images acquired under controlled and uncontrolled conditions from mathematical analysis of images (AMI), AMI cropped (AMIC) (introduced here), and West Pomeranian University of Technology (WPUT) ear datasets. The experimental results indicate that our ensembles of models yield the best performance with significant improvements over the recently published results. Moreover, we provide visual explanations of the learned features by highlighting the relevant image regions utilized by the models for making decisions or predictions.
30

Maroñas, Juan, Roberto Paredes, and Daniel Ramos. "Calibration of deep probabilistic models with decoupled bayesian neural networks". Neurocomputing 407 (September 2020): 194–205. http://dx.doi.org/10.1016/j.neucom.2020.04.103.
31

Li, Zhenjun, Xi Liu, Dawei Kou, Yi Hu, Qingrui Zhang, and Qingxi Yuan. "Probabilistic Models for the Shear Strength of RC Deep Beams". Applied Sciences 13, no. 8 (12.04.2023): 4853. http://dx.doi.org/10.3390/app13084853.

Abstract:
A new shear strength determination of reinforced concrete (RC) deep beams was proposed by using a statistical approach. The Bayesian–MCMC (Markov Chain Monte Carlo) method was introduced to establish a new shear prediction model and to improve seven existing deterministic models with a database of 645 experimental data. The bias correction terms of deterministic models were described by key explanatory terms identified by a systematic removal process. Considering multi-parameters, the Gibbs sampling was used to solve the high dimensional integration problem and to determine optimum and reliable model parameters with 50,000 iterations for probabilistic models. The model continuity and uncertainty for key parameters were quantified by the partial factor that was investigated by comparing test and model results. The partial factor for the proposed model was 1.25. The proposed model showed improved accuracy and continuity with the mean and coefficient of variation (CoV) of the experimental-to-predicted results ratio as 1.0357 and 0.2312, respectively.
32

Liu, Shengyi. "Model Extraction Attack and Defense on Deep Generative Models". Journal of Physics: Conference Series 2189, no. 1 (1.02.2022): 012024. http://dx.doi.org/10.1088/1742-6596/2189/1/012024.

Abstract:
The security issues of machine learning have aroused much attention, and model extraction attack is one of them. The definition of model extraction attack is that an adversary can collect data through query access to a victim model and train a substitute model with it in order to steal the functionality of the target model. At present, most of the related work has focused on the research of model extraction attacks against discriminative models, while this paper pays attention to deep generative models. First, considering the difference in an adversary's goals, the attacks are taxonomized into two different types: accuracy extraction attack and fidelity extraction attack, and the effect is evaluated by 1-NN accuracy. Attacks among three main types of deep generative models and the influence of the number of queries are also researched. Finally, this paper studies different defensive techniques to safeguard the models according to their architectures.
33

Bai, Shuang. "Scene Categorization Through Using Objects Represented by Deep Features". International Journal of Pattern Recognition and Artificial Intelligence 31, no. 09 (February 2017): 1755013. http://dx.doi.org/10.1142/s0218001417550138.

Abstract:
Objects in scenes are thought to be important for scene recognition. In this paper, we propose to utilize scene-specific objects represented by deep features for scene categorization. Our approach combines benefits of deep learning and Latent Support Vector Machine (LSVM) to train a set of scene-specific object models for each scene category. Specifically, we first use deep Convolutional Neural Networks (CNNs) pre-trained on the large-scale object-centric image database ImageNet to learn rich object features and a large number of general object concepts. Then, the pre-trained CNNs is adopted to extract features from images in the target dataset, and initialize the learning of scene-specific object models for each scene category. After initialization, the scene-specific object models are obtained by alternating between searching over the most representative and discriminative regions of images in the target dataset and training linear SVM classifiers based on obtained region features. As a result, for each scene category a set of object models that are representative and discriminative can be acquired. We use them to perform scene categorization. In addition, to utilize global structure information of scenes, we use another CNNs pre-trained on the large-scale scene-centric database Places to capture structure information of scene images. By combining objects and structure information for scene categorization, we show superior performances to state-of-the-art approaches on three public datasets, i.e. MIT-indoor, UIUC-sports and SUN. Experiment results demonstrated the effectiveness of the proposed method.
34

Kumar, Parmod, D. Suganthi, K. Valarmathi, Mahendra Pratap Swain, Piyush Vashistha, Dharam Buddhi, and Emmanuel Sey. "A Multi-Thresholding-Based Discriminative Neural Classifier for Detection of Retinoblastoma Using CNN Models". BioMed Research International 2023 (6.02.2023): 1–9. http://dx.doi.org/10.1155/2023/5803661.

Abstract:
Cancer is one of the vital diseases which lead to the uncontrollable growth of the cell, and it affects the body tissue. A type of cancer that affects the children below five years and adults in a rare case is called retinoblastoma. It affects the retina in the eye and the surrounding region of eye like the eyelid, and sometimes, it leads to vision loss if it is not diagnosed at the early stage. MRI and CT are widely used scanning procedures to identify the cancerous region in the eye. Current screening methods for cancer region identification needs the clinicians’ support to spot the affected regions. Modern healthcare systems develop an easy way to diagnose the disease. Discriminative architectures in deep learning can be viewed as supervised deep learning algorithms which use classification/regression techniques to predict the output. A convolutional neural network (CNN) is a part of the discriminative architecture which helps to process both image and text data. This work suggests the CNN-based classifier which classifies the tumor and nontumor regions in retinoblastoma. The tumor-like region (TLR) in retinoblastoma is identified using the automated thresholding method. After that, ResNet and AlexNet algorithms are used to classify the cancerous region along with classifiers. In addition, the comparison of discriminative algorithm along with its variants is experimented to produce the better image analysis method without the intervention of clinicians. The experimental study reveals that ResNet50 and AlexNet yield better results compared to other learning modules.
35

Yu, Hee-Jin, Chang-Hwan Son, and Dong Hyuk Lee. "Apple Leaf Disease Identification Through Region-of-Interest-Aware Deep Convolutional Neural Network". Journal of Imaging Science and Technology 64, no. 2 (1.03.2020): 20507–1. http://dx.doi.org/10.2352/j.imagingsci.technol.2020.64.2.020507.

Abstract:
Traditional approaches for the identification of leaf diseases involve the use of handcrafted features such as colors and textures for feature extraction. Therefore, these approaches may have limitations in extracting abundant and discriminative features. Although deep learning approaches have been recently introduced to overcome the shortcomings of traditional approaches, existing deep learning models such as VGG and ResNet have been used in these approaches. This indicates that the approach can be further improved to increase the discriminative power because the spatial attention mechanism to predict the background and spot areas (i.e., local areas with leaf diseases) has not been considered. Therefore, a new deep learning architecture, which is hereafter referred to as region-of-interest-aware deep convolutional neural network (ROI-aware DCNN), is proposed to make deep features more discriminative and increase classification performance. The primary idea is that leaf disease symptoms appear in leaf area, whereas the background region does not contain useful information regarding leaf diseases. To realize this, two subnetworks are designed. One subnetwork is the ROI subnetwork to provide more discriminative features from the background, leaf areas, and spot areas in the feature map. The other subnetwork is the classification subnetwork to increase the classification accuracy. To train the ROI-aware DCNN, the ROI subnetwork is first learned with a new image set containing the ground truth images where the background, leaf area, and spot area are divided. Subsequently, the entire network is trained in an end-to-end manner to connect the ROI subnetwork with the classification subnetwork through a concatenation layer. The experimental results confirm that the proposed ROI-aware DCNN can increase the discriminative power by predicting the areas in the feature map that are more important for leaf diseases identification. The results prove that the proposed method surpasses conventional state-of-the-art methods such as VGG, ResNet, SqueezeNet, bilinear model, and multiscale-based deep feature extraction and pooling.
36

Boursin, Nicolas, Carl Remlinger, and Joseph Mikael. "Deep Generators on Commodity Markets Application to Deep Hedging". Risks 11, no. 1 (23.12.2022): 7. http://dx.doi.org/10.3390/risks11010007.

Abstract:
Four deep generative methods for time series are studied on commodity markets and compared with classical probabilistic models. The lack of data in the case of deep hedgers is a common flaw, which deep generative methods seek to address. In the specific case of commodities, it turns out that these generators can also be used to refine the price models by tackling the high-dimensional challenges. In this work, the synthetic time series of commodity prices produced by such generators are studied and then used to train deep hedgers on various options. A fully data-driven approach to commodity risk management is thus proposed, from synthetic price generation to learning risk hedging policies.
37

D’Andrea, Fabio, Pierre Gentine, Alan K. Betts, and Benjamin R. Lintner. "Triggering Deep Convection with a Probabilistic Plume Model". Journal of the Atmospheric Sciences 71, no. 11 (29.10.2014): 3881–901. http://dx.doi.org/10.1175/jas-d-13-0340.1.

Abstract:
A model unifying the representation of the planetary boundary layer and dry, shallow, and deep convection, the probabilistic plume model (PPM), is presented. Its capacity to reproduce the triggering of deep convection over land is analyzed in detail. The model accurately reproduces the timing of shallow convection and of deep convection onset over land, which is a major issue in many current general climate models. PPM is based on a distribution of plumes with varying thermodynamic states (potential temperature and specific humidity) induced by surface-layer turbulence. Precipitation is computed by a simple ice microphysics, and with the onset of precipitation, downdrafts are initiated and lateral entrainment of environmental air into updrafts is reduced. The most buoyant updrafts are responsible for the triggering of moist convection, causing the rapid growth of clouds and precipitation. Organization of turbulence in the subcloud layer is induced by unsaturated downdrafts, and the effect of density currents is modeled through a reduction of the lateral entrainment. The reduction of entrainment induces further development from the precipitating congestus phase to full deep cumulonimbus. Model validation is performed by comparing cloud base, cloud-top heights, timing of precipitation, and environmental profiles against cloud-resolving models and large-eddy simulations for two test cases. These comparisons demonstrate that PPM triggers deep convection at the proper time in the diurnal cycle and produces reasonable precipitation. On the other hand, PPM underestimates cloud-top height.
38

Serpell, Cristián, Ignacio A. Araya, Carlos Valle, and Héctor Allende. "Addressing model uncertainty in probabilistic forecasting using Monte Carlo dropout". Intelligent Data Analysis 24 (4.12.2020): 185–205. http://dx.doi.org/10.3233/ida-200015.

Abstract:
In recent years, deep learning models have been developed to address probabilistic forecasting tasks, assuming an implicit stochastic process that relates past observed values to uncertain future values. These models are capable of capturing the inherent uncertainty of the underlying process, but they ignore the model uncertainty that comes from the fact of not having infinite data. This work proposes addressing the model uncertainty problem using Monte Carlo dropout, a variational approach that assigns distributions to the weights of a neural network instead of simply using fixed values. This allows to easily adapt common deep learning models currently in use to produce better probabilistic forecasting estimates, in terms of their consideration of uncertainty. The proposal is validated for prediction intervals estimation on seven energy time series, using a popular probabilistic model called Mean Variance Estimation (MVE), as the deep model adapted using the technique.
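Monte Carlo dropout, the technique applied in this paper, can be sketched by keeping dropout stochastic at prediction time and summarizing repeated forward passes. The network size, dropout rate, and number of samples below are illustrative assumptions, not the paper's MVE configuration.

```python
# Sketch of Monte Carlo dropout: dropout stays active during inference, and the spread
# of repeated stochastic forward passes approximates model uncertainty.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))

def mc_dropout_predict(model, x, n_samples=100):
    model.train()                                   # keeps Dropout stochastic at prediction time
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.std(dim=0)      # predictive mean and spread

x = torch.randn(8, 24)                              # e.g., 24 lagged values of an energy series (toy)
mean, std = mc_dropout_predict(net, x)
print(mean.shape, std.shape)
```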
39

Qian, Weizhu, Fabrice Lauri, and Franck Gechter. "Supervised and semi-supervised deep probabilistic models for indoor positioning problems". Neurocomputing 435 (May 2021): 228–38. http://dx.doi.org/10.1016/j.neucom.2020.12.131.
40

Wang, Wenzheng, Yuqi Han, Chenwei Deng, and Zhen Li. "Hyperspectral Image Classification via Deep Structure Dictionary Learning". Remote Sensing 14, no. 9 (8.05.2022): 2266. http://dx.doi.org/10.3390/rs14092266.

Abstract:
The construction of diverse dictionaries for sparse representation of hyperspectral image (HSI) classification has been a hot topic over the past few years. However, compared with convolutional neural network (CNN) models, dictionary-based models cannot extract deeper spectral information, which will reduce their performance for HSI classification. Moreover, dictionary-based methods have low discriminative capability, which leads to less accurate classification. To solve the above problems, we propose a deep learning-based structure dictionary for HSI classification in this paper. The core ideas are threefold, as follows: (1) To extract the abundant spectral information, we incorporate deep residual neural networks in dictionary learning and represent input signals in the deep feature domain. (2) To enhance the discriminative ability of the proposed model, we optimize the structure of the dictionary and design sharing constraint in terms of sub-dictionaries. Thus, the general and specific feature of HSI samples can be learned separately. (3) To further enhance classification performance, we design two kinds of loss functions, including coding loss and discriminating loss. The coding loss is used to realize the group sparsity of code coefficients, in which within-class spectral samples can be represented intensively and effectively. The Fisher discriminating loss is used to enforce the sparse representation coefficients with large between-class scatter. Extensive tests performed on hyperspectral dataset with bright prospects prove the developed method to be effective and outperform other existing methods.
41

Andrianomena, Sambatra. "Probabilistic learning for pulsar classification". Journal of Cosmology and Astroparticle Physics 2022, no. 10 (1.10.2022): 016. http://dx.doi.org/10.1088/1475-7516/2022/10/016.

Abstract:
In this work, we explore the possibility of using probabilistic learning to identify pulsar candidates. We make use of Deep Gaussian Process (DGP) and Deep Kernel Learning (DKL). Trained on a balanced training set in order to avoid the effect of class imbalance, the performance of the models, achieving relatively high probability of differentiating the positive class from the negative one (roc-auc ∼ 0.98), is very promising overall. We estimate the predictive entropy of each model predictions and find that DKL is more confident than DGP in its predictions and provides better uncertainty calibration. Upon investigating the effect of training with imbalanced dataset on the models, results show that each model performance decreases with an increasing number of the majority class in the training set. Interestingly, with a number of negative class 10× that of positive class, the models still provide reasonably well calibrated uncertainty, i.e. an expected Uncertainty Calibration Error (UCE) less than 6%. We also show in this study how, in the case of relatively small amount of training dataset, a convolutional neural network based classifier trained via Bayesian Active Learning by Disagreement (BALD) performs. We find that, with an optimized number of training examples, the model, being the most confident in its predictions, generalizes relatively well and produces the best uncertainty calibration which corresponds to UCE = 3.118%.
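The predictive entropy used above to compare model confidence can be computed from Monte Carlo draws of class probabilities, as in the hedged sketch below; the Dirichlet-sampled draws are invented stand-ins for DGP/DKL posterior samples.

```python
# Sketch of predictive entropy: average sampled class probabilities over Monte Carlo draws,
# then take the entropy of the mean distribution (higher entropy = less confident).
import numpy as np

def predictive_entropy(prob_draws):
    """prob_draws: (S, N, C) array of sampled class probabilities."""
    mean_probs = prob_draws.mean(axis=0)                         # (N, C)
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)

draws = np.random.dirichlet([2.0, 2.0], size=(50, 4))            # 50 draws, 4 candidates, 2 classes (toy)
print(predictive_entropy(draws))                                 # one entropy value per candidate
```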
APA, Harvard, Vancouver, ISO and other styles
42

Kim, Hyesuk, and Incheol Kim. "Dynamic Arm Gesture Recognition Using Spherical Angle Features and Hidden Markov Models". Advances in Human-Computer Interaction 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/785349.

Full text source
Abstract:
We introduce a vision-based arm gesture recognition (AGR) system using Kinect. The AGR system learns a discrete hidden Markov model (HMM), an effective probabilistic graphical model for gesture recognition, from the dynamic pose of the arm joints provided by the Kinect API. Because Kinect's viewpoint and the subject's arm length can substantially affect the estimated 3D pose of each joint, it is difficult to recognize gestures reliably from these features. The proposed system therefore performs a feature transformation that converts the 3D Cartesian coordinates of each joint into the 2D spherical angles of the corresponding arm part, yielding view-invariant and more discriminative features. We confirmed the high recognition performance of the proposed AGR system through experiments with two different datasets.
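The Cartesian-to-spherical feature transformation described above can be sketched in a few lines; the choice of reference frame and the joint pair below are illustrative assumptions, not the authors' exact convention.

    import numpy as np

    def arm_part_to_spherical_angles(joint_a, joint_b):
        # Represent the arm part joint_a -> joint_b (e.g. shoulder -> elbow)
        # by two spherical angles instead of raw 3D coordinates, so the
        # feature no longer depends on the subject's arm length.
        v = np.asarray(joint_b, dtype=float) - np.asarray(joint_a, dtype=float)
        r = np.linalg.norm(v)
        azimuth = np.arctan2(v[1], v[0])       # angle in the x-y plane
        inclination = np.arccos(v[2] / r)      # angle measured from the z-axis
        return azimuth, inclination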
APA, Harvard, Vancouver, ISO and other styles
43

Murad, Abdulmajid, Frank Alexander Kraemer, Kerstin Bach, and Gavin Taylor. "Probabilistic Deep Learning to Quantify Uncertainty in Air Quality Forecasting". Sensors 21, no. 23 (30.11.2021): 8009. http://dx.doi.org/10.3390/s21238009.

Full text source
Abstract:
Data-driven forecasts of air quality have recently achieved more accurate short-term predictions. Despite their success, however, most current data-driven solutions lack proper quantification of model uncertainty to communicate how much the forecasts should be trusted. Recently, several practical tools to estimate uncertainty have been developed in probabilistic deep learning, but there have been few empirical applications and extensive comparisons of these tools in the domain of air quality forecasting. Therefore, this work applies state-of-the-art techniques of uncertainty quantification in a real-world setting of air quality forecasts. Through extensive experiments, we describe training probabilistic models and evaluate their predictive uncertainties based on empirical performance, reliability of confidence estimates, and practical applicability. We also propose improving these models using “free” adversarial training and exploiting the temporal and spatial correlation inherent in air quality data. Our experiments demonstrate that the proposed models perform better than previous works in quantifying uncertainty in data-driven air quality forecasts. Overall, Bayesian neural networks provide a more reliable uncertainty estimate but can be challenging to implement and scale. Other scalable methods, such as deep ensembles, Monte Carlo (MC) dropout, and stochastic weight averaging-Gaussian (SWAG), can perform well if applied correctly, but with different tradeoffs and slight variations in performance metrics. Finally, our results show the practical impact of uncertainty estimation and demonstrate that probabilistic models are indeed more suitable for making informed decisions.
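Of the scalable methods listed above, MC dropout is the simplest to sketch: keep dropout stochastic at test time and summarize repeated forward passes. The snippet below is a generic PyTorch illustration under that assumption, not the authors' model or configuration.

    import torch

    def mc_dropout_forecast(model, x, n_samples=50):
        # Keep dropout layers active at inference time and treat the spread of
        # the sampled forecasts as an estimate of predictive uncertainty.
        model.train()  # enables dropout; assumes no batch-norm layers are affected
        with torch.no_grad():
            samples = torch.stack([model(x) for _ in range(n_samples)])
        return samples.mean(dim=0), samples.std(dim=0)  # predictive mean and spread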
APA, Harvard, Vancouver, ISO and other styles
44

Adams, Jadie. "Probabilistic Shape Models of Anatomy Directly from Images". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (26.06.2023): 16107–8. http://dx.doi.org/10.1609/aaai.v37i13.26914.

Full text source
Abstract:
Statistical shape modeling (SSM) is an enabling tool in medical image analysis as it allows for population-based quantitative analysis. The traditional pipeline for landmark-based SSM from images requires painstaking and cost-prohibitive steps. My thesis aims to leverage probabilistic deep learning frameworks to streamline the adoption of SSM in biomedical research and practice. The expected outcomes of this work will be new frameworks for SSM that (1) provide reliable and calibrated uncertainty quantification, (2) are effective given limited or sparsely annotated/incomplete data, and (3) can make predictions from incomplete 4D spatiotemporal data. These efforts will reduce required costs and manual labor for anatomical SSM, helping SSM become a more viable clinical tool and advancing medical practice.
APA, Harvard, Vancouver, ISO and other styles
45

Ravuri, Suman, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons et al. "Skilful precipitation nowcasting using deep generative models of radar". Nature 597, no. 7878 (29.09.2021): 672–77. http://dx.doi.org/10.1038/s41586-021-03854-z.

Full text source
Abstract:
Precipitation nowcasting, the high-resolution forecasting of precipitation up to two hours ahead, supports the real-world socioeconomic needs of many sectors reliant on weather-dependent decision-making [1,2]. State-of-the-art operational nowcasting methods typically advect precipitation fields with radar-based wind estimates, and struggle to capture important non-linear events such as convective initiations [3,4]. Recently introduced deep learning methods use radar to directly predict future rain rates, free of physical constraints [5,6]. While they accurately predict low-intensity rainfall, their operational utility is limited because their lack of constraints produces blurry nowcasts at longer lead times, yielding poor performance on rarer medium-to-heavy rain events. Here we present a deep generative model for the probabilistic nowcasting of precipitation from radar that addresses these challenges. Using statistical, economic and cognitive measures, we show that our method provides improved forecast quality, forecast consistency and forecast value. Our model produces realistic and spatiotemporally consistent predictions over regions up to 1,536 km × 1,280 km and with lead times from 5–90 min ahead. In a systematic evaluation by more than 50 expert meteorologists, our generative model ranked first for its accuracy and usefulness in 89% of cases against two competitive methods. When verified quantitatively, these nowcasts are skilful without resorting to blurring. We show that generative nowcasting can provide probabilistic predictions that improve forecast value and support operational utility, at resolutions and lead times where alternative methods struggle.
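One common way such probabilistic nowcasts are consumed downstream is to draw an ensemble of generated futures and convert it into per-pixel exceedance probabilities. The sketch below illustrates that generic post-processing step; the sampler name and threshold are placeholder assumptions, and this is not part of the paper's own method.

    import numpy as np

    def exceedance_probability(sample_nowcast, radar_context, threshold_mm_h=1.0, n_members=20):
        # sample_nowcast: placeholder callable returning one generated rain-rate
        # field (H x W) given past radar frames; called repeatedly to form an ensemble.
        members = np.stack([sample_nowcast(radar_context) for _ in range(n_members)])
        return (members > threshold_mm_h).mean(axis=0)  # per-pixel P(rain rate > threshold)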
APA, Harvard, Vancouver, ISO and other styles
46

Huang, Jiabo, Qi Dong, Shaogang Gong, and Xiatian Zhu. "Unsupervised Deep Learning via Affinity Diffusion". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (3.04.2020): 11029–36. http://dx.doi.org/10.1609/aaai.v34i07.6757.

Full text source
Abstract:
Convolutional neural networks (CNNs) have achieved unprecedented success in a variety of computer vision tasks. However, they usually rely on supervised model learning with the need for massive labelled training data, dramatically limiting their usability and deployability in real-world scenarios without any labelling budget. In this work, we introduce a general-purpose unsupervised deep learning approach to deriving discriminative feature representations. It is based on self-discovering semantically consistent groups of unlabelled training samples with the same class concepts through a progressive affinity diffusion process. Extensive experiments on object image classification and clustering show the performance superiority of the proposed method over state-of-the-art unsupervised learning models on six common image recognition benchmarks, including MNIST, SVHN, STL10, CIFAR10, CIFAR100 and ImageNet.
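As a rough illustration of what an affinity diffusion step does, the sketch below propagates pairwise cosine affinities through the sample graph so that transitively connected samples end up with high mutual affinity. It is a generic diffusion-on-a-graph sketch, not the authors' specific algorithm, which grows groups progressively under additional constraints.

    import numpy as np

    def diffuse_affinities(features, n_steps=3, alpha=0.5):
        # Pairwise cosine affinities between unlabelled samples.
        f = features / np.linalg.norm(features, axis=1, keepdims=True)
        affinity = np.clip(f @ f.T, 0.0, None)
        transition = affinity / affinity.sum(axis=1, keepdims=True)
        diffused = affinity.copy()
        for _ in range(n_steps):
            # Blend direct affinities with affinities reached through neighbours.
            diffused = alpha * transition @ diffused + (1.0 - alpha) * affinity
        return diffused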
APA, Harvard, Vancouver, ISO and other styles
47

Collins, Michael, and Terry Koo. "Discriminative Reranking for Natural Language Parsing". Computational Linguistics 31, no. 1 (March 2005): 25–70. http://dx.doi.org/10.1162/0891201053630273.

Full text source
Abstract:
This article considers approaches which rerank the output of an existing probabilistic parser. The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). We apply the boosting method to parsing the Wall Street Journal treebank. The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model's score of 88.2%. The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. We argue that the method is an appealing alternative, in terms of both simplicity and efficiency, to work on feature selection methods within log-linear (maximum-entropy) models. Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation.
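A minimal sketch of the reranking step described above: each candidate parse carries the base model's log-probability plus a sparse set of indicator features, and the reranker picks the candidate with the highest weighted score. Weight estimation (the boosting procedure) and feature extraction are omitted, and all names are illustrative.

    def rerank(candidates, feature_weights, alpha0=1.0):
        # candidates: list of (base_log_prob, features) pairs, one per parse;
        # features is an iterable of feature identifiers that fire on that tree.
        def score(candidate):
            base_log_prob, features = candidate
            return alpha0 * base_log_prob + sum(feature_weights.get(f, 0.0) for f in features)
        return max(candidates, key=score)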
APA, Harvard, Vancouver, ISO and other styles
48

Mashlakov, Aleksei, Toni Kuronen, Lasse Lensu, Arto Kaarna, and Samuli Honkapuro. "Assessing the performance of deep learning models for multivariate probabilistic energy forecasting". Applied Energy 285 (March 2021): 116405. http://dx.doi.org/10.1016/j.apenergy.2020.116405.

Full text source
APA, Harvard, Vancouver, ISO and other styles
49

Duan, Yun. "A Novel Interval Energy-Forecasting Method for Sustainable Building Management Based on Deep Learning". Sustainability 14, no. 14 (13.07.2022): 8584. http://dx.doi.org/10.3390/su14148584.

Full text source
Abstract:
Energy conservation in buildings has increasingly become a hot issue for the Chinese government. Compared to deterministic load prediction, probabilistic load forecasting is more suitable for the long-term planning and management of building energy consumption. In this study, we propose a probabilistic load-forecasting method for daily and weekly indoor load. The methodology is based on the long short-term memory (LSTM) model and penalized quantile regression (PQR). A comprehensive analysis over a time period of one year is conducted using the proposed method, with backpropagation neural networks (BPNN), support vector machines (SVM), and random forests applied as reference models. Both point prediction and interval prediction are adopted to thoroughly test the prediction performance of the proposed model. Results show that LSTM-PQR outperforms the other three models, with improvements in prediction interval coverage probability (PICP) ranging from 6.4% to 20.9%. This work indicates that the proposed method fits well with probabilistic load forecasting and promises to guide the management of building sustainability in a future carbon-neutral scenario.
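The quantile regression component mentioned above rests on the pinball loss; a minimal sketch of that loss follows. The penalty term that makes the authors' formulation "penalized" is omitted, and the function is a generic illustration rather than their implementation.

    import numpy as np

    def pinball_loss(y_true, y_pred, quantile):
        # Asymmetric loss that is minimized when y_pred equals the requested
        # quantile of y_true; e.g. quantile=0.9 penalizes under-prediction
        # nine times as much as over-prediction.
        diff = y_true - y_pred
        return np.mean(np.maximum(quantile * diff, (quantile - 1.0) * diff))

Training one output per quantile (e.g. 0.05 and 0.95) yields the lower and upper bounds of the prediction interval whose coverage PICP measures.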
APA, Harvard, Vancouver, ISO and other styles
50

Krogh, Anders, and Søren Kamaric Riis. "Hidden Neural Networks". Neural Computation 11, no. 2 (1.02.1999): 541–63. http://dx.doi.org/10.1162/089976699300016764.

Full text source
Abstract:
A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear performance gains compared to standard HMMs tested on the same task.
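The discriminative conditional maximum likelihood criterion referred to above can be written compactly; the following is the standard textbook form, shown as a reading aid rather than a quotation from the article:

    \hat{\theta} = \arg\max_{\theta} \, \log P(y \mid x; \theta)
                 = \arg\max_{\theta} \, \bigl[ \log P(x, y; \theta) - \log P(x; \theta) \bigr],

where x is the observation sequence and y its correct labelling, so the globally normalized HNN is trained to favour the correct state path over all competing paths rather than merely to model the observations.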
APA, Harvard, Vancouver, ISO and other styles