Follow this link to see other types of publications on this topic: Probabilistic deep models.

Journal articles on the topic "Probabilistic deep models"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Probabilistic deep models".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Masegosa, Andrés R., Rafael Cabañas, Helge Langseth, Thomas D. Nielsen, and Antonio Salmerón. "Probabilistic Models with Deep Neural Networks". Entropy 23, no. 1 (January 18, 2021): 117. http://dx.doi.org/10.3390/e23010117.

Abstract
Recent advances in statistical inference have significantly expanded the toolbox of probabilistic modeling. Historically, probabilistic modeling has been constrained to very restricted model classes, where exact or approximate probabilistic inference is feasible. However, developments in variational inference, a general form of approximate probabilistic inference that originated in statistical physics, have enabled probabilistic modeling to overcome these limitations: (i) Approximate probabilistic inference is now possible over a broad class of probabilistic models containing a large number of parameters, and (ii) scalable inference methods based on stochastic gradient descent and distributed computing engines allow probabilistic modeling to be applied to massive data sets. One important practical consequence of these advances is the possibility to include deep neural networks within probabilistic models, thereby capturing complex non-linear stochastic relationships between the random variables. These advances, in conjunction with the release of novel probabilistic modeling toolboxes, have greatly expanded the scope of applications of probabilistic models, and allowed the models to take advantage of the recent strides made by the deep learning community. In this paper, we provide an overview of the main concepts, methods, and tools needed to use deep neural networks within a probabilistic modeling framework.
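As a concrete illustration of the survey's central idea (a deep neural network embedded in a probabilistic model and fitted by stochastic variational inference), here is a minimal sketch of a variational autoencoder in PyTorch. The code is ours, not the authors'; the layer sizes and toy data are illustrative assumptions.

    # A tiny VAE: q(z|x) and p(x|z) are parameterized by neural networks and
    # trained by maximizing the evidence lower bound (ELBO).
    import torch
    import torch.nn as nn

    class TinyVAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=8, h=128):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, 2 * z_dim))
            self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, x_dim))

        def forward(self, x):
            mu, log_var = self.enc(x).chunk(2, dim=-1)             # q(z|x)
            z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization trick
            rec = nn.functional.binary_cross_entropy_with_logits(
                self.dec(z), x, reduction="sum")                   # -log p(x|z)
            kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum()
            return (rec + kl) / x.shape[0]                         # negative ELBO per example

    model = TinyVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 784).round()     # toy binary data standing in for a real dataset
    loss = model(x)
    loss.backward()
    opt.step()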
2

Villanueva Llerena, Julissa, and Denis Deratani Maua. "Efficient Predictive Uncertainty Estimators for Deep Probabilistic Models". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13740–41. http://dx.doi.org/10.1609/aaai.v34i10.7142.

Abstract
Deep Probabilistic Models (DPM) based on arithmetic circuit representations, such as Sum-Product Networks (SPN) and Probabilistic Sentential Decision Diagrams (PSDD), have shown competitive performance in several machine learning tasks with interesting properties (Poon and Domingos 2011; Kisa et al. 2014). Due to the high number of parameters and scarce data, DPMs can produce unreliable and overconfident inference. This research aims at increasing the robustness of predictive inference with DPMs by obtaining new estimators of the predictive uncertainty. This problem is not new, and the literature on deep models contains many solutions. However, the probabilistic nature of DPMs offers new possibilities to achieve accurate estimates at low computational cost, but also new challenges, as the range of different types of predictions is much larger than with traditional deep models. To cope with such issues, we plan to investigate two different approaches. The first approach is to perform a global sensitivity analysis on the parameters, measuring the variability of the output under perturbations of the model weights. The second approach is to capture the variability of the prediction with respect to changes in the model architecture. Our approaches will be evaluated on challenging tasks such as image completion and multilabel classification.
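The first approach the authors describe, global sensitivity of the output to weight perturbations, can be sketched in a few lines. The sketch below is ours, written for a generic PyTorch model; the noise scale sigma and sample count are illustrative assumptions.

    import copy
    import torch

    @torch.no_grad()
    def weight_perturbation_spread(model, x, n_samples=30, sigma=0.01):
        """Std. of predictions under i.i.d. Gaussian noise on the weights."""
        preds = []
        for _ in range(n_samples):
            noisy = copy.deepcopy(model)
            for p in noisy.parameters():
                p.add_(torch.randn_like(p) * sigma)  # perturb each weight
            preds.append(noisy(x))
        return torch.stack(preds).std(dim=0)         # high spread = low confidence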
3

Karami, Mahdi, and Dale Schuurmans. "Deep Probabilistic Canonical Correlation Analysis". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8055–63. http://dx.doi.org/10.1609/aaai.v35i9.16982.

Abstract
We propose a deep generative framework for multi-view learning based on a probabilistic interpretation of canonical correlation analysis (CCA). The model combines a linear multi-view layer in the latent space with deep generative networks as observation models, to decompose the variability in multiple views into a shared latent representation that describes the common underlying sources of variation and a set of view-specific components. To approximate the posterior distribution of the latent multi-view layer, an efficient variational inference procedure is developed based on the solution of probabilistic CCA. The model is then generalized to an arbitrary number of views. An empirical analysis confirms that the proposed deep multi-view model can discover subtle relationships between multiple views and recover rich representations.
4

Lu, Ming, Zhihao Duan, Fengqing Zhu, and Zhan Ma. "Deep Hierarchical Video Compression". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8859–67. http://dx.doi.org/10.1609/aaai.v38i8.28733.

Abstract
Recently, probabilistic predictive coding that directly models the conditional distribution of latent features across successive frames for temporal redundancy removal has yielded promising results. Existing methods using a single-scale Variational AutoEncoder (VAE) must devise complex networks for conditional probability estimation in latent space, neglecting the multiscale characteristics of video frames. Instead, this work proposes hierarchical probabilistic predictive coding, for which hierarchical VAEs are carefully designed to characterize multiscale latent features as a family of flexible priors and posteriors to predict the probabilities of future frames. Under such a hierarchical structure, lightweight networks are sufficient for prediction. The proposed method outperforms representative learned video compression models on common testing videos and demonstrates computational friendliness, with a much smaller memory footprint and faster encoding/decoding. Extensive experiments on adaptation to temporal patterns also indicate the better generalization of our hierarchical predictive mechanism. Furthermore, our solution is the first to enable progressive decoding, which is favored in networked video applications with packet loss.
5

Maroñas, Juan, Roberto Paredes, and Daniel Ramos. "Calibration of deep probabilistic models with decoupled Bayesian neural networks". Neurocomputing 407 (September 2020): 194–205. http://dx.doi.org/10.1016/j.neucom.2020.04.103.

6

Li, Zhenjun, Xi Liu, Dawei Kou, Yi Hu, Qingrui Zhang, and Qingxi Yuan. "Probabilistic Models for the Shear Strength of RC Deep Beams". Applied Sciences 13, no. 8 (April 12, 2023): 4853. http://dx.doi.org/10.3390/app13084853.

Abstract
A new shear strength determination for reinforced concrete (RC) deep beams is proposed using a statistical approach. The Bayesian–MCMC (Markov chain Monte Carlo) method was introduced to establish a new shear prediction model and to improve seven existing deterministic models with a database of 645 experimental data points. The bias correction terms of the deterministic models were described by key explanatory terms identified through a systematic removal process. To handle the multiple parameters, Gibbs sampling was used to solve the high-dimensional integration problem and to determine optimal, reliable model parameters with 50,000 iterations for the probabilistic models. Model continuity and uncertainty for key parameters were quantified by a partial factor, which was investigated by comparing test and model results. The partial factor for the proposed model was 1.25. The proposed model showed improved accuracy and continuity, with the mean and coefficient of variation (CoV) of the ratio of experimental to predicted results equal to 1.0357 and 0.2312, respectively.
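To make the Bayesian–MCMC mechanics concrete, here is a one-parameter stand-in (our own, on synthetic data): a random-walk Metropolis chain for a multiplicative bias-correction factor theta in V_test = theta * V_model + noise. The paper itself uses Gibbs sampling over many parameters; this simplification only shows the sampling loop.

    import numpy as np

    rng = np.random.default_rng(0)
    v_model = rng.uniform(100, 800, size=200)              # synthetic model predictions (kN)
    v_test = 1.1 * v_model + rng.normal(0, 40, size=200)   # synthetic "experimental" data

    def log_post(theta, sigma=40.0):
        resid = v_test - theta * v_model
        return -0.5 * np.sum(resid**2) / sigma**2          # flat prior on theta

    theta, chain = 1.0, []
    for _ in range(50_000):
        prop = theta + rng.normal(0, 0.01)
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                                   # accept the proposal
        chain.append(theta)
    print(np.mean(chain[10_000:]))                         # posterior mean after burn-in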
7

Serpell, Cristián, Ignacio A. Araya, Carlos Valle, and Héctor Allende. "Addressing model uncertainty in probabilistic forecasting using Monte Carlo dropout". Intelligent Data Analysis 24 (December 4, 2020): 185–205. http://dx.doi.org/10.3233/ida-200015.

Abstract
In recent years, deep learning models have been developed to address probabilistic forecasting tasks, assuming an implicit stochastic process that relates past observed values to uncertain future values. These models are capable of capturing the inherent uncertainty of the underlying process, but they ignore the model uncertainty that comes from not having infinite data. This work proposes addressing the model uncertainty problem using Monte Carlo dropout, a variational approach that assigns distributions to the weights of a neural network instead of simply using fixed values. This makes it easy to adapt common deep learning models currently in use to produce probabilistic forecasting estimates that better account for uncertainty. The proposal is validated for prediction interval estimation on seven energy time series, using a popular probabilistic model called Mean Variance Estimation (MVE) as the deep model adapted with the technique.
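Monte Carlo dropout itself is only a few lines. A minimal sketch (ours, with an arbitrary toy network; the dropout rate and sample count are assumptions):

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))

    def mc_dropout_predict(net, x, n_samples=100):
        net.train()                                # keep nn.Dropout stochastic at test time
        with torch.no_grad():
            samples = torch.stack([net(x) for _ in range(n_samples)])
        return samples.mean(0), samples.std(0)     # predictive mean and model uncertainty

    mean, std = mc_dropout_predict(net, torch.randn(32, 10))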
8

Boursin, Nicolas, Carl Remlinger, and Joseph Mikael. "Deep Generators on Commodity Markets Application to Deep Hedging". Risks 11, no. 1 (December 23, 2022): 7. http://dx.doi.org/10.3390/risks11010007.

Abstract
Four deep generative methods for time series are studied on commodity markets and compared with classical probabilistic models. The lack of data in the case of deep hedgers is a common flaw, which deep generative methods seek to address. In the specific case of commodities, it turns out that these generators can also be used to refine the price models by tackling the high-dimensional challenges. In this work, the synthetic time series of commodity prices produced by such generators are studied and then used to train deep hedgers on various options. A fully data-driven approach to commodity risk management is thus proposed, from synthetic price generation to learning risk hedging policies.
9

Zuidberg Dos Martires, Pedro. "Probabilistic Neural Circuits". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 17280–89. http://dx.doi.org/10.1609/aaai.v38i15.29675.

Abstract
Probabilistic circuits (PCs) have gained prominence in recent years as a versatile framework for discussing probabilistic models that support tractable queries and are yet expressive enough to model complex probability distributions. Nevertheless, tractability comes at a cost: PCs are less expressive than neural networks. In this paper we introduce probabilistic neural circuits (PNCs), which strike a balance between PCs and neural nets in terms of tractability and expressive power. Theoretically, we show that PNCs can be interpreted as deep mixtures of Bayesian networks. Experimentally, we demonstrate that PNCs constitute powerful function approximators.
10

Ravuri, Suman, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons et al. "Skilful precipitation nowcasting using deep generative models of radar". Nature 597, no. 7878 (September 29, 2021): 672–77. http://dx.doi.org/10.1038/s41586-021-03854-z.

Abstract
Precipitation nowcasting, the high-resolution forecasting of precipitation up to two hours ahead, supports the real-world socioeconomic needs of many sectors reliant on weather-dependent decision-making1,2. State-of-the-art operational nowcasting methods typically advect precipitation fields with radar-based wind estimates, and struggle to capture important non-linear events such as convective initiations3,4. Recently introduced deep learning methods use radar to directly predict future rain rates, free of physical constraints5,6. While they accurately predict low-intensity rainfall, their operational utility is limited because their lack of constraints produces blurry nowcasts at longer lead times, yielding poor performance on rarer medium-to-heavy rain events. Here we present a deep generative model for the probabilistic nowcasting of precipitation from radar that addresses these challenges. Using statistical, economic and cognitive measures, we show that our method provides improved forecast quality, forecast consistency and forecast value. Our model produces realistic and spatiotemporally consistent predictions over regions up to 1,536 km × 1,280 km and with lead times from 5–90 min ahead. Using a systematic evaluation by more than 50 expert meteorologists, we show that our generative model ranked first for its accuracy and usefulness in 89% of cases against two competitive methods. When verified quantitatively, these nowcasts are skillful without resorting to blurring. We show that generative nowcasting can provide probabilistic predictions that improve forecast value and support operational utility, and at resolutions and lead times where alternative methods struggle.
11

Adams, Jadie. "Probabilistic Shape Models of Anatomy Directly from Images". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16107–8. http://dx.doi.org/10.1609/aaai.v37i13.26914.

Abstract
Statistical shape modeling (SSM) is an enabling tool in medical image analysis as it allows for population-based quantitative analysis. The traditional pipeline for landmark-based SSM from images requires painstaking and cost-prohibitive steps. My thesis aims to leverage probabilistic deep learning frameworks to streamline the adoption of SSM in biomedical research and practice. The expected outcomes of this work will be new frameworks for SSM that (1) provide reliable and calibrated uncertainty quantification, (2) are effective given limited or sparsely annotated/incomplete data, and (3) can make predictions from incomplete 4D spatiotemporal data. These efforts will reduce required costs and manual labor for anatomical SSM, helping SSM become a more viable clinical tool and advancing medical practice.
12

Qian, Weizhu, Fabrice Lauri, and Franck Gechter. "Supervised and semi-supervised deep probabilistic models for indoor positioning problems". Neurocomputing 435 (May 2021): 228–38. http://dx.doi.org/10.1016/j.neucom.2020.12.131.

13

Sinha, Mourani, Mrinmoyee Bhattacharya, M. Seemanth, and Suchandra A. Bhowmick. "Probabilistic Models and Deep Learning Models Assessed to Estimate Design and Operational Ocean Wave Statistics to Reduce Coastal Hazards". Geosciences 13, no. 12 (December 12, 2023): 380. http://dx.doi.org/10.3390/geosciences13120380.

Abstract
Probabilistic models for long-term estimations and deep learning models for short-term predictions have been evaluated and analyzed for ocean wave parameters. Estimation of design and operational wave parameters for long-term return periods is essential for various coastal and ocean engineering applications. Three probability distributions, namely generalized extreme value distribution (EV), generalized Pareto distribution (PD), and Weibull distribution (WD), have been considered in this work. The design wave parameter considered is the maximal wave height for a specified return period, and the operational wave parameters are the mean maximal wave height and the highest occurring maximal wave height. For precise location-based estimation, wave heights are considered from a nested wave model, which has been configured to have a 10 km spatial resolution. As per availability, buoy-observed data are utilized for validation purposes at the Agatti, Digha, Gopalpur, and Ratnagiri stations along the Indian coasts. At the stations mentioned above, the long short-term memory (LSTM)-based deep learning model is applied to provide short-term predictions with higher accuracy. The probabilistic approach for long-term estimation and the deep learning model for short-term prediction can be used in combination to forecast wave statistics along the coasts, reducing hazards.
14

Andrianomena, Sambatra. "Probabilistic learning for pulsar classification". Journal of Cosmology and Astroparticle Physics 2022, no. 10 (October 1, 2022): 016. http://dx.doi.org/10.1088/1475-7516/2022/10/016.

Abstract
In this work, we explore the possibility of using probabilistic learning to identify pulsar candidates. We make use of Deep Gaussian Processes (DGP) and Deep Kernel Learning (DKL). Trained on a balanced training set in order to avoid the effect of class imbalance, the performance of the models, achieving a relatively high probability of differentiating the positive class from the negative one (roc-auc ∼ 0.98), is very promising overall. We estimate the predictive entropy of each model's predictions and find that DKL is more confident than DGP in its predictions and provides better uncertainty calibration. Upon investigating the effect of training with an imbalanced dataset on the models, results show that each model's performance decreases with an increasing number of majority-class examples in the training set. Interestingly, with the number of negative-class examples 10× that of the positive class, the models still provide reasonably well calibrated uncertainty, i.e. an expected Uncertainty Calibration Error (UCE) of less than 6%. We also show how, given a relatively small amount of training data, a convolutional neural network based classifier trained via Bayesian Active Learning by Disagreement (BALD) performs. We find that, with an optimized number of training examples, the model — being the most confident in its predictions — generalizes relatively well and produces the best uncertainty calibration, corresponding to UCE = 3.118%.
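The predictive entropy used to compare the models' confidence can be computed directly from Monte Carlo samples of class probabilities. A short sketch (ours; the tensor shapes are illustrative assumptions):

    import torch

    def predictive_entropy(probs):
        """probs: (n_mc_samples, batch, n_classes) -> (batch,) entropy in nats."""
        p_mean = probs.mean(dim=0)                 # average the MC predictions
        return -(p_mean * torch.log(p_mean + 1e-12)).sum(dim=-1)

    probs = torch.softmax(torch.randn(50, 8, 2), dim=-1)  # 50 MC draws, 8 candidates, 2 classes
    print(predictive_entropy(probs))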
15

D’Andrea, Fabio, Pierre Gentine, Alan K. Betts, and Benjamin R. Lintner. "Triggering Deep Convection with a Probabilistic Plume Model". Journal of the Atmospheric Sciences 71, no. 11 (October 29, 2014): 3881–901. http://dx.doi.org/10.1175/jas-d-13-0340.1.

Abstract
A model unifying the representation of the planetary boundary layer and dry, shallow, and deep convection, the probabilistic plume model (PPM), is presented. Its capacity to reproduce the triggering of deep convection over land is analyzed in detail. The model accurately reproduces the timing of shallow convection and of deep convection onset over land, which is a major issue in many current general climate models. PPM is based on a distribution of plumes with varying thermodynamic states (potential temperature and specific humidity) induced by surface-layer turbulence. Precipitation is computed by a simple ice microphysics, and with the onset of precipitation, downdrafts are initiated and lateral entrainment of environmental air into updrafts is reduced. The most buoyant updrafts are responsible for the triggering of moist convection, causing the rapid growth of clouds and precipitation. Organization of turbulence in the subcloud layer is induced by unsaturated downdrafts, and the effect of density currents is modeled through a reduction of the lateral entrainment. The reduction of entrainment induces further development from the precipitating congestus phase to full deep cumulonimbus. Model validation is performed by comparing cloud base, cloud-top heights, timing of precipitation, and environmental profiles against cloud-resolving models and large-eddy simulations for two test cases. These comparisons demonstrate that PPM triggers deep convection at the proper time in the diurnal cycle and produces reasonable precipitation. On the other hand, PPM underestimates cloud-top height.
16

Murad, Abdulmajid, Frank Alexander Kraemer, Kerstin Bach, and Gavin Taylor. "Probabilistic Deep Learning to Quantify Uncertainty in Air Quality Forecasting". Sensors 21, no. 23 (November 30, 2021): 8009. http://dx.doi.org/10.3390/s21238009.

Abstract
Data-driven forecasts of air quality have recently achieved more accurate short-term predictions. However, despite their success, most of the current data-driven solutions lack proper quantifications of model uncertainty that communicate how much to trust the forecasts. Recently, several practical tools to estimate uncertainty have been developed in probabilistic deep learning. However, there have not been empirical applications and extensive comparisons of these tools in the domain of air quality forecasts. Therefore, this work applies state-of-the-art techniques of uncertainty quantification in a real-world setting of air quality forecasts. Through extensive experiments, we describe training probabilistic models and evaluate their predictive uncertainties based on empirical performance, reliability of confidence estimate, and practical applicability. We also propose improving these models using “free” adversarial training and exploiting temporal and spatial correlation inherent in air quality data. Our experiments demonstrate that the proposed models perform better than previous works in quantifying uncertainty in data-driven air quality forecasts. Overall, Bayesian neural networks provide a more reliable uncertainty estimate but can be challenging to implement and scale. Other scalable methods, such as deep ensemble, Monte Carlo (MC) dropout, and stochastic weight averaging-Gaussian (SWAG), can perform well if applied correctly but with different tradeoffs and slight variations in performance metrics. Finally, our results show the practical impact of uncertainty estimation and demonstrate that, indeed, probabilistic models are more suitable for making informed decisions.
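Of the scalable methods compared above, the deep ensemble is the simplest to sketch: train M independently initialized networks that each output a Gaussian, then pool their predictions. The sketch below is ours; the sizes, ensemble count, and (mu, log-variance) head are illustrative assumptions.

    import torch
    import torch.nn as nn

    def make_net():
        # Each member outputs (mu, log_var) for a Gaussian predictive distribution.
        return nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))

    ensemble = [make_net() for _ in range(5)]      # members would be trained independently

    @torch.no_grad()
    def ensemble_predict(nets, x):
        mus, vars_ = [], []
        for net in nets:
            mu, log_var = net(x).chunk(2, dim=-1)
            mus.append(mu)
            vars_.append(log_var.exp())
        mu = torch.stack(mus).mean(0)
        # total variance = mean aleatoric variance + spread of the member means
        var = torch.stack(vars_).mean(0) + torch.stack(mus).var(0)
        return mu, var

    mu, var = ensemble_predict(ensemble, torch.randn(4, 8))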
17

Buda-Ożóg, Lidia. "Probabilistic assessment of load-bearing capacity of deep beams designed by strut-and-tie method". MATEC Web of Conferences 262 (2019): 08001. http://dx.doi.org/10.1051/matecconf/201926208001.

Abstract
This paper presents a probabilistic assessment of load-bearing capacity and reliability for different strut-and-tie models (STMs) of deep beams. Six deep beams with different reinforcement arrangements obtained on the basis of STMs, but with the same overall geometry and loading pattern, were analysed. The strut-and-tie models used for the D-regions of the analysed elements have been verified and optimised by different researchers. In order to assess the load-bearing capacity of these elements probabilistically, stochastic modelling was performed. In the presented probabilistic analysis, the ATENA software, the SARA software, and the CAST (computer-aided strut-and-tie) design tool were used. The reliability analysis showed that STM optimization should be treated as a multi-criteria problem, so that the resulting models are characterized by optimal stiffness for the assumed volume or weight and by maximum reliability.
18

Duan, Yun. "A Novel Interval Energy-Forecasting Method for Sustainable Building Management Based on Deep Learning". Sustainability 14, no. 14 (July 13, 2022): 8584. http://dx.doi.org/10.3390/su14148584.

Abstract
Energy conservation in buildings has increasingly become a hot issue for the Chinese government. Compared to deterministic load prediction, probabilistic load forecasting is more suitable for long-term planning and management of building energy consumption. In this study, we propose a probabilistic load-forecasting method for daily and weekly indoor load. The methodology is based on the long short-term memory (LSTM) model and penalized quantile regression (PQR). A comprehensive analysis for a time period of a year is conducted using the proposed method, with back propagation neural networks (BPNN), support vector machines (SVM), and random forests applied as reference models. Both point prediction and interval prediction are adopted to thoroughly test the prediction performance of the proposed model. Results show that LSTM-PQR outperforms the other three models, with improvements in PICP ranging from 6.4% to 20.9% compared with the other models. This work indicates that the proposed method fits well with probabilistic load forecasting and promises to guide the management of building sustainability in a future carbon-neutral scenario.
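Two of the ingredients named above are easy to state in code: the pinball (quantile) loss that trains a network to emit a given quantile, and the PICP metric used to score the resulting intervals. The sketch is ours and the shapes are illustrative:

    import torch

    def pinball_loss(y, y_hat, q):
        """Quantile-regression loss for quantile level q in (0, 1)."""
        diff = y - y_hat
        return torch.maximum(q * diff, (q - 1) * diff).mean()

    def picp(y, lower, upper):
        """Prediction Interval Coverage Probability: share of targets inside the band."""
        return ((y >= lower) & (y <= upper)).float().mean()

    y = torch.randn(100)
    lo, hi = y - 1.0, y + 1.0           # stand-in lower/upper quantile forecasts
    print(pinball_loss(y, lo, q=0.1), picp(y, lo, hi))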
19

Mashlakov, Aleksei, Toni Kuronen, Lasse Lensu, Arto Kaarna, and Samuli Honkapuro. "Assessing the performance of deep learning models for multivariate probabilistic energy forecasting". Applied Energy 285 (March 2021): 116405. http://dx.doi.org/10.1016/j.apenergy.2020.116405.

20

Liu, Mao-Yi, Zheng Li, and Hang Zhang. "Probabilistic Shear Strength Prediction for Deep Beams Based on Bayesian-Optimized Data-Driven Approach". Buildings 13, no. 10 (September 28, 2023): 2471. http://dx.doi.org/10.3390/buildings13102471.

Abstract
To ensure the safety of buildings, accurate and robust prediction of a reinforced concrete deep beam's shear capacity is necessary to avoid unpredictable accidents caused by brittle failure. However, the failure mechanism of reinforced concrete deep beams is very complicated, has not been fully elucidated, and cannot be accurately described by simple equations. To address this issue, machine learning techniques have been utilized and corresponding prediction models have been developed. Nevertheless, these models can only provide deterministic predictions of the scalar type, with an uncertain confidence level, so their results cannot be used for the design and assessment of deep beams. Therefore, in this paper, a probabilistic prediction approach for the shear strength of reinforced concrete deep beams is proposed based on the natural gradient boosting (NGBoost) algorithm trained on a collected database. A database of 267 deep beam experiments was utilized, with 14 key parameters identified as inputs related to the beam geometry, material properties, and reinforcement details. The proposed NGBoost model was compared to empirical formulas from design codes and other machine learning methods. The results showed that the NGBoost model achieved higher accuracy in mean shear strength prediction, with an R2 of 0.9045 and an RMSE of 38.8 kN, outperforming existing formulas by over 50%. Additionally, the NGBoost model provided probabilistic predictions of shear strength as probability density functions, enabling reliable confidence intervals. This demonstrates the capability of the data-driven NGBoost approach for robust shear strength evaluation of RC deep beams. Overall, the results illustrate that the proposed probabilistic prediction approach dramatically surpasses the formulas currently adopted in design codes and machine learning models in both prediction accuracy and robustness.
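For readers who want to try the underlying technique, the open-source ngboost package exposes natural gradient boosting with a distributional output in a scikit-learn-style API. The sketch below is ours, on synthetic data; it is not the authors' 14-parameter setup.

    import numpy as np
    from ngboost import NGBRegressor
    from ngboost.distns import Normal

    X = np.random.rand(267, 14)                            # stand-in for the deep-beam database
    y = 200 + 400 * X[:, 0] + 30 * np.random.randn(267)    # synthetic shear strengths (kN)

    ngb = NGBRegressor(Dist=Normal, n_estimators=500)
    ngb.fit(X, y)
    dist = ngb.pred_dist(X[:5])                            # full predictive distributions
    print(dist.params["loc"], dist.params["scale"])        # per-specimen mean and std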
21

Nye, Logan, Hamid Ghaednia, and Joseph H. Schwab. "Generating synthetic samples of chondrosarcoma histopathology with a denoising diffusion probabilistic model." Journal of Clinical Oncology 41, no. 16_suppl (June 1, 2023): e13592–e13592. http://dx.doi.org/10.1200/jco.2023.41.16_suppl.e13592.

Abstract
Background: The emergence of digital pathology — an image-based environment for the acquisition, management, and interpretation of pathology information supported by computational techniques for data extraction and analysis — is changing the pathology ecosystem. The development of machine-learning approaches for the extraction of information from image data allows for tissue interrogation in a way that was not previously possible. However, creating digital pathology algorithms requires large volumes of training data, often on the order of thousands of histopathology slides. This becomes problematic for rare diseases, where imaging datasets of such size do not exist, making it impossible to train digital pathology models for these rare conditions. However, recent advances in generative deep learning models may provide a method for overcoming this lack of histology data for rare diseases. Pre-trained diffusion-based probabilistic models can be used to create photorealistic variations of existing images. In this study, we explored the potential of using a deep generative model created by OpenAI to produce synthetic histopathology images, using chondrosarcoma as our rare tumor of interest. Methods: Our team compiled a dataset of 55 chondrosarcoma histopathology images from the annotated records of Dr. Henry Jaffe, a pioneering authority in musculoskeletal pathology. We built a deep learning image-generation application in a Jupyter notebook environment, iterating upon OpenAI's DALL-E application programming interface (API) with the Python programming language. Using the chondrosarcoma histology dataset and NVIDIA GPUs, we trained the deep learning application to generate multiple synthetic variations of each real chondrosarcoma image. Results: After several hours, the deep learning model successfully generated 1,000 images of chondrosarcoma from 55 original images. The synthetic histology images retained photorealistic quality and displayed characteristic cellular features of chondrosarcoma tumor tissue. Conclusions: Deep generative models may be useful in addressing issues of data scarcity in rare diseases, such as chondrosarcoma. For example, in situations where existing imaging data are insufficient for training diagnostic computer vision models, diffusion-based generative models could be applied to create training datasets. However, further exploration of ethical considerations and qualitative analyses of these generated data are needed.
22

Bentivoglio, Roberto, Elvin Isufi, Sebastian Nicolaas Jonkman, and Riccardo Taormina. "Deep learning methods for flood mapping: a review of existing applications and future research directions". Hydrology and Earth System Sciences 26, no. 16 (August 25, 2022): 4345–78. http://dx.doi.org/10.5194/hess-26-4345-2022.

Abstract
Deep learning techniques have been increasingly used in flood management to overcome the limitations of accurate, yet slow, numerical models and to improve the results of traditional methods for flood mapping. In this paper, we review 58 recent publications to outline the state of the art of the field, identify knowledge gaps, and propose future research directions. The review focuses on the type of deep learning models used for various flood mapping applications, the flood types considered, the spatial scale of the studied events, and the data used for model development. The results show that models based on convolutional layers are usually more accurate, as they leverage inductive biases to better process the spatial characteristics of the flooding events. Models based on fully connected layers, instead, provide accurate results when coupled with other statistical models. Deep learning models showed increased accuracy when compared to traditional approaches and increased speed when compared to numerical methods. While there exist several applications in flood susceptibility, inundation, and hazard mapping, more work is needed to understand how deep learning can assist in real-time flood warning during an emergency and how it can be employed to estimate flood risk. A major challenge lies in developing deep learning models that can generalize to unseen case studies. Furthermore, all reviewed models and their outputs are deterministic, with limited considerations for uncertainties in outcomes and probabilistic predictions. The authors argue that these identified gaps can be addressed by exploiting recent fundamental advancements in deep learning or by taking inspiration from developments in other applied areas. Models based on graph neural networks and neural operators can work with arbitrarily structured data and thus should be capable of generalizing across different case studies and could account for complex interactions with the natural and built environment. Physics-based deep learning can be used to preserve the underlying physical equations resulting in more reliable speed-up alternatives for numerical models. Similarly, probabilistic models can be built by resorting to deep Gaussian processes or Bayesian neural networks.
23

Edie, Stewart M., Peter D. Smits, and David Jablonski. "Probabilistic models of species discovery and biodiversity comparisons". Proceedings of the National Academy of Sciences 114, no. 14 (March 21, 2017): 3666–71. http://dx.doi.org/10.1073/pnas.1616355114.

Abstract
Inferring large-scale processes that drive biodiversity hinges on understanding the phylogenetic and spatial pattern of species richness. However, clades and geographic regions are accumulating newly described species at an uneven rate, potentially affecting the stability of currently observed diversity patterns. Here, we present a probabilistic model of species discovery to assess the uncertainty in diversity levels among clades and regions. We use a Bayesian time series regression to estimate the long-term trend in the rate of species description for marine bivalves and find a distinct spatial bias in the accumulation of new species. Despite these biases, probabilistic estimates of future species richness show considerable stability in the currently observed rank order of regional diversity. However, absolute differences in richness are still likely to change, potentially modifying the correlation between species numbers and geographic, environmental, and biological factors thought to promote biodiversity. Applied to scallops and related clades, we find that accumulating knowledge of deep-sea species will likely shift the relative richness of these three families, emphasizing the need to consider the incomplete nature of bivalve taxonomy in quantitative studies of its diversity. Along with estimating expected changes to observed patterns of diversity, the model described in this paper pinpoints geographic areas and clades most urgently requiring additional systematic study—an important practice for building more complete and accurate models of biodiversity dynamics that can inform ecological and evolutionary theory and improve conservation practice.
24

Avaylon, Matthew, Robbie Sadre, Zhe Bai, and Talita Perciano. "Adaptable Deep Learning and Probabilistic Graphical Model System for Semantic Segmentation". Advances in Artificial Intelligence and Machine Learning 02, no. 01 (2022): 288–302. http://dx.doi.org/10.54364/aaiml.2022.1119.

Abstract
Semantic segmentation algorithms based on deep learning architectures have been applied to a diverse set of problems. Consequently, new methodologies have emerged to push the state of the art in this field forward, and the need for powerful, user-friendly software has increased significantly. The combination of Conditional Random Fields (CRFs) and Convolutional Neural Networks (CNNs) has boosted the results of pixel-level classification predictions. Recent work using a fully integrated CRF-RNN layer has shown strong advantages in segmentation benchmarks over the base models. Despite this success, the rigidity of these frameworks prevents mass adaptability for complex scientific datasets and presents challenges in optimally scaling these models. In this work, we introduce a new encoder-decoder system that overcomes both of these issues. We adapt multiple CNNs as encoders, allowing for the definition of multiple function parameter arguments to structure the models according to the targeted datasets and scientific problem. We leverage the flexibility of the U-Net architecture to act as a scalable decoder. The CRF-RNN layer is integrated into the decoder as an optional final layer, keeping the entire system fully compatible with back-propagation. To evaluate the performance of our implementation, we performed experiments on the Oxford-IIIT Pet Dataset and on experimental scientific data acquired via micro-computed tomography (microCT), revealing the adaptability of this framework and the performance benefits of a fully end-to-end CNN-CRF system on both experimental and benchmark datasets.
25

Sansine, Vateanui, Pascal Ortega, Daniel Hissel, and Franco Ferrucci. "Hybrid Deep Learning Model for Mean Hourly Irradiance Probabilistic Forecasting". Atmosphere 14, no. 7 (July 24, 2023): 1192. http://dx.doi.org/10.3390/atmos14071192.

Abstract
For grid stability, operation, and planning, solar irradiance forecasting is crucial. In this paper, we provide a method for predicting the Global Horizontal Irradiance (GHI) mean values one hour in advance. Sky images are utilized for training the various forecasting models along with measured meteorological data in order to account for the short-term variability of solar irradiance, which is mostly caused by the presence of clouds in the sky. Additionally, deep learning models like the multilayer perceptron (MLP), convolutional neural networks (CNN), long short-term memory (LSTM), or their hybridized forms are widely used for deterministic solar irradiance forecasting. The implementation of probabilistic solar irradiance forecasting, which is gaining prominence in grid management since it offers information on the likelihood of different outcomes, is another task we carry out using quantile regression. The novelty of this paper lies in the combination of a hybrid deep learning model (CNN-LSTM) with quantile regression for the computation of prediction intervals at different confidence levels. The training of the different machine learning algorithms is performed over a year’s worth of sky images and meteorological data from the years 2019 to 2020. The data were measured at the University of French Polynesia (17.5770° S, 149.6092° W), on the island of Tahiti, which has a tropical climate. Overall, the hybrid model (CNN-LSTM) is the best performing and most accurate in terms of deterministic and probabilistic metrics. In addition, it was found that the CNN, LSTM, and ANN show good results against persistence.
26

Hou, Yuxin, Ari Heljakka, and Arno Solin. "Gaussian Process Priors for View-Aware Inference". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7762–70. http://dx.doi.org/10.1609/aaai.v35i9.16948.

Abstract
While frame-independent predictions with deep neural networks have become the prominent solution to many computer vision tasks, the potential benefits of utilizing correlations between frames have received less attention. Even though probabilistic machine learning provides the ability to encode correlation as prior knowledge for inference, there is a tangible gap between the theory and practice of applying probabilistic methods to modern vision problems. To address this, we derive a principled framework to combine information coupling between camera poses (translation and orientation) with deep models. We propose a novel view kernel that generalizes the standard periodic kernel in SO(3). We show how this soft prior knowledge can aid several pose-related vision tasks, such as novel view synthesis and prediction of arbitrary points in the latent space of generative models, pointing towards a range of new applications for inter-frame reasoning.
27

Nguyen, Minh Truong, Viet-Hung Dang, and Truong-Thang Nguyen. "Applying Bayesian neural network to evaluate the influence of specialized mini projects on final performance of engineering students: A case study". Ministry of Science and Technology, Vietnam 64, no. 4 (December 15, 2022): 10–15. http://dx.doi.org/10.31276/vjste.64(4).10-15.

Abstract
In this article, deep learning probabilistic models are applied to a case study evaluating the influence of specialized mini projects (SMPs) on the performance of engineering students on their final year project (FYP) and cumulative grade point average (CGPA). This approach also creates a basis for predicting the final performance of undergraduate students from their SMP scores, which is a vital characteristic of engineering training. The study is conducted in two steps: (i) establishing a database by collecting 2890 SMP and FYP scores and the associated CGPA of a group of engineering students who graduated in 2022 in Hanoi; and (ii) engineering two deep learning probabilistic models based on Bayesian neural networks (BNNs) with architectures of 8/16/16/1 and 9/16/16/1 for FYP and CGPA, respectively. The significance of this study is that the proposed probabilistic models are capable of (i) providing reasonable analysis results, such as the feature importance scores of individual SMPs as well as estimated FYP and CGPA; and (ii) producing relatively close estimates, with mean relative errors from 6.8% to 12.1%. Based on the obtained results, academic activities to support student progress can be proposed for engineering universities.
28

Nor, Ahmad Kamal Mohd. "Failure Prognostic of Turbofan Engines with Uncertainty Quantification and Explainable AI (XAI)". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 11, 2021): 3494–504. http://dx.doi.org/10.17762/turcomat.v12i3.1624.

Abstract
Deep learning is quickly becoming essential to the human ecosystem. However, the opacity of certain deep learning models poses a legal barrier to their adoption for greater purposes. Explainable AI (XAI) is a recent paradigm intended to tackle this issue: it explains the prediction mechanism produced by black-box AI models, making it extremely practical for safety, security, or financially important decision making. In another respect, most deep learning studies are based on point-estimate predictions with no measure of uncertainty, which is vital for decision making, so these works are not suitable for real-world applications. This paper presents a Remaining Useful Life (RUL) estimation problem for turbofan engines equipped with prognostic explainability and uncertainty quantification. A single-input, multi-output probabilistic Long Short-Term Memory (LSTM) network is employed to predict the RUL distributions of the turbofans, and the SHapley Additive exPlanations (SHAP) approach is applied to explain the prognostics made. The explainable probabilistic LSTM is thus able to express its confidence in its predictions and explain the produced estimates. The performance of the proposed method is comparable to several other published works.
29

Ghobadi, Fatemeh, and Doosun Kang. "Multi-Step Ahead Probabilistic Forecasting of Daily Streamflow Using Bayesian Deep Learning: A Multiple Case Study". Water 14, no. 22 (November 14, 2022): 3672. http://dx.doi.org/10.3390/w14223672.

Abstract
In recent decades, natural calamities such as drought and flood have caused widespread economic and social damage. Climate change and rapid urbanization contribute to the occurrence of natural disasters. In addition, their destructive impact has been altered, posing significant challenges to the efficiency, equity, and sustainability of water resources allocation and management. Uncertainty estimation in hydrology is essential for water resources management. By quantifying the associated uncertainty of reliable hydrological forecasting, an efficient water resources management plan is obtained. Moreover, reliable forecasting provides significant future information to assist risk assessment. Currently, the majority of hydrological forecasts utilize deterministic approaches. Nevertheless, deterministic forecasting models cannot account for the intrinsic uncertainty of forecasted values. Using the Bayesian deep learning approach, this study developed a probabilistic forecasting model that covers the pertinent subproblem of univariate time series models for multi-step ahead daily streamflow forecasting to quantify epistemic and aleatory uncertainty. The new model implements Bayesian sampling in the long short-term memory (LSTM) neural network by using variational inference to approximate the posterior distribution. The proposed method is verified with three case studies in the USA and three forecasting horizons. LSTM as a point forecasting neural network model and three probabilistic forecasting models, namely LSTM-BNN, BNN, and LSTM with Monte Carlo (MC) dropout (LSTM-MC), were applied for comparison with the proposed model. The results show that the proposed Bayesian long short-term memory (BLSTM) model outperforms the other models in terms of forecasting reliability, sharpness, and overall performance. The results reveal that all probabilistic forecasting models outperformed the deterministic model with a lower RMSE value. Furthermore, the uncertainty estimation results show that BLSTM can handle data with higher variation and peaks, particularly for long-term multi-step ahead streamflow forecasting, compared to the other models.
30

Bentsen, Lars Ødegaard, Narada Dilp Warakagoda, Roy Stenbro, and Paal Engelstad. "Probabilistic Wind Park Power Prediction using Bayesian Deep Learning and Generative Adversarial Networks". Journal of Physics: Conference Series 2362, no. 1 (November 1, 2022): 012005. http://dx.doi.org/10.1088/1742-6596/2362/1/012005.

Abstract
The rapid depletion of fossil-based energy supplies, along with the growing reliance on renewable resources, has placed supreme importance on the predictability of renewables. Research focusing on wind park power modelling has mainly been concerned with point estimators, while most probabilistic studies have been reserved for forecasting. In this paper, a few different approaches to estimating probability distributions for individual turbine powers in a real off-shore wind farm were studied. Two variational Bayesian inference models were used, one employing a multilayered perceptron and another a graph neural network (GNN) architecture. Furthermore, generative adversarial networks (GANs) have recently been proposed as Bayesian models and were investigated here as a novel area of research. The results showed that the two Bayesian models outperformed the GAN model with regard to mean absolute error (MAE), with the GNN architecture yielding the best results. The GAN, on the other hand, seemed potentially better at generating diverse distributions. Standard deviations of the predicted distributions were found to have a positive correlation with MAEs, indicating that the models could correctly provide estimates of the confidence associated with particular predictions.
31

Lee, Taehee, Devin Rand, Lorraine E. Lisiecki, Geoffrey Gebbie, and Charles Lawrence. "Bayesian age models and stacks: combining age inferences from radiocarbon and benthic δ18O stratigraphic alignment". Climate of the Past 19, no. 10 (October 17, 2023): 1993–2012. http://dx.doi.org/10.5194/cp-19-1993-2023.

Abstract
Previously developed software packages that generate probabilistic age models for ocean sediment cores are designed to either interpolate between different age proxies at discrete depths (e.g., radiocarbon, tephra layers, or tie points) or perform a probabilistic stratigraphic alignment to a dated target (e.g., of benthic δ18O) and cannot combine age inferences from both techniques. Furthermore, many radiocarbon dating packages are not specifically designed for marine sediment cores, and the default settings may not accurately reflect the probability of sedimentation rate variability in the deep ocean, thus requiring subjective tuning of the parameter settings. Here we present a new technique for generating Bayesian age models and stacks using ocean sediment core radiocarbon and probabilistic alignment of benthic δ18O data, implemented in a software package named BIGMACS (Bayesian Inference Gaussian Process regression and Multiproxy Alignment of Continuous Signals). BIGMACS constructs multiproxy age models by combining age inferences from both radiocarbon ages and probabilistic benthic δ18O stratigraphic alignment and constrains sedimentation rates using an empirically derived prior model based on 37 14C-dated ocean sediment cores (Lin et al., 2014). BIGMACS also constructs continuous benthic δ18O stacks via a Gaussian process regression, which requires a smaller number of cores than previous stacking methods. This feature allows users to construct stacks for a region that shares a homogeneous deep-water δ18O signal, while leveraging radiocarbon dates across multiple cores. Thus, BIGMACS efficiently generates local or regional stacks with smaller uncertainties in both age and δ18O than previously available techniques. We present two example regional benthic δ18O stacks and demonstrate that the multiproxy age models produced by BIGMACS are more precise than their single-proxy counterparts.
32

Li, Longyuan, Jihai Zhang, Junchi Yan, Yaohui Jin, Yunhao Zhang, Yanjie Duan, and Guangjian Tian. "Synergetic Learning of Heterogeneous Temporal Sequences for Multi-Horizon Probabilistic Forecasting". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8420–28. http://dx.doi.org/10.1609/aaai.v35i10.17023.

Abstract
Time series are ubiquitous across applications such as transportation, finance, and healthcare. They are often influenced by external factors, especially in the form of asynchronous events, making forecasting difficult. However, existing models are mainly designed for either synchronous time series or asynchronous event sequences, and can hardly provide a synthetic way to capture the relation between them. We propose the Variational Synergetic Multi-Horizon Network (VSMHN), a novel deep conditional generative model. To learn complex correlations across heterogeneous sequences, a tailored encoder is devised to combine advances in deep point process models and variational recurrent neural networks. In addition, an aligned time coding and an auxiliary transition scheme are carefully devised for batched training on unaligned sequences. Our model can be trained effectively using stochastic variational inference and generates probabilistic predictions with Monte Carlo simulation. Furthermore, our model produces accurate, sharp, and more realistic probabilistic forecasts. We also show that modeling asynchronous event sequences is crucial for multi-horizon time-series forecasting.
33

Pang, Bo, Erik Nijkamp, and Ying Nian Wu. "Deep Learning With TensorFlow: A Review". Journal of Educational and Behavioral Statistics 45, no. 2 (September 10, 2019): 227–48. http://dx.doi.org/10.3102/1076998619872761.

Abstract
This review covers the core concepts and design decisions of TensorFlow. TensorFlow, originally created by researchers at Google, is the most popular one among the plethora of deep learning libraries. In the field of deep learning, neural networks have achieved tremendous success and gained wide popularity in various areas. This family of models also has tremendous potential to promote data analysis and modeling for various problems in educational and behavioral sciences given its flexibility and scalability. We give the reader an overview of the basics of neural network models such as the multilayer perceptron, the convolutional neural network, and stochastic gradient descent, the most commonly used optimization method for neural network models. However, the implementation of these models and optimization algorithms is time-consuming and error-prone. Fortunately, TensorFlow greatly eases and accelerates the research and application of neural network models. We review several core concepts of TensorFlow such as graph construction functions, graph execution tools, and TensorFlow’s visualization tool, TensorBoard. Then, we apply these concepts to build and train a convolutional neural network model to classify handwritten digits. This review is concluded by a comparison of low- and high-level application programming interfaces and a discussion of graphical processing unit support, distributed training, and probabilistic modeling with TensorFlow Probability library.
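As a small taste of the TensorFlow Probability library mentioned at the end of the review, distributions are first-class objects with differentiable log-densities and sampling; this minimal sketch is ours, not from the review.

    import tensorflow as tf
    import tensorflow_probability as tfp

    tfd = tfp.distributions
    dist = tfd.Normal(loc=0.0, scale=1.0)
    x = tf.constant([0.0, 1.0, 2.0])
    print(dist.log_prob(x))    # log-density, usable inside a training loss
    print(dist.sample(3))      # sampling, reparameterized for gradient-based VI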
34

Lim, Heejong, Kwanghun Chung, and Sangbok Lee. "Probabilistic Forecasting for Demand of a Bike-Sharing Service Using a Deep-Learning Approach". Sustainability 14, no. 23 (November 29, 2022): 15889. http://dx.doi.org/10.3390/su142315889.

Abstract
Efficient and sustainable bike-sharing service (BSS) operations require accurate demand forecasting for bike inventory management and rebalancing. Probabilistic forecasting provides a set of information on uncertainties in demand forecasting, and thus it is suitable for use in stochastic inventory management. Our research objective is to develop probabilistic time-series forecasting for BSS demand. We use an RNN–LSTM-based model, called DeepAR, for the station-wise bike-demand forecasting problem. The deep-learning structure of DeepAR captures complex demand patterns and correlations between the stations in one trained model; therefore, it is not necessary to develop demand-forecasting models for each individual station. DeepAR makes parameter forecast estimates for the probabilistic distribution of target values in the prediction range. We apply DeepAR to estimate the parameters of normal, truncated normal, and negative binomial distributions. We use the BSS dataset from Seoul Metropolitan City to evaluate the model’s performance. We create district- and station-level forecasts, comparing several statistical time-series forecasting methods; as a result, we show that DeepAR outperforms the other models. Furthermore, our district-level evaluation results show that all three distributions are acceptable for demand forecasting; however, the truncated normal distribution tends to overestimate the demand. At the station level, the truncated normal distribution performs the best, with the least forecasting errors out of the three tested distributions.
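To illustrate how a DeepAR-style network emits a count distribution such as the negative binomial, the head below (our sketch, not the DeepAR or GluonTS code; all sizes are assumptions) projects an RNN hidden state to the two parameters and returns a torch distribution whose log-likelihood can be maximized:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NegBinHead(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.proj = nn.Linear(hidden, 2)

        def forward(self, h):
            raw = self.proj(h)
            total_count = F.softplus(raw[..., 0]) + 1e-6   # dispersion r > 0
            logits = raw[..., 1]                           # success log-odds
            return torch.distributions.NegativeBinomial(total_count, logits=logits)

    head = NegBinHead()
    dist = head(torch.randn(32, 64))                 # h = last RNN state per station
    loss = -dist.log_prob(torch.randint(0, 20, (32,)).float()).mean()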
35

Bi, Wei, Wenhua Chen, and Jun Pan. "Multidisciplinary Reliability Design Considering Hybrid Uncertainty Incorporating Deep Learning". Wireless Communications and Mobile Computing 2022 (November 18, 2022): 1–11. http://dx.doi.org/10.1155/2022/5846684.

Abstract
Multidisciplinary reliability design optimization is considered an effective method for solving complex product design optimization problems under the influence of uncertainty factors; however, its high computational cost seriously limits its application in practice. As an important part of multidisciplinary reliability design optimization, multidisciplinary reliability analysis plays a direct leading role in its computational efficiency. At present, multidisciplinary reliability analysis under mixed uncertainty is still carried out in a nested or sequential execution mode, which leads to poor disciplinary autonomy and inefficiency in the reliability analysis of complex products. To this end, a multidisciplinary reliability assessment method integrating deep neural networks and probabilistic computational models is proposed for the problem of multidisciplinary reliability analysis under mixed uncertainty. The method considers stochastic-interval-fuzzy uncertainty; decouples the nested multidisciplinary probability analysis, multidisciplinary likelihood analysis, and multidisciplinary interval analysis; uses deep neural networks to extract subdisciplinary high-dimensional features; and fuses them with probabilistic computational models. Moreover, the whole system is divided into several independent subsystems, the collected reliability data are classified, and the fault data are attributed to each subsystem. Meanwhile, the environmental conditions of the system are considered, and the corresponding environmental factors are added as input neurons along with each subsystem. In this paper, the effectiveness of the proposed method is verified on numerical calculations and real inverter power failure data.
36

Ermolieva T, Ermoliev Y, Zagorodniy A, Bogdanov V, Borodina O, Havlik P, Komendantova N, Knopov P, Gorbachuk V and Zaslavskyi V. "Artificial Intelligence, Machine Learning, and Intelligent Decision Support Systems: Iterative “Learning” SQG-based procedures for Distributed Models’ Linkage". Artificial Intelligence 27, AI.2022.27(2) (29 December 2022): 92–97. http://dx.doi.org/10.15407/jai2022.02.092.

Full text
Abstract
In this paper we discuss the on-going joint work contributing to the IIASA (International Institute for Applied Systems Analysis, Laxenburg, Austria) and National Academy of Sciences of Ukraine projects on “Modeling and management of dynamic stochastic interdependent systems for food-water-energy-health security nexus” (see [1-2] and references therein). The project develops methodological and modeling tools aiming to create an Intelligent multimodel Decision Support System (IDSS) and Platform (IDSP), which can integrate national Food, Water, Energy, and Social models with models operating at the global scale (e.g., IIASA GLOBIOM and MESSAGE), in some cases ‘downscaling’ the results of the latter to a national level. Data harmonization procedures rely on a new type of non-smooth stochastic optimization and stochastic quasigradient (SQG) [3-4] methods for robust off-line and on-line decisions involving large-scale machine learning and Artificial Intelligence (AI) problems, in particular Deep Learning (DL), including deep neural learning with deep artificial neural networks (ANNs). Among the methodological aims of the project is the development of “Models’ Linkage” algorithms, which are at the core of the IDSS as they enable distributed models’ linkage and data integration into one system on a platform [5-8]. The linkage algorithms solve the problem of linking distributed models, e.g., sectorial and/or regional, into inter-sectorial, inter-regional integrated models. The linkage problem can be viewed as a general endogenous reinforcement learning problem of how software agents (models) take decisions in order to maximize the “cumulative reward”. Based on novel ideas of systems’ linkage under asymmetric information and other uncertainties, nested strategic-operational and local-global models are being developed and used in combination with, in general, non-Bayesian probabilistic downscaling procedures. In this paper we illustrate the importance of iterative “learning” solution algorithms based on stochastic quasigradient (SQG) procedures for robust off-line and on-line decisions involving large-scale Machine Learning, Big Data analysis, Distributed Models’ Linkage, and robust decision-making problems. Advanced robust statistical analysis and machine learning models of, in general, nonstationary stochastic optimization make it possible to account for potential distributional shifts, heavy tails, and nonstationarities in data streams that can mislead traditional statistical and machine learning models, in particular deep ANNs. The proposed models and methods rely on probabilistic and non-probabilistic (explicitly given or simulated) distributions combining measures of chance, experts’ beliefs, and similarity measures (for example, a compressed form of kernel estimators). For highly nonconvex models such as deep ANNs, the SQGs help avoid local solutions. In cases of nonstationary data, the SQGs allow for sequential revisions and adaptation of parameters to the changing environment, possibly based on off-line adaptive simulations. The non-smooth stochastic optimization approaches and SQG-based iterative solution procedures are illustrated with examples of robust estimation, models’ linkage, machine learning, and adaptive Monte Carlo optimization for modeling and management of catastrophic risks (floods, earthquakes, etc.).
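For readers unfamiliar with SQG methods, the toy sketch below shows the core iteration x_{k+1} = Π(x_k − ρ_k ξ_k), where ξ_k is a single-sample stochastic subgradient of a non-smooth objective and Π projects onto a feasible box; the objective, box, and step rule are illustrative assumptions, not the project's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_subgradient(x):
    """Noisy subgradient of the non-smooth objective f(x) = E|a.x - b|,
    estimated from one random draw of (a, b)."""
    a = rng.normal(1.0, 0.3, size=x.shape)
    b = rng.normal(2.0, 0.5)
    return a * np.sign(a @ x - b)

x = np.zeros(3)
lo, hi = -5.0, 5.0                                   # feasible box
for k in range(1, 5001):
    rho = 1.0 / k                                    # classic diminishing step size
    x = np.clip(x - rho * stochastic_subgradient(x), lo, hi)  # project onto the box
print("SQG iterate after 5000 steps:", x)
```

The same pattern, with ξ_k coming from linked sub-models instead of a toy objective, is what allows the distributed linkage described above to proceed without assembling one monolithic model.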
37

Liu, Xi, Tao Wu, Yuanyuan An and Yang Liu. "Probabilistic models of the strut efficiency factor for RC deep beams with MCMC method". Structural Concrete 21, no. 3 (22 January 2020): 917–33. http://dx.doi.org/10.1002/suco.201900249.

Full text
38

de Zarzà, I., J. de Curtò, Gemma Roig and Carlos T. Calafate. "LLM Multimodal Traffic Accident Forecasting". Sensors 23, no. 22 (16 November 2023): 9225. http://dx.doi.org/10.3390/s23229225.

Full text
Abstract
With the rise in traffic congestion in urban centers, predicting accidents has become paramount for city planning and public safety. This work comprehensively studied the efficacy of modern deep learning (DL) methods in forecasting traffic accidents and enhancing Level-4 and Level-5 (L-4 and L-5) driving assistants with actionable visual and language cues. Using a rich dataset detailing accident occurrences, we juxtaposed the Transformer model against traditional time series models like ARIMA and the more recent Prophet model. Additionally, through detailed analysis, we delved deep into feature importance using principal component analysis (PCA) loadings, uncovering key factors contributing to accidents. We introduce the idea of using real-time interventions with large language models (LLMs) in autonomous driving with the use of lightweight compact LLMs like LLaMA-2 and Zephyr-7b-α. Our exploration extends to the realm of multimodality, through the use of Large Language-and-Vision Assistant (LLaVA)—a bridge between visual and linguistic cues by means of a Visual Language Model (VLM)—in conjunction with deep probabilistic reasoning, enhancing the real-time responsiveness of autonomous driving systems. In this study, we elucidate the advantages of employing large multimodal models within DL and deep probabilistic programming for enhancing the performance and usability of time series forecasting and feature weight importance, particularly in a self-driving scenario. This work paves the way for safer, smarter cities, underpinned by data-driven decision making.
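One of the classical baselines the study compares against is ARIMA; a minimal statsmodels fit on a univariate accident-count series might look like the sketch below. The order (1, 1, 1) and the synthetic series are assumptions for illustration only.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for a monthly accident-count series.
rng = np.random.default_rng(1)
y = 50 + np.cumsum(rng.normal(0, 2, size=120))

model = ARIMA(y, order=(1, 1, 1))       # (p, d, q) chosen for illustration
fit = model.fit()
forecast = fit.forecast(steps=12)       # 12-step-ahead point forecast
print(forecast)
```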
39

Ali, Abdullah Marish, Fuad A. Ghaleb, Mohammed Sultan Mohammed, Fawaz Jaber Alsolami and Asif Irshad Khan. "Web-Informed-Augmented Fake News Detection Model Using Stacked Layers of Convolutional Neural Network and Deep Autoencoder". Mathematics 11, no. 9 (23 April 2023): 1992. http://dx.doi.org/10.3390/math11091992.

Full text
Abstract
Today, fake news is a growing concern due to its devastating impacts on communities. The rise of social media, which many users consider their main source of news, has exacerbated this issue because individuals can disseminate fake news more quickly and inexpensively, with fewer checks and filters than traditional news media. Numerous approaches have been explored to automate the detection and prevent the spread of fake news. However, achieving accurate detection requires addressing two crucial aspects: obtaining representative features of the news and designing an appropriate model. Most existing solutions rely solely on content-based features, which are insufficient and overlapping. Moreover, most models used for classification are built around a dense feature vector that is unsuitable for short news sentences. To address this problem, this study proposed a Web-Informed-Augmented Fake News Detection Model using Stacked Layers of Convolutional Neural Network and Deep Autoencoder, called ICNN-AEN-DM. The augmented information is gathered from web searches of trusted sources to either support or reject the claims in the news content. Stacked layers of CNN with a deep autoencoder were then constructed to train a probabilistic deep learning-based classifier. The probabilistic outputs of the stacked layers were used to train a decision-making stage by stacking multilayer perceptron (MLP) layers on top of the probabilistic deep learning layers. Results from extensive experiments on challenging datasets show that the proposed model performs better than related models, achieving 26.6% and 8% improvements in detection accuracy and overall detection performance, respectively. Such achievements are promising for reducing the negative impacts of fake news on communities.
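The decision-making stage described above can be pictured as a stacking step: probabilistic outputs of the base branches become features for an MLP meta-classifier. A generic sketch follows; the two-branch setup and dimensions are assumptions, not the paper's exact ICNN-AEN-DM.

```python
import torch
import torch.nn as nn

# Suppose two base branches each emit a probability of "fake" for a batch of items.
p_branch1 = torch.rand(32, 1)           # e.g., CNN branch output
p_branch2 = torch.rand(32, 1)           # e.g., autoencoder branch output

meta = nn.Sequential(                   # MLP decision layer over stacked probabilities
    nn.Linear(2, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)
decision = meta(torch.cat([p_branch1, p_branch2], dim=1))
print(decision.shape)                   # (32, 1) final fake-news probability
```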
40

Chipofya, Mapopa, Hilal Tayara and Kil To Chong. "Deep Probabilistic Learning Model for Prediction of Ionic Liquids Toxicity". International Journal of Molecular Sciences 23, no. 9 (9 May 2022): 5258. http://dx.doi.org/10.3390/ijms23095258.

Full text
Abstract
Identification of ionic liquids with low toxicity is paramount for applications in various domains. Traditional approaches for determining the toxicity of ionic liquids are often expensive, and can be labor-intensive and time-consuming. To mitigate these limitations, researchers have resorted to computational models. This work presents a probabilistic model built from deep kernel learning with the aim of predicting the toxicity of ionic liquids in the leukemia rat cell line (IPC-81). Only open-source tools, namely RDKit and Mol2vec, are required to generate predictors for this model; as such, its predictions are based solely on the chemical structure of the ionic liquids, and no manual feature extraction is needed. The model recorded an RMSE of 0.228 and an R2 of 0.943. These results indicate that the model is both reliable and accurate. Furthermore, the model provides an accompanying uncertainty level for every prediction it makes. This is important because discrepancies in the experimental measurements that generated the dataset used herein are inevitable and ought to be modeled. A user-friendly web server was developed as well, enabling researchers and practitioners to make predictions using this model.
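Deep kernel learning of the kind described combines a neural feature extractor with a GP kernel, so every prediction carries a variance. Below is a minimal GPyTorch sketch; the layer sizes and the 300-dimensional Mol2vec-like input are assumptions, not the paper's exact architecture.

```python
import torch
import gpytorch

class DKLRegressor(gpytorch.models.ExactGP):
    """GP regression on features produced by a small neural network:
    a generic deep-kernel-learning sketch."""
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.extractor = torch.nn.Sequential(
            torch.nn.Linear(300, 64), torch.nn.ReLU(),  # 300-d Mol2vec-like input
            torch.nn.Linear(64, 8),
        )
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        z = self.extractor(x)                           # learned molecular features
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z))

train_x, train_y = torch.randn(100, 300), torch.randn(100)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = DKLRegressor(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
# Training maximizes mll with a standard optimizer; predictions then return
# a mean and a variance, i.e., the per-prediction uncertainty noted above.
```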
41

Meng, Fan, Kunlin Yang, Yichen Yao, Zhibin Wang and Tao Song. "Tropical Cyclone Intensity Probabilistic Forecasting System Based on Deep Learning". International Journal of Intelligent Systems 2023 (18 March 2023): 1–17. http://dx.doi.org/10.1155/2023/3569538.

Full text
Abstract
Tropical cyclones (TCs) are among the extreme disasters with the most significant impact on human beings. Unfortunately, intensity forecasting of TCs has long been a difficult problem and a bottleneck in weather forecasting. Recently, deep learning-based intensity forecasting of TCs has shown the potential to surpass traditional methods. However, due to the Earth system's complexity, nonlinearity, and chaotic effects, there is inherent uncertainty in weather forecasting. Moreover, previous studies have not quantified this uncertainty, which is necessary for decision-making and risk assessment. This study proposes an intelligent system based on deep learning, PTCIF, to quantify this uncertainty based on multimodal meteorological data; to our knowledge, this is the first study to assess the uncertainty of TCs with a deep learning approach. Probabilistic forecasts are made for intensity at lead times of 6-24 hours. Experimental results show that the proposed method is comparable to the forecasts of weather forecast centers in terms of deterministic skill. Moreover, reliable prediction intervals and probabilistic forecasts can be obtained, which is vital for disaster warning and is expected to complement operational models.
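One common way to obtain prediction intervals like those mentioned above is to train with the pinball (quantile) loss, one output per quantile level. The sketch below illustrates the loss itself; the quantile levels and example values are assumptions, not necessarily PTCIF's.

```python
import torch

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: minimizing it drives y_pred toward the
    q-th conditional quantile, so [q10, q90] outputs form an 80% interval."""
    diff = y_true - y_pred
    return torch.mean(torch.maximum(q * diff, (q - 1) * diff))

y_true = torch.tensor([45.0, 60.0, 85.0])   # e.g., TC intensity in knots
y_q10 = torch.tensor([40.0, 52.0, 70.0])    # lower-bound head output
y_q90 = torch.tensor([55.0, 72.0, 95.0])    # upper-bound head output
loss = pinball_loss(y_true, y_q10, 0.1) + pinball_loss(y_true, y_q90, 0.9)
print(loss)
```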
42

Pomponi, Jary, Simone Scardapane and Aurelio Uncini. "A Probabilistic Re-Interpretation of Confidence Scores in Multi-Exit Models". Entropy 24, no. 1 (21 December 2021): 1. http://dx.doi.org/10.3390/e24010001.

Full text
Abstract
In this paper, we propose a new approach to train a deep neural network with multiple intermediate auxiliary classifiers branching from it. These 'multi-exit' models can reduce inference time by exiting early on an intermediate branch if the confidence of the prediction is higher than a threshold. They rely on the assumption that not all samples require the same amount of processing to yield a good prediction. In this paper, we propose a way to jointly train all the branches of a multi-exit model without hyper-parameters, by weighting the predictions from each branch with a trained confidence score. Each confidence score is an approximation of the real one produced by the branch, and it is calculated and regularized while training the rest of the model. We evaluate our proposal on a set of image classification benchmarks, using different neural models and early-exit stopping criteria.
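The early-exit policy can be summarized in a few lines: evaluate branches in order and stop as soon as a branch's confidence clears the threshold. The sketch below assumes each branch callable returns a (logits, confidence) pair and runs single-sample inference; both are illustrative assumptions.

```python
import torch

def early_exit_predict(branches, x, threshold=0.9):
    """Run intermediate branches in order and return the first prediction
    whose trained confidence score exceeds the threshold; the final
    branch is the fallback. Assumes batch size 1 for clarity."""
    for branch in branches[:-1]:
        logits, confidence = branch(x)      # assumed branch interface
        if confidence.item() >= threshold:
            return logits.argmax(dim=-1), confidence
    logits, confidence = branches[-1](x)
    return logits.argmax(dim=-1), confidence
```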
43

Zhong, Z. and M. Mehltretter. "Mixed probability models for aleatoric uncertainty estimation in the context of dense stereo matching". ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2021 (17 June 2021): 17–26. http://dx.doi.org/10.5194/isprs-annals-v-2-2021-17-2021.

Full text
Abstract
The ability to identify erroneous depth estimates is of fundamental interest. Information regarding the aleatoric uncertainty of depth estimates can, for example, be used to support the process of depth reconstruction itself. Consequently, various methods for the estimation of aleatoric uncertainty in the context of dense stereo matching have been presented in recent years, with deep learning-based approaches being particularly popular. Among these deep learning-based methods, probabilistic strategies are increasingly attracting interest, because the estimated uncertainty can be quantified in pixels or in metric units due to the consideration of real error distributions. However, existing probabilistic methods usually assume a unimodal error distribution, simply neglecting real-world cases that violate this assumption. To overcome this limitation, we propose two novel mixed probability models, consisting of Laplacian and Uniform distributions, for the task of aleatoric uncertainty estimation. In this way, we explicitly address commonly challenging regions in the context of dense stereo matching and outlier measurements, respectively. To allow a fair comparison, we adapt a common neural network architecture to investigate the effects of the different uncertainty models. In an extensive evaluation using two datasets and two common dense stereo matching methods, the proposed methods demonstrate state-of-the-art accuracy.
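A Laplacian-plus-Uniform mixture of the kind described has a compact negative log-likelihood, sketched below: the Laplacian captures the dominant error mode while the Uniform absorbs outliers. The parameterization and the support width (here tied to a typical disparity range) are assumptions, not the paper's exact formulation.

```python
import math
import torch

def mixture_nll(err, mu, log_b, logit_w, support=192.0):
    """NLL of a Laplacian+Uniform mixture over the disparity error err.
    log_b and logit_w are unconstrained network outputs mapped to a
    positive scale and a (0, 1) mixture weight."""
    b = log_b.exp()                                    # Laplacian scale
    w = torch.sigmoid(logit_w)                         # weight of the Laplacian
    log_laplace = -torch.log(2 * b) - (err - mu).abs() / b
    log_uniform = torch.full_like(err, -math.log(2 * support))
    log_mix = torch.logaddexp(torch.log(w) + log_laplace,
                              torch.log1p(-w) + log_uniform)
    return -log_mix.mean()
```

Training a network to emit (mu, log_b, logit_w) per pixel and minimizing this loss yields the per-pixel aleatoric uncertainty discussed above.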
44

Shao, Mingyue, Wei Song and Xiaobing Zhao. "Polymetallic Nodule Resource Assessment of Seabed Photography Based on Denoising Diffusion Probabilistic Models". Journal of Marine Science and Engineering 11, no. 8 (27 July 2023): 1494. http://dx.doi.org/10.3390/jmse11081494.

Full text
Abstract
Polymetallic nodules, found abundantly in deep-ocean deposits, possess significant economic value and represent a valuable resource due to their high metal enrichment, crucial for the high-tech industry. However, accurately evaluating these valuable mineral resources presents challenges for traditional image segmentation methods due to issues like color distortion, uneven illumination, and the diverse distribution of nodules in seabed images. Moreover, the scarcity of annotated images further compounds these challenges, impeding resource assessment efforts. To overcome these limitations, we propose a novel two-stage diffusion-based model for nodule image segmentation, along with a linear regression model for predicting nodule abundance based on the coverage obtained through nodule segmentation. In the first stage, we leverage a diffusion model trained on predominantly unlabeled mineral images to extract multiscale semantic features. Subsequently, we introduce an efficient segmentation network designed specifically for nodule segmentation. Experimental evaluations conducted on a comprehensive seabed nodule dataset demonstrate the exceptional performance of our approach compared to other deep learning methods, particularly in addressing challenging conditions like uneven illumination and dense nodule distributions. Our proposed model not only extends the application of diffusion models but also exhibits superior performance in seabed nodule segmentation. Additionally, we establish a linear regression model that accurately predicts nodule abundance by utilizing the coverage calculated through seabed nodule image segmentation. The results highlight the model’s capacity to accurately assess nodule coverage and abundance, even in regions beyond the sampled sites, thereby providing valuable insights for seabed resource evaluation.
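The abundance step described above is an ordinary least-squares fit of abundance against segmented coverage. A minimal scikit-learn sketch follows; the coverage fractions and abundance values are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-ins: nodule coverage fraction from segmentation
# versus measured abundance at sampled sites.
coverage = np.array([[0.05], [0.12], [0.20], [0.31], [0.44]])
abundance = np.array([1.1, 2.4, 4.1, 6.3, 8.9])

reg = LinearRegression().fit(coverage, abundance)
print(reg.coef_, reg.intercept_)
print(reg.predict([[0.25]]))            # predicted abundance at 25% coverage
```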
45

Xu, Duo, Jonathan C. Tan, Chia-Jung Hsu and Ye Zhu. "Denoising Diffusion Probabilistic Models to Predict the Density of Molecular Clouds". Astrophysical Journal 950, no. 2 (1 June 2023): 146. http://dx.doi.org/10.3847/1538-4357/accae5.

Full text
Abstract
We introduce the state-of-the-art deep-learning denoising diffusion probabilistic model as a method to infer the volume or number density of giant molecular clouds (GMCs) from projected mass surface density maps. We adopt magnetohydrodynamic simulations with different global magnetic field strengths and large-scale dynamics, i.e., noncolliding and colliding GMCs. We train a diffusion model on both mass surface density maps and their corresponding mass-weighted number density maps from different viewing angles for all the simulations. We compare the diffusion model's performance with a more traditional empirical two-component and three-component power-law fitting method and with a more traditional neural network machine-learning approach. We conclude that the diffusion model achieves an order-of-magnitude improvement in the accuracy of predicting number density compared to the other methods. We apply the diffusion method to example astronomical column density maps of Taurus and the infrared dark clouds G28.37+0.07 and G35.39-0.33 to produce maps of their mean volume densities.
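The training objective behind such diffusion models is compact enough to sketch: corrupt the input with Gaussian noise at a random timestep and regress the noise. The function below is a generic DDPM step, assuming a noise-prediction network with signature model(x_t, t) and a precomputed alphas_cumprod schedule; it is not the authors' code.

```python
import torch
import torch.nn.functional as F

def ddpm_training_step(model, x0, alphas_cumprod):
    """One generic DDPM training step on a batch of maps x0 (N, C, H, W):
    sample a timestep, add the scheduled amount of noise, predict it back."""
    t = torch.randint(0, len(alphas_cumprod), (x0.shape[0],), device=x0.device)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps   # forward noising
    return F.mse_loss(model(x_t, t), eps)                  # noise-prediction loss
```

Conditioning such a model on the surface density map turns the same machinery into the column-density-to-volume-density mapping described above.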
46

Pandarinathan, Mr, S. Velan and S. Deepak. "Human Emotion Detection Using Deep Learning". International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (31 May 2023): 2225–29. http://dx.doi.org/10.22214/ijraset.2023.52016.

Full text
Abstract
Human emotions are spontaneous mental states of feeling. There is no clear connection between emotions and facial expressions, and there is significant variability, making facial emotion recognition a challenging research area. Emotion recognition has become a popular topic in deep learning and offers many applications in daily life. The primary idea of our project is to process input images of human facial emotion to train pre-trained models on datasets. In recent years, Machine Learning (ML) and Neural Networks (NNs) have been used for emotion recognition. In this project, a Convolutional Neural Network (CNN) is used to extract features from images to detect emotions. The numerical result of the algorithm is a probabilistic score for each labeled class.
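The "probabilistic score per class" is simply a softmax over CNN logits. A minimal sketch follows; the 48x48 grayscale input and 7 emotion classes mirror common emotion datasets such as FER2013 and are assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

# Minimal CNN emotion-classifier sketch: 48x48 grayscale face, 7 classes.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 12 * 12, 7),
)
logits = model(torch.randn(1, 1, 48, 48))
probs = torch.softmax(logits, dim=-1)   # probabilistic score per labeled class
print(probs)
```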
47

Candela, Alberto, David R. Thompson, David Wettergreen, Kerry Cawse-Nicholson, Sven Geier, Michael L. Eastwood and Robert O. Green. "Probabilistic Super Resolution for Mineral Spectroscopy". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 08 (3 April 2020): 13241–47. http://dx.doi.org/10.1609/aaai.v34i08.7030.

Full text
Abstract
Earth and planetary sciences often rely upon the detailed examination of spectroscopic data for rock and mineral identification. This typically requires the collection of high resolution spectroscopic measurements. However, they tend to be scarce, as compared to low resolution remote spectra. This work addresses the problem of inferring high-resolution mineral spectroscopic measurements from low resolution observations using probability models. We present the Deep Gaussian Conditional Model, a neural network that performs probabilistic super resolution via maximum likelihood estimation. It also provides insight into learned correlations between measurements and spectroscopic features, allowing for the tractability and interpretability that scientists often require for mineral identification. Experiments using remote spectroscopic data demonstrate that our method compares favorably to other analogous probabilistic methods. Finally, we show and discuss how our method provides human-interpretable results, making it a compelling analysis tool for scientists.
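Maximum-likelihood super resolution of this kind can be pictured as a network that maps a low-resolution spectrum to the mean and variance of the high-resolution channels. The sketch below is a generic heteroscedastic Gaussian head, not the Deep Gaussian Conditional Model itself; all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GaussianSRHead(nn.Module):
    """Given a low-resolution spectrum, predict per-channel mean and
    log-variance of the high-resolution spectrum; train by maximum likelihood."""
    def __init__(self, low_dim=30, high_dim=200, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(low_dim, hidden), nn.ReLU())
        self.mean = nn.Linear(hidden, high_dim)
        self.log_var = nn.Linear(hidden, high_dim)

    def forward(self, x_low):
        h = self.body(x_low)
        return self.mean(h), self.log_var(h)

def gaussian_nll(y_high, mean, log_var):
    # Negative log-likelihood of independent Gaussians per output channel.
    return 0.5 * (log_var + (y_high - mean) ** 2 / log_var.exp()).mean()
```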
48

M. Rajalakshmi and V. Sulochana. "Enhancing deep learning model performance in air quality classification through probabilistic hyperparameter tuning with tree-structured Parzen estimators". Scientific Temper 14, no. 04 (30 December 2023): 1244–50. http://dx.doi.org/10.58414/scientifictemper.2023.14.4.27.

Full text
Abstract
The research introduces an innovative approach to enhance deep learning models for air quality classification by integrating tree-structured Parzen estimators (TPE) into the hyperparameter tuning process. It applies this approach to CNN, LSTM, DNN, and DBN models and conducts extensive experiments using an air quality dataset, comparing it with grid search, random search, and genetic algorithm methods. The TPE algorithm consistently outperforms these methods, demonstrating improved classification accuracy and generalization. This approach’s potential extends to enriching water quality classification models, contributing to environmental sustainability and resource management. Bridging deep learning with TPE offers a promising solution for optimized air quality classification, supporting informed environmental preservation efforts.
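As one concrete TPE implementation (the paper's own tooling is not specified here), Optuna's TPESampler can drive the kind of search described. In the sketch below the hyperparameter names are assumptions, and a toy analytic score stands in for "train the model and return validation accuracy" so the example runs end to end.

```python
import optuna

def objective(trial):
    # Hypothetical hyperparameters for an air-quality classifier.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    hidden = trial.suggest_int("hidden_units", 32, 256)
    # Toy stand-in for validation accuracy after training with these values.
    return 1.0 - abs(lr - 1e-3) * 10 - abs(dropout - 0.2) - abs(hidden - 128) / 1000

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
print(study.best_params)
```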
49

Türkmen, Ali Caner, Tim Januschowski, Yuyang Wang and Ali Taylan Cemgil. "Forecasting intermittent and sparse time series: A unified probabilistic framework via deep renewal processes". PLOS ONE 16, no. 11 (29 November 2021): e0259764. http://dx.doi.org/10.1371/journal.pone.0259764.

Full text
Abstract
Intermittency is a common and challenging problem in demand forecasting. We introduce a new, unified framework for building probabilistic forecasting models for intermittent demand time series, which incorporates existing methods and allows generalizing them in several directions. Our framework is based on extensions of well-established model-based methods to discrete-time renewal processes, which can parsimoniously account for patterns such as aging, clustering, and quasi-periodicity in demand arrivals. The connection to discrete-time renewal processes allows not only for a principled extension of Croston-type models but also for a natural inclusion of neural network-based models, by replacing exponential smoothing with a recurrent neural network. We also demonstrate that modeling continuous-time demand arrivals, i.e., with a temporal point process, is possible via a trivial extension of our framework. This leads to more flexible modeling in scenarios where data on individual purchase orders are directly available with granular timestamps. Complementing this theoretical advancement, we demonstrate the efficacy of our framework for forecasting practice via an extensive empirical study on standard intermittent demand data sets, in which we report predictive accuracy in a variety of scenarios.
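The Croston-type models referenced here follow a simple recursion: separately smooth the positive demand sizes and the intervals between them, then forecast their ratio. A sketch, assuming a single smoothing parameter alpha, is given below.

```python
import numpy as np

def croston(demand, alpha=0.1):
    """Classic Croston forecast for an intermittent demand series: exponentially
    smooth demand sizes (z) and inter-demand intervals (p); forecast z / p."""
    z = p = None            # smoothed size and interval, initialized at first sale
    q = 1                   # periods since the last positive demand
    forecasts = np.zeros(len(demand) + 1)
    for t, d in enumerate(demand):
        if d > 0:
            z = d if z is None else z + alpha * (d - z)
            p = q if p is None else p + alpha * (q - p)
            q = 1
        else:
            q += 1
        forecasts[t + 1] = 0.0 if z is None else z / p
    return forecasts

print(croston(np.array([0, 0, 3, 0, 0, 0, 2, 0, 4, 0])))
```

Replacing the two exponential-smoothing recursions with a recurrent network is exactly the step that yields the deep renewal processes of the paper.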
50

Li, Zhanli, Xinyu Zhang, Fan Deng and Yun Zhang. "Integrating deep neural network with logic rules for credit scoring". Intelligent Data Analysis 27, no. 2 (15 March 2023): 483–500. http://dx.doi.org/10.3233/ida-216460.

Full text
Abstract
Credit scoring is an important topic in financial activities and bankruptcy prediction that has been extensively explored using deep neural network (DNN) methods. The accuracy of DNN-based credit scoring models relies heavily on large amounts of labeled data. However, purely data-driven learning makes it difficult to encode human intent to guide the model to capture the desired patterns, and it leads to low model transparency. Therefore, the Probabilistic Soft Logic Posterior Regularization (PSLPR) framework is proposed for integrating prior knowledge in the form of logic rules with a neural network. First, the PSLPR framework calculates the rule satisfaction distance for each instance using a probabilistic soft logic formula. Second, the logic rules are integrated into the posterior distribution of the DNN output to form a logic output. Finally, a novel discrepancy loss, which measures the difference between the real label and the logic output, is used to incorporate the logic rules into the parameters of the neural network. Extensive experiments were conducted on two datasets, the Australian credit dataset and the credit card customer default dataset. To evaluate the obtained systems, several performance metrics were used, including PCC, Recall, F1, and AUC. The results show that, compared to the standard DNN model, the four evaluation metrics increase by 7.14%, 14.29%, 8.15%, and 5.43%, respectively, on the Australian credit dataset.
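To give a feel for posterior regularization with soft logic rules, here is a generic sketch in the spirit of rule distillation (Hu et al., 2016), not the paper's exact PSLPR: a rule-adjusted "teacher" distribution is built from the DNN output, and the network is pulled toward it alongside the usual cross-entropy. The rule_scores input, the binary-task setup, and the pi and c hyperparameters are all assumptions.

```python
import torch
import torch.nn.functional as F

def logic_regularized_loss(logits, labels, rule_scores, pi=0.5, c=1.0):
    """Combine label supervision with a soft-logic teacher for a binary task.
    rule_scores in [0, 1] gives, per instance, how strongly the rules
    favor the positive class; c scales rule strength, pi mixes the terms."""
    log_q = F.log_softmax(logits, dim=-1)
    with torch.no_grad():
        # Teacher: shift probability mass toward rule-consistent labels.
        adjust = torch.stack([1 - rule_scores, rule_scores], dim=-1)
        teacher = F.softmax(logits, dim=-1) * torch.exp(c * adjust)
        teacher = teacher / teacher.sum(dim=-1, keepdim=True)
    ce = F.nll_loss(log_q, labels)                         # fit the real labels
    kl = F.kl_div(log_q, teacher, reduction="batchmean")   # fit the logic output
    return (1 - pi) * ce + pi * kl
```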