A selection of scholarly literature on the topic "Interpretable deep learning"

Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Interpretable deep learning".

Next to every entry in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Interpretable deep learning"

1. Gangopadhyay, Tryambak, Sin Yong Tan, Anthony LoCurto, James B. Michael, and Soumik Sarkar. "Interpretable Deep Learning for Monitoring Combustion Instability." IFAC-PapersOnLine 53, no. 2 (2020): 832–37. http://dx.doi.org/10.1016/j.ifacol.2020.12.839.

2. Zheng, Hong, Yinglong Dai, Fumin Yu, and Yuezhen Hu. "Interpretable Saliency Map for Deep Reinforcement Learning." Journal of Physics: Conference Series 1757, no. 1 (2021): 012075. http://dx.doi.org/10.1088/1742-6596/1757/1/012075.

3. Ruffolo, Jeffrey A., Jeremias Sulam, and Jeffrey J. Gray. "Antibody structure prediction using interpretable deep learning." Patterns 3, no. 2 (2022): 100406. http://dx.doi.org/10.1016/j.patter.2021.100406.

4. Bhambhoria, Rohan, Hui Liu, Samuel Dahan, and Xiaodan Zhu. "Interpretable Low-Resource Legal Decision Making." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 11819–27. http://dx.doi.org/10.1609/aaai.v36i11.21438.

Abstract: Over the past several years, legal applications of deep learning have been on the rise. However, as with other high-stakes decision-making areas, the requirement for interpretability is of crucial importance. Current models utilized by legal practitioners are more of the conventional machine learning type, wherein they are inherently interpretable, yet unable to harness the performance capabilities of data-driven deep learning models. In this work, we utilize deep learning models in the area of trademark law to shed light on the issue of likelihood of confusion between trademarks. Specifically, …
5. Arik, Sercan Ö., and Tomas Pfister. "TabNet: Attentive Interpretable Tabular Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (2021): 6679–87. http://dx.doi.org/10.1609/aaai.v35i8.16826.

Abstract: We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into its global behavior. Finally, we demonstrate self-supervised learning for tabular data, …
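The TabNet abstract above describes sequential attention that picks which features to reason from at each decision step, with a prior that discourages reusing the same feature. A minimal numerical sketch of that selection mechanism (illustrative only: TabNet learns sparsemax masks inside a trained network, whereas the relevance scores, `gamma`, and step count below are invented):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sequential_feature_masks(scores, n_steps=3, gamma=1.3):
    """Sketch of TabNet-style sequential feature selection.

    scores : per-feature relevance logits (fixed here; learned in TabNet)
    gamma  : relaxation factor; gamma close to 1 forces each feature to be
             used in at most one decision step.
    Returns the list of per-step attention masks (each sums to 1).
    """
    prior = np.ones_like(scores)   # how much each feature may still be used
    masks = []
    for _ in range(n_steps):
        mask = softmax(np.log(prior + 1e-9) + scores)  # attend where the prior allows
        prior = prior * (gamma - mask)                 # spend that feature's budget
        masks.append(mask)
    return masks

masks = sequential_feature_masks(np.array([2.0, 0.5, -1.0, 0.0]))
for i, m in enumerate(masks):
    print(f"step {i}: {np.round(m, 3)}")
```

With `gamma` near 1 the masks spread across different features at each step; larger values let strong features be reused, which is the interpretability knob the paper describes.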
6. Lin, Chih-Hsu, and Olivier Lichtarge. "Using interpretable deep learning to model cancer dependencies." Bioinformatics 37, no. 17 (2021): 2675–81. http://dx.doi.org/10.1093/bioinformatics/btab137.

Abstract: Motivation: Cancer dependencies provide potential drug targets. Unfortunately, dependencies differ among cancers and even individuals. To this end, visible neural networks (VNNs) are promising due to robust performance and the interpretability required for the biomedical field. Results: We design a biological visible neural network (BioVNN) using pathway knowledge to predict cancer dependencies. Despite having fewer parameters, BioVNN marginally outperforms traditional neural networks (NNs) and converges faster. BioVNN also outperforms an NN based on randomized pathways. More importantly, …
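The "visible neural network" idea above constrains a network's wiring with biological knowledge so that each hidden unit corresponds to a named pathway. A toy sketch of that design principle (the gene-to-pathway membership matrix below is invented for illustration; BioVNN's actual architecture, depth, and pathway data differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical membership matrix: rows are 5 genes, columns are 2 pathways.
# In a visible NN this binary mask fixes which connections may exist at all.
membership = np.array([[1, 0],
                       [1, 0],
                       [1, 1],
                       [0, 1],
                       [0, 1]])

# Random weights, with biologically absent edges zeroed out by the mask.
W = rng.normal(size=membership.shape) * membership

def pathway_layer(gene_expr):
    """One 'visible' layer: each hidden unit is a named pathway, so its
    activation can be read directly as a pathway-level score."""
    return np.tanh(gene_expr @ W)

x = rng.normal(size=5)
print(pathway_layer(x))
```

Because every hidden unit has a biological name, attributions stop at interpretable units instead of anonymous neurons, which is what makes the model "visible".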
7. Liao, WangMin, BeiJi Zou, RongChang Zhao, YuanQiong Chen, ZhiYou He, and MengJie Zhou. "Clinical Interpretable Deep Learning Model for Glaucoma Diagnosis." IEEE Journal of Biomedical and Health Informatics 24, no. 5 (2020): 1405–12. http://dx.doi.org/10.1109/jbhi.2019.2949075.

8. Matsubara, Takashi. "Bayesian deep learning: A model-based interpretable approach." Nonlinear Theory and Its Applications, IEICE 11, no. 1 (2020): 16–35. http://dx.doi.org/10.1587/nolta.11.16.

9. Liu, Yi, Kenneth Barr, and John Reinitz. "Fully interpretable deep learning model of transcriptional control." Bioinformatics 36, Supplement_1 (2020): i499–i507. http://dx.doi.org/10.1093/bioinformatics/btaa506.

Abstract: Motivation: The universal expressibility assumption of Deep Neural Networks (DNNs) is the key motivation behind recent works in the systems biology community to employ DNNs to solve important problems in functional genomics and molecular genetics. Typically, such investigations have taken a 'black box' approach in which the internal structure of the model used is set purely by machine learning considerations, with little consideration of representing the internal structure of the biological system by the mathematical structure of the DNN. DNNs have not yet been applied to the detailed modeling …
10. Yamuna, Vadada. "Interpretable Deep Learning Models for Improved Diabetes Diagnosis." International Journal of Scientific Research in Engineering and Management 9, no. 6 (2025): 1–9. https://doi.org/10.55041/ijsrem50834.

Abstract: Diabetes, a chronic condition marked by persistent high blood sugar, poses major global health challenges due to complications like cardiovascular disease and neuropathy. Traditional diagnostic methods, though common, are invasive, time-consuming, and prone to interpretation errors. To overcome these issues, this project proposes a novel machine learning framework that integrates structured data (e.g., demographics, test results) and unstructured data (e.g., retinal images, clinical notes) using deep learning models like CNNs, RNNs, and transformers. Explainable AI techniques, such as SHAP and …
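The abstract above names SHAP among its explainability techniques. SHAP approximates Shapley values efficiently; for a handful of features they can also be computed exactly, which this self-contained sketch does (the toy risk model, inputs, and baseline convention are our own for illustration, not the paper's):

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for a model with few features: feature i's value
    is its average marginal contribution over all coalitions, with 'absent'
    features replaced by a baseline (a common SHAP convention)."""
    n = len(x)
    phi = [0.0] * n

    def f(subset):
        z = [x[j] if j in subset else baseline[j] for j in range(n)]
        return model(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (f(set(S) | {i}) - f(set(S)))
    return phi

# Toy linear "risk model": Shapley values recover each term's contribution.
model = lambda z: 2 * z[0] + 3 * z[1] + z[2]
phi = shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(phi)   # for a linear model these approach the coefficients [2, 3, 1]
```

The values sum to `model(x) - model(baseline)` (the efficiency property), which is what makes per-feature attributions add up to the prediction being explained. The exact computation is exponential in the number of features; SHAP's samplers and tree algorithms exist precisely to avoid it.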

Dissertations on the topic "Interpretable deep learning"

1. Ferrone, Lorenzo. "On interpretable information in deep learning: encoding and decoding of distributed structures." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2016. http://hdl.handle.net/2108/202245.

2. Xie, Ning. "Towards Interpretable and Reliable Deep Neural Networks for Visual Intelligence." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1596208422672732.

3. Emschwiller, Matt V. "Understanding neural network sample complexity and interpretable convergence-guaranteed deep learning with polynomial regression." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127290.

Abstract: Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, May 2020. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 83–89). We first study the sample complexity of one-layer neural networks, namely the number of examples that are needed in the training set for such models to be able to learn meaningful information out-of-sample. We empirically derive quantitative relationships between the sample complexity and the parameters of the network, such as its input dimension and its width. Then, we introduce …
4. Terzi, Matteo. "Learning interpretable representations for classification, anomaly detection, human gesture and action recognition." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423183.

Abstract: The goal of this thesis is to provide algorithms and models for classification, gesture recognition, and anomaly detection, with a partial focus on human activity. In applications where humans are involved, it is of paramount importance to provide robust and understandable algorithms and models. One way to meet this requirement is to use relatively simple and robust approaches, especially when devices are resource-constrained. The second approach, when a large amount of data is present, is to adopt complex algorithms and models and make them robust and interpretable from a human-like point …
5. Repetto, Marco. "Black-box supervised learning and empirical assessment: new perspectives in credit risk modeling." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/402366.

Abstract: Recent high-performing machine learning algorithms are compelling but opaque, so it is often hard to understand how they arrive at their predictions, giving rise to interpretability problems. These problems are especially relevant in supervised learning, where such "black-box" models are not readily understandable to stakeholders. A growing body of work focuses on making machine learning models, and deep learning models in particular, more interpretable. The approaches proposed so far rely on …
6. Thibeau-Sutre, Elina. "Reproducible and interpretable deep learning for the diagnosis, prognosis and subtyping of Alzheimer's disease from neuroimaging data." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS495.

Abstract: The goal of this thesis was to validate the existence of, and to discover new, subtypes within Alzheimer's disease, the leading cause of dementia worldwide. To explore its heterogeneity, we applied deep learning methods to one neuroimaging modality, structural magnetic resonance imaging. However, the discovery of major methodological biases in many studies in our field, together with the lack of community consensus on how to interpret the results of deep learning methods, …
7. Parekh, Jayneel. "A Flexible Framework for Interpretable Machine Learning: application to image and audio classification." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT032.

Abstract: Machine learning systems, and neural networks in particular, have rapidly developed their capacity to solve complex learning problems. They are consequently being integrated into society, with an ever greater influence on all levels of human experience. This has created a need for human-understandable insight into their decision-making process, to ensure that decisions are made ethically and reliably. The study and development of methods able to generate such insight …
8. Bennetot, Adrien. "A Neural-Symbolic learning framework to produce interpretable predictions for image classification." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS418.

Abstract: Artificial intelligence has grown exponentially over the past decade. Its evolution is mainly tied to advances in computer graphics processors, which speed up the computation of learning algorithms, and to access to massive volumes of data. This progress has largely been driven by the pursuit of high-quality predictive models, making those models extremely accurate but opaque. Their large-scale adoption is hindered by their lack of transparency, which is giving rise to …
9. Sheikhalishahi, Seyedmostafa. "Machine learning applications in Intensive Care Unit." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/339274.

Abstract: The rapid digitalization of the healthcare domain in recent years has highlighted the need for advanced predictive methods, particularly those based on deep learning. Deep learning methods capable of dealing with time-series data have recently emerged in various fields such as natural language processing, machine translation, and the Intensive Care Unit (ICU). Recent applications of deep learning in the ICU have increasingly received attention and have shown promising results for different clinical tasks; however, there is still a need for benchmark models as far as a handful …
10. Loiseau, Romain. "Real-World 3D Data Analysis: Toward Efficiency and Interpretability." Electronic Thesis or Diss., Marne-la-vallée, ENPC, 2023. http://www.theses.fr/2023ENPC0028.

Abstract: This thesis explores new deep learning approaches for analyzing real-world 3D data. Processing 3D data is useful for many applications, such as autonomous driving, land management, monitoring of industrial facilities, forest inventory, and biomass measurement. However, annotating and analyzing 3D data can be demanding. In particular, it is often difficult to satisfy constraints on computing resources or on annotation efficiency. The difficulty of interpreting and …

Books on the topic "Interpretable deep learning"

1. Thakoor, Kaveri Anil. Robust, Interpretable, and Portable Deep Learning Systems for Detection of Ophthalmic Diseases. [publisher not identified], 2022.


Book chapters on the topic "Interpretable deep learning"

1. Kamath, Uday, and John Liu. "Explainable Deep Learning." In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_6.

2. Preuer, Kristina, Günter Klambauer, Friedrich Rippmann, Sepp Hochreiter, and Thomas Unterthiner. "Interpretable Deep Learning in Drug Discovery." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_18.

3. Perumal, Boominathan, Swathi Jamjala Narayanan, and Sangeetha Saman. "Explainable Deep Learning Architectures for Product Recommendations." In Explainable, Interpretable, and Transparent AI Systems. CRC Press, 2024. http://dx.doi.org/10.1201/9781003442509-13.

4. Wüthrich, Mario V., and Michael Merz. "Selected Topics in Deep Learning." In Springer Actuarial. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_11.

Abstract: This chapter presents a selection of different topics. We discuss forecasting under model uncertainty, deep quantile regression, deep composite regression, and the LocalGLMnet, which is an interpretable FN network architecture. Moreover, we provide a bootstrap example to assess prediction uncertainty, we discuss mixture density networks, and we give an outlook to studying variational inference.
5. Rodrigues, Mark, Michael Mayo, and Panos Patros. "Interpretable Deep Learning for Surgical Tool Management." In Interpretability of Machine Intelligence in Medical Image Computing, and Topological Data Analysis and Its Applications for Medical Data. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87444-5_1.

6. Batra, Reenu, and Manish Mahajan. "Deep Learning Models: An Understandable Interpretable Approach." In Deep Learning for Security and Privacy Preservation in IoT. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6186-0_10.

7. Shinde, Swati V., and Sagar Lahade. "Deep Learning for Tea Leaf Disease Classification." In Applied Computer Vision and Soft Computing with Interpretable AI. Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003359456-20.

8. Lu, Yu, Deliang Wang, Qinggang Meng, and Penghe Chen. "Towards Interpretable Deep Learning Models for Knowledge Tracing." In Lecture Notes in Computer Science. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52240-7_34.

9. Pasquini, Dario, Giuseppe Ateniese, and Massimo Bernaschi. "Interpretable Probabilistic Password Strength Meters via Deep Learning." In Computer Security – ESORICS 2020. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58951-6_25.

10. Kontogiannis, Andreas, and George A. Vouros. "Inherently Interpretable Deep Reinforcement Learning Through Online Mimicking." In Explainable and Transparent AI and Multi-Agent Systems. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40878-6_10.


Conference papers on the topic "Interpretable deep learning"

1. Gazula, Vinay Ram, Katherine G. Herbert, Yasser Abduallah, and Jason T. L. Wang. "Interpretable Deep Learning for Solar Flare Prediction." In 2024 IEEE 36th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2024. https://doi.org/10.1109/ictai62512.2024.00078.

2. Tasnim, Raihana, Kaushik Roy, and Madhuri Siddula. "Interpretable Deep Learning Model for Multiclass Brain Tumor Classification." In 2024 International Conference on Machine Learning and Applications (ICMLA). IEEE, 2024. https://doi.org/10.1109/icmla61862.2024.00219.

3. Li, Yizhen, Yang Zhang, and Xiao Yao. "Towards Self-Interpretable Graph Neural Networks via Augmentation-Contrastive Learning." In 2025 6th International Conference on Computer Vision, Image and Deep Learning (CVIDL). IEEE, 2025. https://doi.org/10.1109/cvidl65390.2025.11086007.

4. Chisty, Tanjir Alam, and Md Mahbubur Rahman. "Ransomware Detection Utilizing Ensemble Based Interpretable Deep Learning Model." In 2024 IEEE International Conference on Power, Electrical, Electronics and Industrial Applications (PEEIACON). IEEE, 2024. https://doi.org/10.1109/peeiacon63629.2024.10800005.

5. Bhatti, Uzair Aslam, Yang Ke Yu, O. Zh Mamyrbayev, A. A. Aitkazina, Tang Hao, and N. O. Zhumazhan. "Recommendations for Healthcare: An Interpretable Approach Using Deep Learning." In 2024 7th International Conference on Pattern Recognition and Artificial Intelligence (PRAI). IEEE, 2024. https://doi.org/10.1109/prai62207.2024.10827288.

6. Hu, Shulin, Cao Zeng, Minti Liu, and Guisheng Liao. "Learning Interpretable Phase Difference Mapping for Scalable DOA Estimation via Deep Learning." In 2024 IEEE/CIC International Conference on Communications in China (ICCC Workshops). IEEE, 2024. http://dx.doi.org/10.1109/icccworkshops62562.2024.10693687.

7. Temenos, Anastasios, Nikos Temenos, Ioannis Rallis, Margarita Skamantzari, Anastasios Doulamis, and Nikolaos Doulamis. "Identifying False Negative Flood Events Using Interpretable Deep Learning Framework." In IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2024. http://dx.doi.org/10.1109/igarss53475.2024.10642460.

8. Soelistyo, Christopher J., and Alan R. Lowe. "Discovering interpretable models of scientific image data with deep learning." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2024. http://dx.doi.org/10.1109/cvprw63382.2024.00682.

9. B, Srinithi, Sruthi Nirmala S. R, Senthil Kumar Thangavel, Somasundaram K, and M. Ramasamy. "Enhancing Milk Yield Forecasting in Dairy Farming Using an Interpretable Machine Learning Framework." In 2025 4th International Conference on Sentiment Analysis and Deep Learning (ICSADL). IEEE, 2025. https://doi.org/10.1109/icsadl65848.2025.10933035.

10. Sah, Nabin Kumar, M. Vivek Srikar Reddy, Karthik Ullas, Tripty Singh, Adhirath Mandal, and Suman Chatterji. "Interpretable Deep Learning for Skin Cancer Detection: Exploring LIME and SHAP." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10723848.

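The entry above explores LIME alongside SHAP. LIME's core idea, fitting a simple weighted linear surrogate around one input, can be sketched in a few lines (a simplified version with an invented black-box model, kernel width, and perturbation scale; the published method adds interpretable binning and feature selection on top of this):

```python
import numpy as np

def lime_explain(predict, x, n_samples=500, sigma=1.0, seed=0):
    """LIME-style local explanation sketch: perturb x, weight the samples by
    proximity to x, and fit a weighted linear surrogate whose coefficients
    serve as local feature importances."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))   # local perturbations
    y = predict(Z)
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))                        # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])               # intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]                                          # per-feature local weights

# A nonlinear "black box"; near x=(0, 0) the first feature dominates locally.
black_box = lambda Z: np.sin(Z[:, 0]) * 3 + 0.1 * Z[:, 1]
weights = lime_explain(black_box, np.array([0.0, 0.0]))
print(weights)
```

The surrogate's weights explain only the neighborhood of the queried point, which is both LIME's strength (model-agnostic, local fidelity) and the source of its instability when the kernel width is chosen poorly.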

Organizational reports on the topic "Interpretable deep learning"

1. Jiang, Peishi, Xingyuan Chen, Maruti Mudunuru, et al. Towards Trustworthy and Interpretable Deep Learning-assisted Ecohydrological Models. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1769787.

2. Begeman, Carolyn, Marian Anghel, and Ishanu Chattopadhyay. Interpretable Deep Learning for the Earth System with Fractal Nets. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1769730.

3. Pasupuleti, Murali Krishna. Decision Theory and Model-Based AI: Probabilistic Learning, Inference, and Explainability. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv525.

Abstract: Decision theory and model-based AI provide the foundation for probabilistic learning, optimal inference, and explainable decision-making, enabling AI systems to reason under uncertainty, optimize long-term outcomes, and provide interpretable predictions. This research explores Bayesian inference, probabilistic graphical models, reinforcement learning (RL), and causal inference, analyzing their role in AI-driven decision systems across various domains, including healthcare, finance, robotics, and autonomous systems. The study contrasts model-based and model-free approaches in decision-making …
4. Pasupuleti, Murali Krishna. Stochastic Computation for AI: Bayesian Inference, Uncertainty, and Optimization. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv325.

Abstract: Stochastic computation is a fundamental approach in artificial intelligence (AI) that enables probabilistic reasoning, uncertainty quantification, and robust decision-making in complex environments. This research explores the theoretical foundations, computational techniques, and real-world applications of stochastic methods, focusing on Bayesian inference, Monte Carlo methods, stochastic optimization, and uncertainty-aware AI models. Key topics include probabilistic graphical models, Markov Chain Monte Carlo (MCMC), variational inference, stochastic gradient descent (SGD), and Bayes…
5. Pasupuleti, Murali Krishna. Neural Computation and Learning Theory: Expressivity, Dynamics, and Biologically Inspired AI. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv425.

Abstract: Neural computation and learning theory provide the foundational principles for understanding how artificial and biological neural networks encode, process, and learn from data. This research explores expressivity, computational dynamics, and biologically inspired AI, focusing on theoretical expressivity limits, infinite-width neural networks, recurrent and spiking neural networks, attractor models, and synaptic plasticity. The study investigates mathematical models of function approximation, kernel methods, dynamical systems, and stability properties to assess the generalization capa…