Academic literature on the topic 'Machine Learning Model Robustness'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Machine Learning Model Robustness.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Machine Learning Model Robustness"
Arslan, Ayse. "Rethinking Robustness in Machine Learning: Use of Generative Adversarial Networks for Enhanced Robustness." Scholars Journal of Engineering and Technology 10, no. 3 (March 28, 2022): 9–15. http://dx.doi.org/10.36347/sjet.2022.v10i03.001.
Einziger, Gil, Maayan Goldstein, Yaniv Sa’ar, and Itai Segall. "Verifying Robustness of Gradient Boosted Models." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2446–53. http://dx.doi.org/10.1609/aaai.v33i01.33012446.
Thapa, Chandra, Pathum Chamikara Mahawaga Arachchige, Seyit Camtepe, and Lichao Sun. "SplitFed: When Federated Learning Meets Split Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8485–93. http://dx.doi.org/10.1609/aaai.v36i8.20825.
Balakrishnan, Charumathi, and Mangaiyarkarasi Thiagarajan. "Credit Risk Modelling for Indian Debt Securities Using Machine Learning." Buletin Ekonomi Moneter dan Perbankan 24 (March 8, 2021): 107–28. http://dx.doi.org/10.21098/bemp.v24i0.1401.
Nguyen, Ngoc-Kim-Khanh, Quang Nguyen, Hai-Ha Pham, Thi-Trang Le, Tuan-Minh Nguyen, Davide Cassi, Francesco Scotognella, Roberto Alfierif, and Michele Bellingeri. "Predicting the Robustness of Large Real-World Social Networks Using a Machine Learning Model." Complexity 2022 (November 9, 2022): 1–16. http://dx.doi.org/10.1155/2022/3616163.
Wu, Zhijing, and Hua Xu. "A Multi-Task Learning Machine Reading Comprehension Model for Noisy Document (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13963–64. http://dx.doi.org/10.1609/aaai.v34i10.7254.
Chuah, Joshua, Uwe Kruger, Ge Wang, Pingkun Yan, and Juergen Hahn. "Framework for Testing Robustness of Machine Learning-Based Classifiers." Journal of Personalized Medicine 12, no. 8 (August 14, 2022): 1314. http://dx.doi.org/10.3390/jpm12081314.
Sepulveda, Natalia Espinoza, and Jyoti Sinha. "Parameter Optimisation in the Vibration-Based Machine Learning Model for Accurate and Reliable Faults Diagnosis in Rotating Machines." Machines 8, no. 4 (October 23, 2020): 66. http://dx.doi.org/10.3390/machines8040066.
Zhang, Lingwen, Ning Xiao, Wenkao Yang, and Jun Li. "Advanced Heterogeneous Feature Fusion Machine Learning Models and Algorithms for Improving Indoor Localization." Sensors 19, no. 1 (January 2, 2019): 125. http://dx.doi.org/10.3390/s19010125.
Drews, Samuel, Aws Albarghouthi, and Loris D'Antoni. "Proving Data-Poisoning Robustness in Decision Trees." Communications of the ACM 66, no. 2 (January 20, 2023): 105–13. http://dx.doi.org/10.1145/3576894.
Dissertations / Theses on the topic "Machine Learning Model Robustness"
Adams, William A. "Analysis of Robustness in Lane Detection using Machine Learning Models." Ohio University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1449167611.
Lundström, Linnea. "Formally Verifying the Robustness of Machine Learning Models : A Comparative Study." Thesis, Linköpings universitet, Programvara och system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167504.
Mauri, Lara. "Data Partitioning and Compensation Techniques for Secure Training of Machine Learning Models." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/932387.
Rado, Omesaad A. M. "Contributions to evaluation of machine learning models. Applicability domain of classification models." Thesis, University of Bradford, 2019. http://hdl.handle.net/10454/18447.
Full textMinistry of Higher Education in Libya
Cherief-Abdellatif, Badr-Eddine. "Contributions to the theoretical study of variational inference and robustness." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAG001.
This PhD thesis deals with variational inference and robustness. More precisely, it focuses on the statistical properties of variational approximations and the design of efficient algorithms for computing them in an online fashion, and investigates Maximum Mean Discrepancy based estimators as learning rules that are robust to model misspecification.

In recent years, variational inference has been extensively studied from the computational viewpoint, but until very recently little attention had been paid in the literature to the theoretical properties of variational approximations. In this thesis, we investigate the consistency of variational approximations in various statistical models and the conditions that ensure it. In particular, we tackle the special cases of mixture models and deep neural networks. We also justify in theory the use of the ELBO maximization strategy, a model selection criterion that is widely used in the Variational Bayes community and is known to work well in practice.

Moreover, Bayesian inference provides an attractive online-learning framework for analyzing sequential data, and offers generalization guarantees which hold even under model mismatch and with adversaries. Unfortunately, exact Bayesian inference is rarely feasible in practice and approximation methods are usually employed, but do such methods preserve the generalization properties of Bayesian inference? In this thesis, we show that this is indeed the case for some variational inference algorithms. We propose new online, tempered variational algorithms and derive their generalization bounds. Our theoretical result relies on the convexity of the variational objective, but we argue that the result should hold more generally, and we present empirical evidence in support of this. Our work thus provides theoretical justification for online algorithms that rely on approximate Bayesian methods.

Another point addressed in this thesis is the design of a universal estimation procedure. This question is of major interest, in particular because it leads to robust estimators, a very hot topic in statistics and machine learning. We tackle the problem of universal estimation using a minimum distance estimator based on the Maximum Mean Discrepancy. We show that the estimator is robust both to dependence and to the presence of outliers in the dataset. We also highlight the connections that may exist with minimum distance estimators based on the L2-distance. Finally, we provide a theoretical study of the stochastic gradient descent algorithm used to compute the estimator, and we support our findings with numerical simulations. We also propose a Bayesian version of our estimator, which we study from both theoretical and computational points of view.
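For readers who want to connect this abstract to standard notation, the block below sketches the textbook definitions of two objects it names, the evidence lower bound (ELBO) and the Maximum Mean Discrepancy (MMD). These are generic definitions under common assumptions, not the thesis's own formulations or results.

```latex
% Evidence lower bound (ELBO) maximized in variational inference, for data x,
% parameter \theta with prior \pi, likelihood p(x \mid \theta), and a
% variational approximation q:
\mathrm{ELBO}(q)
  = \mathbb{E}_{\theta \sim q}\!\bigl[\log p(x \mid \theta)\bigr]
    - \mathrm{KL}\bigl(q \,\|\, \pi\bigr)
  = \log p(x) - \mathrm{KL}\bigl(q \,\|\, p(\cdot \mid x)\bigr),
% so maximizing the ELBO over q minimizes the KL divergence to the posterior.

% Maximum Mean Discrepancy between distributions P and Q for a kernel k with
% mean embedding \mu_P = \mathbb{E}_{X \sim P}[k(X,\cdot)] in the RKHS
% \mathcal{H}_k, and the induced minimum distance estimator over a model
% family \{P_\theta\} fitted to the empirical distribution \hat{P}_n:
\mathrm{MMD}_k(P, Q) = \bigl\| \mu_P - \mu_Q \bigr\|_{\mathcal{H}_k},
\qquad
\hat{\theta}_{\mathrm{MMD}} \in \arg\min_{\theta}\,
  \mathrm{MMD}_k\bigl(P_\theta, \hat{P}_n\bigr).
```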
Ilyas, Andrew. "On practical robustness of machine learning systems." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/122911.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
We consider the importance of robustness in evaluating machine learning systems, and in particular systems involving deep learning. We consider these systems' vulnerability to adversarial examples: subtle, crafted perturbations to inputs which induce large changes in output. We show that these adversarial examples are not only a theoretical concern, by designing the first 3D adversarial objects and by demonstrating that such examples can be constructed even when malicious actors have little power. We suggest a potential avenue for building robust deep learning models by leveraging generative models.
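For concreteness, the sketch below crafts the kind of perturbation this abstract describes using the fast gradient sign method, a standard textbook attack. It is a minimal, self-contained PyTorch illustration with a toy model and random data; it is not the 3D adversarial-object construction or the limited-power attack developed in the thesis.

```python
# Minimal illustration of an adversarial perturbation via the fast gradient
# sign method (FGSM). Generic textbook construction; the model and data are
# placeholders, not anything from the thesis above.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return x shifted by epsilon in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take a small, bounded step along the sign of the input gradient and
    # keep the result in the valid [0, 1] image range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model
    x = torch.rand(4, 1, 28, 28)        # pretend batch of images in [0, 1]
    y = torch.randint(0, 10, (4,))      # pretend labels
    x_adv = fgsm_perturb(model, x, y)
    print((x_adv - x).abs().max())      # perturbation magnitude <= epsilon
```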
Ishii, Shotaro, and David Ljunggren. "A Comparative Analysis of Robustness to Noise in Machine Learning Classifiers." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302532.
Data originating from real-world measurements often contain some degree of distortion. Such distortions can in some cases lead to reduced classification accuracy. This study compares three classification algorithms with respect to how robust they are when the data presented to them contain synthetic distortions. More specifically, random forests, support vector machines and artificial neural networks were trained and compared on four different datasets with varying levels of synthetic distortion. In summary, the random forest performed best and was the most robust classifier at eight of the ten distortion levels, closely followed by the artificial neural network. At the two remaining distortion levels, the support vector machine with a linear kernel performed best and was the most robust classifier.
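The sketch below illustrates the type of experiment this abstract describes: training a random forest, a linear-kernel support vector machine and a small neural network, then scoring them on test inputs corrupted by increasing levels of Gaussian noise. The dataset, noise model and hyperparameters are placeholders chosen for the illustration, not those used in the thesis.

```python
# Compare three classifiers on test data corrupted by Gaussian noise.
# Illustrative sketch only; dataset and settings are placeholders.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "random forest": RandomForestClassifier(random_state=0),
    "linear SVM": SVC(kernel="linear"),
    "neural network": MLPClassifier(max_iter=1000, random_state=0),
}

rng = np.random.default_rng(0)
for name, model in models.items():
    model.fit(X_train, y_train)                       # train on clean data
    for sigma in (0.0, 2.0, 4.0):                     # increasing noise levels
        noisy = X_test + rng.normal(scale=sigma, size=X_test.shape)
        acc = model.score(noisy, y_test)              # accuracy on noisy data
        print(f"{name}, sigma={sigma}: accuracy={acc:.3f}")
```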
Ebrahimi, Javid. "Robustness of Neural Networks for Discrete Input: An Adversarial Perspective." Thesis, University of Oregon, 2019. http://hdl.handle.net/1794/24535.
Fagogenis, Georgios. "Increasing the robustness of autonomous systems to hardware degradation using machine learning." Thesis, Heriot-Watt University, 2016. http://hdl.handle.net/10399/3378.
Haussamer, Nicolai. "Model Calibration with Machine Learning." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29451.
Books on the topic "Machine Learning Model Robustness"
Mohamed, Khaled Salah. Machine Learning for Model Order Reduction. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75714-8.
Subrahmanian, V. S., Chiara Pulice, James F. Brown, and Jacob Bonen-Clark. A Machine Learning Based Model of Boko Haram. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-60614-5.
Sturm, Jürgen. Approaches to Probabilistic Model Learning for Mobile Manipulation Robots. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.
Widjanarko, Bambang. Pengembangan model model machine learning ketahanan pangan melalui pembentukan zona musim (ZOM) suatu wilayah: Laporan akhir hibah kompetitif penelitian sesuai prioritas nasional tahun I. Surabaya: Lembaga Penelitian dan Pengabdian Kepada Masyarakat, Institut Teknologi Sepuluh Nopember, 2010.
Adversarial Robustness for Machine Learning Models. Elsevier Science & Technology Books, 2022.
Adversarial Robustness for Machine Learning Models. Elsevier Science & Technology, 2022.
Adversarial Robustness for Machine Learning. Elsevier, 2023. http://dx.doi.org/10.1016/c2020-0-01078-9.
Machine Learning Algorithms: Adversarial Robustness in Signal Processing. Springer International Publishing AG, 2022.
Winn, John Michael. Model-Based Machine Learning. Taylor & Francis Group, 2021.
Mohamed, Khaled Salah. Machine Learning for Model Order Reduction. Springer, 2019.
Book chapters on the topic "Machine Learning Model Robustness"
Bunse, Mirko, and Katharina Morik. "Certification of Model Robustness in Active Class Selection." In Machine Learning and Knowledge Discovery in Databases. Research Track, 266–81. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86520-7_17.
Guan, Ji, Wang Fang, and Mingsheng Ying. "Robustness Verification of Quantum Classifiers." In Computer Aided Verification, 151–74. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_7.
Bartz-Beielstein, Thomas, and Martin Zaefferer. "Models." In Hyperparameter Tuning for Machine and Deep Learning with R, 27–69. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-5170-1_3.
Mancino, Alberto Carlo Maria, and Tommaso Di Noia. "Towards Differentially Private Machine Learning Models and Their Robustness to Adversaries." In Lecture Notes in Computer Science, 455–61. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09917-5_35.
Johnson, Patricia M., Geunu Jeong, Kerstin Hammernik, Jo Schlemper, Chen Qin, Jinming Duan, Daniel Rueckert, et al. "Evaluation of the Robustness of Learned MR Image Reconstruction to Systematic Deviations Between Training and Test Data for the Models from the fastMRI Challenge." In Machine Learning for Medical Image Reconstruction, 25–34. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88552-6_3.
Lehrer, Steven F., Tian Xie, and Guanxi Yi. "Do the Hype of the Benefits from Using New Data Science Tools Extend to Forecasting Extremely Volatile Assets?" In Data Science for Economics and Finance, 287–330. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66891-4_13.
Han, Bo, Bo He, Mengmeng Ma, Tingting Sun, Tianhong Yan, and Amaury Lendasse. "RMSE-ELM: Recursive Model Based Selective Ensemble of Extreme Learning Machines for Robustness Improvement." In Proceedings of ELM-2014 Volume 1, 273–92. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-14063-6_24.
Conrad, F., E. Boos, M. Mälzer, H. Wiemer, and S. Ihlenfeldt. "Impact of Data Sampling on Performance and Robustness of Machine Learning Models in Production Engineering." In Lecture Notes in Production Engineering, 463–72. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-18318-8_47.
Deng, Lirui, Youjian Zhao, and Heng Bao. "A Self-supervised Adversarial Learning Approach for Network Intrusion Detection System." In Communications in Computer and Information Science, 73–85. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8285-9_5.
Labaca Castro, Raphael. "Towards Robustness." In Machine Learning under Malware Attack, 83–91. Wiesbaden: Springer Fachmedien Wiesbaden, 2023. http://dx.doi.org/10.1007/978-3-658-40442-0_11.
Conference papers on the topic "Machine Learning Model Robustness"
Zhou, Zhengbo, and Jianfei Yang. "Attentive Manifold Mixup for Model Robustness." In ICMLSC 2022: 2022 The 6th International Conference on Machine Learning and Soft Computing. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3523150.3523164.
Sivaslioglu, Samed, Ferhat Ozgur Catak, and Ensar Gul. "Incrementing Adversarial Robustness with Autoencoding for Machine Learning Model Attacks." In 2019 27th Signal Processing and Communications Applications Conference (SIU). IEEE, 2019. http://dx.doi.org/10.1109/siu.2019.8806432.
Jeanselme, V., A. Wertz, G. Clermont, M. R. Pinsky, and A. Dubrawski. "Robustness of Machine Learning Models for Hemorrhage Detection." In American Thoracic Society 2020 International Conference, May 15-20, 2020 - Philadelphia, PA. American Thoracic Society, 2020. http://dx.doi.org/10.1164/ajrccm-conference.2020.201.1_meetingabstracts.a6320.
Izmailov, Rauf, Sridhar Venkatesan, Achyut Reddy, Ritu Chadha, Michael De Lucia, and Alina Oprea. "Poisoning attacks on machine learning models in cyber systems and mitigation strategies." In Security, Robustness, and Trust in Artificial Intelligence and Distributed Architectures, edited by Misty Blowers, Russell D. Hall, and Venkateswara R. Dasari. SPIE, 2022. http://dx.doi.org/10.1117/12.2622112.
Bharitkar, Sunil. "Generative Feature Models and Robustness Analysis for Multimedia Content Classification." In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). IEEE, 2019. http://dx.doi.org/10.1109/icmla.2019.00025.
Shi, Ziqiang, Chaoliang Zhong, Yasuto Yokota, Wensheng Xia, and Jun Sun. "Robustness Evaluation of Deep Learning Models Based on Local Prediction Consistency." In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). IEEE, 2019. http://dx.doi.org/10.1109/icmla.2019.00224.
Reshytko, A., D. Egorov, A. Klenitskiy, and A. Shchepetnov. "WellNet: improvement of machine learning models robustness via comprehensive multi oilfield dataset." In EAGE Subsurface Intelligence Workshop. European Association of Geoscientists & Engineers, 2019. http://dx.doi.org/10.3997/2214-4609.2019x610116.
Zhang, Yu-Nong, Zhen Li, Dong-Sheng Guo, Ke Chen, and Pei Chen. "Superior robustness of using power-sigmoid activation functions in Z-type models for time-varying problems solving." In 2013 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2013. http://dx.doi.org/10.1109/icmlc.2013.6890387.
Sun, Haotian, Wenxing Zhou, and Jidong Kang. "Development of a Near-Neutral pH Stress Corrosion Cracking Growth Model for Pipelines Using Machine Learning Algorithms." In 2022 14th International Pipeline Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/ipc2022-87207.
Albeanu, Grigore, and Alexandra Stefania Moloiu. "Learning Methods and Transferable Approaches." In eLSE 2021. ADL Romania, 2021. http://dx.doi.org/10.12753/2066-026x-21-082.
Reports on the topic "Machine Learning Model Robustness"
Perdigão, Rui A. P. Information physics and quantum space technologies for natural hazard sensing, modelling and prediction. Meteoceanics, September 2021. http://dx.doi.org/10.46337/210930.
Rudner, Tim G. J., and Helen Toner. Key Concepts in AI Safety: Specification in Machine Learning. Center for Security and Emerging Technology, December 2021. http://dx.doi.org/10.51593/20210031.
Rudner, Tim, and Helen Toner. Key Concepts in AI Safety: Interpretability in Machine Learning. Center for Security and Emerging Technology, March 2021. http://dx.doi.org/10.51593/20190042.
Bajari, Patrick, Denis Nekipelov, Stephen Ryan, and Miaoyu Yang. Demand Estimation with Machine Learning and Model Combination. Cambridge, MA: National Bureau of Economic Research, February 2015. http://dx.doi.org/10.3386/w20955.
Mueller, Juliane, Charuleka Varadharajan, Erica Siirila-Woodburn, and Charles Koven. Machine Learning for Adaptive Model Refinement to Bridge Scales. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769741.
Rudner, Tim, and Helen Toner. Key Concepts in AI Safety: Robustness and Adversarial Examples. Center for Security and Emerging Technology, March 2021. http://dx.doi.org/10.51593/20190041.
Hamann, Hendrik F. A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology. Office of Scientific and Technical Information (OSTI), May 2017. http://dx.doi.org/10.2172/1395344.
Geza, Mangistu, T. Tesfa, Liangping Li, and M. Qiao. Toward Hybrid Physics-Machine Learning to Improve Land Surface Model Predictions. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769785.
Tebaldi, Claudia, Zhangshuan Hou, Abigail Snyder, and Kalyn Dorheim. Machine Learning for a-posteriori model-observed data fusion to enhance predictive value of ESM output. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769740.
Tang, Jinyun, William Riley, Qing Zhu, and Trevor Keenan. Using machine learning and artificial intelligence to improve model-data integrated earth system model predictions of water and carbon cycle extremes. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769794.