Academic literature on the topic 'Model-agnostic methods'
Below are lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Model-agnostic methods.' References are given in a consistent bibliographic style and can be generated in APA, MLA, Harvard, Chicago, Vancouver, and other citation formats; where the metadata allows, the full text of a publication can be downloaded as a PDF and its abstract read online.
Journal articles on the topic "Model-agnostic methods":
Su, Houcheng, Weihao Luo, Daixian Liu, Mengzhu Wang, Jing Tang, Junyang Chen, Cong Wang, and Zhenghan Chen. "Sharpness-Aware Model-Agnostic Long-Tailed Domain Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 15091–99. http://dx.doi.org/10.1609/aaai.v38i13.29431.
Pugnana, Andrea, and Salvatore Ruggieri. "A Model-Agnostic Heuristics for Selective Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9461–69. http://dx.doi.org/10.1609/aaai.v37i8.26133.
Satrya, Wahyu Fadli, and Ji-Hoon Yun. "Combining Model-Agnostic Meta-Learning and Transfer Learning for Regression." Sensors 23, no. 2 (January 4, 2023): 583. http://dx.doi.org/10.3390/s23020583.
Atallah, Rasha Ragheb, Amirrudin Kamsin, Maizatul Akmar Ismail, and Ahmad Sami Al-Shamayleh. "Neural Network with Agnostic Meta-Learning Model for Face-Aging Recognition." Malaysian Journal of Computer Science 35, no. 1 (January 31, 2022): 56–69. http://dx.doi.org/10.22452/mjcs.vol35no1.4.
Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability." Machine Learning and Knowledge Extraction 3, no. 3 (June 30, 2021): 525–41. http://dx.doi.org/10.3390/make3030027.
Tak, Jae-Ho, and Byung-Woo Hong. "Enhancing Model Agnostic Meta-Learning via Gradient Similarity Loss." Electronics 13, no. 3 (January 29, 2024): 535. http://dx.doi.org/10.3390/electronics13030535.
Hou, Xiaoyu, Jihui Xu, Jinming Wu, and Huaiyu Xu. "Cross Domain Adaptation of Crowd Counting with Model-Agnostic Meta-Learning." Applied Sciences 11, no. 24 (December 17, 2021): 12037. http://dx.doi.org/10.3390/app112412037.
Chen, Zhouyuan, Zhichao Lian, and Zhe Xu. "Interpretable Model-Agnostic Explanations Based on Feature Relationships for High-Performance Computing." Axioms 12, no. 10 (October 23, 2023): 997. http://dx.doi.org/10.3390/axioms12100997.
Hu, Cong, Kai Xu, Zhengqiu Zhu, Long Qin, and Quanjun Yin. "Multi-Agent Chronological Planning with Model-Agnostic Meta Reinforcement Learning." Applied Sciences 13, no. 16 (August 11, 2023): 9174. http://dx.doi.org/10.3390/app13169174.
Xue, Tianfang, and Haibin Yu. "Unbiased Model-Agnostic Metalearning Algorithm for Learning Target-Driven Visual Navigation Policy." Computational Intelligence and Neuroscience 2021 (December 8, 2021): 1–12. http://dx.doi.org/10.1155/2021/5620751.
Dissertations / Theses on the topic "Model-agnostic methods":
Kanerva, Anton, and Fredrik Helgesson. "On the Use of Model-Agnostic Interpretation Methods as Defense Against Adversarial Input Attacks on Tabular Data." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20085.
Context. Machine learning is a field of artificial intelligence under constant development. The number of domains in which machine learning models are deployed keeps growing, and these systems spread, largely unnoticed, into our daily lives through various electronic devices. Over the years, much time and effort has been devoted to improving the performance of these models, which has overshadowed the risk of vulnerabilities at the core of such systems: the trained model. A relatively new attack, the adversarial input attack, which aims to fool the model into making incorrect decisions, has been researched almost exclusively in the image recognition domain. However, the threat posed by adversarial input attacks extends beyond image data to other data domains, such as the tabular domain, which is the most common data domain in industry. Methods for interpreting complex machine learning models can help people understand the behavior of these systems and the decisions they make. Understanding a model's behavior is an important component in discovering, understanding, and mitigating its vulnerabilities. Objectives. This study seeks to reduce the research gap concerning adversarial input attacks and corresponding defense methods in the tabular domain. The goal of the study is to analyze how model-agnostic interpretation methods can be used to mitigate and detect adversarial input attacks against tabular data. Methods. The goal is reached through three consecutive experiments in which model interpretation methods are analyzed, adversarial input attacks are evaluated and visualized, and a new interpretation-based method for detecting adversarial input attacks is proposed, together with a new mitigation technique in which feature selection is used defensively to reduce the size of the attack vector. Results. The proposed method for detecting adversarial input attacks shows state-of-the-art results, with over 86% accuracy.
The proposed mitigation technique proved successful in hardening the model against adversarial input attacks, reducing their attack strength by 33% without degrading the model's classification performance. Conclusions. This study contributes useful methods for detecting and mitigating adversarial input attacks, as well as methods for evaluating and visualizing barely perceptible attacks against tabular data.
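The general idea in the abstract above (using model-agnostic explanations to flag adversarial tabular inputs) can be illustrated with a loose, hypothetical sketch. The occlusion-style attribution, the z-score threshold, and the synthetic data below are all illustrative assumptions, not the thesis's actual method:

```python
# Hypothetical sketch: flag suspicious tabular inputs by comparing their
# model-agnostic explanation vectors against those computed on clean data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def explanation_vector(model, x, baseline):
    """Occlusion-style attribution: drop in the predicted probability of
    class 1 when each feature is replaced by its training-set mean."""
    p_ref = model.predict_proba(x.reshape(1, -1))[0, 1]
    attributions = np.empty(len(x))
    for j in range(len(x)):
        x_pert = x.copy()
        x_pert[j] = baseline[j]
        attributions[j] = p_ref - model.predict_proba(x_pert.reshape(1, -1))[0, 1]
    return attributions

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
baseline = X.mean(axis=0)

# Explanation vectors on clean inputs define a "normal" region; inputs whose
# explanations deviate strongly from it are flagged as possibly adversarial.
clean = np.array([explanation_vector(model, x, baseline) for x in X[:100]])
center, spread = clean.mean(axis=0), clean.std(axis=0) + 1e-9

def suspicion_score(x):
    z = (explanation_vector(model, x, baseline) - center) / spread
    return float(np.abs(z).max())
```

An input would be flagged when `suspicion_score` exceeds a threshold calibrated on held-out clean data; the thesis's actual detector and mitigation (defensive feature selection) are described in the full text.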
Danesh, Alaghehband Tina Sadat. "Vers une conception robuste en ingénierie des procédés. Utilisation de modèles agnostiques de l'interprétabilité en apprentissage automatique." Electronic Thesis or Diss., Toulouse, INPT, 2023. http://www.theses.fr/2023INPT0138.
Robust process design holds paramount importance in various industries, such as process and chemical engineering. The nature of robustness lies in ensuring that a process can consistently deliver desired outcomes for decision-makers and/or stakeholders, even when faced with intrinsic variability and uncertainty. A robustly designed process not only enhances product quality and reliability but also significantly reduces the risk of costly failures, downtime, and product recalls. It enhances efficiency and sustainability by minimizing process deviations and failures. There are different methods to approach the robustness of a complex system, such as the design of experiments, robust optimization, and response surface methodology. Among the robust design methods, sensitivity analysis can be applied as a supportive technique to gain insights into how changes in input parameters affect performance and robustness. Due to the rapid development and advancement of engineering science, the use of physical models for sensitivity analysis presents several challenges, such as unsatisfied assumptions and computation time. These problems lead us to consider applying machine learning (ML) models to complex processes. As the issue of interpretability in ML has gained increasing importance, there is a growing need to understand how these models arrive at their predictions or decisions and how different parameters are related. As their performance consistently surpasses that of other models, such as knowledge-based models, the provision of explanations, justifications, and insights into the workings of ML models not only enhances their trustworthiness and fairness but also empowers stakeholders to make informed decisions, identify biases, detect errors, and improve the overall performance and reliability of the process. Various methods are available to address interpretability, including model-specific and model-agnostic methods.
In this thesis, our objective is to enhance the interpretability of various ML methods while maintaining a balance between accuracy and interpretability, so as to assure decision-makers and stakeholders that the model or process can be considered robust. Simultaneously, we aim to demonstrate that users can trust ML model predictions backed by model-agnostic techniques, which work across various scenarios, including equation-based, hybrid, and data-driven models. To achieve this goal, we applied several model-agnostic methods, such as partial dependence plots, individual conditional expectations, and accumulated local effects, to diverse applications.
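The partial dependence plot named in the abstract above can be computed directly from its definition: clamp one feature to each value on a grid and average the model's predictions over the data. A minimal sketch on a synthetic regression task (the model and data here are illustrative, not those of the thesis):

```python
# Partial dependence of a fitted model on a single feature, from first
# principles, on a synthetic regression problem.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor

def partial_dependence_1d(model, X, feature, grid):
    """PDP by definition: clamp `feature` to each grid value and average
    the model's predictions over the empirical data distribution."""
    pd_curve = []
    for v in grid:
        X_clamped = X.copy()
        X_clamped[:, feature] = v
        pd_curve.append(model.predict(X_clamped).mean())
    return np.array(pd_curve)

X, y = make_friedman1(n_samples=300, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
curve = partial_dependence_1d(model, X, feature=0, grid=grid)
```

Plotting `curve` against `grid` gives the PDP; keeping the per-row predictions instead of their mean would yield the individual conditional expectation curves also mentioned in the abstract.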
Neves, Maria Inês Lourenço das. "Opening the black-box of artificial intelligence predictions on clinical decision support systems." Master's thesis, 2021. http://hdl.handle.net/10362/126699.
Cardiovascular diseases are the leading cause of death worldwide, and their treatment and prevention rely on the interpretation of the electrocardiogram. The interpretation of the electrocardiogram by physicians is inherently subjective and therefore prone to error. To support physicians' decisions, artificial intelligence is being used to develop models capable of interpreting large data sets and providing accurate decisions. However, the lack of interpretability of most machine learning models is one of the drawbacks of relying on them, especially in a clinical context. Additionally, most explainable artificial intelligence methods assume independence between samples, which implies assuming temporal independence when dealing with time series. The inherent character of time series cannot be ignored, since it matters for the human decision-making process. This dissertation draws on explainable artificial intelligence to make heartbeat classification intelligible, through several adaptations of state-of-the-art model-agnostic methods. To address the explanation of time-series classifiers, a preliminary taxonomy is proposed, along with the use of the signal's derivative as a complement to add temporal dependence between samples. The results were validated on an extensive public data set, using the 1-D Jaccard index to compare the subsequences extracted from an interpretable model with those from the explainable artificial intelligence methods used, and a quality analysis to assess whether the explanation fits the model's behavior. To evaluate models with distinct internal logic, the validation was carried out using, on the one hand, a more transparent model and, on the other, a more opaque one, in both binary and multiclass classification settings.
The results show the promise of including the signal's derivative to introduce temporal dependence between samples in the explanations provided, for models with simpler internal logic.
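Two ingredients described in the abstract above can be sketched under assumed formulations: the signal derivative appended as a second channel to make temporal change explicit, and a 1-D Jaccard index over the sets of time steps highlighted by two explanation methods. The exact formulations in the dissertation may differ:

```python
# Sketch of (i) a derivative channel for a 1-D signal and (ii) a 1-D
# Jaccard index between two sets of highlighted time-step indices.
import numpy as np

def with_derivative(signal):
    """Stack the first difference (padded to keep the original length)
    as a second channel alongside the raw signal."""
    d = np.diff(signal, prepend=signal[0])
    return np.stack([signal, d], axis=0)

def jaccard_1d(idx_a, idx_b):
    """Overlap between two sets of time-step indices: |A ∩ B| / |A ∪ B|."""
    a, b = set(idx_a), set(idx_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0
```

For example, `jaccard_1d([1, 2, 3], [2, 3, 4])` returns 0.5, since the two highlighted regions share two of four distinct time steps.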
Book chapters on the topic "Model-agnostic methods":
Gianfagna, Leonida, and Antonio Di Cecco. "Model-Agnostic Methods for XAI." In Explainable AI with Python, 81–113. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68640-6_4.
Gunel, Kadir, and Mehmet Fatih Amasyali. "Model Agnostic Knowledge Transfer Methods for Sentence Embedding Models." In 2nd International Congress of Electrical and Computer Engineering, 3–16. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52760-9_1.
Molnar, Christoph, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, and Bernd Bischl. "General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models." In xxAI - Beyond Explainable AI, 39–68. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_4.
Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning." In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.
Nguyen, Thu Trang, Thach Le Nguyen, and Georgiana Ifrim. "A Model-Agnostic Approach to Quantifying the Informativeness of Explanation Methods for Time Series Classification." In Advanced Analytics and Learning on Temporal Data, 77–94. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65742-0_6.
Krishna, Siddharth, Michael Emmi, Constantin Enea, and Dejan Jovanović. "Verifying Visibility-Based Weak Consistency." In Programming Languages and Systems, 280–307. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44914-8_11.
Gunasekaran, Abirami, Minsi Chen, Richard Hill, and Keith McCabe. "Method Agnostic Model Class Reliance (MAMCR) Explanation of Multiple Machine Learning Models." In Soft Computing and Its Engineering Applications, 56–71. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-27609-5_5.
Lampridis, Orestis, Riccardo Guidotti, and Salvatore Ruggieri. "Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars." In Discovery Science, 357–73. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61527-7_24.
Sim, Min K. "Explanation using model-agnostic methods." In Human-Centered Artificial Intelligence, 17–31. Elsevier, 2022. http://dx.doi.org/10.1016/b978-0-323-85648-5.00008-6.
Tiwari, Ravi Shekhar. "Hate speech detection using LSTM and explanation by LIME (local interpretable model-agnostic explanations)." In Computational Intelligence Methods for Sentiment Analysis in Natural Language Processing Applications, 93–110. Elsevier, 2024. http://dx.doi.org/10.1016/b978-0-443-22009-8.00005-7.
Conference papers on the topic "Model-agnostic methods":
Menon, Rakesh, Kerem Zaman, and Shashank Srivastava. "MaNtLE: Model-agnostic Natural Language Explainer." In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.emnlp-main.832.
Sikder, Md Nazmul Kabir, Feras A. Batarseh, Pei Wang, and Nitish Gorentala. "Model-Agnostic Scoring Methods for Artificial Intelligence Assurance." In 2022 IEEE 29th Annual Software Technology Conference (STC). IEEE, 2022. http://dx.doi.org/10.1109/stc55697.2022.00011.
Tayal, Kshitij, Rahul Ghosh, and Vipin Kumar. "Model-agnostic Methods for Text Classification with Inherent Noise." In Proceedings of the 28th International Conference on Computational Linguistics: Industry Track. Stroudsburg, PA, USA: International Committee on Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.coling-industry.19.
Sandu, Marian Gabriel, and Ștefan Trăușan-Matu. "Comparing model-agnostic and model-specific XAI methods in Natural Language Processing." In RoCHI - International Conference on Human-Computer Interaction. MATRIX ROM, 2022. http://dx.doi.org/10.37789/rochi.2022.1.1.19.
Letrache, Khadija, and Mohammed Ramdani. "Explainable Artificial Intelligence: A Review and Case Study on Model-Agnostic Methods." In 2023 14th International Conference on Intelligent Systems: Theories and Applications (SITA). IEEE, 2023. http://dx.doi.org/10.1109/sita60746.2023.10373722.
Emelin, Denis, Ivan Titov, and Rico Sennrich. "Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks." In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.616.
Regenwetter, Lyle, Yazan Abu Obaideh, and Faez Ahmed. "Counterfactuals for Design: A Model-Agnostic Method for Design Recommendations." In ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/detc2023-117216.
Ouyang, Linshu, Yongzheng Zhang, Hui Liu, Yige Chen, and Yipeng Wang. "Gated POS-Level Language Model for Authorship Verification." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/557.
Zhan, Guodong David, Mohammed J. Dossary, Trieu Phat Luu, Huang Xu, Ted Furlong, and John Bomidi. "On Field Implementation of Real-Time Bit-Wear Estimation with Bit Agnostic Deep Learning Artificial Intelligence Model Along with Physics-Hybrid Features." In SPE/IADC Middle East Drilling Technology Conference and Exhibition. SPE, 2023. http://dx.doi.org/10.2118/214603-ms.
Reports on the topic "Model-agnostic methods":
Walizer, Laura, Robert Haehnel, Luke Allen, and Yonghu Wenren. Application of multi-fidelity methods to rotorcraft performance assessment. Engineer Research and Development Center (U.S.), May 2024. http://dx.doi.org/10.21079/11681/48474.
Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.