Selection of scientific literature on the topic "Model-agnostic methods"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Model-agnostic methods."
Next to each work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are present in the metadata.
Journal articles on the topic "Model-agnostic methods"
Su, Houcheng, Weihao Luo, Daixian Liu, Mengzhu Wang, Jing Tang, Junyang Chen, Cong Wang, and Zhenghan Chen. "Sharpness-Aware Model-Agnostic Long-Tailed Domain Generalization". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 15091–99. http://dx.doi.org/10.1609/aaai.v38i13.29431.
Pugnana, Andrea, and Salvatore Ruggieri. "A Model-Agnostic Heuristics for Selective Classification". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9461–69. http://dx.doi.org/10.1609/aaai.v37i8.26133.
Satrya, Wahyu Fadli, and Ji-Hoon Yun. "Combining Model-Agnostic Meta-Learning and Transfer Learning for Regression". Sensors 23, no. 2 (January 4, 2023): 583. http://dx.doi.org/10.3390/s23020583.
Atallah, Rasha Ragheb, Amirrudin Kamsin, Maizatul Akmar Ismail, and Ahmad Sami Al-Shamayleh. "NEURAL NETWORK WITH AGNOSTIC META-LEARNING MODEL FOR FACE-AGING RECOGNITION". Malaysian Journal of Computer Science 35, no. 1 (January 31, 2022): 56–69. http://dx.doi.org/10.22452/mjcs.vol35no1.4.
Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability". Machine Learning and Knowledge Extraction 3, no. 3 (June 30, 2021): 525–41. http://dx.doi.org/10.3390/make3030027.
Tak, Jae-Ho, and Byung-Woo Hong. "Enhancing Model Agnostic Meta-Learning via Gradient Similarity Loss". Electronics 13, no. 3 (January 29, 2024): 535. http://dx.doi.org/10.3390/electronics13030535.
Hou, Xiaoyu, Jihui Xu, Jinming Wu, and Huaiyu Xu. "Cross Domain Adaptation of Crowd Counting with Model-Agnostic Meta-Learning". Applied Sciences 11, no. 24 (December 17, 2021): 12037. http://dx.doi.org/10.3390/app112412037.
Chen, Zhouyuan, Zhichao Lian, and Zhe Xu. "Interpretable Model-Agnostic Explanations Based on Feature Relationships for High-Performance Computing". Axioms 12, no. 10 (October 23, 2023): 997. http://dx.doi.org/10.3390/axioms12100997.
Hu, Cong, Kai Xu, Zhengqiu Zhu, Long Qin, and Quanjun Yin. "Multi-Agent Chronological Planning with Model-Agnostic Meta Reinforcement Learning". Applied Sciences 13, no. 16 (August 11, 2023): 9174. http://dx.doi.org/10.3390/app13169174.
Xue, Tianfang, and Haibin Yu. "Unbiased Model-Agnostic Metalearning Algorithm for Learning Target-Driven Visual Navigation Policy". Computational Intelligence and Neuroscience 2021 (December 8, 2021): 1–12. http://dx.doi.org/10.1155/2021/5620751.
Dissertations on the topic "Model-agnostic methods"
Kanerva, Anton, and Fredrik Helgesson. "On the Use of Model-Agnostic Interpretation Methods as Defense Against Adversarial Input Attacks on Tabular Data". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20085.
Context. Machine learning is a field of artificial intelligence under constant development. The number of domains in which we deploy machine learning models keeps growing, and these systems spread unnoticed into our daily lives through various electronic devices. Over the years, much time and effort has been invested in improving the performance of these models, which has overshadowed the risk of vulnerabilities at the core of these systems: the trained model. A relatively new attack, the adversarial input attack, which aims to trick the model into making incorrect decisions, has been studied almost exclusively in image recognition. However, the threat posed by adversarial input attacks extends beyond image data to other data domains, such as the tabular domain, which is the most common data domain in industry. Methods for interpreting complex machine learning models can help people understand the behavior of these complex systems and the decisions they make. Understanding a model's behavior is a key component in discovering, understanding, and mitigating its vulnerabilities. Objectives. This study seeks to reduce the research gap concerning adversarial input attacks and corresponding defense methods in the tabular domain. Its goal is to analyze how model-agnostic interpretation methods can be used to mitigate and detect adversarial input attacks against tabular data. Methods. The goal is reached through three consecutive experiments in which model interpretation methods are analyzed, adversarial input attacks are evaluated and visualized, and a new interpretation-based method for detecting adversarial input attacks is proposed, together with a new mitigation technique in which feature selection is used defensively to reduce the size of the attack vector. Results. The proposed method for detecting adversarial input attacks achieves state-of-the-art results with over 86% accuracy.
The proposed mitigation technique succeeded in hardening the model against adversarial input attacks, reducing their attack strength by 33% without degrading the model's classification performance. Conclusions. This study contributes useful methods for detecting and mitigating adversarial input attacks, as well as methods for evaluating and visualizing hard-to-perceive attacks against tabular data.
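The detection idea described in this abstract, comparing a suspect input's model-agnostic explanation against what is typical for clean data, can be sketched in a few lines. The helper names, the L1 distance, and the toy attribution vectors below are illustrative assumptions, not the thesis's actual implementation:

```python
# Hedged sketch: flag inputs whose feature attributions (from any
# model-agnostic interpreter such as LIME or SHAP) deviate strongly
# from the average attribution observed on clean reference data.

def mean_attribution(attributions):
    """Average attribution vector over a set of clean reference inputs."""
    n = len(attributions)
    return [sum(vec[i] for vec in attributions) / n
            for i in range(len(attributions[0]))]

def deviation_score(attribution, reference_mean):
    """L1 distance between one input's attribution and the clean average."""
    return sum(abs(a - m) for a, m in zip(attribution, reference_mean))

# Toy attribution vectors for three clean inputs over three features.
clean = [[0.5, 0.1, 0.4], [0.6, 0.0, 0.4], [0.55, 0.05, 0.4]]
ref = mean_attribution(clean)

# Attribution mass shifted onto feature 2 looks anomalous...
suspicious = deviation_score([0.0, 0.9, 0.1], ref)
# ...while an attribution close to the clean pattern scores low.
typical = deviation_score([0.5, 0.1, 0.4], ref)
```

A threshold on this score would then mark high-deviation inputs as potential adversarial examples; the thesis reports over 86% detection accuracy with its own, more elaborate method.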
Danesh, Alaghehband Tina Sadat. "Vers une conception robuste en ingénierie des procédés. Utilisation de modèles agnostiques de l'interprétabilité en apprentissage automatique". Electronic Thesis or Diss., Toulouse, INPT, 2023. http://www.theses.fr/2023INPT0138.
Robust process design holds paramount importance in various industries, such as process and chemical engineering. Robustness lies in ensuring that a process can consistently deliver the desired outcomes for decision-makers and/or stakeholders, even when faced with intrinsic variability and uncertainty. A robustly designed process not only enhances product quality and reliability but also significantly reduces the risk of costly failures, downtime, and product recalls. It improves efficiency and sustainability by minimizing process deviations and failures. There are different methods for approaching the robustness of a complex system, such as design of experiments, robust optimization, and response surface methodology. Among robust design methods, sensitivity analysis can be applied as a supporting technique to gain insight into how changes in input parameters affect performance and robustness. Owing to the rapid development of engineering science, the use of physical models for sensitivity analysis presents several challenges, such as unsatisfied assumptions and computation time. These problems lead us to consider applying machine learning (ML) models to complex processes. As the issue of interpretability in ML has gained increasing importance, there is a growing need to understand how these models arrive at their predictions or decisions and how different parameters are related. As their performance consistently surpasses that of other models, such as knowledge-based models, providing explanations, justifications, and insights into the workings of ML models not only enhances their trustworthiness and fairness but also empowers stakeholders to make informed decisions, identify biases, detect errors, and improve the overall performance and reliability of the process. Various methods are available to address interpretability, including model-specific and model-agnostic methods.
In this thesis, our objective is to enhance the interpretability of various ML methods while maintaining a balance between accuracy and interpretability, so that decision-makers or stakeholders can consider our model or process robust. At the same time, we aim to demonstrate that users can trust ML model predictions guaranteed by model-agnostic techniques, which work across various scenarios, including equation-based, hybrid, and data-driven models. To achieve this goal, we applied several model-agnostic methods, such as partial dependence plots, individual conditional expectations, and accumulated local effects, to diverse applications.
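As a rough illustration of one of the model-agnostic techniques this abstract names, a partial dependence curve averages the model's prediction over the data while one feature is forced to each value on a grid. The `partial_dependence` helper, the toy model, and the data below are illustrative assumptions, not code from the thesis:

```python
# Minimal sketch of a partial dependence computation for any black-box
# predict function over tabular data represented as dicts.

def partial_dependence(predict, data, feature, grid):
    """For each grid value v, force `feature` to v in every row and
    average the model's predictions over the dataset."""
    pd_values = []
    for v in grid:
        modified = [dict(row, **{feature: v}) for row in data]
        preds = [predict(row) for row in modified]
        pd_values.append(sum(preds) / len(preds))
    return pd_values

# Toy model: linear in x1, quadratic in x2.
model = lambda row: 2.0 * row["x1"] + row["x2"] ** 2

data = [{"x1": 1.0, "x2": 0.0}, {"x1": 2.0, "x2": 1.0}, {"x1": 3.0, "x2": 2.0}]
curve = partial_dependence(model, data, "x1", [0.0, 1.0, 2.0])
# For this toy model, each unit step in x1 shifts the averaged
# prediction by exactly the coefficient 2.0.
```

Individual conditional expectation curves are the same computation without the final averaging step (one curve per row), and accumulated local effects replace the global averaging with local differences to reduce bias from correlated features.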
Neves, Maria Inês Lourenço das. "Opening the black-box of artificial intelligence predictions on clinical decision support systems". Master's thesis, 2021. http://hdl.handle.net/10362/126699.
Cardiovascular diseases are the leading cause of death worldwide, and their treatment and prevention rely on interpretation of the electrocardiogram. Electrocardiogram interpretation, performed by physicians, is inherently subjective and therefore prone to error. To support physicians' decisions, artificial intelligence is being used to develop models capable of interpreting large datasets and providing accurate decisions. However, the lack of interpretability of most machine learning models is one of the drawbacks of relying on them, especially in a clinical context. Furthermore, most explainable artificial intelligence methods assume independence between samples, which implies assuming temporal independence when dealing with time series. This inherent characteristic of time series cannot be ignored, as it matters to the human decision-making process. This dissertation draws on explainable artificial intelligence to make heartbeat classification intelligible, using several adaptations of state-of-the-art model-agnostic methods. To address the explanation of time series classifiers, a preliminary taxonomy is proposed, along with the use of the signal's derivative as a complement to add temporal dependence between samples. The results were validated on a large public dataset, using the 1-D Jaccard index to compare the subsequences extracted from an interpretable model with those of the explainable artificial intelligence methods used, and a quality analysis to assess whether the explanation fits the model's behavior.
To evaluate models with distinct internal logic, validation was performed using both a more transparent model and a more opaque one, in both binary and multiclass classification settings. The results show the promise of including the signal's derivative to introduce temporal dependence between samples in the explanations provided, for models with simpler internal logic.
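The 1-D Jaccard index used for validation in this dissertation compares two sets of time-step indices, such as the subsequences highlighted by two different explanation methods. A minimal sketch, with illustrative index ranges:

```python
def jaccard_1d(a, b):
    """Jaccard index between two collections of time-step indices,
    e.g. the subsequences two explanation methods mark as relevant."""
    sa, sb = set(a), set(b)
    if not sa and not sb:
        return 1.0  # two empty explanations agree perfectly by convention
    return len(sa & sb) / len(sa | sb)

# Overlap between an interpretable model's subsequence (steps 10-19)
# and an XAI method's subsequence (steps 15-24): 5 shared indices
# out of 15 in the union, i.e. 1/3.
score = jaccard_1d(range(10, 20), range(15, 25))
```

A score near 1 indicates that the model-agnostic explanation recovers essentially the same subsequence as the interpretable reference model.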
Book chapters on the topic "Model-agnostic methods"
Gianfagna, Leonida, and Antonio Di Cecco. "Model-Agnostic Methods for XAI". In Explainable AI with Python, 81–113. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68640-6_4.
Gunel, Kadir, and Mehmet Fatih Amasyali. "Model Agnostic Knowledge Transfer Methods for Sentence Embedding Models". In 2nd International Congress of Electrical and Computer Engineering, 3–16. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52760-9_1.
Molnar, Christoph, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, and Bernd Bischl. "General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models". In xxAI - Beyond Explainable AI, 39–68. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_4.
Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning". In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.
Nguyen, Thu Trang, Thach Le Nguyen, and Georgiana Ifrim. "A Model-Agnostic Approach to Quantifying the Informativeness of Explanation Methods for Time Series Classification". In Advanced Analytics and Learning on Temporal Data, 77–94. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65742-0_6.
Krishna, Siddharth, Michael Emmi, Constantin Enea, and Dejan Jovanović. "Verifying Visibility-Based Weak Consistency". In Programming Languages and Systems, 280–307. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44914-8_11.
Gunasekaran, Abirami, Minsi Chen, Richard Hill, and Keith McCabe. "Method Agnostic Model Class Reliance (MAMCR) Explanation of Multiple Machine Learning Models". In Soft Computing and Its Engineering Applications, 56–71. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-27609-5_5.
Lampridis, Orestis, Riccardo Guidotti, and Salvatore Ruggieri. "Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars". In Discovery Science, 357–73. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61527-7_24.
Sim, Min K. "Explanation using model-agnostic methods". In Human-Centered Artificial Intelligence, 17–31. Elsevier, 2022. http://dx.doi.org/10.1016/b978-0-323-85648-5.00008-6.
Tiwari, Ravi Shekhar. "Hate speech detection using LSTM and explanation by LIME (local interpretable model-agnostic explanations)". In Computational Intelligence Methods for Sentiment Analysis in Natural Language Processing Applications, 93–110. Elsevier, 2024. http://dx.doi.org/10.1016/b978-0-443-22009-8.00005-7.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Model-agnostic methods"
Menon, Rakesh, Kerem Zaman und Shashank Srivastava. „MaNtLE: Model-agnostic Natural Language Explainer“. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.emnlp-main.832.
Der volle Inhalt der QuelleSikder, Md Nazmul Kabir, Feras A. Batarseh, Pei Wang und Nitish Gorentala. „Model-Agnostic Scoring Methods for Artificial Intelligence Assurance“. In 2022 IEEE 29th Annual Software Technology Conference (STC). IEEE, 2022. http://dx.doi.org/10.1109/stc55697.2022.00011.
Der volle Inhalt der QuelleTayal, Kshitij, Rahul Ghosh und Vipin Kumar. „Model-agnostic Methods for Text Classification with Inherent Noise“. In Proceedings of the 28th International Conference on Computational Linguistics: Industry Track. Stroudsburg, PA, USA: International Committee on Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.coling-industry.19.
Sandu, Marian Gabriel, and Ștefan Trăușan-Matu. "Comparing model-agnostic and model-specific XAI methods in Natural Language Processing". In RoCHI - International Conference on Human-Computer Interaction. MATRIX ROM, 2022. http://dx.doi.org/10.37789/rochi.2022.1.1.19.
Letrache, Khadija, and Mohammed Ramdani. "Explainable Artificial Intelligence: A Review and Case Study on Model-Agnostic Methods". In 2023 14th International Conference on Intelligent Systems: Theories and Applications (SITA). IEEE, 2023. http://dx.doi.org/10.1109/sita60746.2023.10373722.
Emelin, Denis, Ivan Titov, and Rico Sennrich. "Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks". In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.616.
Regenwetter, Lyle, Yazan Abu Obaideh, and Faez Ahmed. "Counterfactuals for Design: A Model-Agnostic Method for Design Recommendations". In ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/detc2023-117216.
Ouyang, Linshu, Yongzheng Zhang, Hui Liu, Yige Chen, and Yipeng Wang. "Gated POS-Level Language Model for Authorship Verification". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/557.
Zhan, Guodong David, Mohammed J. Dossary, Trieu Phat Luu, Huang Xu, Ted Furlong, and John Bomidi. "On Field Implementation of Real-Time Bit-Wear Estimation with Bit Agnostic Deep Learning Artificial Intelligence Model Along with Physics-Hybrid Features". In SPE/IADC Middle East Drilling Technology Conference and Exhibition. SPE, 2023. http://dx.doi.org/10.2118/214603-ms.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Model-agnostic methods"
Walizer, Laura, Robert Haehnel, Luke Allen und Yonghu Wenren. Application of multi-fidelity methods to rotorcraft performance assessment. Engineer Research and Development Center (U.S.), Mai 2024. http://dx.doi.org/10.21079/11681/48474.
Der volle Inhalt der QuelleYu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang und Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, Dezember 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.