Scientific literature on the topic "Model-agnostic methods"
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles
Contents
Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Model-agnostic methods".
Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Model-agnostic methods"
Su, Houcheng, Weihao Luo, Daixian Liu, Mengzhu Wang, Jing Tang, Junyang Chen, Cong Wang, and Zhenghan Chen. "Sharpness-Aware Model-Agnostic Long-Tailed Domain Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 15091–99. http://dx.doi.org/10.1609/aaai.v38i13.29431.
Pugnana, Andrea, and Salvatore Ruggieri. "A Model-Agnostic Heuristics for Selective Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9461–69. http://dx.doi.org/10.1609/aaai.v37i8.26133.
Satrya, Wahyu Fadli, and Ji-Hoon Yun. "Combining Model-Agnostic Meta-Learning and Transfer Learning for Regression." Sensors 23, no. 2 (January 4, 2023): 583. http://dx.doi.org/10.3390/s23020583.
Atallah, Rasha Ragheb, Amirrudin Kamsin, Maizatul Akmar Ismail, and Ahmad Sami Al-Shamayleh. "NEURAL NETWORK WITH AGNOSTIC META-LEARNING MODEL FOR FACE-AGING RECOGNITION." Malaysian Journal of Computer Science 35, no. 1 (January 31, 2022): 56–69. http://dx.doi.org/10.22452/mjcs.vol35no1.4.
Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability." Machine Learning and Knowledge Extraction 3, no. 3 (June 30, 2021): 525–41. http://dx.doi.org/10.3390/make3030027.
Tak, Jae-Ho, and Byung-Woo Hong. "Enhancing Model Agnostic Meta-Learning via Gradient Similarity Loss." Electronics 13, no. 3 (January 29, 2024): 535. http://dx.doi.org/10.3390/electronics13030535.
Hou, Xiaoyu, Jihui Xu, Jinming Wu, and Huaiyu Xu. "Cross Domain Adaptation of Crowd Counting with Model-Agnostic Meta-Learning." Applied Sciences 11, no. 24 (December 17, 2021): 12037. http://dx.doi.org/10.3390/app112412037.
Chen, Zhouyuan, Zhichao Lian, and Zhe Xu. "Interpretable Model-Agnostic Explanations Based on Feature Relationships for High-Performance Computing." Axioms 12, no. 10 (October 23, 2023): 997. http://dx.doi.org/10.3390/axioms12100997.
Hu, Cong, Kai Xu, Zhengqiu Zhu, Long Qin, and Quanjun Yin. "Multi-Agent Chronological Planning with Model-Agnostic Meta Reinforcement Learning." Applied Sciences 13, no. 16 (August 11, 2023): 9174. http://dx.doi.org/10.3390/app13169174.
Xue, Tianfang, and Haibin Yu. "Unbiased Model-Agnostic Metalearning Algorithm for Learning Target-Driven Visual Navigation Policy." Computational Intelligence and Neuroscience 2021 (December 8, 2021): 1–12. http://dx.doi.org/10.1155/2021/5620751.
Theses on the topic "Model-agnostic methods"
Kanerva, Anton, and Fredrik Helgesson. "On the Use of Model-Agnostic Interpretation Methods as Defense Against Adversarial Input Attacks on Tabular Data." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20085.
Context. Machine learning is an area of artificial intelligence under constant development. The number of domains in which machine learning models are deployed keeps growing, and these systems spread unnoticed into our daily lives through various electronic devices. Over the years, much time and effort has gone into improving the performance of these models, which has overshadowed the risk of vulnerabilities at the system's core: the trained model. A relatively new attack, the adversarial input attack, whose goal is to fool the model into incorrect decisions, has been researched almost exclusively in image recognition. However, the threat posed by adversarial input attacks extends beyond image data to other data domains, such as the tabular domain, which is the most common data domain in industry. Methods for interpreting complex machine learning models can help people understand the behavior of these systems and the decisions they make. Understanding a model's behavior is an important component in discovering, understanding, and mitigating its vulnerabilities. Objectives. This study attempts to reduce the research gap concerning adversarial input attacks and corresponding defense methods in the tabular domain. Its aim is to analyze how model-agnostic interpretation methods can be used to detect and mitigate adversarial input attacks on tabular data. Methods. The goal is pursued through three consecutive experiments in which model interpretation methods are analyzed, adversarial input attacks are evaluated and visualized, and a new interpretation-based method for detecting adversarial input attacks is proposed, together with a new mitigation technique in which feature selection is used defensively to reduce the size of the attack vector. Results. The proposed detection method shows state-of-the-art results, with over 86% accuracy.
The proposed mitigation technique was shown to successfully harden the model against adversarial input attacks, reducing their attack strength by 33% without degrading the model's classification performance. Conclusions. This study contributes useful methods for detecting and mitigating adversarial input attacks, as well as methods for evaluating and visualizing hard-to-perceive attacks on tabular data.
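The defensive feature selection described in the abstract above (shrinking the attack vector by dropping unimportant features) can be sketched as follows. This is a hypothetical illustration, not the thesis's actual method: `importances` stands in for any model-agnostic importance scores, such as permutation importances, and the data are made up.

```python
import numpy as np

def defensive_feature_selection(X, importances, k):
    """Keep only the k most important features, shrinking the attack
    surface an adversarial input can perturb (illustrative sketch)."""
    keep = np.sort(np.argsort(importances)[::-1][:k])
    return X[:, keep], keep

X = np.arange(12.0).reshape(3, 4)        # 3 samples, 4 features
imp = np.array([0.1, 0.4, 0.05, 0.45])   # e.g. permutation importances
X_red, kept = defensive_feature_selection(X, imp, k=2)
# kept -> features 1 and 3; the model is then retrained on X_red
```

A model retrained on the reduced feature set exposes fewer dimensions for an adversary to perturb, at the cost of whatever signal the dropped features carried.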
Danesh Alaghehband, Tina Sadat. "Vers une conception robuste en ingénierie des procédés. Utilisation de modèles agnostiques de l'interprétabilité en apprentissage automatique" [Toward robust design in process engineering: using model-agnostic interpretability methods in machine learning]. Electronic thesis, Toulouse, INPT, 2023. http://www.theses.fr/2023INPT0138.
Robust process design holds paramount importance in various industries, such as process and chemical engineering. Robustness lies in ensuring that a process can consistently deliver the desired outcomes for decision-makers and/or stakeholders, even when faced with intrinsic variability and uncertainty. A robustly designed process not only enhances product quality and reliability but also significantly reduces the risk of costly failures, downtime, and product recalls, and it improves efficiency and sustainability by minimizing process deviations and failures. There are different methods for approaching the robustness of a complex system, such as design of experiments, robust optimization, and response surface methodology. Among robust design methods, sensitivity analysis can be applied as a supporting technique to gain insight into how changes in input parameters affect performance and robustness. Owing to the rapid development and advancement of engineering science, the use of physical models for sensitivity analysis presents several challenges, such as unsatisfied assumptions and computation time. These problems lead us to consider applying machine learning (ML) models to complex processes. As the issue of interpretability in ML has gained increasing importance, there is a growing need to understand how these models arrive at their predictions or decisions and how different parameters are related. Since their performance consistently surpasses that of other models, such as knowledge-based models, providing explanations, justifications, and insights into the workings of ML models not only enhances their trustworthiness and fairness but also empowers stakeholders to make informed decisions, identify biases, detect errors, and improve the overall performance and reliability of the process. Various methods are available to address interpretability, including model-specific and model-agnostic methods.
In this thesis, our objective is to enhance the interpretability of various ML methods while maintaining a balance between accuracy and interpretability, so as to assure decision-makers and stakeholders that our model or process can be considered robust. Simultaneously, we aim to demonstrate that users can trust ML model predictions backed by model-agnostic techniques, which work across various scenarios, including equation-based, hybrid, and data-driven models. To achieve this goal, we applied several model-agnostic methods, such as partial dependence plots, individual conditional expectations, and accumulated local effects, to diverse applications.
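The partial dependence plots named in the abstract above are one of the simplest model-agnostic methods: for each grid value of the feature of interest, overwrite that feature in every row of the data and average the black-box predictions. A minimal sketch follows; the `black_box` function and the data are hypothetical stand-ins for any fitted model, not code from the thesis.

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """Model-agnostic partial dependence: for each grid value v, set
    column `feature` of every row to v and average the predictions."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v
        pd_values.append(predict(X_mod).mean())
    return np.array(pd_values)

# Toy "black box": linear in feature 0, quadratic in feature 1.
def black_box(X):
    return 2.0 * X[:, 0] + X[:, 1] ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
grid = np.linspace(-2.0, 2.0, 5)
pd0 = partial_dependence(black_box, X, feature=0, grid=grid)
# pd0 rises with constant slope 2: the method recovers the
# linear effect of feature 0, offset by the mean of X[:, 1] ** 2.
```

Individual conditional expectation curves follow the same recipe without the final averaging (one curve per row), and accumulated local effects replace the global overwrite with local differences to avoid extrapolating into unrealistic feature combinations.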
Neves, Maria Inês Lourenço das. "Opening the black-box of artificial intelligence predictions on clinical decision support systems." Master's thesis, 2021. http://hdl.handle.net/10362/126699.
Cardiovascular diseases are the leading cause of death worldwide, and their treatment and prevention rely on interpretation of the electrocardiogram. Electrocardiogram interpretation by physicians is inherently subjective and therefore prone to error. To support physicians' decisions, artificial intelligence is being used to develop models capable of interpreting large datasets and providing accurate decisions. However, the lack of interpretability of most machine learning models is one of the drawbacks of relying on them, especially in a clinical context. In addition, most explainable artificial intelligence methods assume independence between samples, which implies assuming temporal independence when dealing with time series. The inherent nature of time series cannot be ignored, since it matters for the human decision-making process. This dissertation draws on explainable artificial intelligence to make heartbeat classification intelligible, using several adaptations of state-of-the-art model-agnostic methods. To address the explanation of time-series classifiers, a preliminary taxonomy is proposed, together with the use of the signal's derivative as a complement to add temporal dependence between samples. The results were validated on a large public dataset using the 1-D Jaccard index, comparing the subsequences extracted by an interpretable model with those of the explainable artificial intelligence methods used, and a quality analysis was performed to assess whether the explanation matches the model's behavior.
To evaluate models with distinct internal logic, validation was performed using, on the one hand, a more transparent model and, on the other, a more opaque one, in both binary and multiclass classification settings. The results show the promise of including the signal's derivative to introduce temporal dependence between samples in the explanations provided, for models with simpler internal logic.
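The 1-D Jaccard index used for validation in the abstract above can be sketched as a set overlap between the time-step positions that two explanation methods mark as relevant. This is an illustrative assumption about how the metric is computed, not code from the dissertation, and the index lists are made up.

```python
def jaccard_1d(a, b):
    """1-D Jaccard index between two subsequences, given as the sets of
    time-step positions highlighted by two explanation methods."""
    sa, sb = set(a), set(b)
    if not sa and not sb:
        return 1.0  # two empty explanations agree perfectly
    return len(sa & sb) / len(sa | sb)

# Positions marked as relevant by an interpretable reference model
# versus a model-agnostic explainer on the same heartbeat segment.
ref = [10, 11, 12, 13, 14]
xai = [12, 13, 14, 15]
score = jaccard_1d(ref, xai)  # 3 shared / 6 total = 0.5
```

A score near 1 means the model-agnostic explanation highlights the same stretch of the signal as the interpretable reference; a score near 0 means the two explanations point at disjoint parts of the heartbeat.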
Book chapters on the topic "Model-agnostic methods"
Gianfagna, Leonida, and Antonio Di Cecco. "Model-Agnostic Methods for XAI." In Explainable AI with Python, 81–113. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68640-6_4.
Gunel, Kadir, and Mehmet Fatih Amasyali. "Model Agnostic Knowledge Transfer Methods for Sentence Embedding Models." In 2nd International Congress of Electrical and Computer Engineering, 3–16. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-52760-9_1.
Molnar, Christoph, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, and Bernd Bischl. "General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models." In xxAI - Beyond Explainable AI, 39–68. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_4.
Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning." In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.
Nguyen, Thu Trang, Thach Le Nguyen, and Georgiana Ifrim. "A Model-Agnostic Approach to Quantifying the Informativeness of Explanation Methods for Time Series Classification." In Advanced Analytics and Learning on Temporal Data, 77–94. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65742-0_6.
Krishna, Siddharth, Michael Emmi, Constantin Enea, and Dejan Jovanović. "Verifying Visibility-Based Weak Consistency." In Programming Languages and Systems, 280–307. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44914-8_11.
Gunasekaran, Abirami, Minsi Chen, Richard Hill, and Keith McCabe. "Method Agnostic Model Class Reliance (MAMCR) Explanation of Multiple Machine Learning Models." In Soft Computing and Its Engineering Applications, 56–71. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-27609-5_5.
Lampridis, Orestis, Riccardo Guidotti, and Salvatore Ruggieri. "Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars." In Discovery Science, 357–73. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61527-7_24.
Sim, Min K. "Explanation using model-agnostic methods." In Human-Centered Artificial Intelligence, 17–31. Elsevier, 2022. http://dx.doi.org/10.1016/b978-0-323-85648-5.00008-6.
Tiwari, Ravi Shekhar. "Hate speech detection using LSTM and explanation by LIME (local interpretable model-agnostic explanations)." In Computational Intelligence Methods for Sentiment Analysis in Natural Language Processing Applications, 93–110. Elsevier, 2024. http://dx.doi.org/10.1016/b978-0-443-22009-8.00005-7.
Texte intégralActes de conférences sur le sujet "Model-agnostic methods"
Menon, Rakesh, Kerem Zaman, and Shashank Srivastava. "MaNtLE: Model-agnostic Natural Language Explainer." In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.emnlp-main.832.
Sikder, Md Nazmul Kabir, Feras A. Batarseh, Pei Wang, and Nitish Gorentala. "Model-Agnostic Scoring Methods for Artificial Intelligence Assurance." In 2022 IEEE 29th Annual Software Technology Conference (STC). IEEE, 2022. http://dx.doi.org/10.1109/stc55697.2022.00011.
Tayal, Kshitij, Rahul Ghosh, and Vipin Kumar. "Model-agnostic Methods for Text Classification with Inherent Noise." In Proceedings of the 28th International Conference on Computational Linguistics: Industry Track. Stroudsburg, PA, USA: International Committee on Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.coling-industry.19.
Sandu, Marian Gabriel, and Ștefan Trăușan-Matu. "Comparing model-agnostic and model-specific XAI methods in Natural Language Processing." In RoCHI - International Conference on Human-Computer Interaction. MATRIX ROM, 2022. http://dx.doi.org/10.37789/rochi.2022.1.1.19.
Letrache, Khadija, and Mohammed Ramdani. "Explainable Artificial Intelligence: A Review and Case Study on Model-Agnostic Methods." In 2023 14th International Conference on Intelligent Systems: Theories and Applications (SITA). IEEE, 2023. http://dx.doi.org/10.1109/sita60746.2023.10373722.
Emelin, Denis, Ivan Titov, and Rico Sennrich. "Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks." In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.616.
Regenwetter, Lyle, Yazan Abu Obaideh, and Faez Ahmed. "Counterfactuals for Design: A Model-Agnostic Method for Design Recommendations." In ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/detc2023-117216.
Ouyang, Linshu, Yongzheng Zhang, Hui Liu, Yige Chen, and Yipeng Wang. "Gated POS-Level Language Model for Authorship Verification." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/557.
Zhan, Guodong David, Mohammed J. Dossary, Trieu Phat Luu, Huang Xu, Ted Furlong, and John Bomidi. "On Field Implementation of Real-Time Bit-Wear Estimation with Bit Agnostic Deep Learning Artificial Intelligence Model Along with Physics-Hybrid Features." In SPE/IADC Middle East Drilling Technology Conference and Exhibition. SPE, 2023. http://dx.doi.org/10.2118/214603-ms.
Texte intégralRapports d'organisations sur le sujet "Model-agnostic methods"
Walizer, Laura, Robert Haehnel, Luke Allen, and Yonghu Wenren. Application of multi-fidelity methods to rotorcraft performance assessment. Engineer Research and Development Center (U.S.), May 2024. http://dx.doi.org/10.21079/11681/48474.
Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.