Academic literature on the topic 'Interpretable methods'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Interpretable methods.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Interpretable methods"
Topin, Nicholay, Stephanie Milani, Fei Fang, and Manuela Veloso. "Iterative Bounding MDPs: Learning Interpretable Policies via Non-Interpretable Methods." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9923–31. http://dx.doi.org/10.1609/aaai.v35i11.17192.
KATAOKA, Makoto. "COMPUTER-INTERPRETABLE DESCRIPTION OF CONSTRUCTION METHODS." AIJ Journal of Technology and Design 13, no. 25 (2007): 277–80. http://dx.doi.org/10.3130/aijt.13.277.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. "Definitions, methods, and applications in interpretable machine learning." Proceedings of the National Academy of Sciences 116, no. 44 (October 16, 2019): 22071–80. http://dx.doi.org/10.1073/pnas.1900654116.
Alangari, Nourah, Mohamed El Bachir Menai, Hassan Mathkour, and Ibrahim Almosallam. "Exploring Evaluation Methods for Interpretable Machine Learning: A Survey." Information 14, no. 8 (August 21, 2023): 469. http://dx.doi.org/10.3390/info14080469.
Kenesei, Tamás, and János Abonyi. "Interpretable support vector regression." Artificial Intelligence Research 1, no. 2 (October 9, 2012): 11. http://dx.doi.org/10.5430/air.v1n2p11.
Ye, Zhuyifan, Wenmian Yang, Yilong Yang, and Defang Ouyang. "Interpretable machine learning methods for in vitro pharmaceutical formulation development." Food Frontiers 2, no. 2 (May 5, 2021): 195–207. http://dx.doi.org/10.1002/fft2.78.
Mi, Jian-Xun, An-Di Li, and Li-Fang Zhou. "Review Study of Interpretation Methods for Future Interpretable Machine Learning." IEEE Access 8 (2020): 191969–85. http://dx.doi.org/10.1109/access.2020.3032756.
Obermann, Lennart, and Stephan Waack. "Demonstrating non-inferiority of easy interpretable methods for insolvency prediction." Expert Systems with Applications 42, no. 23 (December 2015): 9117–28. http://dx.doi.org/10.1016/j.eswa.2015.08.009.
Assegie, Tsehay Admassu. "Evaluation of the Shapley Additive Explanation Technique for Ensemble Learning Methods." Proceedings of Engineering and Technology Innovation 21 (April 22, 2022): 20–26. http://dx.doi.org/10.46604/peti.2022.9025.
Bang, Seojin, Pengtao Xie, Heewook Lee, Wei Wu, and Eric Xing. "Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11396–404. http://dx.doi.org/10.1609/aaai.v35i13.17358.
Dissertations / Theses on the topic "Interpretable methods"
Jalali, Khooshahr Adrin [Verfasser]. "Interpretable methods in cancer diagnostics / Adrin Jalali Khooshahr." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1240674090/34.
Wang, Yuchen. "Interpretable machine learning methods with applications to health care." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127295.
Full textCataloged from the official PDF of thesis.
Includes bibliographical references (pages 131-142).
With data becoming increasingly available in recent years, black-box algorithms like boosting methods or neural networks play ever more important roles in the real world. However, interpretability is a pressing need in several application areas, such as health care or business: doctors and managers often need to understand how models make predictions in order to make their final decisions. In this thesis, we improve and propose interpretable machine learning methods using modern optimization, and we use two examples to illustrate how such methods help solve problems in health care. The first part of this thesis concerns interpretable machine learning methods built on modern optimization. In Chapter 2, we illustrate how to use robust optimization to improve the performance of SVM, logistic regression, and classification trees on imbalanced datasets. In Chapter 3, we discuss how to find optimal clusters for prediction; we use real-world datasets to show that this is a fast and scalable method with high accuracy. In Chapter 4, we develop optimal regression trees with polynomial functions in the leaf nodes and demonstrate that this method improves out-of-sample performance. The second part of this thesis shows how interpretable machine learning methods can improve the current health care system. In Chapter 5, we use Optimal Trees to predict mortality risk for candidates awaiting liver transplantation, and we develop a transplantation policy called Optimized Prediction of Mortality (OPOM), which reduces mortality significantly in simulation analysis and also improves fairness. In Chapter 6, we propose a new method based on Optimal Trees that performs better than the original rules in identifying children at very low risk of clinically important traumatic brain injury (ciTBI). If implemented in the electronic health record, the new rules may reduce unnecessary computed tomography (CT) scans.
by Yuchen Wang.
Ph.D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center
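The abstract above leans on interpretable tree models. Below is a minimal sketch of the idea, assuming a plain scikit-learn CART in place of the thesis's Optimal Trees (which are trained via mixed-integer optimization and are not open source); it illustrates how a shallow tree yields rules a clinician can read, not the thesis's actual method.

```python
# Hedged sketch: a shallow CART as a stand-in for an interpretable tree model.
# Dataset and hyperparameters are illustrative, not from the thesis.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small max_depth keeps the learned rule set short enough to audit by hand.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"test accuracy: {tree.score(X_test, y_test):.3f}")
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules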
Zhu, Jessica H. "Detecting food safety risks and human trafficking using interpretable machine learning methods." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122384.
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 75-80).
Black box machine learning methods have allowed researchers to design accurate models using large amounts of data at the cost of interpretability. Model interpretability not only improves user buy-in, but in many cases provides users with important information. Especially in the case of the classification problems addressed in this thesis, the ideal model should not only provide accurate predictions, but should also inform users of how features affect the results. My research goal is to solve real-world problems and compare how different classification models affect the outcomes and interpretability. To this end, this thesis is divided into two parts: food safety risk analysis and human trafficking detection. The first half analyzes the characteristics of supermarket suppliers in China that indicate a high risk of food safety violations. Contrary to expectations, supply chain dispersion, internal inspections, and quality certification systems are not found to be predictive of food safety risk in our data. The second half focuses on identifying human trafficking advertisements, specifically for sex trafficking, hidden amongst online classified escort service advertisements. We propose a novel but interpretable keyword detection and modeling pipeline that is more accurate and actionable than current neural network approaches. The algorithms and applications presented in this thesis succeed in providing users with not just classifications but also the characteristics that indicate food safety risk and human trafficking ads.
by Jessica H. Zhu.
S.M.
S.M. Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center
Vilamala Muñoz, Albert. "Multivariate methods for interpretable analysis of magnetic resonance spectroscopy data in brain tumour diagnosis." Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/336683.
Malignant brain tumours are among the most difficult cancers to treat because of the sensitivity of the organ they affect. Clinical management of the pathology becomes even more complex as the tumour mass grows through uncontrolled cell proliferation, which makes early and accurate diagnosis vital to forestalling its natural course of development. Standard clinical practice for diagnosis includes invasive techniques that can be very harmful to the patient, a factor that has spurred intensive research into alternative ways of measuring brain tissue, such as nuclear magnetic resonance. One of its variants, magnetic resonance imaging, is already routinely used to locate and delimit the tumour. A complementary variant, magnetic resonance spectroscopy, despite its high spatial resolution and its ability to identify biochemical metabolites that may serve as tumour biomarkers within a delimited area, lags far behind in clinical use, mainly because it is difficult to interpret. The interpretation of magnetic resonance spectra of brain tissue is therefore an interesting field of research for automatic knowledge-extraction methods such as machine learning, always understood as a decision-support tool for the expert human physician. This thesis aims to contribute to the state of the art in this field by providing new techniques to assist expert radiologists, centred on complex problems and offering interpretable solutions. To this end, a technique based on a committee of experts has been designed for accurate discrimination between the aggressive brain tumour types known as glioblastomas and metastases, and an instance-weighting strategy is provided to increase the stability of biomarker identification within a spectrum. From a different analytical perspective, a tool based on source separation, guided by tumour-type-specific information, has been developed to assess the different tissue types present in a tumour mass and to quantify their influence on neighbouring tumour regions. This development has led to a probabilistic interpretation of some of these source-separation techniques, providing support for handling uncertainty and strategies for estimating the most accurate number of distinct tissues in each of the tumour volumes analysed. The strategies provided should assist human experts in the use of automated decision-support tools, given the interpretability and accuracy they exhibit from different angles.
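The source-separation idea described above can be sketched with plain non-negative matrix factorization. This is an illustrative stand-in only, assuming synthetic spectra and scikit-learn's NMF rather than the tumour-type-informed variants the thesis develops.

```python
# Hypothetical sketch: unmix observed spectra into a few nonnegative "tissue"
# sources. All data here is synthetic; the thesis's guided methods differ.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_voxels, n_channels, n_tissues = 200, 128, 3

# Each observed spectrum is a nonnegative mixture of 3 source spectra plus noise.
sources = rng.gamma(shape=2.0, scale=1.0, size=(n_tissues, n_channels))
mixing = rng.dirichlet(alpha=np.ones(n_tissues), size=n_voxels)
spectra = mixing @ sources + 0.01 * rng.random((n_voxels, n_channels))

model = NMF(n_components=n_tissues, init="nndsvda", max_iter=500, random_state=0)
weights = model.fit_transform(spectra)   # per-voxel tissue contributions
components = model.components_           # recovered source spectra
print(weights.shape, components.shape)   # (200, 3) (3, 128)
```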
Conradsson, Emil, and Vidar Johansson. "A MODEL-INDEPENDENT METHODOLOGY FOR A ROOT CAUSE ANALYSIS SYSTEM : A STUDY INVESTIGATING INTERPRETABLE MACHINE LEARNING METHODS." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160372.
Full textIdag upplever företag som Volvo GTO en stor ökning av data och en förbättrad förmågaatt bearbeta den. Detta gör det möjligt att, med hjälp av maskininlärningsmodeller,skapa ett rotorsaksanalyssystem för att förutspå, förklara och förebygga defekter. Detfinns dock en balans mellan modellprestanda och förklaringskapacitet, där båda ärväsentliga för ett sådant system.Detta examensarbete har som mål att, med hjälp av maskininlärningsmodeller, under-söka förhållandet mellan sensordata från målningsprocessen och strukturdefektenorangepeel. Målet är även att utvärdera hur konsekventa olika förklaringsmetoder är.Efter att datat förarbetats och nya variabler skapats, t.ex. förändringar som gjorts, trä-nades och testades tre maskinlärningsmodeller. En linjär modell kan tolkas genomdess koefficienter. En vanlig metod för att globalt förklara trädbaserade modeller ärMDI. SHAP är en modern modelloberoende metod som kan förklara modeller bådeglobalt och lokalt. Dessa tre förklaringsmetoder jämfördes sedan för att utvärdera hurkonsekventa de var i sina förklaringar. Om SHAP skulle vara konsekvent med de andrapå en global nivå, kan det argumenteras för att SHAP kan användas lokalt i en rotorsak-analys.Studien visade att koefficienterna och MDI var konsekventa med SHAP då den över-gripande korrelationen mellan dem var hög samt att metoderna tenderade att viktavariablerna på ett liknande sätt. Genom denna slutsats utvecklades en rotorsakanalysal-goritm med SHAP som lokal förklaringsmetod. Slutligen går det inte att dra någonslutsats om att det finns ett samband mellan sensordatat ochorange peel, eftersom förän-dringarna i processen var de mest betydande variablerna.
Nikumbh, Sarvesh [Verfasser], and Nico [Akademischer Betreuer] Pfeifer. "Interpretable Machine Learning Methods for Prediction and Analysis of Genome Regulation in 3D / Sarvesh Nikumbh ; Betreuer: Nico Pfeifer." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://d-nb.info/119008578X/34.
Full textNikumbh, Sarvesh Verfasser], and Nico [Akademischer Betreuer] [Pfeifer. "Interpretable Machine Learning Methods for Prediction and Analysis of Genome Regulation in 3D / Sarvesh Nikumbh ; Betreuer: Nico Pfeifer." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://nbn-resolving.de/urn:nbn:de:bsz:291--ds-281533.
Loiseau, Romain. "Real-World 3D Data Analysis : Toward Efficiency and Interpretability." Electronic Thesis or Diss., Marne-la-vallée, ENPC, 2023. http://www.theses.fr/2023ENPC0028.
This thesis explores new deep-learning approaches for modeling and analyzing real-world 3D data. 3D data processing is helpful for numerous high-impact applications such as autonomous driving, territory management, industry facilities monitoring, forest inventory, and biomass measurement. However, annotating and analyzing 3D data can be demanding. Specifically, matching constraints regarding computing resources or annotation efficiency is often challenging. The difficulty of interpreting and understanding the inner workings of deep learning models can also limit their adoption. The computer vision community has made significant efforts to design methods to analyze 3D data, to perform tasks such as shape classification, scene segmentation, and scene decomposition. Early automated analysis relied on hand-crafted descriptors and incorporated prior knowledge about real-world acquisitions. Modern deep learning techniques demonstrate the best performances but are often computationally expensive, rely on large annotated datasets, and have low interpretability. In this thesis, we propose contributions that address these limitations. The first contribution of this thesis is an efficient deep-learning architecture for analyzing LiDAR sequences in real time. Our approach explicitly considers the acquisition geometry of rotating LiDAR sensors, which many autonomous driving perception pipelines use. Compared to previous work, which considers complete LiDAR rotations individually, our model processes the acquisition in smaller increments. Our proposed architecture achieves accuracy on par with the best methods while reducing processing time by more than five times and model size by more than fifty times. The second contribution is a deep learning method to summarize extensive 3D shape collections with a small set of 3D template shapes. We learn end-to-end a small number of 3D prototypical shapes that are aligned and deformed to reconstruct input point clouds. The main advantage of our approach is that its representations are in the 3D space and can be viewed and manipulated. They constitute a compact and interpretable representation of 3D shape collections and facilitate annotation, leading to state-of-the-art results for few-shot semantic segmentation. The third contribution further expands unsupervised analysis for parsing large real-world 3D scans into interpretable parts. We introduce a probabilistic reconstruction model to decompose an input 3D point cloud using a small set of learned prototypical shapes. Our network determines the number of prototypes to use to reconstruct each scene. We outperform state-of-the-art unsupervised methods in terms of decomposition accuracy while remaining visually interpretable. We offer significant advantages over existing approaches as our model does not require manual annotations. This thesis also introduces two open-access annotated real-world datasets, HelixNet and the Earth Parser Dataset, acquired with terrestrial and aerial LiDARs, respectively. HelixNet is the largest LiDAR autonomous driving dataset with dense annotations and provides point-level sensor metadata crucial for precisely measuring the latency of semantic segmentation methods.
The Earth Parser Dataset consists of seven aerial LiDAR scenes, which can be used to evaluate the performance of 3D processing techniques in diverse environments. We hope that these datasets and reliable methods considering the specificities of real-world acquisitions will encourage further research toward more efficient and interpretable models.
Yoshida, Kosuke. "Interpretable machine learning approaches to high-dimensional data and their applications to biomedical engineering problems." Kyoto University, 2018. http://hdl.handle.net/2433/232416.
Full textKlinčík, Radoslav. "Měření posunů a přetvoření střešní konstrukce sportovní haly." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2021. http://www.nusl.cz/ntk/nusl-444252.
Full textBooks on the topic "Interpretable methods"
Verstraeten, Gert. Natuurwetenschappen en archeologie: Methode en interpretatie [Natural sciences and archaeology: Method and interpretation]. Leuven: Acco, 2009.
Find full textWintr, Jan. Metody a zásady interpretace práva: Methoden und Grundsätze der Rechtsauslegung. Praha: Auditorium, 2013.
Find full textAndersch, Martin. Sporen, tekens, letters: Over schriften, kalligrafische experimenten en interpretatie van teksten : een methode in beld gebracht. de Bilt: Cantecleer, 1989.
Find full text1947-, Frankenberry Nancy, ed. Radical interpretation in religion. Cambridge: Cambridge University Press, 2002.
Find full textReading the New Testament: Methods of interpretation. London: SPCK, 1987.
Find full textTuckett, C. M. Reading the New Testament: Methods of interpretation. Philadelphia: Fortress Press, 1987.
Find full textAthalya, Brenner, ed. A feminist companion to reading the Bible: Approaches, methods and strategies. Sheffield, England: Sheffield Academic Press, 1997.
Find full textF, McIlwain Elizabeth, and Plotnick Gary D, eds. Handbook of echo-doppler interpretation. Armonk, NY: Futura Pub., 1996.
Find full text1949-, Yee Gale A., ed. Judges and method: New approaches in biblical studies. Minneapolis: Fortress Press, 1995.
Find full textP, Hirsch Robert, ed. Studying a study and testing a test: How to read the health science literature. 3rd ed. Boston: Little, Brown, 1996.
Find full textBook chapters on the topic "Interpretable methods"
Syed, Umar, and Golan Yona. "Enzyme Function Prediction with Interpretable Models." In Methods in Molecular Biology, 373–420. Totowa, NJ: Humana Press, 2009. http://dx.doi.org/10.1007/978-1-59745-243-4_17.
Pogudin, Gleb, and Xingjian Zhang. "Interpretable Exact Linear Reductions via Positivity." In Computational Methods in Systems Biology, 91–107. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85633-5_6.
Weijters, Ton, and Antal van den Bosch. "Interpretable neural networks with BP-SOM." In Tasks and Methods in Applied Artificial Intelligence, 564–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/3-540-64574-8_442.
Guidotti, Riccardo, Cristiano Landi, Andrea Beretta, Daniele Fadda, and Mirco Nanni. "Interpretable Data Partitioning Through Tree-Based Clustering Methods." In Discovery Science, 492–507. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45275-8_33.
Terzić, Kasim, and J. M. H. du Buf. "Interpretable Feature Maps for Robot Attention." In Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods, 456–67. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58706-6_37.
Cánovas-Segura, Bernardo, Antonio Morales, Antonio López Martínez-Carrasco, Manuel Campos, Jose M. Juarez, Lucía López Rodríguez, and Francisco Palacios. "Exploring Antimicrobial Resistance Prediction Using Post-hoc Interpretable Methods." In Artificial Intelligence in Medicine: Knowledge Representation and Transparent and Explainable Systems, 93–107. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37446-4_8.
van Sonsbeek, Tom, and Veronika Cheplygina. "Predicting Scores of Medical Imaging Segmentation Methods with Meta-learning." In Interpretable and Annotation-Efficient Learning for Medical Image Computing, 242–53. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61166-8_26.
Antonova, Elena, Gleb Guskov, Nadezhda Yarushkina, Aleksandra Chekina, Sofia Egova, and Anastasia Khambikova. "Automated ABCDE Image Analysis of a Skin Neoplasm with Interpretable Results." In Artificial Intelligence in Models, Methods and Applications, 657–68. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-22938-1_45.
Fisher, William P., and Stefan J. Cano. "Ideas and Methods in Person-Centered Outcome Metrology." In Springer Series in Measurement Science and Technology, 1–20. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-07465-3_1.
Labiod, Lazhar, and Mohamed Nadif. "Data Clustering and Representation Learning Based on Networked Data." In Studies in Classification, Data Analysis, and Knowledge Organization, 203–11. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09034-9_23.
Conference papers on the topic "Interpretable methods"
West, Rebecca, Khalifeh Al Jadda, Unaiza Ahsan, Huiming Qu, and Xiquan Cui. "Interpretable Methods for Identifying Product Variants." In WWW '20: The Web Conference 2020. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3366424.3386196.
Full textRodríguez-Moreno, Itsaso, José María Martínez-Otzeta, Izaro Goienetxea, and Basilio Sierra. "Towards an Interpretable Spanish Sign Language Recognizer." In 11th International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications, 2022. http://dx.doi.org/10.5220/0010870700003122.
Full textLuo, Hongyin, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. "Online Learning of Interpretable Word Embeddings." In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.18653/v1/d15-1196.
Full textKovalerchuk, Boris. "Interpretable Knowledge Discovery Reinforced by Visual Methods." In KDD '19: The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3292500.3332278.
Full textDufter, Philipp, and Hinrich Schütze. "Analytical Methods for Interpretable Ultradense Word Embeddings." In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-1111.
Full textWang, Yan, and Gulanbaier Tuerhong. "A Survey of Interpretable Machine Learning Methods." In 2022 International Conference on Virtual Reality, Human-Computer Interaction and Artificial Intelligence (VRHCIAI). IEEE, 2022. http://dx.doi.org/10.1109/vrhciai57205.2022.00047.
Full textAhmed, Md Sabbir, Khondoker Nazia Iqbal, and Md Golam Rabiul Alam. "Interpretable Lung Cancer Detection using Explainable AI Methods." In 2023 International Conference for Advancement in Technology (ICONAT). IEEE, 2023. http://dx.doi.org/10.1109/iconat57137.2023.10080480.
Full textAbujabal, Abdalghani, Rishiraj Saha Roy, Mohamed Yahya, and Gerhard Weikum. "QUINT: Interpretable Question Answering over Knowledge Bases." In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/d17-2011.
Full textBarbieri, Francesco, Luis Espinosa-Anke, Jose Camacho-Collados, Steven Schockaert, and Horacio Saggion. "Interpretable Emoji Prediction via Label-Wise Attention LSTMs." In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1508.
Full textShi, Jihao, Xiao Ding, Li Du, Ting Liu, and Bing Qin. "Neural Natural Logic Inference for Interpretable Question Answering." In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.emnlp-main.298.