Dissertations on the topic "Interpretable methods"
Format your citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 30 dissertations for your research on the topic "Interpretable methods."
Next to every entry in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these are available in the metadata.
Browse dissertations across disciplines and compile your bibliography correctly.
Jalali, Khooshahr Adrin [Verfasser]. "Interpretable methods in cancer diagnostics / Adrin Jalali Khooshahr." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1240674090/34.
Wang, Yuchen. "Interpretable machine learning methods with applications to health care." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127295.
Cataloged from the official PDF of the thesis.
Includes bibliographical references (pages 131-142).
With data becoming increasingly available in recent years, black-box algorithms such as boosting methods and neural networks play ever more important roles in the real world. However, interpretability is a pressing need in several application areas, such as health care and business: doctors and managers often need to understand how models make predictions in order to make their final decisions. In this thesis, we improve and propose interpretable machine learning methods using modern optimization. We also use two examples to illustrate how interpretable machine learning methods help to solve problems in health care. The first part of this thesis concerns interpretable machine learning methods based on modern optimization. In Chapter 2, we illustrate how to use robust optimization to improve the performance of SVM, logistic regression, and classification trees on imbalanced datasets. In Chapter 3, we discuss how to find optimal clusters for prediction; we use real-world datasets to show that this is a fast and scalable method with high accuracy. In Chapter 4, we develop optimal regression trees with polynomial functions in the leaf nodes and demonstrate that this method improves out-of-sample performance. The second part of this thesis addresses how interpretable machine learning methods can improve the current health care system. In Chapter 5, we illustrate how we use Optimal Trees to predict mortality risk for candidates awaiting liver transplantation. We then develop a transplantation policy called Optimized Prediction of Mortality (OPOM), which reduces mortality significantly in simulation analysis and also improves fairness. In Chapter 6, we propose a new method based on Optimal Trees that performs better than the original rules in identifying children at very low risk of clinically important traumatic brain injury (ciTBI). If implemented in the electronic health record, the new rules may reduce unnecessary computed tomography (CT) scans.
by Yuchen Wang.
Ph.D.
Ph.D. Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center
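The Optimal Trees mentioned in the abstract above are a mixed-integer-optimization method and are not assumed available here; as a rough, hedged illustration of the interpretability argument, the sketch below uses a depth-limited CART tree from scikit-learn as a stand-in, trained on a public clinical dataset rather than the thesis's transplantation data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative stand-in: a depth-limited CART tree (not the thesis's
# MIP-based Optimal Trees) trained on a public clinical dataset.
data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The fitted model is a short list of if/else rules that can be audited.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Capping the depth trades a little accuracy for rules short enough for a clinician to read, which is the interpretability trade-off the abstract describes.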
Zhu, Jessica H. "Detecting food safety risks and human trafficking using interpretable machine learning methods." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122384.
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 75-80).
Black-box machine learning methods have allowed researchers to design accurate models using large amounts of data, at the cost of interpretability. Model interpretability not only improves user buy-in but in many cases provides users with important information. Especially for the classification problems addressed in this thesis, the ideal model should not only provide accurate predictions but should also inform users of how features affect the results. My research goal is to solve real-world problems and compare how different classification models affect the outcomes and their interpretability. To this end, this thesis is divided into two parts: food safety risk analysis and human trafficking detection. The first half analyzes the characteristics of supermarket suppliers in China that indicate a high risk of food safety violations. Contrary to expectations, supply chain dispersion, internal inspections, and quality certification systems are not found to be predictive of food safety risk in our data. The second half focuses on identifying human trafficking advertisements, specifically sex trafficking ads, hidden among online classified escort service advertisements. We propose a novel yet interpretable keyword detection and modeling pipeline that is more accurate and actionable than current neural network approaches. The algorithms and applications presented in this thesis succeed in providing users with not just classifications but also the characteristics that indicate food safety risk and human trafficking ads.
by Jessica H. Zhu.
S.M.
S.M. Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center
Vilamala, Muñoz Albert. "Multivariate methods for interpretable analysis of magnetic resonance spectroscopy data in brain tumour diagnosis." Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/336683.
Malignant brain tumours represent one of the most difficult types of cancer to treat, owing to the sensitivity of the organ they affect. Clinical management of the pathology becomes even more complex as the tumour mass grows through uncontrolled cell proliferation, which suggests that early and accurate diagnosis is vital to forestall its natural course of development. Standard clinical practice for diagnosis includes invasive techniques that can be very harmful to the patient, a factor that has driven intensive research into alternative methods of measuring brain tissue, such as nuclear magnetic resonance. One of its variants, magnetic resonance imaging, is already used routinely to locate and delimit the tumour. A complementary variant, magnetic resonance spectroscopy, despite its high spatial resolution and its ability to identify biochemical metabolites that may serve as tumour biomarkers within a delimited area, lags far behind in clinical use, mainly because its interpretation is difficult. For this reason, the interpretation of magnetic resonance spectra of brain tissue is an interesting field of research for automatic knowledge-extraction methods such as machine learning, always understood as a decision-support tool for a human medical expert. This thesis aims to contribute to the state of the art in this field by providing new techniques to assist expert radiologists, focused on complex problems and delivering interpretable solutions.
To this end, an ensemble-of-experts technique has been designed for accurate discrimination between the aggressive brain tumour types glioblastoma and metastasis; in addition, a strategy based on instance weighting is provided to increase the stability of identifying the biomarkers present in a spectrum. From a different analytical perspective, a source-separation tool, guided by tumour-type-specific information, has been developed to assess the existence of different tissue types within a tumour mass, quantifying their influence on neighbouring tumour regions. This development has led to the derivation of a probabilistic interpretation of some of these source-separation techniques, which provides support for handling uncertainty and strategies for estimating the most plausible number of distinct tissues in each of the analysed tumour volumes. The strategies provided should assist human experts in the use of automated decision-support tools, given the interpretability and accuracy they exhibit from different angles.
Conradsson, Emil, and Vidar Johansson. "A MODEL-INDEPENDENT METHODOLOGY FOR A ROOT CAUSE ANALYSIS SYSTEM : A STUDY INVESTIGATING INTERPRETABLE MACHINE LEARNING METHODS." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160372.
Today, companies such as Volvo GTO are experiencing a large increase in data and an improved ability to process it. This makes it possible, with the help of machine learning models, to create a root cause analysis system to predict, explain, and prevent defects. However, there is a trade-off between model performance and explanatory capacity, and both are essential for such a system. This thesis aims to investigate, using machine learning models, the relationship between sensor data from the painting process and the structural defect orange peel. It also aims to evaluate how consistent different explanation methods are. After the data was preprocessed and new variables were created (e.g. indicators of changes made to the process), three machine learning models were trained and tested. A linear model can be interpreted through its coefficients. A common method for globally explaining tree-based models is MDI. SHAP is a modern model-independent method that can explain models both globally and locally. These three explanation methods were then compared to evaluate how consistent they were in their explanations. If SHAP were consistent with the others at the global level, it could be argued that SHAP can be used locally in a root cause analysis. The study showed that the coefficients and MDI were consistent with SHAP, as the overall correlation between them was high and the methods tended to weight the variables in similar ways. Based on this conclusion, a root cause analysis algorithm was developed with SHAP as the local explanation method. Finally, no conclusion can be drawn that there is a relationship between the sensor data and orange peel, since the changes in the process were the most significant variables.
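The global consistency check this abstract describes (comparing SHAP against linear coefficients and MDI) can be sketched on synthetic data. The sketch below is an assumption-laden illustration, not the thesis's pipeline: it uses synthetic data in place of the paint-process sensors, and the closed-form SHAP values of a linear model with roughly independent features (coefficient times the feature's deviation from its mean) instead of the `shap` package:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the (proprietary) paint-process sensor data.
X, y = make_regression(n_samples=500, n_features=6, n_informative=3,
                       noise=0.1, random_state=0)

# Global importance 1: MDI from a tree ensemble.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
mdi = rf.feature_importances_

# Global importance 2: |coefficients| of a linear model on standardized X.
Xs = (X - X.mean(0)) / X.std(0)
coef = np.abs(LinearRegression().fit(Xs, y).coef_)

# Global importance 3: mean |SHAP| for the linear model. For a linear
# model with (roughly) independent features, the SHAP value has the
# closed form w_j * (x_ij - mean(x_j)), so no shap package is needed.
lin = LinearRegression().fit(X, y)
mean_abs_shap = np.abs(lin.coef_ * (X - X.mean(0))).mean(0)

# Consistency = high rank correlation between the importance rankings.
rho_mdi, _ = spearmanr(mean_abs_shap, mdi)
rho_coef, _ = spearmanr(mean_abs_shap, coef)
print(f"Spearman(SHAP, MDI)  = {rho_mdi:.2f}")
print(f"Spearman(SHAP, coef) = {rho_coef:.2f}")
```

High rank correlation between the three global rankings is the kind of evidence the thesis uses to justify trusting SHAP as the local explanation method.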
Nikumbh, Sarvesh [Verfasser], and Nico [Akademischer Betreuer] Pfeifer. "Interpretable Machine Learning Methods for Prediction and Analysis of Genome Regulation in 3D / Sarvesh Nikumbh ; Betreuer: Nico Pfeifer." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://d-nb.info/119008578X/34.
Nikumbh, Sarvesh [Verfasser], and Nico [Akademischer Betreuer] Pfeifer. "Interpretable Machine Learning Methods for Prediction and Analysis of Genome Regulation in 3D / Sarvesh Nikumbh ; Betreuer: Nico Pfeifer." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://nbn-resolving.de/urn:nbn:de:bsz:291--ds-281533.
Loiseau, Romain. "Real-World 3D Data Analysis : Toward Efficiency and Interpretability." Electronic Thesis or Diss., Marne-la-vallée, ENPC, 2023. http://www.theses.fr/2023ENPC0028.
This thesis explores new deep-learning approaches for modeling and analyzing real-world 3D data. 3D data processing is helpful for numerous high-impact applications such as autonomous driving, territory management, industrial facility monitoring, forest inventory, and biomass measurement. However, annotating and analyzing 3D data can be demanding. In particular, meeting constraints on computing resources or annotation efficiency is often challenging. The difficulty of interpreting and understanding the inner workings of deep learning models can also limit their adoption. The computer vision community has made significant efforts to design methods to analyze 3D data and to perform tasks such as shape classification, scene segmentation, and scene decomposition. Early automated analysis relied on hand-crafted descriptors and incorporated prior knowledge about real-world acquisitions. Modern deep learning techniques demonstrate the best performance but are often computationally expensive, rely on large annotated datasets, and have low interpretability. In this thesis, we propose contributions that address these limitations. The first contribution of this thesis is an efficient deep-learning architecture for analyzing LiDAR sequences in real time. Our approach explicitly considers the acquisition geometry of rotating LiDAR sensors, which many autonomous driving perception pipelines use. Compared to previous work, which considers complete LiDAR rotations individually, our model processes the acquisition in smaller increments. Our proposed architecture achieves accuracy on par with the best methods while reducing processing time by more than five times and model size by more than fifty times. The second contribution is a deep learning method to summarize extensive 3D shape collections with a small set of 3D template shapes. We learn end-to-end a small number of 3D prototypical shapes that are aligned and deformed to reconstruct input point clouds.
The main advantage of our approach is that its representations live in 3D space and can be viewed and manipulated. They constitute a compact and interpretable representation of 3D shape collections and facilitate annotation, leading to state-of-the-art results for few-shot semantic segmentation. The third contribution further expands unsupervised analysis for parsing large real-world 3D scans into interpretable parts. We introduce a probabilistic reconstruction model that decomposes an input 3D point cloud using a small set of learned prototypical shapes. Our network determines the number of prototypes to use to reconstruct each scene. We outperform state-of-the-art unsupervised methods in terms of decomposition accuracy while remaining visually interpretable. Our model offers significant advantages over existing approaches, as it does not require manual annotations. This thesis also introduces two open-access annotated real-world datasets, HelixNet and the Earth Parser Dataset, acquired with terrestrial and aerial LiDAR, respectively. HelixNet is the largest LiDAR autonomous driving dataset with dense annotations and provides point-level sensor metadata crucial for precisely measuring the latency of semantic segmentation methods. The Earth Parser Dataset consists of seven aerial LiDAR scenes and can be used to evaluate the performance of 3D processing techniques in diverse environments. We hope that these datasets and reliable methods considering the specificities of real-world acquisitions will encourage further research toward more efficient and interpretable models.
Yoshida, Kosuke. "Interpretable machine learning approaches to high-dimensional data and their applications to biomedical engineering problems." Kyoto University, 2018. http://hdl.handle.net/2433/232416.
Klinčík, Radoslav. "Měření posunů a přetvoření střešní konstrukce sportovní haly." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2021. http://www.nusl.cz/ntk/nusl-444252.
Mohelník, Ladislav. "Kořeny moravské urbanistické struktury." Doctoral thesis, Vysoké učení technické v Brně. Fakulta architektury, 2014. http://www.nusl.cz/ntk/nusl-233261.
Nóbrega, Murilo Leite. "Explainable and Interpretable Face Presentation Attack Detection Methods." Master's thesis, 2021. https://hdl.handle.net/10216/139294.
Drawid, Amar Mohan. "Physically interpretable machine learning methods for transcription factor binding site identification using principled energy thresholds and occupancy." 2009. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000050504.
Balayan, Vladimir. "Human-Interpretable Explanations for Black-Box Machine Learning Models: An Application to Fraud Detection." Master's thesis, 2020. http://hdl.handle.net/10362/130774.
Machine Learning (ML) has increasingly been used to help humans make high-stakes decisions in a wide range of areas, from politics to criminal justice, education, health care, and financial services. However, it is very difficult for humans to understand the reason behind an ML model's decision, which undermines trust in the system. The field of Explainable Artificial Intelligence (XAI) emerged to tackle this problem, aiming to develop methods to make "black boxes" more interpretable, though so far without major breakthroughs. Moreover, the most popular explanation methods, LIME and SHAP, produce very low-level explanations, of limited use to people without ML expertise. This work was developed at Feedzai, a fintech that uses ML to prevent financial crime. One of Feedzai's products is a case management application used by fraud analysts. These analysts are domain experts trained to look for suspicious evidence in financial transactions, but since they lack ML knowledge, current XAI methods do not meet their information needs. To address this, we present JOEL, a neural-network-based framework for jointly learning the decision-making task and the associated explanations. JOEL is aimed at domain experts without deep technical ML knowledge, providing high-level insights into the model's predictions that closely resemble the experts' own reasoning. Furthermore, by collecting feedback from certified experts (human teaching), we promote continuous, higher-quality explanations. Finally, we resort to semantic mappings between legacy systems and domain taxonomies to automatically annotate a dataset, overcoming the absence of concept-based human annotations.
We validate JOEL empirically on a real-world fraud detection dataset at Feedzai. We show that JOEL can generalize the explanations learned from the initial dataset and that human teaching is able to improve the quality of the predicted explanations.
Padellini, Tullia. "Interpretable statistics for complex modelling: quantile and topological learning." Doctoral thesis, 2019. http://hdl.handle.net/11573/1243684.
Hong, Cheng-En, and 洪晟恩. "A tree-based interpretable predictive method with FDR and type-one error control." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/333s2f.
National Taiwan University
Master's Program in Statistics
105
Despite the abundance of available variables, the ground truth underlying a scientific problem is seldom revealed in practice. By discovering important features, researchers can conduct more targeted follow-up experiments on the selected features, tailored to understanding the scientific phenomenon. A natural requirement is to discover as many relevant variables as possible while making as few mistakes as possible. We propose a modified RuleFit with FDR control via the knockoff procedure and type-one error control via the Neyman-Pearson method.
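The knockoff filtering logic referenced above can be sketched as follows. This is a toy illustration only, not the thesis's method: the permuted columns below are a crude stand-in for a proper knockoff construction (which must preserve the feature covariance to actually control FDR), and all data and parameters are invented:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy data: 200 samples, 20 features, only the first 10 truly matter.
n, p, k = 200, 20, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 2.0
y = X @ beta + rng.standard_normal(n)

# Crude stand-in for knockoff variables: independently permuted columns.
# (A real knockoff construction preserves the feature covariance; this
# shortcut only illustrates the filtering logic, not the FDR guarantee.)
X_knock = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])

# Fit a sparse model on [X, knockoffs]; W_j = |beta_j| - |beta_knock_j|.
coefs = Lasso(alpha=0.05).fit(np.hstack([X, X_knock]), y).coef_
W = np.abs(coefs[:p]) - np.abs(coefs[p:])

# Knockoff+ threshold for target FDR q: the smallest t such that
# (1 + #{W_j <= -t}) / max(1, #{W_j >= t}) <= q.
q = 0.2
tau = np.inf
for t in np.sort(np.abs(W[W != 0])):
    if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= q:
        tau = t
        break

selected = np.flatnonzero(W >= tau)
print("selected features:", selected)
```

With a genuine knockoff construction, the same W statistics and threshold provably bound the expected fraction of false discoveries by q.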
Mantlík, František. "Komplexní interpretace gravimetrických dat zaměřená na stanovení tektonické struktury a ekologické projekty." Doctoral thesis, 2013. http://www.nusl.cz/ntk/nusl-322640.
Vláčil, Ondřej. "Řešení problému dekonstrukce práva z pohledu metodologie interpretace práva." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-354498.
Žák, Krzyžanková Katarzyna. "Quid iuris? (Deskriptivní teorie právní interpretace a argumentace)." Doctoral thesis, 2015. http://www.nusl.cz/ntk/nusl-351044.
Malotínová, Terezie. "Didaktická interpretace Čapkových děl v literární výchově pro druhý stupeň ZŠ." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-251510.
Zemanová, Marie. "Rozvoj interpretace v houslové hře na II. stupni ZUŠ." Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-345084.
Musilová, Nikola. "Interpretace mezistátní obchodní klauzule Nejvyšším soudem USA: srovnání Rehnquistova a Robertsova soudu." Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-324199.
Gaždová, Renata. "Využití a interpretace seismických povrchových vln v širokém oboru frekvencí." Doctoral thesis, 2012. http://www.nusl.cz/ntk/nusl-309478.
Pochmanová, Ilona. "Louskání oříšků aneb porovnání románu Franze Kafky "Zámek" s českými překlady." Master's thesis, 2018. http://www.nusl.cz/ntk/nusl-379311.
Ivan, Matúš. "O kouli." Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-341720.
Wessels, Jan Cornelis. "Moet vroue werklik stilbly in die kerk? : 'n Gereformeerde interpretasie van die 'Swygtekste' by Paulus in die lig van hulle sosiohistoriese, openbaringshistoriese en kerkhistoriese konteks / Jan Cornelis Wessels." Thesis, 2014. http://hdl.handle.net/10394/16692.
MTh (New Testament), North-West University, Potchefstroom Campus, 2014
Hu, Beibei. "Překonávání potíží u žáků základních uměleckých škol studujících hru na klavír za použití moderních výukových metod." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-367913.
Holeka, Matouš. "Pohled na Písmo a hermeneutická východiska pro jeho výklad v různých křesťanských tradicích." Doctoral thesis, 2021. http://www.nusl.cz/ntk/nusl-448306.
Pokorná, Jitka. "Užití autobiografických motivů v dílech Edgara Dutky, Elišky Vlasákové a Antonína Bajaji." Master's thesis, 2012. http://www.nusl.cz/ntk/nusl-304057.
Odehnalová, Barbora. "Filosofie pro děti jako koncepce výuky náboženství a katecheze." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-254093.