Academic literature on the topic 'Interpretable methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Interpretable methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Interpretable methods"

1. Topin, Nicholay, Stephanie Milani, Fei Fang, and Manuela Veloso. "Iterative Bounding MDPs: Learning Interpretable Policies via Non-Interpretable Methods." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9923–31. http://dx.doi.org/10.1609/aaai.v35i11.17192.

Abstract:
Current work in explainable reinforcement learning generally produces policies in the form of a decision tree over the state space. Such policies can be used for formal safety verification, agent behavior prediction, and manual inspection of important features. However, existing approaches fit a decision tree after training or use a custom learning procedure which is not compatible with new learning techniques, such as those which use neural networks. To address this limitation, we propose a novel Markov Decision Process (MDP) type for learning decision tree policies: Iterative Bounding MDPs (IBMDPs). An IBMDP is constructed around a base MDP so each IBMDP policy is guaranteed to correspond to a decision tree policy for the base MDP when using a method-agnostic masking procedure. Because of this decision tree equivalence, any function approximator can be used during training, including a neural network, while yielding a decision tree policy for the base MDP. We present the required masking procedure as well as a modified value update step which allows IBMDPs to be solved using existing algorithms. We apply this procedure to produce IBMDP variants of recent reinforcement learning methods. We empirically show the benefits of our approach by solving IBMDPs to produce decision tree policies for the base MDPs.
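
For context, the post-hoc baseline this paper argues against (fitting a decision tree to imitate an already-trained policy) can be sketched in a few lines of Python; the sampled state space and the trained_policy stand-in below are illustrative assumptions, not the authors' code.

    # Sketch: post-hoc distillation of a black-box policy into a decision tree.
    # This is the baseline the IBMDP construction avoids; in practice the
    # black box would be a trained neural-network policy.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)

    def trained_policy(state):
        # Hypothetical black-box policy over a 2-feature state.
        return int(state[0] + 0.5 * state[1] ** 2 > 1.0)

    states = rng.uniform(-2.0, 2.0, size=(5000, 2))          # sampled state visits
    actions = np.array([trained_policy(s) for s in states])  # policy labels

    tree = DecisionTreeClassifier(max_depth=3).fit(states, actions)
    print("imitation accuracy:", tree.score(states, actions))
    print(export_text(tree, feature_names=["x0", "x1"]))

Such a distilled tree only approximates the black box on the states it happened to see, whereas an IBMDP policy is guaranteed by construction to correspond to a decision tree policy for the base MDP.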
2. KATAOKA, Makoto. "COMPUTER-INTERPRETABLE DESCRIPTION OF CONSTRUCTION METHODS." AIJ Journal of Technology and Design 13, no. 25 (2007): 277–80. http://dx.doi.org/10.3130/aijt.13.277.
3. Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. "Definitions, methods, and applications in interpretable machine learning." Proceedings of the National Academy of Sciences 116, no. 44 (October 16, 2019): 22071–80. http://dx.doi.org/10.1073/pnas.1900654116.

Abstract:
Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the predictive, descriptive, relevant (PDR) framework for discussing interpretations. The PDR framework provides 3 overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post hoc categories, with subgroups including sparsity, modularity, and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often underappreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.
4. Alangari, Nourah, Mohamed El Bachir Menai, Hassan Mathkour, and Ibrahim Almosallam. "Exploring Evaluation Methods for Interpretable Machine Learning: A Survey." Information 14, no. 8 (August 21, 2023): 469. http://dx.doi.org/10.3390/info14080469.

Abstract:
In recent times, the progress of machine learning has facilitated the development of decision support systems that exhibit predictive accuracy, surpassing human capabilities in certain scenarios. However, this improvement has come at the cost of increased model complexity, rendering them black-box models that obscure their internal logic from users. These black boxes are primarily designed to optimize predictive accuracy, limiting their applicability in critical domains such as medicine, law, and finance, where both accuracy and interpretability are crucial factors for model acceptance. Despite the growing body of research on interpretability, there remains a significant dearth of evaluation methods for the proposed approaches. This survey aims to shed light on various evaluation methods employed in interpreting models. Two primary procedures are prevalent in the literature: qualitative and quantitative evaluations. Qualitative evaluations rely on human assessments, while quantitative evaluations utilize computational metrics. Human evaluation commonly manifests as either researcher intuition or well-designed experiments. However, this approach is susceptible to human biases and fatigue and cannot adequately compare two models. Consequently, there has been a recent decline in the use of human evaluation, with computational metrics gaining prominence as a more rigorous method for comparing and assessing different approaches. These metrics are designed to serve specific goals, such as fidelity, comprehensibility, or stability. The existing metrics often face challenges when scaling or being applied to different types of model outputs and alternative approaches. Another important factor that needs to be addressed is that while evaluating interpretability methods, their results may not always be entirely accurate. For instance, relying on the drop in probability to assess fidelity can be problematic, particularly when facing the challenge of out-of-distribution data. Furthermore, a fundamental challenge in the interpretability domain is the lack of consensus regarding its definition and requirements. This issue is compounded in the evaluation process and becomes particularly apparent when assessing comprehensibility.
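
To make the 'drop in probability' fidelity metric that the survey critiques concrete, a minimal Python sketch (with an illustrative model, dataset, and mean-imputation mask, all assumptions of ours) could look as follows; note that, exactly as the authors warn, mean-imputed inputs may fall out of distribution.

    # Sketch of a deletion-style fidelity metric: mask the k features an
    # explanation ranks highest and measure the average drop in predicted
    # probability. Model, data, and ranking are illustrative stand-ins.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    ranking = np.argsort(model.feature_importances_)[::-1]  # stand-in explanation

    def probability_drop(model, X, ranking, k, fill):
        """Average drop in P(class 1) after masking the k top-ranked features."""
        X_masked = X.copy()
        X_masked[:, ranking[:k]] = fill[ranking[:k]]  # mean-imputation mask
        return float(np.mean(model.predict_proba(X)[:, 1]
                             - model.predict_proba(X_masked)[:, 1]))

    fill = X.mean(axis=0)
    for k in (1, 5, 10):
        print(k, probability_drop(model, X, ranking, k, fill))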
5. Kenesei, Tamás, and János Abonyi. "Interpretable support vector regression." Artificial Intelligence Research 1, no. 2 (October 9, 2012): 11. http://dx.doi.org/10.5430/air.v1n2p11.

Abstract:
This paper deals with transforming support vector regression (SVR) models into fuzzy systems (FIS). It is highlighted that trained support-vector-based models can be used for the construction of fuzzy rule-based regression models. However, the transformed support vector model does not automatically result in an interpretable fuzzy model. Training a support vector model results in a complex rule base, where the number of rules is approximately 40-60% of the number of training samples; therefore, reducing the fuzzy model initialized from the support vector model is an essential task. For this purpose, a three-step reduction algorithm is used, based on a combination of previously published model reduction techniques: first, the reduced set method decreases the number of kernel functions; then, after the reduced support vector model is transformed into a fuzzy rule base, similarity-measure-based merging and orthogonal least-squares methods are utilized. The proposed approach is applied to nonlinear system identification; the identification of a Hammerstein system is used to demonstrate the accuracy of the technique while fulfilling the criteria of interpretability.
6. Ye, Zhuyifan, Wenmian Yang, Yilong Yang, and Defang Ouyang. "Interpretable machine learning methods for in vitro pharmaceutical formulation development." Food Frontiers 2, no. 2 (May 5, 2021): 195–207. http://dx.doi.org/10.1002/fft2.78.
7. Mi, Jian-Xun, An-Di Li, and Li-Fang Zhou. "Review Study of Interpretation Methods for Future Interpretable Machine Learning." IEEE Access 8 (2020): 191969–85. http://dx.doi.org/10.1109/access.2020.3032756.
8. Obermann, Lennart, and Stephan Waack. "Demonstrating non-inferiority of easy interpretable methods for insolvency prediction." Expert Systems with Applications 42, no. 23 (December 2015): 9117–28. http://dx.doi.org/10.1016/j.eswa.2015.08.009.
9. Assegie, Tsehay Admassu. "Evaluation of the Shapley Additive Explanation Technique for Ensemble Learning Methods." Proceedings of Engineering and Technology Innovation 21 (April 22, 2022): 20–26. http://dx.doi.org/10.46604/peti.2022.9025.

Abstract:
This study aims to explore the effectiveness of the Shapley additive explanation (SHAP) technique in developing a transparent, interpretable, and explainable ensemble method for heart disease diagnosis using random forest algorithms. Firstly, the features with high impact on the heart disease prediction are selected by SHAP using 1025 heart disease datasets, obtained from a publicly available Kaggle data repository. After that, the features which have the greatest influence on the heart disease prediction are used to develop an interpretable ensemble learning model to automate the heart disease diagnosis by employing the SHAP technique. Finally, the performance of the developed model is evaluated. The SHAP values are used to obtain better performance of heart disease diagnosis. The experimental result shows that 100% prediction accuracy is achieved with the developed model. In addition, the experiment shows that age, chest pain, and maximum heart rate have positive impact on the prediction outcome.
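
A minimal sketch of the SHAP-plus-random-forest workflow the abstract describes might look like the following Python; the file name heart.csv, the column names, and the cut-off of eight features are hypothetical stand-ins (following the common Kaggle heart-disease layout), not the author's exact pipeline, and the shap package is assumed to be installed.

    # Sketch: rank features by mean |SHAP| value under a random forest,
    # then retrain on the highest-impact features only.
    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("heart.csv")                  # hypothetical local copy
    X, y = df.drop(columns="target"), df["target"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    sv = shap.TreeExplainer(model).shap_values(X_tr)
    if isinstance(sv, list):   # older shap versions: one array per class
        sv = sv[1]
    elif sv.ndim == 3:         # newer shap versions: (samples, features, classes)
        sv = sv[:, :, 1]
    impact = pd.Series(np.abs(sv).mean(axis=0), index=X.columns)
    impact = impact.sort_values(ascending=False)
    print(impact.head(10))                         # e.g., age, cp, thalach ...

    top = list(impact.index[:8])                   # keep the highest-impact features
    reduced = RandomForestClassifier(n_estimators=200, random_state=0)
    reduced.fit(X_tr[top], y_tr)
    print("held-out accuracy:", reduced.score(X_te[top], y_te))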
10. Bang, Seojin, Pengtao Xie, Heewook Lee, Wei Wu, and Eric Xing. "Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11396–404. http://dx.doi.org/10.1609/aaai.v35i13.17358.

Abstract:
Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information theoretic principle, information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed about an input (briefness), and informative about a decision made by a black-box system on that input (comprehensive). We evaluate VIBI on three datasets and compare with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity evaluated by human and quantitative metrics.
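
The information bottleneck idea behind VIBI can be stated compactly: the explainer seeks a compressed representation t of the input x that stays maximally informative about the black box's output y. In generic information-bottleneck notation (our paraphrase, not the paper's exact objective), this is

    max over p(t|x) of  I(t; y) - beta * I(t; x)

where I(.;.) denotes mutual information and beta trades comprehensiveness (the first term) against briefness (the compression term).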

Dissertations / Theses on the topic "Interpretable methods"

1. Jalali Khooshahr, Adrin [Verfasser]. "Interpretable methods in cancer diagnostics / Adrin Jalali Khooshahr." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1240674090/34.
2. Wang, Yuchen. "Interpretable machine learning methods with applications to health care." Thesis (Ph. D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2020. https://hdl.handle.net/1721.1/127295.

Abstract:
With data becoming increasingly available in recent years, black-box algorithms like boosting methods or neural networks play more important roles in the real world. However, interpretability is a pressing need in several application areas, such as health care and business. Doctors or managers often need to understand how models make predictions in order to make their final decisions. In this thesis, we improve and propose some interpretable machine learning methods by using modern optimization. We also use two examples to illustrate how interpretable machine learning methods help to solve problems in health care. The first part of this thesis is about interpretable machine learning methods using modern optimization. In Chapter 2, we illustrate how to use robust optimization to improve the performance of SVM, Logistic Regression, and Classification Trees for imbalanced datasets. In Chapter 3, we discuss how to find optimal clusters for prediction. We use real-world datasets to illustrate that this is a fast and scalable method with high accuracy. In Chapter 4, we deal with optimal regression trees with polynomial functions in leaf nodes and demonstrate that this method improves out-of-sample performance. The second part of this thesis is about how interpretable machine learning methods improve the current health care system. In Chapter 5, we illustrate how we use Optimal Trees to predict the mortality risk for candidates awaiting liver transplantation. Then we develop a transplantation policy called Optimized Prediction of Mortality (OPOM), which reduces mortality significantly in simulation analysis and also improves fairness. In Chapter 6, we propose a new method based on Optimal Trees which performs better than the original rules in identifying children at very low risk of clinically important traumatic brain injury (ciTBI). If this method is implemented in the electronic health record, the new rules may reduce unnecessary computed tomographies (CT).
3. Zhu, Jessica H. "Detecting food safety risks and human trafficking using interpretable machine learning methods." Thesis (S.M.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019. https://hdl.handle.net/1721.1/122384.

Abstract:
Black box machine learning methods have allowed researchers to design accurate models using large amounts of data at the cost of interpretability. Model interpretability not only improves user buy-in, but in many cases provides users with important information. Especially in the case of the classification problems addressed in this thesis, the ideal model should not only provide accurate predictions, but should also inform users of how features affect the results. My research goal is to solve real-world problems and compare how different classification models affect the outcomes and interpretability. To this end, this thesis is divided into two parts: food safety risk analysis and human trafficking detection. The first half analyzes the characteristics of supermarket suppliers in China that indicate a high risk of food safety violations. Contrary to expectations, supply chain dispersion, internal inspections, and quality certification systems are not found to be predictive of food safety risk in our data. The second half focuses on identifying human trafficking, specifically sex trafficking, advertisements hidden amongst online classified escort service advertisements. We propose a novel but interpretable keyword detection and modeling pipeline that is more accurate and actionable than current neural network approaches. The algorithms and applications presented in this thesis succeed in providing users with not just classifications but also the characteristics that indicate food safety risk and human trafficking ads.
4. Vilamala Muñoz, Albert. "Multivariate methods for interpretable analysis of magnetic resonance spectroscopy data in brain tumour diagnosis." Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/336683.

Abstract:
Malignant tumours of the brain represent one of the most difficult to treat types of cancer due to the sensitive organ they affect. Clinical management of the pathology becomes even more intricate as the tumour mass increases due to proliferation, suggesting that an early and accurate diagnosis is vital for preventing it from its normal course of development. The standard clinical practise for diagnosis includes invasive techniques that might be harmful for the patient, a fact that has fostered intensive research towards the discovery of alternative non-invasive brain tissue measurement methods, such as nuclear magnetic resonance. One of its variants, magnetic resonance imaging, is already used on a regular basis to locate and bound the brain tumour; but a complementary variant, magnetic resonance spectroscopy, despite its higher spatial resolution and its capability to identify biochemical metabolites that might become biomarkers of tumour within a delimited area, lags behind in terms of clinical use, mainly due to its difficult interpretability. The interpretation of magnetic resonance spectra corresponding to brain tissue thus becomes an interesting field of research for automated methods of knowledge extraction such as machine learning, always understanding its secondary role behind human expert medical decision making. The current thesis aims at contributing to the state of the art in this domain by providing novel techniques for assistance of radiology experts, focusing on complex problems and delivering interpretable solutions. In this respect, an ensemble learning technique to accurately discriminate amongst the most aggressive brain tumours, namely glioblastomas and metastases, has been designed; moreover, a strategy to increase the stability of biomarker identification in the spectra by means of instance weighting is provided. From a different analytical perspective, a tool based on signal source separation, guided by tumour type-specific information, has been developed to assess the existence of different tissues in the tumoural mass, quantifying their influence in the vicinity of tumoural areas. This development has led to the derivation of a probabilistic interpretation of some source separation techniques, which provide support for uncertainty handling and strategies for the estimation of the most accurate number of differentiated tissues within the analysed tumour volumes. The provided strategies should assist human experts through the use of automated decision support tools and by tackling interpretability and accuracy from different angles.
5. Conradsson, Emil, and Vidar Johansson. "A MODEL-INDEPENDENT METHODOLOGY FOR A ROOT CAUSE ANALYSIS SYSTEM: A STUDY INVESTIGATING INTERPRETABLE MACHINE LEARNING METHODS." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160372.

Abstract:
Today, companies like Volvo GTO experience a vast increase in data and the ability to process it. This makes it possible to utilize machine learning models to construct a root cause analysis system in order to predict, explain and prevent defects. However, there exists a trade-off between model performance and explanation capability, both of which are essential to such a system. This thesis aims to, with the use of machine learning models, inspect the relationship between sensor data from the painting process and the texture defect orange peel. The aim is also to evaluate the consistency of different explanation methods. After the data was preprocessed and new features were engineered, e.g. adjustments, three machine learning models were trained and tested. In order to explain a linear model, one can use its coefficients. In the case of a tree-based model, MDI is a common global explanation method. SHAP is a state-of-the-art model-independent method that can explain a model globally and locally. These three methods were compared in order to evaluate the consistency of their explanations. If SHAP were consistent with the others on a global level, it could be argued that SHAP can be used locally in a root cause analysis. The study showed that the coefficients and MDI were consistent with SHAP, as the overall correlation between them was high and because they tended to weight the features in a similar way. From this conclusion, a root cause analysis algorithm was developed with SHAP as a local explanation method. Finally, it cannot be concluded that there is a relationship between the sensor data and orange peel, as the adjustments of the process were the most impactful features.
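
The global consistency check described above can be sketched directly; in this hedged Python example (synthetic data standing in for the non-public paint-process sensor data, with scipy and shap assumed available), the three explanations are reduced to feature orderings and compared by rank correlation.

    # Sketch: compare linear coefficients, random-forest MDI, and mean |SHAP|
    # importances by rank-correlating the feature orderings they induce.
    import numpy as np
    import shap
    from scipy.stats import spearmanr
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    X, y = make_regression(n_samples=1000, n_features=8, noise=0.1, random_state=0)

    linear = LinearRegression().fit(X, y)
    forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    coef_imp = np.abs(linear.coef_)             # linear-model explanation
    mdi_imp = forest.feature_importances_       # tree-based (MDI) explanation
    sv = shap.TreeExplainer(forest).shap_values(X)
    shap_imp = np.abs(sv).mean(axis=0)          # model-independent explanation

    rho_coef, _ = spearmanr(coef_imp, shap_imp)
    rho_mdi, _ = spearmanr(mdi_imp, shap_imp)
    print("coefficients vs SHAP:", rho_coef)
    print("MDI vs SHAP:", rho_mdi)

High rank correlations are the kind of evidence the thesis uses to argue that SHAP can then be trusted as the local explanation method inside a root cause analysis.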
6. Nikumbh, Sarvesh [Verfasser], and Nico [Akademischer Betreuer] Pfeifer. "Interpretable Machine Learning Methods for Prediction and Analysis of Genome Regulation in 3D / Sarvesh Nikumbh ; Betreuer: Nico Pfeifer." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://d-nb.info/119008578X/34.
7. Nikumbh, Sarvesh [Verfasser], and Nico [Akademischer Betreuer] Pfeifer. "Interpretable Machine Learning Methods for Prediction and Analysis of Genome Regulation in 3D / Sarvesh Nikumbh ; Betreuer: Nico Pfeifer." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://nbn-resolving.de/urn:nbn:de:bsz:291--ds-281533.
8. Loiseau, Romain. "Real-World 3D Data Analysis: Toward Efficiency and Interpretability." Electronic thesis or dissertation, ENPC, Marne-la-Vallée, 2023. http://www.theses.fr/2023ENPC0028.

Abstract:
This thesis explores new deep-learning approaches for modeling and analyzing real-world 3D data. 3D data processing is helpful for numerous high-impact applications such as autonomous driving, territory management, industry facilities monitoring, forest inventory, and biomass measurement. However, annotating and analyzing 3D data can be demanding. Specifically, matching constraints regarding computing resources or annotation efficiency is often challenging. The difficulty of interpreting and understanding the inner workings of deep learning models can also limit their adoption. The computer vision community has made significant efforts to design methods to analyze 3D data, to perform tasks such as shape classification, scene segmentation, and scene decomposition. Early automated analysis relied on hand-crafted descriptors and incorporated prior knowledge about real-world acquisitions. Modern deep learning techniques demonstrate the best performances but are often computationally expensive, rely on large annotated datasets, and have low interpretability. In this thesis, we propose contributions that address these limitations. The first contribution of this thesis is an efficient deep-learning architecture for analyzing LiDAR sequences in real time. Our approach explicitly considers the acquisition geometry of rotating LiDAR sensors, which many autonomous driving perception pipelines use. Compared to previous work, which considers complete LiDAR rotations individually, our model processes the acquisition in smaller increments. Our proposed architecture achieves accuracy on par with the best methods while reducing processing time by more than five times and model size by more than fifty times. The second contribution is a deep learning method to summarize extensive 3D shape collections with a small set of 3D template shapes. We learn end-to-end a small number of 3D prototypical shapes that are aligned and deformed to reconstruct input point clouds. The main advantage of our approach is that its representations are in the 3D space and can be viewed and manipulated. They constitute a compact and interpretable representation of 3D shape collections and facilitate annotation, leading to state-of-the-art results for few-shot semantic segmentation. The third contribution further expands unsupervised analysis for parsing large real-world 3D scans into interpretable parts. We introduce a probabilistic reconstruction model to decompose an input 3D point cloud using a small set of learned prototypical shapes. Our network determines the number of prototypes to use to reconstruct each scene. We outperform state-of-the-art unsupervised methods in terms of decomposition accuracy while remaining visually interpretable. We offer significant advantages over existing approaches as our model does not require manual annotations. This thesis also introduces two open-access annotated real-world datasets, HelixNet and the Earth Parser Dataset, acquired with terrestrial and aerial LiDARs, respectively. HelixNet is the largest LiDAR autonomous driving dataset with dense annotations and provides point-level sensor metadata crucial for precisely measuring the latency of semantic segmentation methods. The Earth Parser Dataset consists of seven aerial LiDAR scenes, which can be used to evaluate 3D processing techniques' performances in diverse environments. We hope that these datasets and reliable methods considering the specificities of real-world acquisitions will encourage further research toward more efficient and interpretable models.
9. Yoshida, Kosuke. "Interpretable machine learning approaches to high-dimensional data and their applications to biomedical engineering problems." Kyoto University, 2018. http://hdl.handle.net/2433/232416.
10. Klinčík, Radoslav. "Měření posunů a přetvoření střešní konstrukce sportovní haly" [Measurement of displacements and deformations of a sports hall roof structure]. Master's thesis, Vysoké učení technické v Brně, Fakulta stavební, 2021. http://www.nusl.cz/ntk/nusl-444252.

Abstract:
This diploma thesis describes the measurement and evaluation of displacements and deformations of the wooden roof structure of the aquapark hall in Brno – Kohoutovice. Part of the work is devoted to the preparation and testing of the devices and tools used. The main part of the work consists of performing one stage of measurement using the polar method and the laser scanning method. The polar-method measurement is compared with the results of the polar method from the previous stage. The next part of the work compares the polar method and the laser scanning method as measured in the last stage. The results achieved are interpreted in the final part of the work.

Books on the topic "Interpretable methods"

1. Verstraeten, Gert. Natuurwetenschappen en archeologie: Methode en interpretatie [Natural sciences and archaeology: Method and interpretation]. Leuven: Acco, 2009.
2. Wintr, Jan. Metody a zásady interpretace práva: Methoden und Grundsätze der Rechtsauslegung [Methods and principles of legal interpretation]. Praha: Auditorium, 2013.
3. Andersch, Martin. Sporen, tekens, letters: Over schriften, kalligrafische experimenten en interpretatie van teksten: een methode in beeld gebracht [Traces, signs, letters: On scripts, calligraphic experiments and the interpretation of texts: a method depicted]. De Bilt: Cantecleer, 1989.
4. Frankenberry, Nancy, ed. Radical interpretation in religion. Cambridge: Cambridge University Press, 2002.
5. Tuckett, C. M. Reading the New Testament: Methods of interpretation. London: SPCK, 1987.
6. Tuckett, C. M. Reading the New Testament: Methods of interpretation. Philadelphia: Fortress Press, 1987.
7. Brenner, Athalya, ed. A feminist companion to reading the Bible: Approaches, methods and strategies. Sheffield, England: Sheffield Academic Press, 1997.
8. McIlwain, Elizabeth F., and Gary D. Plotnick, eds. Handbook of echo-doppler interpretation. Armonk, NY: Futura Pub., 1996.
9. Yee, Gale A., ed. Judges and method: New approaches in biblical studies. Minneapolis: Fortress Press, 1995.
10. Hirsch, Robert P., ed. Studying a study and testing a test: How to read the health science literature. 3rd ed. Boston: Little, Brown, 1996.

Book chapters on the topic "Interpretable methods"

1. Syed, Umar, and Golan Yona. "Enzyme Function Prediction with Interpretable Models." In Methods in Molecular Biology, 373–420. Totowa, NJ: Humana Press, 2009. http://dx.doi.org/10.1007/978-1-59745-243-4_17.
2. Pogudin, Gleb, and Xingjian Zhang. "Interpretable Exact Linear Reductions via Positivity." In Computational Methods in Systems Biology, 91–107. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85633-5_6.
3. Weijters, Ton, and Antal van den Bosch. "Interpretable neural networks with BP-SOM." In Tasks and Methods in Applied Artificial Intelligence, 564–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/3-540-64574-8_442.
4. Guidotti, Riccardo, Cristiano Landi, Andrea Beretta, Daniele Fadda, and Mirco Nanni. "Interpretable Data Partitioning Through Tree-Based Clustering Methods." In Discovery Science, 492–507. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45275-8_33.
5. Terzić, Kasim, and J. M. H. du Buf. "Interpretable Feature Maps for Robot Attention." In Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods, 456–67. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58706-6_37.
6. Cánovas-Segura, Bernardo, Antonio Morales, Antonio López Martínez-Carrasco, Manuel Campos, Jose M. Juarez, Lucía López Rodríguez, and Francisco Palacios. "Exploring Antimicrobial Resistance Prediction Using Post-hoc Interpretable Methods." In Artificial Intelligence in Medicine: Knowledge Representation and Transparent and Explainable Systems, 93–107. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37446-4_8.
7. van Sonsbeek, Tom, and Veronika Cheplygina. "Predicting Scores of Medical Imaging Segmentation Methods with Meta-learning." In Interpretable and Annotation-Efficient Learning for Medical Image Computing, 242–53. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61166-8_26.
8. Antonova, Elena, Gleb Guskov, Nadezhda Yarushkina, Aleksandra Chekina, Sofia Egova, and Anastasia Khambikova. "Automated ABCDE Image Analysis of a Skin Neoplasm with Interpretable Results." In Artificial Intelligence in Models, Methods and Applications, 657–68. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-22938-1_45.
9. Fisher, William P., and Stefan J. Cano. "Ideas and Methods in Person-Centered Outcome Metrology." In Springer Series in Measurement Science and Technology, 1–20. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-07465-3_1.

Abstract:
Broadly stated, this book makes the case for a different way of thinking about how to measure and manage person-centered outcomes in health care. The basic contrast is between statistical and metrological definitions of measurement. The mainstream statistical tradition focuses attention on numbers in centrally planned and executed data analyses, while metrology focuses on distributing meaningfully interpretable instruments throughout networks of end users. The former approaches impose group-level statistics from the top down in homogenizing ways. The latter tracks emergent patterns from the bottom up, feeding them back to end users in custom tailored applications, whose decisions and behaviors are coordinated by means of shared languages. New forms of information and knowledge necessitate new forms of social organization to create them and put them to use. The chapters in this book describe the analytic, design, and organizational methods that have the potential to open up exciting new possibilities for systematic and broad scale improvements in health care outcomes.
10. Labiod, Lazhar, and Mohamed Nadif. "Data Clustering and Representation Learning Based on Networked Data." In Studies in Classification, Data Analysis, and Knowledge Organization, 203–11. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09034-9_23.

Abstract:
To deal simultaneously with both attributed network embedding and clustering, we propose a new model exploiting both content and structure information. The proposed model relies on the approximation of the relaxed continuous embedding solution by the true discrete clustering. Thereby, we show that incorporating an embedding representation provides simpler and more easily interpretable solutions. Experimental results demonstrate that the proposed algorithm performs better, in terms of clustering, than state-of-the-art algorithms, including deep learning methods devoted to similar tasks.

Conference papers on the topic "Interpretable methods"

1. West, Rebecca, Khalifeh Al Jadda, Unaiza Ahsan, Huiming Qu, and Xiquan Cui. "Interpretable Methods for Identifying Product Variants." In WWW '20: The Web Conference 2020. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3366424.3386196.
2. Rodríguez-Moreno, Itsaso, José María Martínez-Otzeta, Izaro Goienetxea, and Basilio Sierra. "Towards an Interpretable Spanish Sign Language Recognizer." In 11th International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications, 2022. http://dx.doi.org/10.5220/0010870700003122.
3. Luo, Hongyin, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. "Online Learning of Interpretable Word Embeddings." In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.18653/v1/d15-1196.
4. Kovalerchuk, Boris. "Interpretable Knowledge Discovery Reinforced by Visual Methods." In KDD '19: The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3292500.3332278.
5. Dufter, Philipp, and Hinrich Schütze. "Analytical Methods for Interpretable Ultradense Word Embeddings." In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-1111.
6. Wang, Yan, and Gulanbaier Tuerhong. "A Survey of Interpretable Machine Learning Methods." In 2022 International Conference on Virtual Reality, Human-Computer Interaction and Artificial Intelligence (VRHCIAI). IEEE, 2022. http://dx.doi.org/10.1109/vrhciai57205.2022.00047.
7. Ahmed, Md Sabbir, Khondoker Nazia Iqbal, and Md Golam Rabiul Alam. "Interpretable Lung Cancer Detection using Explainable AI Methods." In 2023 International Conference for Advancement in Technology (ICONAT). IEEE, 2023. http://dx.doi.org/10.1109/iconat57137.2023.10080480.
8. Abujabal, Abdalghani, Rishiraj Saha Roy, Mohamed Yahya, and Gerhard Weikum. "QUINT: Interpretable Question Answering over Knowledge Bases." In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/d17-2011.
9. Barbieri, Francesco, Luis Espinosa-Anke, Jose Camacho-Collados, Steven Schockaert, and Horacio Saggion. "Interpretable Emoji Prediction via Label-Wise Attention LSTMs." In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1508.
10. Shi, Jihao, Xiao Ding, Li Du, Ting Liu, and Bing Qin. "Neural Natural Logic Inference for Interpretable Question Answering." In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.emnlp-main.298.