Journal articles on the topic "Interpretable methods"

Consult the top 50 journal articles for your research on the topic "Interpretable methods".

Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Topin, Nicholay, Stephanie Milani, Fei Fang and Manuela Veloso. "Iterative Bounding MDPs: Learning Interpretable Policies via Non-Interpretable Methods". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9923–31. http://dx.doi.org/10.1609/aaai.v35i11.17192.

Full text
Abstract
Current work in explainable reinforcement learning generally produces policies in the form of a decision tree over the state space. Such policies can be used for formal safety verification, agent behavior prediction, and manual inspection of important features. However, existing approaches fit a decision tree after training or use a custom learning procedure which is not compatible with new learning techniques, such as those which use neural networks. To address this limitation, we propose a novel Markov Decision Process (MDP) type for learning decision tree policies: Iterative Bounding MDPs (IBMDPs). An IBMDP is constructed around a base MDP so each IBMDP policy is guaranteed to correspond to a decision tree policy for the base MDP when using a method-agnostic masking procedure. Because of this decision tree equivalence, any function approximator can be used during training, including a neural network, while yielding a decision tree policy for the base MDP. We present the required masking procedure as well as a modified value update step which allows IBMDPs to be solved using existing algorithms. We apply this procedure to produce IBMDP variants of recent reinforcement learning methods. We empirically show the benefits of our approach by solving IBMDPs to produce decision tree policies for the base MDPs.
2

Kataoka, Makoto. "COMPUTER-INTERPRETABLE DESCRIPTION OF CONSTRUCTION METHODS". AIJ Journal of Technology and Design 13, no. 25 (2007): 277–80. http://dx.doi.org/10.3130/aijt.13.277.

Full text
3

Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl and Bin Yu. "Definitions, methods, and applications in interpretable machine learning". Proceedings of the National Academy of Sciences 116, no. 44 (October 16, 2019): 22071–80. http://dx.doi.org/10.1073/pnas.1900654116.

Full text
Abstract
Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the predictive, descriptive, relevant (PDR) framework for discussing interpretations. The PDR framework provides 3 overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post hoc categories, with subgroups including sparsity, modularity, and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often underappreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.
4

Alangari, Nourah, Mohamed El Bachir Menai, Hassan Mathkour and Ibrahim Almosallam. "Exploring Evaluation Methods for Interpretable Machine Learning: A Survey". Information 14, no. 8 (August 21, 2023): 469. http://dx.doi.org/10.3390/info14080469.

Full text
Abstract
In recent times, the progress of machine learning has facilitated the development of decision support systems that exhibit predictive accuracy, surpassing human capabilities in certain scenarios. However, this improvement has come at the cost of increased model complexity, rendering them black-box models that obscure their internal logic from users. These black boxes are primarily designed to optimize predictive accuracy, limiting their applicability in critical domains such as medicine, law, and finance, where both accuracy and interpretability are crucial factors for model acceptance. Despite the growing body of research on interpretability, there remains a significant dearth of evaluation methods for the proposed approaches. This survey aims to shed light on various evaluation methods employed in interpreting models. Two primary procedures are prevalent in the literature: qualitative and quantitative evaluations. Qualitative evaluations rely on human assessments, while quantitative evaluations utilize computational metrics. Human evaluation commonly manifests as either researcher intuition or well-designed experiments. However, this approach is susceptible to human biases and fatigue and cannot adequately compare two models. Consequently, there has been a recent decline in the use of human evaluation, with computational metrics gaining prominence as a more rigorous method for comparing and assessing different approaches. These metrics are designed to serve specific goals, such as fidelity, comprehensibility, or stability. The existing metrics often face challenges when scaling or being applied to different types of model outputs and alternative approaches. Another important factor that needs to be addressed is that while evaluating interpretability methods, their results may not always be entirely accurate. For instance, relying on the drop in probability to assess fidelity can be problematic, particularly when facing the challenge of out-of-distribution data. Furthermore, a fundamental challenge in the interpretability domain is the lack of consensus regarding its definition and requirements. This issue is compounded in the evaluation process and becomes particularly apparent when assessing comprehensibility.
5

Kenesei, Tamás and János Abonyi. "Interpretable support vector regression". Artificial Intelligence Research 1, no. 2 (October 9, 2012): 11. http://dx.doi.org/10.5430/air.v1n2p11.

Full text
Abstract
This paper deals with transforming support vector regression (SVR) models into fuzzy systems (FIS). It is highlighted that trained support vector based models can be used for the construction of fuzzy rule-based regression models. However, the transformed support vector model does not automatically result in an interpretable fuzzy model. Training a support vector model yields a complex rule base, in which the number of rules is approximately 40–60% of the number of training data points; reducing the fuzzy model initialized from the support vector model is therefore an essential task. For this purpose, a three-step reduction algorithm is used that combines previously published model reduction techniques: the reduced set method first decreases the number of kernel functions; then, after the reduced support vector model is transformed into a fuzzy rule base, similarity-measure-based merging and orthogonal least-squares methods are applied. The proposed approach is applied to nonlinear system identification; the identification of a Hammerstein system is used to demonstrate the accuracy of the technique while fulfilling the criteria of interpretability.
6

Ye, Zhuyifan, Wenmian Yang, Yilong Yang and Defang Ouyang. "Interpretable machine learning methods for in vitro pharmaceutical formulation development". Food Frontiers 2, no. 2 (May 5, 2021): 195–207. http://dx.doi.org/10.1002/fft2.78.

Full text
7

Mi, Jian-Xun, An-Di Li and Li-Fang Zhou. "Review Study of Interpretation Methods for Future Interpretable Machine Learning". IEEE Access 8 (2020): 191969–85. http://dx.doi.org/10.1109/access.2020.3032756.

Full text
8

Obermann, Lennart and Stephan Waack. "Demonstrating non-inferiority of easy interpretable methods for insolvency prediction". Expert Systems with Applications 42, no. 23 (December 2015): 9117–28. http://dx.doi.org/10.1016/j.eswa.2015.08.009.

Full text
9

Assegie, Tsehay Admassu. "Evaluation of the Shapley Additive Explanation Technique for Ensemble Learning Methods". Proceedings of Engineering and Technology Innovation 21 (April 22, 2022): 20–26. http://dx.doi.org/10.46604/peti.2022.9025.

Full text
Abstract
This study aims to explore the effectiveness of the Shapley additive explanation (SHAP) technique in developing a transparent, interpretable, and explainable ensemble method for heart disease diagnosis using random forest algorithms. Firstly, the features with high impact on heart disease prediction are selected by SHAP using 1,025 heart disease records obtained from a publicly available Kaggle data repository. After that, the features which have the greatest influence on the heart disease prediction are used to develop an interpretable ensemble learning model to automate the heart disease diagnosis by employing the SHAP technique. Finally, the performance of the developed model is evaluated. The SHAP values are used to obtain better performance of heart disease diagnosis. The experimental result shows that 100% prediction accuracy is achieved with the developed model. In addition, the experiment shows that age, chest pain, and maximum heart rate have a positive impact on the prediction outcome.
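As an illustrative aside (not the author's code), the SHAP-based feature-ranking step described in this abstract can be sketched as follows; the load_heart_disease() loader is a hypothetical stand-in for the Kaggle dataset, and X is assumed to be a pandas DataFrame.

```python
# Hedged sketch: rank features of a random-forest heart-disease classifier by SHAP values,
# then refit on the top-ranked features. Not the author's implementation.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_heart_disease()  # hypothetical loader returning a DataFrame and 0/1 labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
if isinstance(shap_values, list):        # older shap: one (samples, features) array per class
    sv = shap_values[1]
else:                                    # newer shap may return (samples, features, classes)
    sv = np.asarray(shap_values)
    if sv.ndim == 3:
        sv = sv[:, :, 1]

# Global importance: mean absolute SHAP value per feature.
importance = pd.Series(np.abs(sv).mean(axis=0), index=X.columns).sort_values(ascending=False)
print(importance.head(10))

# Refit on the most influential features only.
top = list(importance.index[:8])
reduced = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr[top], y_tr)
print("held-out accuracy:", reduced.score(X_te[top], y_te))
```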
10

Bang, Seojin, Pengtao Xie, Heewook Lee, Wei Wu and Eric Xing. "Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11396–404. http://dx.doi.org/10.1609/aaai.v35i13.17358.

Full text
Abstract
Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information theoretic principle, information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed about an input (briefness), and informative about a decision made by a black-box system on that input (comprehensive). We evaluate VIBI on three datasets and compare with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity evaluated by human and quantitative metrics.
11

Li, Qiaomei, Rachel Cummings and Yonatan Mintz. "Optimal Local Explainer Aggregation for Interpretable Prediction". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12000–12007. http://dx.doi.org/10.1609/aaai.v36i11.21458.

Full text
Abstract
A key challenge for decision makers when incorporating black box machine learned models into practice is being able to understand the predictions provided by these models. One set of methods proposed to address this challenge is that of training surrogate explainer models which approximate how the more complex model is computing its predictions. Explainer methods are generally classified as either local or global explainers depending on what portion of the data space they are purported to explain. The improved coverage of global explainers usually comes at the expense of explainer fidelity (i.e., how well the explainer's predictions match that of the black box model). One way of trading off the advantages of both approaches is to aggregate several local explainers into a single explainer model with improved coverage. However, the problem of aggregating these local explainers is computationally challenging, and existing methods only use heuristics to form these aggregations. In this paper, we propose a local explainer aggregation method which selects local explainers using non-convex optimization. In contrast to other heuristic methods, we use an integer optimization framework to combine local explainers into a near-global aggregate explainer. Our framework allows a decision-maker to directly tradeoff coverage and fidelity of the resulting aggregation through the parameters of the optimization problem. We also propose a novel local explainer algorithm based on information filtering. We evaluate our algorithmic framework on two healthcare datasets: the Parkinson's Progression Marker Initiative (PPMI) data set and a geriatric mobility dataset from the UCI machine learning repository. Our choice of these healthcare-related datasets is motivated by the anticipated need for explainable precision medicine. We find that our method outperforms existing local explainer aggregation methods in terms of both fidelity and coverage of classification. It also improves on fidelity over existing global explainer methods, particularly in multi-class settings, where state-of-the-art methods achieve 70% and ours achieves 90%.
12

Mahya, Parisa and Johannes Fürnkranz. "An Empirical Comparison of Interpretable Models to Post-Hoc Explanations". AI 4, no. 2 (May 19, 2023): 426–36. http://dx.doi.org/10.3390/ai4020023.

Full text
Abstract
Recently, some effort went into explaining intransparent and black-box models, such as deep neural networks or random forests. So-called model-agnostic methods typically approximate the prediction of the intransparent black-box model with an interpretable model, without considering any specifics of the black-box model itself. It is a valid question whether direct learning of interpretable white-box models should not be preferred over post-hoc approximations of intransparent and black-box models. In this paper, we report the results of an empirical study, which compares post-hoc explanations and interpretable models on several datasets for rule-based and feature-based interpretable models. The results seem to underline that often directly learned interpretable models approximate the black-box models at least as well as their post-hoc surrogates, even though the former do not have direct access to the black-box model.
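A minimal sketch of the comparison this abstract describes (not the authors' code): fit a black-box model, train one shallow decision tree post hoc on the black box's predictions and another directly on the labels, and measure how faithfully each reproduces the black box on held-out data. The dataset is a generic scikit-learn example, not one used in the study.

```python
# Hedged sketch: post-hoc surrogate vs. directly learned interpretable model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# (a) Post-hoc surrogate: imitate the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_tr, black_box.predict(X_tr))

# (b) Directly learned interpretable model: fit to the true labels.
direct = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

bb_pred = black_box.predict(X_te)
print("surrogate fidelity to black box:", accuracy_score(bb_pred, surrogate.predict(X_te)))
print("direct model fidelity to black box:", accuracy_score(bb_pred, direct.predict(X_te)))
print("direct model accuracy on labels:", accuracy_score(y_te, direct.predict(X_te)))
```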
13

Lee, Franklin Langlang, Jaehong Park, Sushmit Goyal, Yousef Qaroush, Shihu Wang, Hong Yoon, Aravind Rammohan and Youngseon Shim. "Comparison of Machine Learning Methods towards Developing Interpretable Polyamide Property Prediction". Polymers 13, no. 21 (October 23, 2021): 3653. http://dx.doi.org/10.3390/polym13213653.

Full text
Abstract
Polyamides are often used for their superior thermal, mechanical, and chemical properties. They form a diverse set of materials that have a large variation in properties between linear to aromatic compounds, which renders the traditional quantitative structure–property relationship (QSPR) challenging. We use extended connectivity fingerprints (ECFP) and traditional QSPR fingerprints to develop machine learning models to perform high fidelity prediction of glass transition temperature (Tg), melting temperature (Tm), density (ρ), and tensile modulus (E). The non-linear model using random forest is in general found to be more accurate than linear regression; however, using feature selection or regularization, the accuracy of linear models is shown to be improved significantly to become comparable to the more complex nonlinear algorithm. We find that none of the models or fingerprints were able to accurately predict the tensile modulus E, which we hypothesize is due to heterogeneity in data and data sources, as well as inherent challenges in measuring it. Finally, QSPR models revealed that the fraction of rotatable bonds, and the rotational degree of freedom affects polyamide properties most profoundly and can be used for back of the envelope calculations for a quick estimate of the polymer attributes (glass transition temperature, melting temperature, and density). These QSPR models, although having slightly lower prediction accuracy, show the most promise for the polymer chemist seeking to develop an intuition of ways to modify the chemistry to enhance specific attributes.
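The central comparison above, a regularized linear model (where the regularization performs feature selection) against a non-linear random forest, can be sketched on a generic descriptor matrix. The data below are synthetic placeholders, not the paper's fingerprints or measured polymer properties.

```python
# Hedged sketch: cross-validated comparison of a LASSO-regularized linear model and a
# random forest on a stand-in descriptor matrix. Not the paper's data or code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                                  # stand-in ECFP/QSPR descriptors
y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=200)   # stand-in property (e.g. Tg)

models = {
    "LASSO (linear, sparse)": make_pipeline(StandardScaler(), LassoCV(cv=5)),
    "random forest": RandomForestRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.2f}")
```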
14

Li, Xiao, Zachary Serlin, Guang Yang and Calin Belta. "A formal methods approach to interpretable reinforcement learning for robotic planning". Science Robotics 4, no. 37 (December 18, 2019): eaay6276. http://dx.doi.org/10.1126/scirobotics.aay6276.

Full text
Abstract
Growing interest in reinforcement learning approaches to robotic planning and control raises concerns of predictability and safety of robot behaviors realized solely through learned control policies. In addition, formally defining reward functions for complex tasks is challenging, and faulty rewards are prone to exploitation by the learning agent. Here, we propose a formal methods approach to reinforcement learning that (i) provides a formal specification language that integrates high-level, rich, task specifications with a priori, domain-specific knowledge; (ii) makes the reward generation process easily interpretable; (iii) guides the policy generation process according to the specification; and (iv) guarantees the satisfaction of the (critical) safety component of the specification. The main ingredients of our computational framework are a predicate temporal logic specifically tailored for robotic tasks and an automaton-guided, safe reinforcement learning algorithm based on control barrier functions. Although the proposed framework is quite general, we motivate it and illustrate it experimentally for a robotic cooking task, in which two manipulators worked together to make hot dogs.
15

Skirzyński, Julian, Frederic Becker and Falk Lieder. "Automatic discovery of interpretable planning strategies". Machine Learning 110, no. 9 (April 9, 2021): 2641–83. http://dx.doi.org/10.1007/s10994-021-05963-2.

Full text
Abstract
When making decisions, people often overlook critical information or are overly swayed by irrelevant information. A common approach to mitigate these biases is to provide decision-makers, especially professionals such as medical doctors, with decision aids, such as decision trees and flowcharts. Designing effective decision aids is a difficult problem. We propose that recently developed reinforcement learning methods for discovering clever heuristics for good decision-making can be partially leveraged to assist human experts in this design process. One of the biggest remaining obstacles to leveraging the aforementioned methods for improving human decision-making is that the policies they learn are opaque to people. To solve this problem, we introduce AI-Interpret: a general method for transforming idiosyncratic policies into simple and interpretable descriptions. Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule. We evaluate our new AI-Interpret algorithm and employ it to translate information-acquisition policies discovered through metalevel reinforcement learning. The results of three large behavioral experiments showed that providing the decision rules generated by AI-Interpret as flowcharts significantly improved people’s planning strategies and decisions across three different classes of sequential decision problems. Moreover, our fourth experiment revealed that this approach is significantly more effective at improving human decision-making than training people by giving them performance feedback. Finally, a series of ablation studies confirmed that our AI-Interpret algorithm was critical to the discovery of interpretable decision rules and that it is ready to be applied to other reinforcement learning problems. We conclude that the methods and findings presented in this article are an important step towards leveraging automatic strategy discovery to improve human decision-making. The code for our algorithm and the experiments is available at https://github.com/RationalityEnhancement/InterpretableStrategyDiscovery.
16

Xu, Yixiao, Xiaolei Liu, Kangyi Ding and Bangzhou Xin. "IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions". Sensors 22, no. 22 (November 10, 2022): 8697. http://dx.doi.org/10.3390/s22228697.

Full text
Abstract
Recent work has shown that deep neural networks are vulnerable to backdoor attacks. In comparison with the success of backdoor-attack methods, existing backdoor-defense methods face a lack of theoretical foundations and interpretable solutions. Most defense methods are based on experience with the characteristics of previous attacks, but fail to defend against new attacks. In this paper, we propose IBD, an interpretable backdoor-detection method via multivariate interactions. Using information theory techniques, IBD reveals how the backdoor works from the perspective of multivariate interactions of features. Based on the interpretable theorem, IBD enables defenders to detect backdoor models and poisoned examples without introducing additional information about the specific attack method. Experiments on widely used datasets and models show that IBD achieves an average 78% increase in detection accuracy and an order-of-magnitude reduction in time cost compared with existing backdoor-detection methods.
17

Gu, Jindong. "Interpretable Graph Capsule Networks for Object Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1469–77. http://dx.doi.org/10.1609/aaai.v35i2.16237.

Full text
Abstract
Capsule Networks, as alternatives to Convolutional Neural Networks, have been proposed to recognize objects from images. The current literature demonstrates many advantages of CapsNets over CNNs. However, how to create explanations for individual classifications of CapsNets has not been well explored. The widely used saliency methods are mainly proposed for explaining CNN-based classifications; they create saliency map explanations by combining activation values and the corresponding gradients, e.g., Grad-CAM. These saliency methods require a specific architecture of the underlying classifiers and cannot be trivially applied to CapsNets due to the iterative routing mechanism therein. To overcome the lack of interpretability, we can either propose new post-hoc interpretation methods for CapsNets or modify the model to have built-in explanations. In this work, we explore the latter. Specifically, we propose interpretable Graph Capsule Networks (GraCapsNets), where we replace the routing part with a multi-head attention-based Graph Pooling approach. In the proposed model, individual classification explanations can be created effectively and efficiently. Our model also demonstrates some unexpected benefits, even though it replaces the fundamental part of CapsNets. Our GraCapsNets achieve better classification performance with fewer parameters and better adversarial robustness, when compared to CapsNets. Besides, GraCapsNets also keep other advantages of CapsNets, namely, disentangled representations and affine transformation robustness.
18

Hagerty, C. G. and F. A. Sonnenberg. "Computer-Interpretable Clinical Practice Guidelines". Yearbook of Medical Informatics 15, no. 01 (August 2006): 145–58. http://dx.doi.org/10.1055/s-0038-1638486.

Full text
Abstract
To provide a comprehensive overview of computer-interpretable guideline (CIG) systems aimed at non-experts. The overview includes the history of efforts to develop CIGs, features of and relationships among current major CIG systems, the current status of standards developments pertinent to CIGs, and identification of unsolved problems and needs for future research. The review is based on PubMed, AMIA conference proceedings and key references from the publications identified. Search terms included practice guidelines, decision support, controlled vocabulary and medical record systems. Papers were reviewed by both authors and summarized narratively. There is a consensus that guideline delivery systems must be integrated with electronic health records (EHRs) to be most effective. Several evolving CIG formalisms have in common the use of a task network model. There is currently no dominant CIG system. The major challenge in the development of interoperable CIGs is agreement on a standard controlled vocabulary. Such standards are under development, but not widely used, particularly in commercial EHR systems. The Virtual Medical Record (VMR) concept has been proposed as a standard that would serve as an intermediary between guideline vocabulary and that used in EHR implementations. CIG systems are in a state of evolution. Standards efforts promise to improve interoperability without compromising innovation. The VMR concept can assist guideline development even before clinical systems routinely adhere to standards. Frontiers for future work include using the principles learned by computer implementation of guidelines to improve the guideline development process and evaluation methods that isolate the effects of specific CIG features.
19

Ge, Xiaoyi, Mingshu Zhang, Xu An Wang, Jia Liu and Bin Wei. "Emotion-Drive Interpretable Fake News Detection". International Journal of Data Warehousing and Mining 18, no. 1 (January 1, 2022): 1–17. http://dx.doi.org/10.4018/ijdwm.314585.

Full text
Abstract
Fake news has brought significant challenges to the healthy development of social media. Although current fake news detection methods are advanced, many models directly utilize unselected user comments and do not consider the emotional connection between news content and user comments. The authors propose an emotion-driven explainable fake news detection model (EDI) to solve this problem. The model can select valuable user comments by using sentiment value, obtain the emotional correlation representation between news content and user comments by using collaborative annotation, and obtain the weighted representation of user comments by using the attention mechanism. Experimental results on Twitter and Weibo show that the detection model significantly outperforms the state-of-the-art models and provides reasonable interpretation.
20

Merritt, Sean H. and Alexander P. Christensen. "An Experimental Study of Dimension Reduction Methods on Machine Learning Algorithms with Applications to Psychometrics". Advances in Artificial Intelligence and Machine Learning 03, no. 01 (2023): 760–77. http://dx.doi.org/10.54364/aaiml.2023.1149.

Full text
Abstract
Developing interpretable machine learning models has become an increasingly important issue. One way in which data scientists have been able to develop interpretable models has been to use dimension reduction techniques. In this paper, we examine several dimension reduction techniques including two recent approaches developed in the network psychometrics literature called exploratory graph analysis (EGA) and unique variable analysis (UVA). We compared EGA and UVA with two other dimension reduction techniques common in the machine learning literature (principal component analysis and independent component analysis) as well as no reduction in the variables. We show that EGA and UVA perform as well as the other reduction techniques or no reduction. Consistent with previous literature, we show that dimension reduction can decrease, increase, or provide the same accuracy as no reduction of variables. Our tentative results find that dimension reduction tends to lead to better performance when used for classification tasks.
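For two of the reduction techniques named above (PCA and ICA), the comparison against no reduction can be sketched on a generic scikit-learn dataset; EGA and UVA are implemented in R packages from the network-psychometrics literature and are not reproduced here. This is an illustration of the experimental pattern, not the authors' setup.

```python
# Hedged sketch: classification accuracy with no reduction, PCA, and ICA preprocessing.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)


def make_clf():
    return LogisticRegression(max_iter=2000)


pipelines = {
    "no reduction": make_pipeline(StandardScaler(), make_clf()),
    "PCA (20 components)": make_pipeline(StandardScaler(), PCA(n_components=20), make_clf()),
    "ICA (20 components)": make_pipeline(StandardScaler(),
                                         FastICA(n_components=20, max_iter=500, random_state=0),
                                         make_clf()),
}
for name, pipe in pipelines.items():
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: accuracy = {scores.mean():.3f}")
```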
21

Luan, Tao, Guoqing Liang and Pengfei Peng. "Interpretable DeepFake Detection Based on Frequency Spatial Transformer". International Journal of Emerging Technologies and Advanced Applications 1, no. 2 (March 26, 2024): 19–25. http://dx.doi.org/10.62677/ijetaa.2402108.

Full text
Abstract
In recent years, the rapid development of DeepFake has garnered significant attention. Traditional DeepFake detection methods have achieved 100% accuracy on certain corresponding datasets; however, these methods lack interpretability. Existing methods for learning forgery traces often rely on pre-annotated data based on supervised learning, which limits their abilities in non-corresponding detection scenarios. To address this issue, we propose an interpretable DeepFake detection approach based on unsupervised learning called Find-X. The Find-X network consists of two components: forgery trace generation network (FTG) and forgery trace discrimination network (FTD). FTG is used to extract more general inconsistent forgery traces from frequency and spatial domains. The extracted forgery traces are then input into FTD to classify real/fake. By obtaining feedback from FTD, FTG can generate more effective forgery traces. As inconsistent features are prevalent in DeepFake videos, our detection approach improves the generalization of detecting unknown forgeries. Extensive experiments show that our method outperforms state-of-the-art methods on popular benchmarks, and the visual forgery traces provide meaningful explanations for DeepFake detection.
22

Verma, Abhinav. "Verifiable and Interpretable Reinforcement Learning through Program Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9902–3. http://dx.doi.org/10.1609/aaai.v33i01.33019902.

Full text
Abstract
We study the problem of generating interpretable and verifiable policies for Reinforcement Learning (RL). Unlike the popular Deep Reinforcement Learning (DRL) paradigm, in which the policy is represented by a neural network, the aim of this work is to find policies that can be represented in high-level programming languages. Such programmatic policies have several benefits, including being more easily interpreted than neural networks, and being amenable to verification by scalable symbolic methods. The generation methods for programmatic policies also provide a mechanism for systematically using domain knowledge for guiding the policy search. The interpretability and verifiability of these policies provides the opportunity to deploy RL-based solutions in safety-critical environments. This thesis draws on, and extends, work from both the machine learning and formal methods communities.
23

Tulsani, Vijya, Prashant Sahatiya, Jignasha Parmar and Jayshree Parmar. "XAI Applications in Medical Imaging: A Survey of Methods and Challenges". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 27, 2023): 181–86. http://dx.doi.org/10.17762/ijritcc.v11i9.8332.

Full text
Abstract
Medical imaging plays a pivotal role in modern healthcare, aiding in the diagnosis, monitoring, and treatment of various medical conditions. With the advent of Artificial Intelligence (AI), medical imaging has witnessed remarkable advancements, promising more accurate and efficient analysis. However, the black-box nature of many AI models used in medical imaging has raised concerns regarding their interpretability and trustworthiness. In response to these challenges, Explainable AI (XAI) has emerged as a critical field, aiming to provide transparent and interpretable solutions for medical image analysis. This survey paper comprehensively explores the methods and challenges associated with XAI applications in medical imaging. The survey begins with an introduction to the significance of XAI in medical imaging, emphasizing the need for transparent and interpretable AI solutions in healthcare. We delve into the background of medical imaging in healthcare and discuss the increasing role of AI in this domain. The paper then presents a detailed survey of various XAI techniques, ranging from interpretable machine learning models to deep learning approaches with built-in interpretability and post hoc interpretation methods. Furthermore, the survey outlines a wide range of applications where XAI is making a substantial impact, including disease diagnosis and detection, medical image segmentation, radiology reports, surgical planning, and telemedicine. Real-world case studies illustrate successful applications of XAI in medical imaging. The challenges associated with implementing XAI in medical imaging are thoroughly examined, addressing issues related to data quality, ethics, regulation, clinical integration, model robustness, and human-AI interaction. The survey concludes by discussing emerging trends and future directions in the field, highlighting the ongoing efforts to enhance XAI methods for medical imaging and the critical role XAI will play in the future of healthcare. This survey paper serves as a comprehensive resource for researchers, clinicians, and policymakers interested in the integration of Explainable AI into medical imaging, providing insights into the latest methods, successful applications, and the challenges that lie ahead.
24

Hayes, Sean M. S., Jeffrey R. Sachs and Carolyn R. Cho. "From complex data to biological insight: ‘DEKER’ feature selection and network inference". Journal of Pharmacokinetics and Pharmacodynamics 49, no. 1 (November 17, 2021): 81–99. http://dx.doi.org/10.1007/s10928-021-09792-7.

Full text
Abstract
Network inference is a valuable approach for gaining mechanistic insight from high-dimensional biological data. Existing methods for network inference focus on ranking all possible relations (edges) among all measured quantities such as genes, proteins, metabolites (features) observed, which yields a dense network that is challenging to interpret. Identifying a sparse, interpretable network using these methods thus requires an error-prone thresholding step which compromises their performance. In this article we propose a new method, DEKER-NET, that addresses this limitation by directly identifying a sparse, interpretable network without thresholding, improving real-world performance. DEKER-NET uses a novel machine learning method for feature selection in an iterative framework for network inference. DEKER-NET is extremely flexible, handling linear and nonlinear relations while making no assumptions about the underlying distribution of data, and is suitable for categorical or continuous variables. We test our method on the Dialogue for Reverse Engineering Assessments and Methods (DREAM) challenge data, demonstrating that it can directly identify sparse, interpretable networks without thresholding while maintaining performance comparable to the hypothetical best-case thresholded network of other methods.
25

Khakabimamaghani, Sahand, Yogeshwar D. Kelkar, Bruno M. Grande, Ryan D. Morin, Martin Ester and Daniel Ziemek. "SUBSTRA: Supervised Bayesian Patient Stratification". Bioinformatics 35, no. 18 (February 15, 2019): 3263–72. http://dx.doi.org/10.1093/bioinformatics/btz112.

Full text
Abstract
Motivation: Patient stratification methods are key to the vision of precision medicine. Here, we consider transcriptional data to segment the patient population into subsets relevant to a given phenotype. Whereas most existing patient stratification methods focus either on predictive performance or interpretable features, we developed a method striking a balance between these two important goals. Results: We introduce a Bayesian method called SUBSTRA that uses regularized biclustering to identify patient subtypes and interpretable subtype-specific transcript clusters. The method iteratively re-weights feature importance to optimize phenotype prediction performance by producing more phenotype-relevant patient subtypes. We investigate the performance of SUBSTRA in finding relevant features using simulated data and successfully benchmark it against state-of-the-art unsupervised stratification methods and supervised alternatives. Moreover, SUBSTRA achieves predictive performance competitive with the supervised benchmark methods and provides interpretable transcriptional features in diverse biological settings, such as drug response prediction, cancer diagnosis, or kidney transplant rejection. Availability and implementation: The R code of SUBSTRA is available at https://github.com/sahandk/SUBSTRA. Supplementary information: Supplementary data are available at Bioinformatics online.
26

Sun, Lili, Xueyan Liu, Min Zhao and Bo Yang. "Interpretable Variational Graph Autoencoder with Noninformative Prior". Future Internet 13, no. 2 (February 18, 2021): 51. http://dx.doi.org/10.3390/fi13020051.

Full text
Abstract
Variational graph autoencoder, which can encode structural information and attribute information in the graph into low-dimensional representations, has become a powerful method for studying graph-structured data. However, most existing methods based on variational (graph) autoencoder assume that the prior of latent variables obeys the standard normal distribution which encourages all nodes to gather around 0. That leads to the inability to fully utilize the latent space. Therefore, it becomes a challenge on how to choose a suitable prior without incorporating additional expert knowledge. Given this, we propose a novel noninformative prior-based interpretable variational graph autoencoder (NPIVGAE). Specifically, we exploit the noninformative prior as the prior distribution of latent variables. This prior enables the posterior distribution parameters to be almost learned from the sample data. Furthermore, we regard each dimension of a latent variable as the probability that the node belongs to each block, thereby improving the interpretability of the model. The correlation within and between blocks is described by a block–block correlation matrix. We compare our model with state-of-the-art methods on three real datasets, verifying its effectiveness and superiority.
27

Cansel, Neslihan, Fatma Hilal Yagin, Mustafa Akan and Bedriye Ilkay Aygul. "INTERPRETABLE ESTIMATION OF SUICIDE RISK AND SEVERITY FROM COMPLETE BLOOD COUNT PARAMETERS WITH EXPLAINABLE ARTIFICIAL INTELLIGENCE METHODS". PSYCHIATRIA DANUBINA 35, no. 1 (April 13, 2023): 62–72. http://dx.doi.org/10.24869/psyd.2023.62.

Full text
28

Shi, Liushuai, Le Wang, Chengjiang Long, Sanping Zhou, Fang Zheng, Nanning Zheng and Gang Hua. "Social Interpretable Tree for Pedestrian Trajectory Prediction". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2235–43. http://dx.doi.org/10.1609/aaai.v36i2.20121.

Full text
Abstract
Understanding the multiple socially-acceptable future behaviors is an essential task for many vision applications. In this paper, we propose a tree-based method, termed as Social Interpretable Tree (SIT), to address this multi-modal prediction task, where a hand-crafted tree is built depending on the prior information of observed trajectory to model multiple future trajectories. Specifically, a path in the tree from the root to leaf represents an individual possible future trajectory. SIT employs a coarse-to-fine optimization strategy, in which the tree is first built by high-order velocity to balance the complexity and coverage of the tree and then optimized greedily to encourage multimodality. Finally, a teacher-forcing refining operation is used to predict the final fine trajectory. Compared with prior methods which leverage implicit latent variables to represent possible future trajectories, the path in the tree can explicitly explain the rough moving behaviors (e.g., go straight and then turn right), and thus provides better interpretability. Despite the hand-crafted tree, the experimental results on ETH-UCY and Stanford Drone datasets demonstrate that our method is capable of matching or exceeding the performance of state-of-the-art methods. Interestingly, the experiments show that the raw built tree without training outperforms many prior deep neural network based approaches. Meanwhile, our method presents sufficient flexibility in long-term prediction and different best-of-K predictions.
29

Krumb, Henry, Dhritimaan Das, Romol Chadda and Anirban Mukhopadhyay. "CycleGAN for interpretable online EMT compensation". International Journal of Computer Assisted Radiology and Surgery 16, no. 5 (March 14, 2021): 757–65. http://dx.doi.org/10.1007/s11548-021-02324-1.

Full text
Abstract
Purpose: Electromagnetic tracking (EMT) can partially replace X-ray guidance in minimally invasive procedures, reducing radiation in the OR. However, in this hybrid setting, EMT is disturbed by metallic distortion caused by the X-ray device. We plan to make hybrid navigation clinical reality to reduce radiation exposure for patients and surgeons, by compensating EMT error. Methods: Our online compensation strategy exploits cycle-consistent generative adversarial neural networks (CycleGAN). Positions are translated from various bedside environments to their bench equivalents, by adjusting their z-component. Domain-translated points are fine-tuned on the x–y plane to reduce error in the bench domain. We evaluate our compensation approach in a phantom experiment. Results: Since the domain-translation approach maps distorted points to their laboratory equivalents, predictions are consistent among different C-arm environments. Error is successfully reduced in all evaluation environments. Our qualitative phantom experiment demonstrates that our approach generalizes well to an unseen C-arm environment. Conclusion: Adversarial, cycle-consistent training is an explicable, consistent and thus interpretable approach for online error compensation. Qualitative assessment of EMT error compensation gives a glimpse to the potential of our method for rotational error compensation.
30

Bhambhoria, Rohan, Hui Liu, Samuel Dahan and Xiaodan Zhu. "Interpretable Low-Resource Legal Decision Making". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 11819–27. http://dx.doi.org/10.1609/aaai.v36i11.21438.

Full text
Abstract
Over the past several years, legal applications of deep learning have been on the rise. However, as with other high-stakes decision making areas, the requirement for interpretability is of crucial importance. Current models utilized by legal practitioners are more of the conventional machine learning type, wherein they are inherently interpretable, yet unable to harness the performance capabilities of data-driven deep learning models. In this work, we utilize deep learning models in the area of trademark law to shed light on the issue of likelihood of confusion between trademarks. Specifically, we introduce a model-agnostic interpretable intermediate layer, a technique which proves to be effective for legal documents. Furthermore, we utilize weakly supervised learning by means of a curriculum learning strategy, effectively demonstrating the improved performance of a deep learning model. This is in contrast to the conventional models which are only able to utilize the limited number of expensive samples manually annotated by legal experts. Although the methods presented in this work tackle the task of risk of confusion for trademarks, it is straightforward to extend them to other fields of law, or more generally, to other similar high-stakes application scenarios.
31

Whiteway, Matthew R., Dan Biderman, Yoni Friedman, Mario Dipoppa, E. Kelly Buchanan, Anqi Wu, John Zhou et al. "Partitioning variability in animal behavioral videos using semi-supervised variational autoencoders". PLOS Computational Biology 17, no. 9 (September 22, 2021): e1009439. http://dx.doi.org/10.1371/journal.pcbi.1009439.

Full text
Abstract
Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.
32

Walter, Nils Philipp, Jonas Fischer and Jilles Vreeken. "Finding Interpretable Class-Specific Patterns through Efficient Neural Search". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 9062–70. http://dx.doi.org/10.1609/aaai.v38i8.28756.

Full text
Abstract
Discovering patterns in data that best describe the differences between classes allows to hypothesize and reason about class-specific mechanisms. In molecular biology, for example, these bear the promise of advancing the understanding of cellular processes differing between tissues or diseases, which could lead to novel treatments. To be useful in practice, methods that tackle the problem of finding such differential patterns have to be readily interpretable by domain experts, and scalable to the extremely high-dimensional data. In this work, we propose a novel, inherently interpretable binary neural network architecture Diffnaps that extracts differential patterns from data. Diffnaps is scalable to hundreds of thousands of features and robust to noise, thus overcoming the limitations of current state-of-the-art methods in large-scale applications such as in biology. We show on synthetic and real world data, including three biological applications, that unlike its competitors, Diffnaps consistently yields accurate, succinct, and interpretable class descriptions.
33

Meng, Fan. "Creating Interpretable Data-Driven Approaches for Tropical Cyclones Forecasting". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12892–93. http://dx.doi.org/10.1609/aaai.v36i11.21583.

Full text
Abstract
Tropical cyclones (TC) are extreme weather phenomena that bring heavy disasters to humans. Existing forecasting techniques contain computationally intensive dynamical models and statistical methods with complex inputs, both of which have bottlenecks in intensity forecasting, and we aim to create data-driven methods to break this forecasting bottleneck. The research goal of my PhD topic is to introduce novel methods to provide accurate and trustworthy forecasting of TC by developing interpretable machine learning models to analyze the characteristics of TC from multiple sources of data such as satellite remote sensing and observations.
34

Wang, Min, Steven M. Kornblau and Kevin R. Coombes. "Decomposing the Apoptosis Pathway Into Biologically Interpretable Principal Components". Cancer Informatics 17 (January 1, 2018): 117693511877108. http://dx.doi.org/10.1177/1176935118771082.

Full text
Abstract
Principal component analysis (PCA) is one of the most common techniques in the analysis of biological data sets, but applying PCA raises 2 challenges. First, one must determine the number of significant principal components (PCs). Second, because each PC is a linear combination of genes, it rarely has a biological interpretation. Existing methods to determine the number of PCs are either subjective or computationally extensive. We review several methods and describe a new R package, PCDimension, that implements additional methods, the most important being an algorithm that extends and automates a graphical Bayesian method. Using simulations, we compared the methods. Our newly automated procedure is competitive with the best methods when considering both accuracy and speed and is the most accurate when the number of objects is small compared with the number of attributes. We applied the method to a proteomics data set from patients with acute myeloid leukemia. Proteins in the apoptosis pathway could be explained using 6 PCs. By clustering the proteins in PC space, we were able to replace the PCs by 6 “biological components,” 3 of which could be immediately interpreted from the current literature. We expect this approach combining PCA with clustering to be widely applicable.
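A minimal sketch of the overall recipe described above, retaining a number of components and then clustering the variables (here, proteins) by their loadings in PC space, using a crude explained-variance cutoff in place of the Bayesian procedure implemented in the authors' R package PCDimension. The data are synthetic placeholders.

```python
# Hedged sketch: choose PCs by a 90% explained-variance cutoff (a stand-in criterion),
# then cluster variables by their loadings to form interpretable groups.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
data = rng.normal(size=(80, 30))                 # stand-in: 80 samples x 30 proteins

Z = StandardScaler().fit_transform(data)
pca = PCA().fit(Z)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_pc = int(np.searchsorted(cum_var, 0.90) + 1)   # crude cutoff, not the Bayesian criterion

# Each protein's coordinates in PC space are its loadings on the retained components.
loadings = pca.components_[:n_pc].T              # shape: (n_proteins, n_pc)
clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(loadings)
print("retained PCs:", n_pc, "| proteins per cluster:", np.bincount(clusters))
```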
35

Weiss, S. M. and N. Indurkhya. "Rule-based Machine Learning Methods for Functional Prediction". Journal of Artificial Intelligence Research 3 (December 1, 1995): 383–403. http://dx.doi.org/10.1613/jair.199.

Full text
Abstract
We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance.
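As a loose, modern stand-in for this style of compact rule-based function prediction (plainly not the paper's ordered-DNF learner), a shallow regression tree can be fitted and its branches printed as readable if-then rules:

```python
# Hedged sketch: interpretable rule-like regression via a shallow decision tree.
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree, feature_names=list(X.columns)))   # compact if-then rules
print("held-out R^2:", round(tree.score(X_te, y_te), 2))
```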
36

Feng, Aosong, Chenyu You, Shiqiang Wang and Leandros Tassiulas. "KerGNNs: Interpretable Graph Neural Networks with Graph Kernels". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6614–22. http://dx.doi.org/10.1609/aaai.v36i6.20615.

Full text
Abstract
Graph kernels are historically the most widely-used technique for graph classification tasks. However, these methods suffer from limited performance because of the hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks due to their superior performance. Most GNNs are based on Message Passing Neural Network (MPNN) frameworks. However, recent studies show that MPNNs can not exceed the power of the Weisfeiler-Lehman (WL) algorithm in graph isomorphism test. To address the limitations of existing graph kernel and GNN methods, in this paper, we propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs), which integrates graph kernels into the message passing process of GNNs. Inspired by convolution filters in convolutional neural networks (CNNs), KerGNNs adopt trainable hidden graphs as graph filters which are combined with subgraphs to update node embeddings using graph kernels. In addition, we show that MPNNs can be viewed as special cases of KerGNNs. We apply KerGNNs to multiple graph-related tasks and use cross-validation to make fair comparisons with benchmarks. We show that our method achieves competitive performance compared with existing state-of-the-art methods, demonstrating the potential to increase the representation ability of GNNs. We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves the model interpretability compared with conventional GNN models.
37

Wang, Yulong, Xiaolu Zhang, Xiaolin Hu, Bo Zhang and Hang Su. "Dynamic Network Pruning with Interpretable Layerwise Channel Selection". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6299–306. http://dx.doi.org/10.1609/aaai.v34i04.6098.

Full text
Abstract
Dynamic network pruning achieves runtime acceleration by dynamically determining the inference paths based on different inputs. However, previous methods directly generate continuous decision values for each weight channel, which cannot reflect a clear and interpretable pruning process. In this paper, we propose to explicitly model the discrete weight channel selections, which encourages more diverse weights utilization, and achieves more sparse runtime inference paths. Meanwhile, with the help of interpretable layerwise channel selections in the dynamic network, we can visualize the network decision paths explicitly for model interpretability. We observe that there are clear differences in the layerwise decisions between normal and adversarial examples. Therefore, we propose a novel adversarial example detection algorithm by discriminating the runtime decision features. Experiments show that our dynamic network achieves higher prediction accuracy under the similar computing budgets on CIFAR10 and ImageNet datasets compared to traditional static pruning methods and other dynamic pruning approaches. The proposed adversarial detection algorithm can significantly improve the state-of-the-art detection rate across multiple attacks, which provides an opportunity to build an interpretable and robust model.
38

Hase, Peter, Chaofan Chen, Oscar Li and Cynthia Rudin. "Interpretable Image Recognition with Hierarchical Prototypes". Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 32–40. http://dx.doi.org/10.1609/hcomp.v7i1.5265.

Full text
Abstract
Vision models are interpretable when they classify objects on the basis of features that a person can directly understand. Recently, methods relying on visual feature prototypes have been developed for this purpose. However, in contrast to how humans categorize objects, these approaches have not yet made use of any taxonomical organization of class labels. With such an approach, for instance, we may see why a chimpanzee is classified as a chimpanzee, but not why it was considered to be a primate or even an animal. In this work we introduce a model that uses hierarchically organized prototypes to classify objects at every level in a predefined taxonomy. Hence, we may find distinct explanations for the prediction an image receives at each level of the taxonomy. The hierarchical prototypes enable the model to perform another important task: interpretably classifying images from previously unseen classes at the level of the taxonomy to which they correctly relate, e.g. classifying a hand gun as a weapon, when the only weapons in the training data are rifles. With a subset of ImageNet, we test our model against its counterpart black-box model on two tasks: 1) classification of data from familiar classes, and 2) classification of data from previously unseen classes at the appropriate level in the taxonomy. We find that our model performs approximately as well as its counterpart black-box model while allowing for each classification to be interpreted.
39

Ghanem, Souhila, Raphaël Couturier, and Pablo Gregori. "An Accurate and Easy to Interpret Binary Classifier Based on Association Rules Using Implication Intensity and Majority Vote". Mathematics 9, no. 12 (June 8, 2021): 1315. http://dx.doi.org/10.3390/math9121315.

Abstract
In supervised learning, classifiers range from simpler, more interpretable and generally less accurate ones (e.g., CART, C4.5, J48) to more complex, less interpretable and more accurate ones (e.g., neural networks, SVMs). Within this tradeoff between interpretability and accuracy, we propose a new classifier based on association rules, that is, one that is both easy to interpret and competitively accurate. To illustrate this proposal, its performance is compared to other widely used methods on six open-access datasets.
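As a minimal sketch of a rule-based classifier decided by majority vote, the snippet below assumes the association rules have already been mined and filtered by a rule-quality measure such as implication intensity (not computed here); the itemized features and thresholds are made up for illustration.

```python
from collections import Counter

def classify_majority_vote(instance_items, rules, default_class):
    """Sketch: `rules` is a list of (antecedent_itemset, predicted_class) pairs,
    assumed pre-filtered by a quality measure (e.g., implication intensity).
    The instance receives the class predicted by the majority of firing rules."""
    votes = Counter(
        cls for antecedent, cls in rules
        if antecedent.issubset(instance_items)   # rule fires on this instance
    )
    if not votes:
        return default_class                     # no rule matches: fall back
    return votes.most_common(1)[0][0]

# Usage sketch with hypothetical itemized features:
rules = [({"age>60", "smoker"}, 1), ({"age<=60"}, 0), ({"smoker"}, 1)]
print(classify_majority_vote({"age>60", "smoker", "male"}, rules, default_class=0))  # -> 1
```

Because each prediction is backed by an explicit list of firing rules, the decision can be inspected directly, which is the interpretability benefit the abstract emphasizes.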
40

Eiras-Franco, Carlos, Bertha Guijarro-Berdiñas, Amparo Alonso-Betanzos, and Antonio Bahamonde. "Interpretable Market Segmentation on High Dimension Data". Proceedings 2, no. 18 (September 17, 2018): 1171. http://dx.doi.org/10.3390/proceedings2181171.

Abstract
Obtaining relevant information from the vast amount of data generated by interactions in a market or, more generally, from a dyadic dataset is a broad problem of great interest both for industry and academia. At the same time, the interpretability of machine learning algorithms is becoming increasingly relevant, and even a legal requirement, which further increases the demand for such algorithms. In this work we propose a quality measure that factors in the interpretability of results. Additionally, we present a grouping algorithm for dyadic data that returns results with a level of interpretability selected by the user and that is capable of handling large volumes of data. Experiments show that the accuracy of the results is on par with traditional methods, as well as the method's scalability.
41

Pulkkinen, Pietari, and Hannu Koivisto. "Identification of interpretable and accurate fuzzy classifiers and function estimators with hybrid methods". Applied Soft Computing 7, no. 2 (March 2007): 520–33. http://dx.doi.org/10.1016/j.asoc.2006.11.001.

42

Liu, Yuekai, Tianyang Wang, and Fulei Chu. "Hybrid machine condition monitoring based on interpretable dual tree methods using Wasserstein metrics". Expert Systems with Applications 235 (January 2024): 121104. http://dx.doi.org/10.1016/j.eswa.2023.121104.

43

Munir, Nimra, Ross McMorrow, Konrad Mulrennan, Darren Whitaker, Seán McLoone, Minna Kellomäki, Elina Talvitie, Inari Lyyra, and Marion McAfee. "Interpretable Machine Learning Methods for Monitoring Polymer Degradation in Extrusion of Polylactic Acid". Polymers 15, no. 17 (August 28, 2023): 3566. http://dx.doi.org/10.3390/polym15173566.

Abstract
This work investigates real-time monitoring of extrusion-induced degradation in different grades of PLA across a range of process conditions and machine set-ups. Data on machine settings together with in-process sensor data, including temperature, pressure, and near-infrared (NIR) spectra, are used as inputs to predict the molecular weight and mechanical properties of the product. Many soft sensor approaches based on complex spectral data are essentially ‘black-box’ in nature, which can limit industrial acceptability. Hence, the focus here is on identifying an optimal approach to developing interpretable models while achieving high predictive accuracy and robustness across different process settings. The performance of a Recursive Feature Elimination (RFE) approach was compared to more common dimension reduction and regression approaches including Partial Least Squares (PLS), iterative PLS (i-PLS), Principal Component Regression (PCR), ridge regression, Least Absolute Shrinkage and Selection Operator (LASSO), and Random Forest (RF). It is shown that for medical-grade PLA processed under moisture-controlled conditions, accurate prediction of molecular weight is possible over a wide range of process conditions and different machine settings (different nozzle types for downstream fibre spinning) with an RFE-RF algorithm. Similarly, for the prediction of yield stress, RFE-RF achieved excellent predictive performance, outperforming the other approaches in terms of simplicity, interpretability, and accuracy. The features selected by the RFE model provide important insights into the process. It was found that the change in molecular weight was not an important factor affecting the mechanical properties of the PLA, which are primarily related to the pressure and temperature at the later stages of the extrusion process. The temperature at the extruder exit was also the most important predictor of degradation of the polymer molecular weight, highlighting the importance of accurate melt temperature control in the process. RFE not only outperforms more established soft sensor methods, but also has significant advantages in terms of computational efficiency, simplicity, and interpretability. RFE-based soft sensors are promising for better quality control in processing thermally sensitive polymers such as PLA, in particular demonstrating for the first time the ability to monitor molecular weight degradation during processing across various machine settings.
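A hedged sketch of an RFE-RF soft sensor of the kind described above, using scikit-learn's RFE wrapper around a random forest; the data shapes, feature counts, and synthetic target are placeholders, not the paper's dataset or preprocessing.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

# Placeholder data: in the application above, the columns would be NIR spectral
# channels plus temperature/pressure/machine settings, and y the molecular weight.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))
y = 2.0 * X[:, 10] - X[:, 50] + rng.normal(scale=0.1, size=120)

rf = RandomForestRegressor(n_estimators=100, random_state=0)
selector = RFE(estimator=rf, n_features_to_select=10, step=20)
selector.fit(X, y)

# The surviving feature subset is the interpretable part of the soft sensor.
print("selected feature indices:", np.flatnonzero(selector.support_))
print("cross-validated R^2:",
      cross_val_score(selector, X, y, cv=5, scoring="r2").mean().round(3))
```

Inspecting `selector.support_` (and the random forest's feature importances) is what allows conclusions such as "extruder-exit temperature dominates molecular weight degradation" to be read directly off the model.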
44

Qiao, Zuqiang, Shengzhi Dong, Qing Li, Xiangming Lu, Renjie Chen, Shuai Guo, Aru Yan, and Wei Li. "Performance prediction models for sintered NdFeB using machine learning methods and interpretable studies". Journal of Alloys and Compounds 963 (November 2023): 171250. http://dx.doi.org/10.1016/j.jallcom.2023.171250.

45

Ragazzo, Michele, Stefano Melchiorri, Laura Manzo, Valeria Errichiello, Giulio Puleri, Fabio Nicastro, and Emiliano Giardina. "Comparative Analysis of ANDE 6C Rapid DNA Analysis System and Traditional Methods". Genes 11, no. 5 (May 22, 2020): 582. http://dx.doi.org/10.3390/genes11050582.

Abstract
Rapid DNA analysis is an ultrafast and fully automated DNA-typing system that can produce interpretable genetic profiles from biological samples within 90 minutes. This “swab in, profile out” method comprises DNA extraction, amplification by multiplex PCR, and separation and detection of DNA fragments by capillary electrophoresis. The aim of this study was to validate the Accelerated Nuclear DNA Equipment (ANDE) 6C system as a typing method for reference samples according to the ISO/IEC 17025 standard. Here, we report the evaluation of the validity and reproducibility of results by comparing the genetic profiles generated by the ANDE 6C system with those generated by standard technologies. A total of 104 buccal swabs were analyzed both with the ANDE 6C technology and with the traditional method (DNA extraction and quantification, amplification, and separation by capillary electrophoresis). Positive typing was observed in 97% of cases for the ANDE 6C technology, with only three buccal swabs failing to reveal interpretable signals. Concordance was determined by comparing the allele calls generated by ANDE 6C and conventional technology. Comparison of 2800 genotypes revealed a concordance rate of 99.96%. These results met the ISO/IEC 17025 requirements, enabling us to receive accreditation for this method. Finally, rapid technology has reached a level of reliability that makes its use in forensic genetics laboratories a reality.
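For illustration only, a tiny sketch of how a genotype concordance rate like the 99.96% reported above could be computed from two sets of allele calls; the locus names and calls are toy values, and real validation follows the laboratory's accredited procedure.

```python
def concordance_rate(calls_a, calls_b):
    """Sketch: fraction of shared loci where two typing methods report the same
    genotype (unordered allele pair). Inputs: dicts mapping locus -> (a1, a2)."""
    shared = set(calls_a) & set(calls_b)
    same = sum(
        1 for locus in shared
        if sorted(calls_a[locus]) == sorted(calls_b[locus])
    )
    return same / len(shared) if shared else float("nan")

# Usage sketch with toy STR calls from the two workflows:
ande = {"D3S1358": (15, 16), "TH01": (6, 9.3)}
caps = {"D3S1358": (16, 15), "TH01": (6, 9.3)}
print(concordance_rate(ande, caps))   # -> 1.0
```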
46

Wu, Bozhi, Sen Chen, Cuiyun Gao, Lingling Fan, Yang Liu, Weiping Wen, and Michael R. Lyu. "Why an Android App Is Classified as Malware". ACM Transactions on Software Engineering and Methodology 30, no. 2 (March 2021): 1–29. http://dx.doi.org/10.1145/3423096.

Abstract
Machine learning (ML)-based approaches are considered among the most promising techniques for Android malware detection and have achieved high accuracy by leveraging commonly used features. In practice, most ML classifiers only provide a binary label to mobile users and app security analysts. However, stakeholders in both academia and industry are more interested in the reason why an app is classified as malicious. This belongs to the research area of interpretable ML, but in a specific research domain (i.e., mobile malware detection). Although several interpretable ML methods have been shown to explain final classification results in many cutting-edge AI-based research fields, until now there has been no study interpreting why an app is classified as malware or unveiling the domain-specific challenges. In this article, to fill this gap, we propose a novel and interpretable ML-based approach (named XMal) to classify malware with high accuracy and explain the classification result at the same time. (1) The classification phase of XMal combines a multi-layer perceptron with an attention mechanism and pinpoints the key features most related to the classification result. (2) The interpreting phase aims at automatically producing natural language descriptions of the core malicious behaviors within apps. We evaluate the behavior description results by leveraging a human study and an in-depth quantitative analysis. Moreover, we further compare XMal with existing interpretable ML-based methods (i.e., Drebin and LIME) to demonstrate the effectiveness of XMal. We find that XMal is able to reveal the malicious behaviors more accurately. Additionally, our experiments show that XMal can also interpret the reason why some samples are misclassified by ML classifiers. Our study peeks into interpretable ML through the research of Android malware detection and analysis.
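A minimal sketch of the attention-over-features idea in the classification phase: a feature-wise attention vector weights the binary input indicators and doubles as the per-app explanation. Layer sizes and names are illustrative assumptions, and this is not the authors' XMal code (which also includes a separate description-generation phase).

```python
import torch
import torch.nn as nn

class AttentionMalwareClassifier(nn.Module):
    """Sketch of an attention-weighted MLP over permission/API-call indicators.
    The attention weights rank which features drove the malware decision."""

    def __init__(self, num_features, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(num_features, num_features),
                                  nn.Softmax(dim=1))
        self.mlp = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # x: (B, num_features) binary indicators of permissions and API calls.
        weights = self.attn(x)             # (B, num_features), sums to 1 per app
        logit = self.mlp(x * weights)      # attention-weighted features
        return logit.squeeze(1), weights   # weights highlight the key features
```

Sorting `weights` for a flagged app yields the ranked feature list that a description-generation step could then turn into a human-readable explanation.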
47

Xiang, Ziyu, Mingzhou Fan, Guillermo Vázquez Tovar, William Trehern, Byung-Jun Yoon, Xiaofeng Qian, Raymundo Arroyave, and Xiaoning Qian. "Physics-constrained Automatic Feature Engineering for Predictive Modeling in Materials Science". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10414–21. http://dx.doi.org/10.1609/aaai.v35i12.17247.

Abstract
Automatic Feature Engineering (AFE) aims to extract useful knowledge for interpretable predictions given data for a machine learning task. Here, we develop AFE to extract dependency relationships that can be interpreted with functional formulas, in order to discover physical meaning or new hypotheses for the problems of interest. We focus on materials science applications, where interpretable predictive modeling may provide a principled understanding of materials systems and guide new materials discovery. It is often computationally prohibitive to exhaust all potential relationships to construct and search the whole feature space in order to identify interpretable and predictive features. We develop and evaluate new AFE strategies that explore a feature generation tree (FGT) with a deep Q-network (DQN) for scalable and efficient exploration policies. The developed DQN-based AFE strategies are benchmarked against existing AFE methods on several materials science datasets.
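The sketch below illustrates one expansion step of a feature generation tree scored by a small value network, as a simplified stand-in for the DQN-based exploration policy described above; the operators, state encoding, and epsilon-greedy selection are assumptions, and the network training loop is omitted.

```python
import numpy as np
import torch
import torch.nn as nn

# Candidate operators that turn existing columns into new features.
OPS = {
    "square": lambda a, b: a ** 2,
    "product": lambda a, b: a * b,
    "ratio": lambda a, b: a / (b + 1e-8),
}

# Untrained value network used only to show the scoring interface.
q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

def encode_state(X):
    # Toy state summary of the current feature matrix (mean/std/min/max).
    return torch.tensor([X.mean(), X.std(), X.min(), X.max()], dtype=torch.float32)

def expand(X, epsilon=0.2):
    """Generate child feature sets of the current tree node and pick one
    epsilon-greedily according to the value network's score."""
    children = []
    for name, op in OPS.items():
        i, j = np.random.randint(X.shape[1], size=2)
        new_col = op(X[:, i], X[:, j]).reshape(-1, 1)
        children.append((name, np.hstack([X, new_col])))
    if np.random.rand() < epsilon:
        return children[np.random.randint(len(children))]
    scores = [q_net(encode_state(Xc)).item() for _, Xc in children]
    return children[int(np.argmax(scores))]

X = np.random.rand(50, 3)
op_name, X_next = expand(X)
print(op_name, X_next.shape)   # e.g. ('ratio', (50, 4))
```

In a full system the value network would be trained from the downstream model's validation performance, so that the tree expands toward interpretable formulas that actually improve prediction.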
48

Abafogi, Abdo Ababor. "Survey on Interpretable Semantic Textual Similarity, and its Applications". International Journal of Innovative Technology and Exploring Engineering 10, no. 3 (January 10, 2021): 14–18. http://dx.doi.org/10.35940/ijitee.b8294.0110321.

Abstract
Both semantic representation and related natural language processing (NLP) tasks have become more popular with the introduction of distributional semantics. Semantic textual similarity (STS) is an NLP task that determines the similarity of two short texts (sentences) based on their meanings. Interpretable STS adds an explanation of the semantic similarity between the two texts. Giving such interpretations is possible for humans, but constructing computational models that explain at a human level is challenging. The interpretable STS task outputs a continuous value on the scale [0, 5] that represents the strength of the semantic relation between a pair of sentences, where 0 means no similarity and 5 means complete similarity. This paper reviews the methods used for interpretable STS computation, classifies them, specifies existing limitations, and gives directions for future work. The survey is organized into nine sections: an introduction, chunking techniques and available tools, rule-based approaches, machine learning approaches, neural network approaches, hybrid approaches, applications of interpretable STS, and finally conclusions and future directions.
49

Nan, Tianlong, Yuan Gao, and Christian Kroer. "Fast and Interpretable Dynamics for Fisher Markets via Block-Coordinate Updates". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (June 26, 2023): 5832–40. http://dx.doi.org/10.1609/aaai.v37i5.25723.

Abstract
We consider the problem of large-scale Fisher market equilibrium computation through scalable first-order optimization methods. It is well-known that market equilibria can be captured using structured convex programs such as the Eisenberg-Gale and Shmyrev convex programs. Highly performant deterministic full-gradient first-order methods have been developed for these programs. In this paper, we develop new block-coordinate first-order methods for computing Fisher market equilibria, and show that these methods have interpretations as tâtonnement-style or proportional response-style dynamics where either buyers or items show up one at a time. We reformulate these convex programs and solve them using proximal block coordinate descent methods, a class of methods that update only a small number of coordinates of the decision variable in each iteration. Leveraging recent advances in the convergence analysis of these methods and structures of the equilibrium-capturing convex programs, we establish fast convergence rates of these methods.
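As a concrete example of a proportional-response-style dynamic for a linear Fisher market, here is a synchronous sketch in which all buyers update at once; the paper's block-coordinate methods instead update one buyer or one item per iteration, so this snippet is illustrative background rather than the authors' algorithm.

```python
import numpy as np

def proportional_response(V, B, iters=200):
    """Sketch of synchronous proportional response dynamics for a linear
    Fisher market: V[i, j] is buyer i's (positive) valuation of item j and
    B[i] the budget. Each buyer re-splits its budget in proportion to the
    utility earned from each item in the previous round."""
    n, m = V.shape
    bids = np.ones((n, m)) * B[:, None] / m          # start with uniform bids
    for _ in range(iters):
        prices = bids.sum(axis=0)                    # p_j = sum_i b_ij
        alloc = bids / prices                        # x_ij = b_ij / p_j
        utils = (V * alloc).sum(axis=1, keepdims=True)
        bids = B[:, None] * V * alloc / utils        # proportional response
    return prices, alloc

V = np.array([[1.0, 2.0], [3.0, 1.0]])
B = np.array([1.0, 1.0])
prices, alloc = proportional_response(V, B)
print(prices.round(3), alloc.round(3))
```

The block-coordinate variants studied in the paper can be read as running such an update for only one buyer (or one item) at a time, which is what gives them their "agents show up one at a time" interpretation.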
50

Zhao, Mingyang, Junchang Xin, Zhongyang Wang, Xinlei Wang, and Zhiqiong Wang. "Interpretable Model Based on Pyramid Scene Parsing Features for Brain Tumor MRI Image Segmentation". Computational and Mathematical Methods in Medicine 2022 (January 31, 2022): 1–10. http://dx.doi.org/10.1155/2022/8000781.

Abstract
Because of the black-box nature of convolutional neural networks, computer-aided diagnosis methods based on deep learning are usually poorly interpretable. The diagnostic results obtained by such unexplained methods therefore struggle to gain the trust of patients and doctors, which limits their application in the medical field. To solve this problem, an interpretable deep learning image segmentation framework is proposed in this paper for processing brain tumor magnetic resonance images. A gradient-based class activation mapping method is introduced into a pyramid-structure-based segmentation model to explain it visually. The pyramid structure builds global context information from features after multiple pooling layers to improve image segmentation performance, and class activation mapping is used to visualize the features attended to by each level of the pyramid structure, providing an interpretation of PSPNet. After training and testing the model on the public dataset BraTS2018, several sets of visualization results were obtained. Analysis of these visualizations demonstrates the effectiveness of the pyramid structure in the brain tumor segmentation task, and some improvements are made to the pyramid model based on the shortcomings revealed by the visualizations. In summary, the interpretable brain tumor image segmentation method proposed in this paper explains the role of the pyramid structure in brain tumor image segmentation, offers a direction for applying interpretable methods to brain tumor segmentation, and has practical value for evaluating and optimizing brain tumor segmentation models.
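A hedged sketch of gradient-based class activation mapping applied to one layer of a segmentation network, using forward and backward hooks; the choice of target score (summed per-pixel logits of one class) and the normalization are assumptions, and `model`/`layer` stand for any PSPNet-style network and one of its pyramid-level modules.

```python
import torch

def grad_cam(model, layer, image, target_channel):
    """Sketch of Grad-CAM for a segmentation network: hook one feature map,
    backpropagate the summed logit of the target class, and weight the
    activations by the spatially averaged gradients. Illustrative only."""
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    logits = model(image)                        # (1, C, H, W) per-pixel class logits
    score = logits[:, target_channel].sum()      # total evidence for the target class
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    A, dA = feats[0], grads[0]                   # (1, K, h, w) activations and gradients
    weights = dA.mean(dim=(2, 3), keepdim=True)  # channel importance
    cam = torch.relu((weights * A).sum(dim=1))   # (1, h, w) heat map
    return cam / (cam.max() + 1e-8)
```

Running this on each pyramid level and overlaying the resulting heat maps on the MRI slice gives the kind of per-level visualization the abstract uses to explain, and then refine, the pyramid structure.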