Journal articles on the topic "Black-Box Classifier"

To see the other types of publications on this topic, follow the link: Black-Box Classifier.

Cite your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Black-Box Classifier".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the online abstract, if such details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lee, Hansoo, and Sungshin Kim. "Black-Box Classifier Interpretation Using Decision Tree and Fuzzy Logic-Based Classifier Implementation." International Journal of Fuzzy Logic and Intelligent Systems 16, no. 1 (March 31, 2016): 27–35. http://dx.doi.org/10.5391/ijfis.2016.16.1.27.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Rajabi, Arezoo, Mahdieh Abbasi, Rakesh B. Bobba, and Kimia Tajik. "Adversarial Images Against Super-Resolution Convolutional Neural Networks for Free." Proceedings on Privacy Enhancing Technologies 2022, no. 3 (July 2022): 120–39. http://dx.doi.org/10.56553/popets-2022-0065.

Abstract:
Super-Resolution Convolutional Neural Networks (SRCNNs), with their ability to generate high-resolution images from low-resolution counterparts, exacerbate the privacy concerns emerging from automated Convolutional Neural Networks (CNNs)-based image classifiers. In this work, we hypothesize and empirically show that adversarial examples learned over CNN image classifiers can survive processing by SRCNNs and lead them to generate poor quality images that are hard to classify correctly. We demonstrate that a user with a small CNN is able to learn adversarial noise without requiring any customization for SRCNNs and thwart the privacy threat posed by a pipeline of SRCNN and CNN classifiers (95.8% fooling rate for Fast Gradient Sign with ε = 0.03). We evaluate the survivability of adversarial images generated in both black-box and white-box settings and show that black-box adversarial learning (when both CNN classifier and SRCNN are unknown) is at least as effective as white-box adversarial learning (when only the CNN classifier is known). We also assess our hypothesis on adversarially robust CNNs and observe that the super-resolved white-box adversarial examples can fool these CNNs more than 71.5% of the time.
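A minimal, hedged sketch of the Fast Gradient Sign method the abstract above evaluates (ε = 0.03), shown here on a toy logistic-regression classifier rather than the paper's CNN/SRCNN pipeline; the weights and inputs are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps=0.03):
    """One Fast Gradient Sign step: move x in the direction that
    increases the cross-entropy loss for the true label y."""
    p = sigmoid(w @ x + b)        # model's predicted probability of class 1
    grad_x = (p - y) * w          # analytic d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -3.0])         # toy classifier weights
b = 0.0
x = np.array([0.5, 0.2])          # clean input
x_adv = fgsm(x, w, b, y=1.0)

# The perturbation is bounded by eps in every coordinate.
print(round(float(np.abs(x_adv - x).max()), 3))
```

The same ε-bounded sign perturbation, crafted against a CNN, is what the paper reports a 95.8% fooling rate for after the image passes through an SRCNN-plus-CNN pipeline.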
3

Ji, Disi, Robert L. Logan, Padhraic Smyth, and Mark Steyvers. "Active Bayesian Assessment of Black-Box Classifiers." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7935–44. http://dx.doi.org/10.1609/aaai.v35i9.16968.

Abstract:
Recent advances in machine learning have led to increased deployment of black-box classifiers across a wide variety of applications. In many such situations there is a critical need to both reliably assess the performance of these pre-trained models and to perform this assessment in a label-efficient manner (given that labels may be scarce and costly to collect). In this paper, we introduce an active Bayesian approach for assessment of classifier performance to satisfy the desiderata of both reliability and label-efficiency. We begin by developing inference strategies to quantify uncertainty for common assessment metrics such as accuracy, misclassification cost, and calibration error. We then propose a general framework for active Bayesian assessment using inferred uncertainty to guide efficient selection of instances for labeling, enabling better performance assessment with fewer labels. We demonstrate significant gains from our proposed active Bayesian approach via a series of systematic empirical experiments assessing the performance of modern neural classifiers (e.g., ResNet and BERT) on several standard image and text classification datasets.
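The core of the Bayesian assessment idea above can be sketched with a Beta posterior over the classifier's unknown accuracy, with posterior uncertainty guiding which instances to label next. The group names and uniform prior below are illustrative assumptions, not the authors' implementation:

```python
class BetaAccuracy:
    """Beta posterior over a black-box classifier's unknown accuracy."""

    def __init__(self, a=1.0, b=1.0):      # Beta(1, 1) = uniform prior
        self.a, self.b = a, b

    def update(self, correct):
        if correct:
            self.a += 1.0                  # one more correct prediction seen
        else:
            self.b += 1.0                  # one more mistake seen

    def mean(self):
        return self.a / (self.a + self.b)

    def variance(self):                    # posterior uncertainty
        n = self.a + self.b
        return self.a * self.b / (n * n * (n + 1.0))

# Active assessment: keep one posterior per group of instances and
# request the next label from the group we are most uncertain about.
groups = {"easy": BetaAccuracy(), "hard": BetaAccuracy()}
for correct in (True, True, True, True):
    groups["easy"].update(correct)
groups["hard"].update(False)

next_group = max(groups, key=lambda g: groups[g].variance())
print(next_group)
```

Selecting by posterior variance naturally spends the scarce labeling budget on the groups whose accuracy is least pinned down.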
4

Tran, Thien Q., Kazuto Fukuchi, Youhei Akimoto, and Jun Sakuma. "Unsupervised Causal Binary Concepts Discovery with VAE for Black-Box Model Explanation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9614–22. http://dx.doi.org/10.1609/aaai.v36i9.21195.

Abstract:
We aim to explain a black-box classifier with the form: "data X is classified as class Y because X has A, B and does not have C" in which A, B, and C are high-level concepts. The challenge is that we have to discover in an unsupervised manner a set of concepts, i.e., A, B and C, that is useful for explaining the classifier. We first introduce a structural generative model that is suitable to express and discover such concepts. We then propose a learning process that simultaneously learns the data distribution and encourages certain concepts to have a large causal influence on the classifier output. Our method also allows easy integration of user's prior knowledge to induce high interpretability of concepts. Finally, using multiple datasets, we demonstrate that the proposed method can discover useful concepts for explanation in this form.
5

Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks." Applied Sciences 10, no. 20 (October 14, 2020): 7168. http://dx.doi.org/10.3390/app10207168.

Abstract:
Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white box attacks. In black-box environments, adversaries have limited knowledge regarding the target model. This makes it difficult to estimate gradients for crafting adversarial examples, such that powerful white-box algorithms cannot be directly applied to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craft adversarial examples using the substitute models instead of the unknown target model. The substitute models repeat the query process and are trained by observing labels from the target model’s responses to queries. However, emulating a target model usually requires numerous queries because new DNNs are trained from the beginning. In this study, we propose a new training method for substitute models to minimize the number of queries. We consider the number of queries as an important factor for practical black-box attacks because real-world systems often restrict queries for security and financial purposes. To decrease the number of queries, the proposed method does not emulate the entire target model and only adjusts the partial classification boundary based on a current attack. Furthermore, it does not use queries in the pre-training phase and creates queries only in the retraining phase. The experimental results indicate that the proposed method is effective in terms of the number of queries and attack success ratio against MNIST, VGGFace2, and ImageNet classifiers in query-limited black-box environments. Further, we demonstrate a black-box attack against a commercial classifier, Google AutoML Vision.
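The substitute-model strategy described above can be sketched in one dimension: query the unknown target only for labels, then fit a local decision boundary from the answers. The threshold "models" below are toy stand-ins for the paper's DNNs:

```python
def target(x):
    # The black box: the attacker only observes its output label.
    return 1 if x > 3.0 else 0

def fit_substitute(query_points):
    """Estimate the target's 1-D decision boundary from queried labels."""
    labeled = sorted((x, target(x)) for x in query_points)  # each call = 1 query
    for (x0, y0), (x1, y1) in zip(labeled, labeled[1:]):
        if y0 != y1:                     # label flips between neighbors
            return (x0 + x1) / 2.0       # midpoint as boundary estimate
    return None

boundary = fit_substitute([0.0, 1.5, 2.8, 3.4, 5.0])  # 5 queries total
print(round(boundary, 2))
```

Query-limited attacks like the one in the paper aim to make such a boundary estimate good enough for crafting adversarial examples, while keeping the number of queries (the list above) as small as possible.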
6

Lou, Chenlu, and Xiang Pan. "Detect Black Box Signals with Enhanced Spectrum and Support Vector Classifier." Journal of Physics: Conference Series 1438 (January 2020): 012003. http://dx.doi.org/10.1088/1742-6596/1438/1/012003.

7

Chen, Pengpeng, Hailong Sun, Yongqiang Yang, and Zhijun Chen. "Adversarial Learning from Crowds." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5304–12. http://dx.doi.org/10.1609/aaai.v36i5.20467.

Abstract:
Learning from Crowds (LFC) seeks to induce a high-quality classifier from training instances, which are linked to a range of possible noisy annotations from crowdsourcing workers under their various levels of skills and their own preconditions. Recent studies on LFC focus on designing new methods to improve the performance of the classifier trained from crowdsourced labeled data. To this day, however, there remain under-explored security aspects of LFC systems. In this work, we seek to bridge this gap. We first show that LFC models are vulnerable to adversarial examples---small changes to input data can cause classifiers to make prediction mistakes. Second, we propose an approach, A-LFC for training a robust classifier from crowdsourced labeled data. Our empirical results on three real-world datasets show that the proposed approach can substantially improve the performance of the trained classifier even with the existence of adversarial examples. On average, A-LFC has 10.05% and 11.34% higher test robustness than the state-of-the-art in the white-box and black-box attack settings, respectively.
8

Mahmood, Kaleel, Deniz Gurevin, Marten van Dijk, and Phuoung Ha Nguyen. "Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples." Entropy 23, no. 10 (October 18, 2021): 1359. http://dx.doi.org/10.3390/e23101359.

Abstract:
Many defenses have recently been proposed at venues like NIPS, ICML, ICLR and CVPR. These defenses are mainly focused on mitigating white-box attacks. They do not properly examine black-box attacks. In this paper, we expand upon the analyses of these defenses to include adaptive black-box adversaries. Our evaluation is done on nine defenses including Barrage of Random Transforms, ComDefend, Ensemble Diversity, Feature Distillation, The Odds are Odd, Error Correcting Codes, Distribution Classifier Defense, K-Winner Take All and Buffer Zones. Our investigation is done using two black-box adversarial models and six widely studied adversarial attacks for CIFAR-10 and Fashion-MNIST datasets. Our analyses show most recent defenses (7 out of 9) provide only marginal improvements in security (<25%), as compared to undefended networks. For every defense, we also show the relationship between the amount of data the adversary has at their disposal, and the effectiveness of adaptive black-box attacks. Overall, our results paint a clear picture: defenses need both thorough white-box and black-box analyses to be considered secure. We provide this large scale study and analyses to motivate the field to move towards the development of more robust black-box defenses.
9

Hartono, Pitoyo. "A transparent cancer classifier." Health Informatics Journal 26, no. 1 (December 31, 2018): 190–204. http://dx.doi.org/10.1177/1460458218817800.

Abstract:
Recently, many neural network models have been successfully applied for histopathological analysis, including for cancer classifications. While some of them reach human-expert level accuracy in classifying cancers, most of them have to be treated as black boxes, in that they do not offer an explanation of how they arrived at their decisions. This lack of transparency may hinder the further application of neural networks in realistic clinical settings, where not only the decision but also its explainability is important. This study proposes a transparent neural network that complements its classification decisions with visual information about the given problem. The auxiliary visual information allows the user to some extent understand how the neural network arrives at its decision. The transparency potentially increases the usability of neural networks in realistic histopathological analysis. In the experiment, the accuracy of the proposed neural network is compared against some existing classifiers, and the visual information is compared against some dimensional reduction methods.
10

Masuda, Haruki, Tsunato Nakai, Kota Yoshida, Takaya Kubota, Mitsuru Shiozaki, and Takeshi Fujino. "Black-Box Adversarial Attack against Deep Neural Network Classifier Utilizing Quantized Probability Output." Journal of Signal Processing 24, no. 4 (July 15, 2020): 145–48. http://dx.doi.org/10.2299/jsp.24.145.

11

Park, Dongjin, and Kyungkoo Jun. "Vehicle Plate Detection in Car Black Box Video." Advances in Multimedia 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/7587841.

Abstract:
Internet services that share vehicle black box videos need a way to obfuscate license plates in uploaded videos because of privacy issues. Thus, plate detection is one of the critical functions that such services rely on. Even though various types of detection methods are available, they are not suitable for black box videos because no assumption about size, number of plates, and lighting conditions can be made. We propose a method to detect Korean vehicle plates from black box videos. It works in two stages: the first stage aims to locate a set of candidate plate regions and the second stage identifies only actual plates from candidates by using a support vector machine classifier. The first stage consists of five sequential substeps. At first, it produces candidate regions by combining single character areas and then eliminates candidate regions that fail to meet plate conditions through the remaining substeps. For the second stage, we propose a feature vector that captures the characteristics of plates in texture and color. For performance evaluation, we compiled our dataset which contains 2,627 positive and negative images. The evaluation results show that the proposed method improves accuracy and sensitivity by at least 5% and is 30 times faster compared with an existing method.
12

Kim, Jae Myung, Hyungjin Kim, Chanwoo Park, and Jungwoo Lee. "REST: Performance Improvement of a Black Box Model via RL-Based Spatial Transformation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11262–69. http://dx.doi.org/10.1609/aaai.v34i07.6786.

Abstract:
In recent years, deep neural networks (DNN) have become a highly active area of research, and shown remarkable achievements on a variety of computer vision tasks. DNNs, however, are known to often make overconfident yet incorrect predictions on out-of-distribution samples, which can be a major obstacle to real-world deployments because the training dataset is always limited compared to diverse real-world samples. Thus, it is fundamental to provide guarantees of robustness to the distribution shift between training and test time when we construct DNN models in practice. Moreover, in many cases, deep learning models are deployed as black boxes whose performance has already been optimized for a training dataset, so changing the black box itself can lead to performance degradation. We here study the robustness to geometric transformations in the specific condition where a black-box image classifier is given. We propose an additional learner, REinforcement Spatial Transform learner (REST), that transforms the warped input data into samples regarded as in-distribution by the black-box models. Our work aims to improve robustness by adding a REST module in front of any black box and training only the REST module, without retraining the original black-box model, in an end-to-end manner; i.e., we try to convert real-world data into the training distribution that the black-box model is best suited for. We use a confidence score obtained from the black-box model to determine whether the transformed input is drawn from in-distribution. We empirically show that our method has an advantage in generalization to geometric transformations and sample efficiency.
13

Lin, Zhen, Lucas Glass, M. Brandon Westover, Cao Xiao, and Jimeng Sun. "SCRIB: Set-Classifier with Class-Specific Risk Bounds for Blackbox Models." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7497–505. http://dx.doi.org/10.1609/aaai.v36i7.20714.

Abstract:
Despite deep learning (DL) success in classification problems, DL classifiers do not provide a sound mechanism to decide when to refrain from predicting. Recent works tried to control the overall prediction risk with classification-with-rejection options. However, existing works overlook the different significance of different classes. We introduce Set-classifier with class-specific RIsk Bounds (SCRIB) to tackle this problem, assigning multiple labels to each example. Given the output of a black-box model on the validation set, SCRIB constructs a set-classifier that controls the class-specific prediction risks. The key idea is to reject when the set-classifier returns more than one label. We validated SCRIB on several medical applications, including sleep staging on electroencephalogram (EEG) data, X-ray COVID image classification, and atrial fibrillation detection based on electrocardiogram (ECG) data. SCRIB obtained desirable class-specific risks, which are 35%-88% closer to the target risks than baseline methods.
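The set-classifier-with-rejection idea above can be sketched as follows; the class names and thresholds are hypothetical, and SCRIB itself searches for thresholds that meet class-specific risk targets rather than fixing them by hand:

```python
def set_classify(scores, thresholds):
    """Keep every class whose black-box score clears that class's threshold."""
    return {c for c, s in scores.items() if s >= thresholds[c]}

def accept(label_set):
    """Reject (defer to a human) unless exactly one label survives."""
    return len(label_set) == 1

thresholds = {"wake": 0.6, "rem": 0.5, "nrem": 0.7}   # hypothetical classes

confident = set_classify({"wake": 0.9, "rem": 0.2, "nrem": 0.1}, thresholds)
ambiguous = set_classify({"wake": 0.65, "rem": 0.55, "nrem": 0.1}, thresholds)

print(confident, accept(confident))     # one label survives -> accept
print(ambiguous, accept(ambiguous))     # two labels survive -> reject
```

Per-class thresholds are what let rejection behave differently for classes of different clinical significance, which is the paper's point.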
14

Nauta, Meike, Ricky Walsh, Adam Dubowski, and Christin Seifert. "Uncovering and Correcting Shortcut Learning in Machine Learning Models for Skin Cancer Diagnosis." Diagnostics 12, no. 1 (December 24, 2021): 40. http://dx.doi.org/10.3390/diagnostics12010040.

Abstract:
Machine learning models have been successfully applied for analysis of skin images. However, due to the black box nature of such deep learning models, it is difficult to understand their underlying reasoning. This prevents a human from validating whether the model is right for the right reasons. Spurious correlations and other biases in data can cause a model to base its predictions on such artefacts rather than on the true relevant information. These learned shortcuts can in turn cause incorrect performance estimates and can result in unexpected outcomes when the model is applied in clinical practice. This study presents a method to detect and quantify this shortcut learning in trained classifiers for skin cancer diagnosis, since it is known that dermoscopy images can contain artefacts. Specifically, we train a standard VGG16-based skin cancer classifier on the public ISIC dataset, for which colour calibration charts (elliptical, coloured patches) occur only in benign images and not in malignant ones. Our methodology artificially inserts those patches and uses inpainting to automatically remove patches from images to assess the changes in predictions. We find that our standard classifier partly bases its predictions of benign images on the presence of such a coloured patch. More importantly, by artificially inserting coloured patches into malignant images, we show that shortcut learning results in a significant increase in misdiagnoses, making the classifier unreliable when used in clinical practice. With our results, we, therefore, want to increase awareness of the risks of using black box machine learning models trained on potentially biased datasets. Finally, we present a model-agnostic method to neutralise shortcut learning by removing the bias in the training dataset by exchanging coloured patches with benign skin tissue using image inpainting and re-training the classifier on this de-biased dataset.
15

Mayr, Franz, Sergio Yovine, and Ramiro Visca. "Property Checking with Interpretable Error Characterization for Recurrent Neural Networks." Machine Learning and Knowledge Extraction 3, no. 1 (February 12, 2021): 205–27. http://dx.doi.org/10.3390/make3010010.

Abstract:
This paper presents a novel on-the-fly, black-box, property-checking through learning approach as a means for verifying requirements of recurrent neural networks (RNN) in the context of sequence classification. Our technique steps on a tool for learning probably approximately correct (PAC) deterministic finite automata (DFA). The sequence classifier inside the black-box consists of a Boolean combination of several components, including the RNN under analysis together with requirements to be checked, possibly modeled as RNN themselves. On one hand, if the output of the algorithm is an empty DFA, there is a proven upper bound (as a function of the algorithm parameters) on the probability of the language of the black-box to be nonempty. This implies the property probably holds on the RNN with probabilistic guarantees. On the other, if the DFA is nonempty, it is certain that the language of the black-box is nonempty. This entails the RNN does not satisfy the requirement for sure. In this case, the output automaton serves as an explicit and interpretable characterization of the error. Our approach does not rely on a specific property specification formalism and is capable of handling nonregular languages as well. Besides, it neither explicitly builds individual representations of any of the components of the black-box nor resorts to any external decision procedure for verification. This paper also improves previous theoretical results regarding the probabilistic guarantees of the underlying learning algorithm.
16

Lapid, Raz, Zvika Haramaty, and Moshe Sipper. "An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Convolutional Neural Networks." Algorithms 15, no. 11 (October 31, 2022): 407. http://dx.doi.org/10.3390/a15110407.

Abstract:
Deep neural networks (DNNs) are sensitive to adversarial data in a variety of scenarios, including the black-box scenario, where the attacker is only allowed to query the trained model and receive an output. Existing black-box methods for creating adversarial instances are costly, often using gradient estimation or training a replacement network. This paper introduces the Query-Efficient Evolutionary Attack (QuEry Attack), an untargeted, score-based, black-box attack. QuEry Attack is based on a novel objective function that can be used in gradient-free optimization problems. The attack only requires access to the output logits of the classifier and is thus not affected by gradient masking. No additional information is needed, rendering our method more suitable to real-life situations. We test its performance with three different, commonly used, pretrained image-classification models (Inception-v3, ResNet-50, and VGG-16-BN) against three benchmark datasets: MNIST, CIFAR-10 and ImageNet. Furthermore, we evaluate QuEry Attack’s performance on non-differential transformation defenses and robust models. Our results demonstrate the superior performance of QuEry Attack, both in terms of accuracy score and query efficiency.
17

Combs, Kara, Mary Fendley, and Trevor Bihl. "A Preliminary Look at Heuristic Analysis for Assessing Artificial Intelligence Explainability." WSEAS TRANSACTIONS ON COMPUTER RESEARCH 8 (June 1, 2020): 61–72. http://dx.doi.org/10.37394/232018.2020.8.9.

Abstract:
Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their “black-box” nature. Therefore, eXplainable AI (XAI) approaches to extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks understanding of algorithmic explainability from a human factors’ perspective. This paper presents a repeatable human factors heuristic analysis for XAI with a demonstration on four decision tree classifier algorithms.
18

Alahmed, Shahad, Qutaiba Alasad, Maytham M. Hammood, Jiann-Shiun Yuan, and Mohammed Alawad. "Mitigation of Black-Box Attacks on Intrusion Detection Systems-Based ML." Computers 11, no. 7 (July 20, 2022): 115. http://dx.doi.org/10.3390/computers11070115.

Abstract:
Intrusion detection systems (IDS) are a very vital part of network security, as they can be used to protect the network from illegal intrusions and communications. To detect malicious network traffic, several IDS based on machine learning (ML) methods have been developed in the literature. Machine learning models, however, have recently been shown to be vulnerable to adversarial perturbations, which allow an opponent to crash the system while performing network queries. This motivated us to present a defensive model that uses adversarial training based on generative adversarial networks (GANs) as a defense strategy to offer better protection for the system against adversarial perturbations. The experiment was carried out using random forest as a classifier. In addition, both principal component analysis (PCA) and recursive feature elimination (RFE) techniques were leveraged as feature selection to diminish the dimensionality of the dataset, which enhanced the performance of the model significantly. The proposal was tested on a realistic and recent public network dataset: CSE-CICIDS2018. The simulation results showed that GAN-based adversarial training enhanced the resilience of the IDS model and mitigated the severity of the black-box attack.
19

Rostami, Mehrdad, and Mourad Oussalah. "Cancer prediction using graph-based gene selection and explainable classifier." Finnish Journal of eHealth and eWelfare 14, no. 1 (April 14, 2022): 61–78. http://dx.doi.org/10.23996/fjhw.111772.

Abstract:
Several Artificial Intelligence-based models have been developed for cancer prediction. In spite of the promise of artificial intelligence, there are very few models which bridge the gap between traditional human-centered prediction and the potential future of machine-centered cancer prediction. In this study, an efficient and effective model is developed for gene selection and cancer prediction. Moreover, this study proposes an artificial intelligence decision system to provide physicians with a simple and human-interpretable set of rules for cancer prediction. In contrast to previous deep learning-based cancer prediction models, which are difficult to explain to physicians due to their black-box nature, the proposed prediction model is based on a transparent and explainable decision forest model. The performance of the developed approach is compared to three state-of-the-art cancer prediction methods: TAGA, HPSO, and LL. The reported results on five cancer datasets indicate that the developed model can improve the accuracy of cancer prediction and reduce the execution time.
20

Du, Xiaohu, Jie Yu, Zibo Yi, Shasha Li, Jun Ma, Yusong Tan, and Qinbo Wu. "A Hybrid Adversarial Attack for Different Application Scenarios." Applied Sciences 10, no. 10 (May 21, 2020): 3559. http://dx.doi.org/10.3390/app10103559.

Abstract:
Adversarial attack against natural language has been a hot topic in the field of artificial intelligence security in recent years. It is mainly to study the methods and implementation of generating adversarial examples. The purpose is to better deal with the vulnerability and security of deep learning systems. According to whether the attacker understands the deep learning model structure, the adversarial attack is divided into black-box attack and white-box attack. In this paper, we propose a hybrid adversarial attack for different application scenarios. Firstly, we propose a novel black-box attack method of generating adversarial examples to trick the word-level sentiment classifier, which is based on differential evolution (DE) algorithm to generate semantically and syntactically similar adversarial examples. Compared with existing genetic algorithm based adversarial attacks, our algorithm can achieve a higher attack success rate while maintaining a lower word replacement rate. At the 10% word substitution threshold, we have increased the attack success rate from 58.5% to 63%. Secondly, when we understand the model architecture and parameters, etc., we propose a white-box attack with gradient-based perturbation against the same sentiment classifier. In this attack, we use a Euclidean distance and cosine distance combined metric to find the most semantically and syntactically similar substitution, and we introduce the coefficient of variation (CV) factor to control the dispersion of the modified words in the adversarial examples. More dispersed modifications can increase human imperceptibility and text readability. Compared with the existing global attack, our attack can increase the attack success rate and make modification positions in generated examples more dispersed. We’ve increased the global search success rate from 75.8% to 85.8%. 
Finally, we can deal with different application scenarios by using these two attack methods: whether or not we understand the internal structure and parameters of the model, we can generate good adversarial examples.
21

Bruni, Renato, Gianpiero Bianchi, and Pasquale Papa. "Hyperparameter Black-Box Optimization to Improve the Automatic Classification of Support Tickets." Algorithms 16, no. 1 (January 10, 2023): 46. http://dx.doi.org/10.3390/a16010046.

Abstract:
User requests to a customer service, also known as tickets, are essentially short texts in natural language. They should be grouped by topic to be answered efficiently. The effectiveness increases if this semantic categorization becomes automatic. We pursue this goal by using text mining to extract the features from the tickets, and classification to perform the categorization. This is however a difficult multi-class problem, and the classification algorithm needs a suitable hyperparameter configuration to produce a practically useful categorization. As recently highlighted by several researchers, the selection of these hyperparameters is often the crucial aspect. Therefore, we propose to view the hyperparameter choice as a higher-level optimization problem where the hyperparameters are the decision variables and the objective is the predictive performance of the classifier. However, an explicit analytical model of this problem cannot be defined. Therefore, we propose to solve it as a black-box model by means of derivative-free optimization techniques. We conduct experiments on a relevant application: the categorization of the requests received by the Contact Center of the Italian National Statistics Institute (Istat). Results show that the proposed approach is able to effectively categorize the requests, and that its performance is increased by the proposed hyperparameter optimization.
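The paper's framing (hyperparameters as decision variables of a black-box objective) can be sketched with the simplest derivative-free method, random search. The objective below is a synthetic stand-in for "train the ticket classifier and score it on validation data", and the hyperparameter names are invented; the paper uses more sophisticated derivative-free techniques:

```python
import random

random.seed(0)

def validation_score(params):
    # Synthetic stand-in for: train the classifier with these
    # hyperparameters, then measure predictive performance on held-out data.
    return -((params["C"] - 1.0) ** 2) - 0.01 * (params["depth"] - 6) ** 2

def random_search(n_trials=200):
    """Derivative-free search: only evaluations of the objective are used."""
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = {"C": random.uniform(0.01, 10.0),
                     "depth": random.randint(1, 12)}
        score = validation_score(candidate)   # one black-box evaluation
        if score > best_score:
            best_params, best_score = candidate, score
    return best_params, best_score

best, score = random_search()
print(sorted(best))                           # the tuned hyperparameter names
```

No gradient of the objective is ever requested, which is exactly what makes the approach applicable when the classifier's performance has no analytical model.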
22

Fong, Simon. "Using Hierarchical Time Series Clustering Algorithm and Wavelet Classifier for Biometric Voice Classification." Journal of Biomedicine and Biotechnology 2012 (2012): 1–12. http://dx.doi.org/10.1155/2012/215019.

Abstract:
Voice biometrics has a long history in biosecurity applications such as verification and identification based on characteristics of the human voice. The other application, voice classification, which plays an important role in grouping unlabelled voice samples, has not been widely studied in research, however. Lately, voice classification has been found useful in phone monitoring, classifying speakers’ gender, ethnicity and emotion states, and so forth. In this paper, a collection of computational algorithms are proposed to support voice classification; the algorithms are a combination of hierarchical clustering, dynamic time warp transform, discrete wavelet transform, and decision tree. The proposed algorithms are relatively more transparent and interpretable than the existing ones, though many techniques such as Artificial Neural Networks, Support Vector Machine, and Hidden Markov Model (which inherently function like a black box) have been applied for voice verification and voice identification. Two datasets, one generated synthetically and the other collected empirically from a past voice recognition experiment, are used to verify and demonstrate the effectiveness of our proposed voice classification algorithm.
23

Wang, Huaijun, Ruomeng Ke, Junhuai Li, Yang An, Kan Wang, and Lei Yu. "A correlation-based binary particle swarm optimization method for feature selection in human activity recognition." International Journal of Distributed Sensor Networks 14, no. 4 (April 2018): 155014771877278. http://dx.doi.org/10.1177/1550147718772785.

Abstract:
Effective feature selection determines the efficiency and accuracy of a learning process, which is essential in human activity recognition. In existing works, for simplification purposes, feature selection algorithms are mostly based on the assumption of feature independence. However, in some scenarios, the optimization method based on this independence hypothesis results in poor recognition performance. This article proposes a correlation-based binary particle swarm optimization method for feature selection in human activity recognition. In the proposed algorithm, the particle swarm optimization algorithm is no longer used as a black box. Meanwhile, correlation coefficients among the features are added to binary particle swarm optimization as a feature correlation factor to determine the position of particles, so that the feature with more information is more likely to be selected. The k-nearest neighbor classifier is then used as the fitness function in the particle swarm optimization to evaluate the performance of the feature subset, that is, feature combination with the highest k-nearest neighbor classifier recognition rate would be picked as the eigenvector. Experimental results show that the proposed method can work well with six classifiers, namely, J48, random forest, k-nearest neighbor, multilayer perceptron, naïve Bayesian, and support vector machine, and the new algorithm can improve the classification accuracy in the OPPORTUNITY Activity Recognition dataset.
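A minimal sketch of the paper's core idea, binary PSO in which a per-feature correlation factor biases the selection probability, might look as follows; the toy fitness function stands in for the k-nearest neighbor recognition rate, and all constants here are illustrative, not the paper's settings:

```python
import math
import random

def bpso_feature_selection(fitness, corr, n_features, n_particles=10,
                           n_iter=30, w=0.7, c1=1.5, c2=1.5, beta=0.5, seed=0):
    """Binary PSO where a per-feature correlation factor `corr` biases the
    bit-flip probability, so features carrying more information are more
    likely to be selected (sketch of the paper's idea, not its code)."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    X = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_particles)]
    V = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_fit = [fitness(x) for x in X]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # correlation factor: raise the selection probability of
                # features that correlate with the target
                X[i][d] = 1 if rng.random() < sig(V[i][d] + beta * corr[d]) else 0
            f = fitness(X[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = X[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = X[i][:], f
    return gbest, gbest_fit

# Toy fitness standing in for the k-NN recognition rate: reward the two
# "informative" features, lightly penalise extras (purely illustrative).
informative = {0, 2}
def toy_fitness(mask):
    hits = sum(mask[d] for d in informative)
    return hits - 0.3 * (sum(mask) - hits)

corr = [1.0 if d in informative else 0.0 for d in range(6)]
mask, fit = bpso_feature_selection(toy_fitness, corr, n_features=6)
```

In the paper's setting, `fitness` would wrap a k-NN classifier evaluated on the candidate feature subset, and `corr` would hold the measured feature-target correlation coefficients.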
24

Alharbi, Basma, Zhenwen Liang, Jana M. Aljindan, Ammar K. Agnia, and Xiangliang Zhang. "Explainable and Interpretable Anomaly Detection Models for Production Data." SPE Journal 27, no. 01 (November 30, 2021): 349–63. http://dx.doi.org/10.2118/208586-pa.

Abstract:
Trusting a machine-learning model is a critical factor that will speed the spread of the fourth industrial revolution. Trust can be achieved by understanding how a model is making decisions. For white-box models, it is easy to “see” the model and examine its prediction. For black-box models, the explanation of the decision process is not straightforward. In this work, we compare the performance of several white- and black-box models on two production data sets in an anomaly detection task. The presence of anomalies in production data can significantly influence business decisions and misrepresent the results of the analysis, if not identified. Therefore, identifying anomalies is a crucial and necessary step to maintain safety and ensure that the wells perform at full capacity. To achieve this, we compare the performance of K-nearest neighbor (KNN), logistic regression (Logit), support vector machines (SVMs), decision tree (DT), random forest (RF), and rule fit classifier (RFC). F1 and complexity are the two main metrics used to compare the prediction performance and interpretability of these models. In one data set, RFC outperformed the remaining models in both F1 and complexity, where F1 = 0.92, and complexity = 0.5. In the second data set, RF outperformed the rest in prediction performance with F1 = 0.84, yet it had the lowest complexity metric (0.04). We further analyzed the best performing models by explaining their predictions using local interpretable model-agnostic explanations, which provide justification for decisions made for each instance. Additionally, we evaluated the global rules learned from white-box models. Local and global analysis enable decision makers to understand how and why models are making certain decisions, which in turn allows trusting the models.
25

Zarnelly, Zarnelly. "KLASIFIKASI PERMASALAHAN AGENSTOK MENGGUNAKAN ALGORITMA NAIVE BAYES CLASSIFIER PADA PT. HPAI-PEKANBARU." Jurnal Ilmiah Rekayasa dan Manajemen Sistem Informasi 5, no. 2 (August 15, 2019): 208. http://dx.doi.org/10.24014/rmsi.v5i2.7611.

Abstract:
Abstrak—Kecendrungan seseorang untuk mengakses informasi khususnya permasalahan agenstok melalui dunia maya pun menjadi semakin tinggi. Informasi merupakan hal yang sangat penting dalam kehidupan masyarakat. Salah satu sumber infomasi adalah media sosial. Klasifikasi ini ditekankan untuk data permasalahan agenstok. Pada umumnya permasalahan yang disampaikan terdiri dari beberapa kategori seperti permasalahan mengenai kesehatan, konsultasi produk dan marketing. Namun dalam membagi permasalahan kedalan kategori-kategori tersebut untuk saat ini masih dilakukan secara manual.hal ini sangat merepotkan apabila permasalahan yang ingin di unggah berjumlah banyak. Oleh karena itu, perlu adanya sistem yang bisa mengklasifikasikan permasalahan secara otomatis. Text mining merupakan metode klasifikasi yang merupakan variasi dari data mining yang berusaha menemukan pola menarik dari sekumpulan data tekstual yang berjumlah banyak. Sedangkan algoritma naive bayes classsifier merupakan logartitma pendukung utuk melakukan klasifikasi. Kategori memiliki jumlah data permasalahan yang sama dan terdiri dari 400 data permasalahan; 360 data permasalahan digunakan untuk proses training dan 40 data permasalahan digunakan untuk proses testing. Pada penelitian ini metode yang digunakan yaitu waterfall dan pengujian perfomance measure, uji black box, dan uji sistem oleh pengguna. Adapun pengujian perfomance measure memperoleh nilai akurasi 97,5%, precision 97,6%, recall 97,5% dan f-measure 97,4%. Dari hasil-hasil tersebut dapat disimpulkan bahwa sistem yang menerapkan algoritma naive bayes classifier dapat digunakan untuk mengklasifikasikan permasalahan agenstok berbasis web, dengan menggunakan bahasa pemrograman PHP dan Database Management System (DBMS) menunjukkan bahwa klasifikasi permasalahan agenstok bisa terklasifikasi secara otomatis.Kata Kunci: agenstok, akurasi, klasifikasi, naïve bayes, text mining
26

Sudars, Kaspars, Ivars Namatēvs, and Kaspars Ozols. "Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach." Journal of Imaging 8, no. 2 (January 30, 2022): 30. http://dx.doi.org/10.3390/jimaging8020030.

Abstract:
Model understanding is critical in many domains, particularly those involved in high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as convolutional neural networks. This paper evaluates the explainability of the Deep Neural Network (DNN) traffic sign classifier from the Programmable Systems for Intelligence in Automobiles (PRYSTINE) project. The explanation results were further used to compress the vague kernels of the PRYSTINE CNN classifier, and the precision of the classifier was then evaluated under different pruning scenarios. The proposed methodology was realised by creating original traffic sign and traffic light classification and explanation code. First, the status of the kernels of the network was evaluated for explainability: a post hoc, local, meaningful perturbation-based forward explainable method was integrated into the model to evaluate the status of each kernel, which enabled distinguishing high- and low-impact kernels in the CNN. Second, the vague kernels of the last layer before the fully connected layer were excluded by withdrawing them from the network. Third, the network’s precision was evaluated at different kernel compression levels. It is shown that, using this XAI approach for network kernel compression, pruning 5% of kernels leads to a 2% loss in traffic sign and traffic light classification precision. The proposed methodology is crucial where execution time and processing capacity prevail.
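The perturbation-based scoring and pruning idea can be illustrated with a toy occlusion sketch: mask one unit at a time, measure the confidence drop, and zero out the lowest-impact units. The linear "classifier" and its weights here are invented stand-ins, not the PRYSTINE DNN:

```python
def perturbation_importance(predict, x, baseline=0.0):
    """Score each unit by how much masking it (replacing its value with a
    baseline) drops the model's confidence -- the core idea behind
    meaningful-perturbation explanations (toy occlusion variant)."""
    ref = predict(x)
    scores = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline
        scores.append(ref - predict(masked))  # large drop => high impact
    return scores

def prune_low_impact(x, scores, fraction=0.25):
    """Zero out the lowest-impact fraction of units, mimicking the
    XAI-guided kernel compression described above."""
    k = int(len(x) * fraction)
    pruned = list(x)
    for i in sorted(range(len(x)), key=lambda i: scores[i])[:k]:
        pruned[i] = 0.0
    return pruned

# Toy "classifier": confidence is a weighted sum, so units with larger
# weights should receive larger importance scores (weights are invented).
weights = [4.0, 1.0, 2.0, 0.0]
predict = lambda v: sum(w * u for w, u in zip(weights, v))
x = [1.0, 1.0, 1.0, 1.0]
scores = perturbation_importance(predict, x)
```

For a CNN, `x` would be the set of kernels in a layer and `predict` the network's class confidence with the given kernels active; the trade-off the paper reports (5% pruning for a 2% precision loss) comes from sweeping `fraction`.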
27

Y. A. Amer, Ahmed, Julie Vranken, Femke Wouters, Dieter Mesotten, Pieter Vandervoort, Valerie Storms, Stijn Luca, Bart Vanrumste, and Jean-Marie Aerts. "Feature Engineering for ICU Mortality Prediction Based on Hourly to Bi-Hourly Measurements." Applied Sciences 9, no. 17 (August 27, 2019): 3525. http://dx.doi.org/10.3390/app9173525.

Abstract:
Mortality prediction for intensive care unit (ICU) patients is a challenging problem that requires extracting discriminative and informative features. This study presents a proof of concept for exploring features that can provide clinical insight. Through a feature engineering approach, it attempts to improve ICU mortality prediction in field conditions with data measured at low frequency (i.e., hourly to bi-hourly). Features are explored by investigating the vital signs measurements of ICU patients, labelled with mortality or survival at discharge. The vital signs of interest in this study are heart and respiration rate, oxygen saturation and blood pressure. The latter comprises systolic, diastolic and mean arterial pressure. The feature exploration process aims to extract simple and interpretable features that can provide clinical insight. For this purpose, a classifier is required that maximises the margin between the two classes (i.e., survival and mortality) with minimum tolerance to misclassification errors. Moreover, it preferably has to provide a linear decision surface in the original feature space without mapping to an unlimited-dimensionality feature space. Therefore, a linear hard-margin support vector machine (SVM) classifier is suggested. The extracted features are grouped in three categories: statistical, dynamic and physiological. Each category plays an important role in enhancing classification error performance. After extracting several features within the three categories, manual feature fine-tuning is applied to consider only the most efficient features. The final classification, considering mortality as the positive class, resulted in an accuracy of 91.56%, sensitivity of 90.59%, precision of 86.52% and F1-score of 88.50%. The obtained results show that the proposed feature engineering approach and the extracted features are valid to be considered and further enhanced for mortality prediction. Moreover, the proposed feature engineering approach moved the modelling methodology from black-box modelling to grey-box modelling in combination with the powerful SVM classifier.
28

Wei, Chih-Chiang, Gene Jiing-Yun You, Li Chen, Chien-Chang Chou, and Jinsheng Roan. "Diagnosing Rain Occurrences Using Passive Microwave Imagery: A Comparative Study on Probabilistic Graphical Models and “Black Box” Models." Journal of Atmospheric and Oceanic Technology 32, no. 10 (October 2015): 1729–44. http://dx.doi.org/10.1175/jtech-d-14-00164.1.

Abstract:
Rainfall is a fundamental process in the hydrologic cycle. This study investigated the cause–effect relationship in which precipitation at lower frequencies affects the amount of emitted radiation and at higher frequencies affects the amount of backscattered terrestrial radiation. Because the advantage of a probabilistic graphical model is its graphical representation, which allows easy causality interpretation using the arc directions, two Bayesian networks (BNs) were used, namely, a naïve Bayes classifier and a tree-augmented naïve Bayes model. To empirically evaluate and compare BN-based models, “black box”–based models, including nearest-neighbor searches and artificial neural network (ANN)-based multilayer perceptron and logistic regression, were used as benchmarks. For the two study regions—namely, the Tanshui River basin in northern Taiwan and Chianan Plain in southern Taiwan—rain occurrences during typhoon seasons were examined using passive microwave imagery recorded using the Special Sensor Microwave Imager/Sounder. The results show that although black box models exhibit excellent prediction ability, interpretation of their behavior is unsatisfactory. By contrast, probabilistic graphical models can explicitly reveal the causal relationship between brightness temperatures and nonrain/rain discrimination. For the Tanshui River basin, 19.35-, 22.23-, 37.0-, and 85.5-GHz vertically polarized brightness temperatures were found to diagnose rain occurrences. For the Chianan Plain, a more sensitive indicator of rain-scattering signals was obtained using 85-GHz measurements. The results demonstrate the potential use of BNs in identifying rain occurrences in regions with land features comprising various absorbing and scattering materials.
29

Alfarra, Motasem, Juan C. Perez, Ali Thabet, Adel Bibi, Philip H. S. Torr, and Bernard Ghanem. "Combating Adversaries with Anti-adversaries." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 5992–6000. http://dx.doi.org/10.1609/aaai.v36i6.20545.

Abstract:
Deep neural networks are vulnerable to small input perturbations known as adversarial attacks. Inspired by the fact that these adversaries are constructed by iteratively minimizing the confidence of a network for the true class label, we propose the anti-adversary layer, aimed at countering this effect. In particular, our layer generates an input perturbation in the opposite direction of the adversarial one and feeds the classifier a perturbed version of the input. Our approach is training-free and theoretically supported. We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models and conduct large-scale experiments from black-box to adaptive attacks on CIFAR10, CIFAR100, and ImageNet. Our layer significantly enhances model robustness while coming at no cost on clean accuracy.
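The anti-adversary idea can be shown on a toy binary logistic classifier: before classifying, nudge the input a few gradient steps toward the class the model already predicts, i.e., the opposite direction to an adversary that minimizes confidence. The paper works with deep networks; this reduction, with invented weights, is only meant to show the direction of the perturbation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(w, b, x):
    """Binary logistic classifier: probability of class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def anti_adversary(w, b, x, eps=0.1, steps=5):
    """Training-free anti-adversary layer (sketch): perturb the input to
    *increase* the confidence of the predicted label, countering an
    adversarial perturbation that pushed it toward the boundary."""
    sign = 1.0 if predict_proba(w, b, x) >= 0.5 else -1.0
    z = list(x)
    for _ in range(steps):
        p = predict_proba(w, b, z)
        # d p / d z_i = p * (1 - p) * w_i; ascend (or descend, for class 0)
        # the confidence before handing z to the classifier
        z = [zi + sign * eps * p * (1 - p) * wi for zi, wi in zip(z, w)]
    return z

# Invented toy weights, and an input nudged close to the decision boundary
# the way an adversarial example would be.
w, b = [2.0, -1.0], 0.0
x_adv = [0.2, 0.3]
z = anti_adversary(w, b, x_adv)
```

The layer is training-free in the same sense as the paper's: it only needs gradient access to the already-trained classifier at inference time.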
30

Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.

Abstract:
With the expansion of the internet, a major threat has emerged involving the spread of malicious domains intended by attackers to perform illegal activities aiming to target governments, violating privacy of organizations, and even manipulating everyday users. Therefore, detecting these harmful domains is necessary to combat the growing network attacks. Machine Learning (ML) models have shown significant outcomes towards the detection of malicious domains. However, the “black box” nature of the complex ML models obstructs their wide-ranging acceptance in some of the fields. The emergence of Explainable Artificial Intelligence (XAI) has successfully incorporated the interpretability and explicability in the complex models. Furthermore, the post hoc XAI model has enabled the interpretability without affecting the performance of the models. This study aimed to propose an Explainable Artificial Intelligence (XAI) model to detect malicious domains on a recent dataset containing 45,000 samples of malicious and non-malicious domains. In the current study, initially several interpretable ML models, such as Decision Tree (DT) and Naïve Bayes (NB), and black box ensemble models, such as Random Forest (RF), Extreme Gradient Boosting (XGB), AdaBoost (AB), and Cat Boost (CB) algorithms, were implemented and found that XGB outperformed the other classifiers. Furthermore, the post hoc XAI global surrogate model (Shapley additive explanations) and local surrogate LIME were used to generate the explanation of the XGB prediction. Two sets of experiments were performed; initially the model was executed using a preprocessed dataset and later with selected features using the Sequential Forward Feature selection algorithm. The results demonstrate that ML algorithms were able to distinguish benign and malicious domains with overall accuracy ranging from 0.8479 to 0.9856. The ensemble classifier XGB achieved the highest result, with an AUC and accuracy of 0.9991 and 0.9856, respectively, before the feature selection algorithm, while there was an AUC of 0.999 and accuracy of 0.9818 after the feature selection algorithm. The proposed model outperformed the benchmark study.
31

Jaafreh, Russlan, Jung-Gu Kim, and Kotiba Hamad. "Interpretable Machine Learning Analysis of Stress Concentration in Magnesium: An Insight beyond the Black Box of Predictive Modeling." Crystals 12, no. 9 (September 2, 2022): 1247. http://dx.doi.org/10.3390/cryst12091247.

Abstract:
In the present work, machine learning (ML) was employed to build a model, and through it, the microstructural features (parameters) affecting the stress concentration (SC) during plastic deformation of magnesium (Mg)-based materials are determined. As a descriptor for the SC, the kernel average misorientation (KAM) was used, and starting from the microstructural features of pure Mg and AZ31 Mg alloy, as recorded using electron backscattered diffraction (EBSD), the ML model was trained and constructed using various types of ML algorithms, including Logistic Regression (LR), Decision Trees (DT), Random Forest (RF), Naive Bayes Classifier (NBC), K-Nearest Neighbor (KNN), Multilayer Perceptron (MLP), and Extremely Randomized Trees (ERT). The results show that the accuracy of the ERT-based model was higher compared to other models, and accordingly, the nine most-important features in the ERT-based model, those with a Gini impurity higher than 0.025, were extracted. The feature importance showed that the grain size is the most effective microstructural parameter for controlling the SC in Mg-based materials, and according to the relative Accumulated Local Effects (ALE) plot, calculated to show the relationship between KAM and grain size, it was found that SC occurs with a lower probability in the fine range of grain size. All findings from the ML-based model built in the present work were experimentally confirmed through EBSD observations.
32

Zhou, Yuxuan, Huangxun Chen, Chenyu Huang, and Qian Zhang. "WiAdv." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 2 (July 4, 2022): 1–25. http://dx.doi.org/10.1145/3534618.

Abstract:
WiFi-based gesture recognition systems have attracted enormous interest owing to the non-intrusive nature of WiFi signals and the wide adoption of WiFi for communication. Despite boosted performance via integrating advanced deep neural network (DNN) classifiers, there is a lack of sufficient investigation into their security vulnerabilities, which are rooted in the open nature of the wireless medium and the inherent defects (e.g., adversarial attacks) of classifiers. To fill this gap, we aim to study adversarial attacks on DNN-powered WiFi-based gesture recognition to encourage proper countermeasures. We design WiAdv to construct physically realizable adversarial examples to fool these systems. WiAdv features a signal synthesis scheme to craft adversarial signals with desired motion features based on the fundamental principle of WiFi-based gesture recognition, and a black-box attack scheme to handle the inconsistency between the perturbation space and the input space of the classifier caused by the in-between non-differentiable processing modules. We realize and evaluate our attack strategies against a representative state-of-the-art system, Widar3.0, in realistic settings. The experimental results show that the adversarial wireless signals generated by WiAdv achieve over 70% attack success rate on average, and remain robust and effective across different physical settings. Our attack case study and analysis reveal the vulnerability of WiFi-based gesture recognition systems, and we hope WiAdv could help promote the improvement of the relevant systems.
33

Xu, Zhiwu, Cheng Wen, Shengchao Qin, and Mengda He. "Extracting automata from neural networks using active learning." PeerJ Computer Science 7 (April 19, 2021): e436. http://dx.doi.org/10.7717/peerj-cs.436.

Abstract:
Deep learning is one of the most advanced forms of machine learning. Most modern deep learning models are based on an artificial neural network, and benchmarking studies reveal that neural networks have produced results comparable to and in some cases superior to human experts. However, the generated neural networks are typically regarded as incomprehensible black-box models, which not only limits their applications, but also hinders testing and verifying. In this paper, we present an active learning framework to extract automata from neural network classifiers, which can help users to understand the classifiers. In more detail, we use Angluin’s L* algorithm as a learner and the neural network under learning as an oracle, employing abstraction interpretation of the neural network for answering membership and equivalence queries. Our abstraction consists of value, symbol and word abstractions. The factors that may affect the abstraction are also discussed in the paper. We have implemented our approach in a prototype. To evaluate it, we have run the prototype on an MNIST classifier and identified that the abstraction with interval number 2 and block size 1 × 28 offers the best performance in terms of F1 score. We have also compared our extracted DFA against the DFAs learned via the passive learning algorithms provided in LearnLib, and the experimental results show that our DFA gives a better performance on the MNIST dataset.
34

Suri, Anshuman, and David Evans. "Formalizing and Estimating Distribution Inference Risks." Proceedings on Privacy Enhancing Technologies 2022, no. 4 (October 2022): 528–51. http://dx.doi.org/10.56553/popets-2022-0121.

Abstract:
Distribution inference, sometimes called property inference, infers statistical properties about a training set from access to a model trained on that data. Distribution inference attacks can pose serious risks when models are trained on private data, but are difficult to distinguish from the intrinsic purpose of statistical machine learning—namely, to produce models that capture statistical properties about a distribution. Motivated by Yeom et al.’s membership inference framework, we propose a formal definition of distribution inference attacks general enough to describe a broad class of attacks distinguishing between possible training distributions. We show how our definition captures previous ratio-based inference attacks as well as new kinds of attack including revealing the average node degree or clustering coefficient of training graphs. To understand distribution inference risks, we introduce a metric that quantifies observed leakage by relating it to the leakage that would occur if samples from the training distribution were provided directly to the adversary. We report on a series of experiments across a range of different distributions using both novel black-box attacks and improved versions of the state-of-the-art white-box attacks. Our results show that inexpensive attacks are often as effective as expensive meta-classifier attacks, and that there are surprising asymmetries in the effectiveness of attacks.
35

De Falco, Ivanoe, Giuseppe De Pietro, and Giovanna Sannino. "A Two-Step Approach for Classification in Alzheimer’s Disease." Sensors 22, no. 11 (May 24, 2022): 3966. http://dx.doi.org/10.3390/s22113966.

Abstract:
The classification of images is of high importance in medicine. In this sense, deep learning methodologies show excellent performance with regard to accuracy. The drawback of these methodologies is the fact that they are black boxes, so no explanation is given to users on the reasons underlying their choices. In the medical domain, this lack of transparency and information, typical of black-box models, brings practitioners to raise concerns, and the result is resistance to the use of deep learning tools. In order to overcome this problem, a different machine learning approach to image classification is used here that is based on interpretability concepts, thanks to the use of an evolutionary algorithm. It relies on the application of two steps in succession. The first receives a set of images as input and performs image filtering on them so that a numerical data set is generated. The second is a classifier, the kernel of which is an evolutionary algorithm. The latter, at the same time, classifies and automatically extracts explicit knowledge as a set of IF–THEN rules. This method is investigated with respect to a data set of MRI brain imagery referring to Alzheimer’s disease. Namely, a two-class data set (non-demented and moderate demented) and a three-class data set (non-demented, mild demented, and moderate demented) are extracted. The methodology shows good results in terms of accuracy (100% for the best run over the two-class problem and 91.49% for the best run over the three-class one), F_score (1.0000 and 0.9149, respectively), and Matthews Correlation Coefficient (1.0000 and 0.8763, respectively). To ascertain the quality of these results, they are contrasted against those from a wide set of well-known classifiers. The outcome of this comparison is that, in both problems, the methodology achieves the best results in terms of accuracy and F_score, whereas, for the Matthews Correlation Coefficient, it has the best result over the two-class problem and the second best over the three-class one.
36

Patil, Shruti, Vijayakumar Varadarajan, Siddiqui Mohd Mazhar, Abdulwodood Sahibzada, Nihal Ahmed, Onkar Sinha, Satish Kumar, Kailash Shaw, and Ketan Kotecha. "Explainable Artificial Intelligence for Intrusion Detection System." Electronics 11, no. 19 (September 27, 2022): 3079. http://dx.doi.org/10.3390/electronics11193079.

Abstract:
Intrusion detection systems (IDS) are widely utilized in the cyber security field to prevent and mitigate threats; they help to keep threats and vulnerabilities out of computer networks. To develop effective intrusion detection systems, a range of machine learning methods are available, and machine learning ensemble methods have a well-proven track record when it comes to learning. Using ensemble methods of machine learning, this paper proposes an innovative intrusion detection system. To improve classification accuracy and eliminate false positives, features from the CICIDS-2017 dataset were chosen. The proposed IDS uses machine learning algorithms such as decision trees, random forests, and SVMs. After training these models, an ensemble voting classifier was added and achieved an accuracy of 96.25%. Furthermore, the proposed model also incorporates the XAI algorithm LIME for better explainability and understanding of the black-box approach to reliable intrusion detection. Our experimental results confirmed that XAI LIME is more explanation-friendly and more responsive.
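The hard-voting ensemble this abstract describes can be sketched in a few lines: each base model casts one vote and the majority label wins. The three rule-based "detectors" and the flow feature names below are invented stand-ins, not the paper's trained DT/RF/SVM models or CICIDS-2017 column names:

```python
from collections import Counter

class HardVotingClassifier:
    """Hard-voting ensemble (sketch): each base model casts one vote and
    the majority label wins, mirroring a DT/RF/SVM voting setup."""
    def __init__(self, models):
        self.models = models

    def predict(self, x):
        # Counter tallies the votes; most_common(1) returns the winner
        return Counter(m(x) for m in self.models).most_common(1)[0][0]

# Toy rule-based stand-ins for the trained detectors; each one flags an
# "attack" based on a different (hypothetical) traffic feature.
by_packet_rate = lambda x: "attack" if x["pkts_per_s"] > 1000 else "benign"
by_payload = lambda x: "attack" if x["payload_entropy"] > 7.0 else "benign"
by_duration = lambda x: "attack" if x["duration_s"] < 0.01 else "benign"

ids = HardVotingClassifier([by_packet_rate, by_payload, by_duration])
flow = {"pkts_per_s": 5000, "payload_entropy": 7.5, "duration_s": 2.0}
```

With an odd number of base models, a two-class vote never ties; soft voting (averaging predicted probabilities) is the usual variant when base models expose confidences.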
37

Corley, Becky, Sofia Koukoura, James Carroll, and Alasdair McDonald. "Combination of Thermal Modelling and Machine Learning Approaches for Fault Detection in Wind Turbine Gearboxes." Energies 14, no. 5 (March 3, 2021): 1375. http://dx.doi.org/10.3390/en14051375.

Abstract:
This research aims to bring together thermal modelling and machine learning approaches to improve the understanding of the operation and fault detection of a wind turbine gearbox. Recent fault detection research has focused on black-box machine learning approaches: although these can be successful, they provide no indication of the physical behaviour. In this paper, thermal network modelling was applied to two datasets using SCADA (Supervisory Control and Data Acquisition) temperature data, with the aim of detecting a fault one month before failure. A machine learning approach was used on the same data to compare the results to thermal modelling. The results found that thermal network modelling could successfully detect a fault in many of the turbines examined and was validated by the machine learning approach for one of the datasets. For that same dataset, it was found that combining the thermal model losses and the machine learning approach, by using the modelled losses as a feature in the classifier, resulted in the engineered feature becoming the most important feature in the classifier. It was also found that the results from thermal modelling had a significantly greater effect on successfully classifying the health of a turbine compared to temperature data. The other dataset gave less conclusive results, suggesting that the location of the fault and the temperature sensors could impact the fault-detection ability.
38

Chen, Kuan-Yung, Hsi-Chieh Lee, Tsung-Chieh Lin, Chih-Ying Lee, and Zih-Ping Ho. "Deep Learning Algorithms with LIME and Similarity Distance Analysis on COVID-19 Chest X-ray Dataset." International Journal of Environmental Research and Public Health 20, no. 5 (February 28, 2023): 4330. http://dx.doi.org/10.3390/ijerph20054330.

Abstract:
In the last few years, many types of research have been conducted on the most harmful pandemic, COVID-19. Machine learning approaches have been applied to investigate chest X-rays of COVID-19 patients in many respects. This study focuses on the deep learning algorithm from the standpoint of feature space and similarity analysis. Firstly, we utilized Local Interpretable Model-agnostic Explanations (LIME) to justify the necessity of the region of interest (ROI) process and further prepared ROI via U-Net segmentation that masked out non-lung areas of images to prevent the classifier from being distracted by irrelevant features. The experimental results were promising, with detection performance reaching an overall accuracy of 95.5%, a sensitivity of 98.4%, a precision of 94.7%, and an F1 score of 96.5% on the COVID-19 category. Secondly, we applied similarity analysis to identify outliers and further provided an objective confidence reference specific to the similarity distance to centers or boundaries of clusters while inferring. Finally, the experimental results suggested putting more effort into enhancing the low-accuracy subspace locally, which is identified by the similarity distance to the centers. The experimental results were promising, and based on those perspectives, our approach could be more flexible to deploy dedicated classifiers specific to different subspaces instead of one rigid end-to-end black box model for all feature space.
39

Cofre-Martel, Sergio, Enrique Lopez Droguett, and Mohammad Modarres. "Remaining Useful Life Estimation through Deep Learning Partial Differential Equation Models: A Framework for Degradation Dynamics Interpretation Using Latent Variables." Shock and Vibration 2021 (May 27, 2021): 1–15. http://dx.doi.org/10.1155/2021/9937846.

Abstract:
Remaining useful life (RUL) estimation is one of the main objectives of prognostics and health management (PHM) frameworks. For the past decade, researchers have explored the application of deep learning (DL) regression algorithms to predict the system’s health state behavior based on sensor readings from the monitoring system. Although the state-of-art results have been achieved in benchmark problems, most DL-PHM algorithms are treated as black-box functions, giving little-to-no control over data interpretation. This becomes an issue when the models unknowingly break the governing laws of physics when no constraints are imposed. The latest research efforts have focused on applying complex DL models to achieve low prediction errors rather than studying how they interpret the data’s behavior and the system itself. This paper proposes an open-box approach using a deep neural network framework to explore the physics of a complex system’s degradation through partial differential equations (PDEs). This proposed framework is an attempt to bridge the gap between statistic-based PHM and physics-based PHM. The framework has three stages, and it aims to discover the health state of the system through a latent variable while still providing a RUL estimation. Results show that the latent variable can capture the failure modes of the system. A latent space representation can also be used as a health state estimator through a random forest classifier with up to a 90% performance on new unseen data.
40

Jung, Jiyoon, Eunsu Kim, Hyeseong Lee, Sung Hak Lee, and Sangjeong Ahn. "Automated Hybrid Model for Detecting Perineural Invasion in the Histology of Colorectal Cancer." Applied Sciences 12, no. 18 (September 13, 2022): 9159. http://dx.doi.org/10.3390/app12189159.

Abstract:
Perineural invasion (PNI) is a well-established independent prognostic factor for poor outcomes in colorectal cancer (CRC). However, PNI detection in CRC is a cumbersome and time-consuming process with low inter- and intra-rater agreement. In this study, a deep-learning-based approach was proposed for detecting PNI using histopathological images. We collected 530 regions of histology from 77 whole-slide images (PNI, 100 regions; non-PNI, 430 regions) for training. The proposed hybrid model consists of two components: a segmentation network for tumor and nerve tissues, and a PNI classifier. Unlike a “black-box” model that is unable to account for errors, the proposed approach enables false predictions to be explained and addressed. We presented a high-performance automated PNI detector with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.92. The potential of deep neural networks for PNI screening was thus demonstrated, providing a possible alternative to conventional methods for the pathologic diagnosis of CRC.
41

Candelieri, Antonio, and Francesco Archetti. "Sparsifying to optimize over multiple information sources: an augmented Gaussian process based algorithm." Structural and Multidisciplinary Optimization 64, no. 1 (April 5, 2021): 239–55. http://dx.doi.org/10.1007/s00158-021-02882-7.

Abstract:
Optimizing a black-box, expensive, and multi-extremal function, given multiple approximations, is a challenging task known as multi-information source optimization (MISO), where each source has a different cost and the level of approximation (aka fidelity) of each source can change over the search space. While most current approaches fuse the Gaussian processes (GPs) modelling each source, we propose to use GP sparsification to select only “reliable” function evaluations performed over all the sources. These selected evaluations are used to create an augmented Gaussian process (AGP), so named because the evaluations on the most expensive source are augmented with the reliable evaluations over less expensive sources. A new acquisition function based on the confidence bound is also proposed, incorporating both the cost of the next source to query and the location-dependent approximation quality of that source. This approximation quality is estimated through a model discrepancy measure and the prediction uncertainty of the GPs. MISO-AGP and its MISO-fused GP counterpart are compared on two test problems and on hyperparameter optimization of a machine learning classifier on a large dataset.
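The cost-aware, discrepancy-penalized acquisition described above can be roughly sketched as follows. This is a toy illustration under assumed definitions (a simple GP confidence bound plus a discrepancy term, additively penalized by query cost); the paper's exact acquisition function differs in its details:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Expensive "exact" source and a cheap, biased approximation of it.
f_hi = lambda x: np.sin(3 * x) + 0.5 * x
f_lo = lambda x: f_hi(x) + 0.3 * np.cos(10 * x)
cost = {"hi": 10.0, "lo": 1.0}

rng = np.random.default_rng(1)
X = rng.uniform(0, 2, size=(8, 1))
gp = {name: GaussianProcessRegressor(kernel=RBF(0.5)).fit(X, f(X).ravel())
      for name, f in [("hi", f_hi), ("lo", f_lo)]}

grid = np.linspace(0, 2, 200).reshape(-1, 1)

def acquisition(source, lam=0.01):
    """Confidence bound for minimization, penalized by the discrepancy
    with the high-fidelity GP and by the cost of querying the source."""
    mu, sd = gp[source].predict(grid, return_std=True)
    discrepancy = np.abs(gp["hi"].predict(grid) - mu)
    return mu - 2.0 * sd + discrepancy + lam * cost[source]

# Choose the source and location with the best cost-weighted bound.
source = min(("hi", "lo"), key=lambda s: acquisition(s).min())
x_next = grid[np.argmin(acquisition(source))]
print(source, x_next)
```

The cheap source is attractive when its discrepancy from the high-fidelity GP is small; otherwise the expensive source wins despite its cost.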
42

Hildebrandt, Marcel, Jorge Andres Quintero Serna, Yunpu Ma, Martin Ringsquandl, Mitchell Joblin, and Volker Tresp. "Reasoning on Knowledge Graphs with Debate Dynamics." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4123–31. http://dx.doi.org/10.1609/aaai.v34i04.6600.

Abstract:
We propose a novel method for automatic reasoning on knowledge graphs based on debate dynamics. The main idea is to frame the task of triple classification as a debate game between two reinforcement learning agents which extract arguments – paths in the knowledge graph – with the goal of promoting the fact being true (thesis) or the fact being false (antithesis), respectively. Based on these arguments, a binary classifier, called the judge, decides whether the fact is true or false. The two agents can be considered sparse, adversarial feature generators that present interpretable evidence for either the thesis or the antithesis. In contrast to other black-box methods, the arguments allow users to understand the decision of the judge. Since the focus of this work is to create an explainable method that maintains competitive predictive accuracy, we benchmark our method on the triple classification and link prediction tasks. We find that our method outperforms several baselines on the benchmark datasets FB15k-237, WN18RR, and Hetionet. We also conduct a survey and find that the extracted arguments are informative for users.
43

Shao, Xiaoting, Arseny Skryagin, Wolfgang Stammer, Patrick Schramowski, and Kristian Kersting. "Right for Better Reasons: Training Differentiable Models by Constraining their Influence Functions." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9533–40. http://dx.doi.org/10.1609/aaai.v35i11.17148.

Abstract:
Explaining black-box models such as deep neural networks is becoming increasingly important, as it helps to boost trust and debugging. Popular forms of explanations map the features to a vector indicating their individual importance to a decision at the instance level. They can then be used to prevent the model from learning the wrong bias from data, possibly due to ambiguity. For instance, Ross et al.'s “right for the right reasons” propagates user explanations backwards to the network by formulating differentiable constraints based on input gradients. Unfortunately, input gradients, like many other widely used explanation methods, form an approximation of the decision boundary and assume the underlying model to be fixed. Here, we demonstrate how to make use of influence functions – a well-known robust statistic – in the constraints to correct the model’s behaviour more effectively. Our empirical evidence demonstrates that this “right for better reasons” (RBR) approach considerably reduces the time to correct the classifier at training time and boosts the quality of explanations at inference time compared to input gradients. We also showcase the effectiveness of RBR in correcting “Clever Hans”-like behaviour in a real, high-dimensional domain.
44

Moskalenko, V. V., A. S. Moskalenko, A. G. Korobov, and M. O. Zaretsky. "IMAGE CLASSIFIER RESILIENT TO ADVERSARIAL ATTACKS, FAULT INJECTIONS AND CONCEPT DRIFT – MODEL ARCHITECTURE AND TRAINING ALGORITHM." Radio Electronics, Computer Science, Control, no. 3 (October 16, 2022): 86. http://dx.doi.org/10.15588/1607-3274-2022-3-9.

Abstract:
Context. The vulnerability of image classification algorithms to destructive perturbations has not yet been definitively resolved and is highly relevant for safety-critical applications. The object of research is therefore the process of training and inference for an image classifier functioning under the influence of destructive perturbations. The subjects of the research are the model architecture and training algorithm of an image classifier that provide resilience to adversarial attacks, fault injection attacks, and concept drift. Objective. The stated research goal is to develop an effective model architecture and training algorithm that provide resilience to adversarial attacks, fault injections, and concept drift. Method. A new training algorithm is proposed which combines self-knowledge distillation, information measure maximization, class distribution compactness and interclass gap maximization, data compression based on discretization of the feature representation, and semi-supervised learning based on consistency regularization. Results. The model architecture and training algorithm of the image classifier were developed. The obtained classifier was tested on the CIFAR-10 dataset to evaluate its resilience over an interval of 200 mini-batches, with training and test mini-batch sizes of 128 examples, under the following perturbations: adversarial black-box L∞-attacks with perturbation levels of 1, 3, 5, and 10; inversion of one randomly selected bit in a tensor for 10%, 30%, 50%, and 60% of randomly selected tensors; addition of one new class; and real concept drift between a pair of classes. The effect of the feature space dimensionality on the value of the information criterion of model performance without perturbations, and on the value of the integral resilience metric under perturbations, is considered. Conclusions. The proposed model architecture and learning algorithm provide absorption of part of the disturbing influence, graceful degradation due to hierarchical classes and adaptive computation, and fast adaptation on a limited amount of labeled data. It is shown that adaptive computation saves up to 40% of resources due to early decision-making in the lower sections of the model, although perturbing influences lead to a slowdown, which can be considered graceful degradation. A multi-section structure trained using knowledge self-distillation principles is shown to provide more than a 5% improvement in the integral resilience metric compared to an architecture where the decision is made at the last layer of the model. The dimensionality of the feature space noticeably affects resilience to adversarial attacks and can be chosen as a trade-off between resilience to perturbations and efficiency without perturbations.
45

Gozzi, Noemi, Edoardo Giacomello, Martina Sollini, Margarita Kirienko, Angela Ammirabile, Pierluca Lanzi, Daniele Loiacono, and Arturo Chiti. "Image Embeddings Extracted from CNNs Outperform Other Transfer Learning Approaches in Classification of Chest Radiographs." Diagnostics 12, no. 9 (August 28, 2022): 2084. http://dx.doi.org/10.3390/diagnostics12092084.

Abstract:
To identify the best transfer learning approach for detecting the most frequent abnormalities on chest radiographs (CXRs), we used embeddings extracted from pretrained convolutional neural networks (CNNs). An explainable AI (XAI) model was applied to interpret the black-box model predictions and to assess their quality. Seven CNNs were trained on CheXpert. Three transfer learning approaches were then applied to a local dataset. The classification results were ensembled using simple and entropy-weighted averaging. We applied Grad-CAM (an XAI model) to produce saliency maps, which were compared to manually extracted regions of interest, and the training time was recorded. The best transfer learning model used image embeddings and a random forest with simple averaging, with an average AUC of 0.856. Grad-CAM maps showed that the models focused on specific features of each CXR. CNNs pretrained on a large public dataset of medical images can thus be exploited as feature extractors for tasks of interest. The extracted image embeddings contain relevant information that can be used to train an additional classifier with satisfactory performance on an independent dataset, demonstrating this to be the optimal transfer learning strategy and overcoming the need for large private datasets, extensive computational resources, and long training times.
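The winning strategy reported here (frozen CNN embeddings feeding a random forest, with simple probability averaging across networks) can be sketched as follows. Synthetic arrays stand in for the CNN embeddings, so this only illustrates the classifier-and-ensembling step, not the full radiograph pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins for image embeddings from two pretrained CNNs
# (in practice: penultimate-layer activations for each radiograph).
n, dim = 200, 64
y = rng.integers(0, 2, size=n)
emb_a = rng.normal(size=(n, dim)) + y[:, None] * 0.8
emb_b = rng.normal(size=(n, dim)) + y[:, None] * 0.6

train, test = slice(0, 150), slice(150, None)
probs = []
for emb in (emb_a, emb_b):
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(emb[train], y[train])
    probs.append(rf.predict_proba(emb[test])[:, 1])

# Simple averaging of the per-CNN classifier probabilities.
ensemble = np.mean(probs, axis=0)
acc = np.mean((ensemble > 0.5) == y[test])
print(f"ensemble accuracy: {acc:.2f}")
```

Entropy-weighted averaging would replace the plain mean with weights derived from each classifier's predictive entropy.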
46

Steiner, Margaret C., Keylie M. Gibson, and Keith A. Crandall. "Drug Resistance Prediction Using Deep Learning Techniques on HIV-1 Sequence Data." Viruses 12, no. 5 (May 19, 2020): 560. http://dx.doi.org/10.3390/v12050560.

Abstract:
The fast replication rate and lack of repair mechanisms of human immunodeficiency virus (HIV) contribute to its high mutation frequency, with some mutations resulting in the evolution of resistance to antiretroviral therapies (ART). As such, studying HIV drug resistance allows for real-time evaluation of evolutionary mechanisms. Characterizing the biological process of drug resistance is also critically important for sustained effectiveness of ART. Investigating the link between “black box” deep learning methods applied to this problem and evolutionary principles governing drug resistance has been overlooked to date. Here, we utilized publicly available HIV-1 sequence data and drug resistance assay results for 18 ART drugs to evaluate the performance of three architectures (multilayer perceptron, bidirectional recurrent neural network, and convolutional neural network) for drug resistance prediction, jointly with biological analysis. We identified convolutional neural networks as the best performing architecture and displayed a correspondence between the importance of biologically relevant features in the classifier and overall performance. Our results suggest that the high classification performance of deep learning models is indeed dependent on drug resistance mutations (DRMs). These models heavily weighted several features that are not known DRM locations, indicating the utility of model interpretability to address causal relationships in viral genotype-phenotype data.
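As one concrete detail behind such sequence models: protein sequences are typically one-hot encoded before being fed to a CNN. A minimal sketch (the fragment string below is hypothetical, and the paper's actual preprocessing may differ):

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(seq: str) -> np.ndarray:
    """Encode a protein sequence as a (length, 20) one-hot matrix,
    the standard input layout for sequence CNNs."""
    mat = np.zeros((len(seq), len(AMINO_ACIDS)))
    for pos, aa in enumerate(seq):
        mat[pos, AA_INDEX[aa]] = 1.0
    return mat

# A short, made-up protein fragment for illustration:
x = one_hot("PQITLWQRPLV")
print(x.shape)        # (11, 20)
print(x.sum(axis=1))  # exactly one hot position per residue
```

Convolutional filters over this matrix can then learn position-specific motifs, which is why feature-importance analysis can be mapped back to known drug resistance mutation sites.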
47

Mahmood, Asad, Faizan Ahmad, Zubair Shafiq, Padmini Srinivasan, and Fareed Zaffar. "A Girl Has No Name: Automated Authorship Obfuscation using Mutant-X." Proceedings on Privacy Enhancing Technologies 2019, no. 4 (October 1, 2019): 54–71. http://dx.doi.org/10.2478/popets-2019-0058.

Abstract:
Stylometric authorship attribution aims to identify an anonymous or disputed document’s author by examining its writing style. The development of powerful machine learning based stylometric authorship attribution methods presents a serious privacy threat for individuals such as journalists and activists who wish to publish anonymously. Researchers have proposed several authorship obfuscation approaches that try to make appropriate changes (e.g., word/phrase replacements) to evade attribution while preserving semantics. Unfortunately, existing authorship obfuscation approaches are lacking because they either require some manual effort, require significant training data, or do not work for long documents. To address these limitations, we propose a genetic algorithm based random search framework called Mutant-X which can automatically obfuscate text to successfully evade attribution while keeping the semantics of the obfuscated text similar to the original. Specifically, Mutant-X sequentially makes changes in the text using mutation and crossover techniques while being guided by a fitness function that takes into account both attribution probability and semantic relevance. While Mutant-X requires black-box knowledge of the adversary’s classifier, it does not require any additional training data and works on documents of any length. We evaluate Mutant-X against a variety of authorship attribution methods on two different text corpora. Our results show that Mutant-X can decrease the accuracy of state-of-the-art authorship attribution methods by as much as 64% while preserving semantics much better than existing automated authorship obfuscation approaches. While Mutant-X advances the state of the art in automated authorship obfuscation, we find that it does not generalize to a stronger threat model where the adversary uses a different attribution classifier than the one Mutant-X assumes. Our findings warrant future research to improve the generalizability (or transferability) of automated authorship obfuscation approaches.
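The search loop guided by attribution probability and semantic relevance can be caricatured in a few lines. Everything below (the synonym table, the "classifier", the fitness weights, the mutation-only operator) is a made-up stand-in to show the loop's shape, not Mutant-X itself, which also uses crossover and a real attribution classifier:

```python
import random

random.seed(0)

# Made-up synonym table driving the mutation operator.
SYNONYMS = {"utilize": "use", "commence": "begin", "terminate": "end"}

def attribution_prob(words):
    """Stand-in black-box attributor: the 'author signature' here is
    simply the fraction of formal word choices."""
    return sum(w in SYNONYMS for w in words) / len(words)

def semantic_sim(candidate, original):
    """Crude semantic-relevance proxy: tokens unchanged or replaced by
    a sanctioned synonym count as preserved meaning."""
    kept = sum(c == o or SYNONYMS.get(o) == c
               for c, o in zip(candidate, original))
    return kept / len(original)

def fitness(candidate, original):
    # Reward evading attribution while staying close in meaning.
    return (1 - attribution_prob(candidate)) + semantic_sim(candidate, original)

def mutate(words):
    out = list(words)
    i = random.randrange(len(out))
    out[i] = SYNONYMS.get(out[i], out[i])  # swap in a synonym if available
    return out

original = "we utilize this method and commence the test".split()
population = [mutate(original) for _ in range(10)]
for _ in range(20):  # generations: keep the fittest, mutate survivors
    population.sort(key=lambda c: fitness(c, original), reverse=True)
    population = population[:5] + [mutate(random.choice(population[:5]))
                                   for _ in range(5)]

best = max(population, key=lambda c: fitness(c, original))
print(" ".join(best))
```

The key design point carried over from the paper is that the fitness function only needs black-box probability outputs from the adversary's classifier, never its gradients or training data.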
48

Yamashkin, Anatoliy, and Stanislav Yamashkin. "Analysis of the Inerka polygon metageosystems by means of Ensembles of machine learning models." InterCarto. InterGIS 28, no. 1 (2022): 613–28. http://dx.doi.org/10.35595/2414-9179-2022-1-28-613-628.

Abstract:
The article describes a geoinformation algorithm for interpreting Earth remote sensing data based on the Ensemble Learning methodology. The proposed solution can be used to assess the stability of geosystems and predict natural (including exogeodynamic) processes. The novelty of the approach lies in a fundamentally new organization scheme for the metaclassifier as a decision-making unit, as well as in the use of a geosystem approach to preparing data for automated analysis with deep neural network models. The article shows that ensembles built according to the proposed method enable operational automated analysis of spatial data for thematic mapping of metageosystems and natural processes. At the same time, combining models into an ensemble based on the proposed metaclassifier architecture increases the stability of the analyzing system: the accuracy of decisions made by the ensemble tends toward the accuracy of the most efficient monoclassifier in the system. Integrating individual classifiers into ensembles also makes it possible to approach the problem of finding classifier hyperparameters through the combined use of same-type models with different configurations. Forming a metaclassifier according to the proposed algorithm adds an element of predictability and control to the use of neural network models, which are traditionally a “black box”. Mapping of the geosystems of the Inerka test site shows their weak resistance to recreational development. The main limiting factors are the composition of Quaternary deposits, the nature of the relief, the mechanical composition of soils, soil moisture, the thickness of the humus horizon of the soil, and the genesis and composition of the vegetation.
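The metaclassifier-over-monoclassifiers scheme resembles stacking: same-type base models with different configurations are fused by a decision-making unit trained on their outputs. A generic sketch with toy features (not the authors' architecture, which uses deep neural network models on remote-sensing data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy stand-in for per-pixel remote-sensing features and class labels.
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# Same-type base models with different configurations, fused by a
# metaclassifier that learns how much to trust each of them.
ensemble = StackingClassifier(
    estimators=[
        ("mlp_small", MLPClassifier(hidden_layer_sizes=(16,),
                                    max_iter=2000, random_state=0)),
        ("mlp_large", MLPClassifier(hidden_layer_sizes=(64, 32),
                                    max_iter=2000, random_state=0)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
ensemble.fit(Xtr, ytr)
score = ensemble.score(Xte, yte)
print(f"ensemble accuracy: {score:.2f}")
```

Because the metaclassifier weights the base models by their learned reliability, the ensemble's accuracy tends toward that of its strongest member, which is the stability property the abstract highlights.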
49

Siino, Marco, Elisa Di Nuovo, Ilenia Tinnirello, and Marco La Cascia. "Fake News Spreaders Detection: Sometimes Attention Is Not All You Need." Information 13, no. 9 (September 9, 2022): 426. http://dx.doi.org/10.3390/info13090426.

Abstract:
Guided by a corpus linguistics approach, in this article we present a comparative evaluation of State-of-the-Art (SotA) models, with a special focus on Transformers, for the task of detecting Fake News Spreaders (i.e., users who share fake news). First, we explore the reference multilingual dataset for the task using corpus linguistics techniques such as the chi-square test, keywords, and Word Sketch. Second, we perform experiments on several models for Natural Language Processing. Third, we perform a comparative evaluation using the most recent Transformer-based models (RoBERTa, DistilBERT, BERT, XLNet, ELECTRA, Longformer) and other deep and non-deep SotA models (CNN, MultiCNN, Bayes, SVM). The CNN outperforms all the other models tested and, to the best of our knowledge, any existing approach on the same dataset. Fourth, to better understand this result, we conduct a post-hoc analysis to investigate the behaviour of the best-performing black-box model. This study highlights the importance of choosing a suitable classifier for the specific task; to make an educated decision, we propose the use of corpus linguistics techniques. Our results suggest that large pre-trained deep models like Transformers are not necessarily the first choice when addressing a text classification task such as the one presented in this article. All the code developed to run our tests is publicly available on GitHub.
50

Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability." Machine Learning and Knowledge Extraction 3, no. 3 (June 30, 2021): 525–41. http://dx.doi.org/10.3390/make3030027.

Abstract:
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black-box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., a linear classifier) around the prediction: simulated data are generated around the instance by random perturbation, and feature importance is obtained through some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods result in shifts in data and instability in the generated explanations, where different explanations can be generated for the same prediction. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data and K-Nearest Neighbour (KNN) to select the cluster relevant to the new instance being explained. After finding the relevant cluster, a simple model (i.e., a linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively determine the stability and faithfulness of DLIME compared to LIME.
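The DLIME procedure described above (AHC grouping, KNN cluster selection, then a simple surrogate fit on the selected cluster) can be sketched as follows; the choice of two clusters and a linear surrogate over predicted probabilities is an illustrative simplification of the authors' method:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# 1) Group the training data with agglomerative hierarchical clustering.
clusters = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# 2) Use KNN to pick the cluster relevant to the instance being explained.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, clusters)
instance = X[0:1]
members = X[clusters == knn.predict(instance)[0]]

# 3) Fit a simple interpretable surrogate on that cluster only, against the
# black box's predicted probabilities -- deterministic, no random perturbation.
surrogate = LinearRegression().fit(
    members, black_box.predict_proba(members)[:, 1])
importance = np.abs(surrogate.coef_)
top = np.argsort(importance)[::-1][:3]
print("most influential feature indices:", top)
```

Because every step is deterministic given the training data, repeated explanations of the same instance are identical, which is the stability property the paper measures.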