A selection of scholarly literature on the topic "Black-Box Classifier"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Black-Box Classifier".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, where these are available in the metadata.

Journal articles on the topic "Black-Box Classifier"

1. Lee, Hansoo, and Sungshin Kim. "Black-Box Classifier Interpretation Using Decision Tree and Fuzzy Logic-Based Classifier Implementation." International Journal of Fuzzy Logic and Intelligent Systems 16, no. 1 (March 31, 2016): 27–35. http://dx.doi.org/10.5391/ijfis.2016.16.1.27.

2. Rajabi, Arezoo, Mahdieh Abbasi, Rakesh B. Bobba, and Kimia Tajik. "Adversarial Images Against Super-Resolution Convolutional Neural Networks for Free." Proceedings on Privacy Enhancing Technologies 2022, no. 3 (July 2022): 120–39. http://dx.doi.org/10.56553/popets-2022-0065.

Abstract:
Super-Resolution Convolutional Neural Networks (SRCNNs), with their ability to generate high-resolution images from low-resolution counterparts, exacerbate the privacy concerns emerging from automated Convolutional Neural Networks (CNNs)-based image classifiers. In this work, we hypothesize and empirically show that adversarial examples learned over CNN image classifiers can survive processing by SRCNNs and lead them to generate poor quality images that are hard to classify correctly. We demonstrate that a user with a small CNN is able to learn adversarial noise without requiring any customization for SRCNNs and thwart the privacy threat posed by a pipeline of SRCNN and CNN classifiers (95.8% fooling rate for Fast Gradient Sign with ε = 0.03). We evaluate the survivability of adversarial images generated in both black-box and white-box settings and show that black-box adversarial learning (when both CNN classifier and SRCNN are unknown) is at least as effective as white-box adversarial learning (when only CNN classifier is known). We also assess our hypothesis on adversarial robust CNNs and observe that the super-resolved white-box adversarial examples can fool these CNNs more than 71.5% of the time.
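
As a point of reference for the Fast Gradient Sign setting quoted above (ε = 0.03), here is a minimal, illustrative PyTorch sketch of the generic FGSM perturbation; `model`, `x`, and `y` are placeholders, and nothing here reproduces the authors' SRCNN pipeline.

```python
# Generic FGSM sketch (illustrative; not the authors' attack implementation).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Return x + eps * sign(grad_x loss), clipped to the valid image range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```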

3. Ji, Disi, Robert L. Logan, Padhraic Smyth, and Mark Steyvers. "Active Bayesian Assessment of Black-Box Classifiers." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7935–44. http://dx.doi.org/10.1609/aaai.v35i9.16968.

Abstract:
Recent advances in machine learning have led to increased deployment of black-box classifiers across a wide variety of applications. In many such situations there is a critical need to both reliably assess the performance of these pre-trained models and to perform this assessment in a label-efficient manner (given that labels may be scarce and costly to collect). In this paper, we introduce an active Bayesian approach for assessment of classifier performance to satisfy the desiderata of both reliability and label-efficiency. We begin by developing inference strategies to quantify uncertainty for common assessment metrics such as accuracy, misclassification cost, and calibration error. We then propose a general framework for active Bayesian assessment using inferred uncertainty to guide efficient selection of instances for labeling, enabling better performance assessment with fewer labels. We demonstrate significant gains from our proposed active Bayesian approach via a series of systematic empirical experiments assessing the performance of modern neural classifiers (e.g., ResNet and BERT) on several standard image and text classification datasets.
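
For intuition only, a minimal Beta-Bernoulli sketch of label-efficient accuracy assessment is shown below; the paper's framework covers more metrics (misclassification cost, calibration error) and an active selection strategy that this toy example does not reproduce.

```python
# Toy Bayesian assessment of a black-box classifier's accuracy (illustrative only).
import numpy as np

def accuracy_posterior(n_correct, n_labeled, a0=1.0, b0=1.0):
    """Beta posterior parameters over the unknown accuracy."""
    return a0 + n_correct, b0 + (n_labeled - n_correct)

def credible_interval(a, b, level=0.95, n_draws=100_000, seed=0):
    draws = np.random.default_rng(seed).beta(a, b, size=n_draws)
    half = (1.0 - level) / 2.0
    return np.quantile(draws, [half, 1.0 - half])

# Example: 33 of 40 queried labels agreed with the model's predictions.
a, b = accuracy_posterior(33, 40)
print(credible_interval(a, b))  # roughly (0.68, 0.91) for this toy setting
```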

4. Tran, Thien Q., Kazuto Fukuchi, Youhei Akimoto, and Jun Sakuma. "Unsupervised Causal Binary Concepts Discovery with VAE for Black-Box Model Explanation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9614–22. http://dx.doi.org/10.1609/aaai.v36i9.21195.

Abstract:
We aim to explain a black-box classifier with the form: "data X is classified as class Y because X has A, B and does not have C" in which A, B, and C are high-level concepts. The challenge is that we have to discover in an unsupervised manner a set of concepts, i.e., A, B and C, that is useful for explaining the classifier. We first introduce a structural generative model that is suitable to express and discover such concepts. We then propose a learning process that simultaneously learns the data distribution and encourages certain concepts to have a large causal influence on the classifier output. Our method also allows easy integration of user's prior knowledge to induce high interpretability of concepts. Finally, using multiple datasets, we demonstrate that the proposed method can discover useful concepts for explanation in this form.

5. Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks." Applied Sciences 10, no. 20 (October 14, 2020): 7168. http://dx.doi.org/10.3390/app10207168.

Abstract:
Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white box attacks. In black-box environments, adversaries have limited knowledge regarding the target model. This makes it difficult to estimate gradients for crafting adversarial examples, such that powerful white-box algorithms cannot be directly applied to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craft adversarial examples using the substitute models instead of the unknown target model. The substitute models repeat the query process and are trained by observing labels from the target model’s responses to queries. However, emulating a target model usually requires numerous queries because new DNNs are trained from the beginning. In this study, we propose a new training method for substitute models to minimize the number of queries. We consider the number of queries as an important factor for practical black-box attacks because real-world systems often restrict queries for security and financial purposes. To decrease the number of queries, the proposed method does not emulate the entire target model and only adjusts the partial classification boundary based on a current attack. Furthermore, it does not use queries in the pre-training phase and creates queries only in the retraining phase. The experimental results indicate that the proposed method is effective in terms of the number of queries and attack success ratio against MNIST, VGGFace2, and ImageNet classifiers in query-limited black-box environments. Further, we demonstrate a black-box attack against a commercial classifier, Google AutoML Vision.
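
To make the general substitute-model strategy described above concrete, here is a heavily simplified, hypothetical PyTorch sketch: the target is touched only through its label responses, and the local copy is then attacked with white-box methods. The paper's actual contribution (partial boundary retraining under a constrained query budget) is not reproduced here.

```python
# Simplified substitute-model training loop (illustrative; placeholder models).
import torch
import torch.nn.functional as F

def train_substitute(target_labels_fn, substitute, optimizer, queries, epochs=10):
    """Fit a local substitute on labels observed from the black-box target."""
    with torch.no_grad():
        labels = target_labels_fn(queries)      # the only access to the target
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = F.cross_entropy(substitute(queries), labels)
        loss.backward()
        optimizer.step()
    return substitute  # white-box attacks (e.g. FGSM) are then crafted on this copy
```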

6. Lou, Chenlu, and Xiang Pan. "Detect Black Box Signals with Enhanced Spectrum and Support Vector Classifier." Journal of Physics: Conference Series 1438 (January 2020): 012003. http://dx.doi.org/10.1088/1742-6596/1438/1/012003.

7. Chen, Pengpeng, Hailong Sun, Yongqiang Yang, and Zhijun Chen. "Adversarial Learning from Crowds." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5304–12. http://dx.doi.org/10.1609/aaai.v36i5.20467.

Abstract:
Learning from Crowds (LFC) seeks to induce a high-quality classifier from training instances, which are linked to a range of possible noisy annotations from crowdsourcing workers under their various levels of skills and their own preconditions. Recent studies on LFC focus on designing new methods to improve the performance of the classifier trained from crowdsourced labeled data. To this day, however, there remain under-explored security aspects of LFC systems. In this work, we seek to bridge this gap. We first show that LFC models are vulnerable to adversarial examples---small changes to input data can cause classifiers to make prediction mistakes. Second, we propose an approach, A-LFC for training a robust classifier from crowdsourced labeled data. Our empirical results on three real-world datasets show that the proposed approach can substantially improve the performance of the trained classifier even with the existence of adversarial examples. On average, A-LFC has 10.05% and 11.34% higher test robustness than the state-of-the-art in the white-box and black-box attack settings, respectively.

8. Mahmood, Kaleel, Deniz Gurevin, Marten van Dijk, and Phuoung Ha Nguyen. "Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples." Entropy 23, no. 10 (October 18, 2021): 1359. http://dx.doi.org/10.3390/e23101359.

Abstract:
Many defenses have recently been proposed at venues like NIPS, ICML, ICLR and CVPR. These defenses are mainly focused on mitigating white-box attacks. They do not properly examine black-box attacks. In this paper, we expand upon the analyses of these defenses to include adaptive black-box adversaries. Our evaluation is done on nine defenses including Barrage of Random Transforms, ComDefend, Ensemble Diversity, Feature Distillation, The Odds are Odd, Error Correcting Codes, Distribution Classifier Defense, K-Winner Take All and Buffer Zones. Our investigation is done using two black-box adversarial models and six widely studied adversarial attacks for CIFAR-10 and Fashion-MNIST datasets. Our analyses show most recent defenses (7 out of 9) provide only marginal improvements in security (<25%), as compared to undefended networks. For every defense, we also show the relationship between the amount of data the adversary has at their disposal, and the effectiveness of adaptive black-box attacks. Overall, our results paint a clear picture: defenses need both thorough white-box and black-box analyses to be considered secure. We provide this large scale study and analyses to motivate the field to move towards the development of more robust black-box defenses.

9. Hartono, Pitoyo. "A transparent cancer classifier." Health Informatics Journal 26, no. 1 (December 31, 2018): 190–204. http://dx.doi.org/10.1177/1460458218817800.

Abstract:
Recently, many neural network models have been successfully applied for histopathological analysis, including for cancer classifications. While some of them reach human-expert-level accuracy in classifying cancers, most of them have to be treated as black boxes, as they do not offer an explanation of how they arrived at their decisions. This lack of transparency may hinder the further applications of neural networks in realistic clinical settings where not only the decision but also explainability is important. This study proposes a transparent neural network that complements its classification decisions with visual information about the given problem. The auxiliary visual information allows the user to some extent understand how the neural network arrives at its decision. The transparency potentially increases the usability of neural networks in realistic histopathological analysis. In the experiment, the accuracy of the proposed neural network is compared against some existing classifiers, and the visual information is compared against some dimensional reduction methods.

10. Masuda, Haruki, Tsunato Nakai, Kota Yoshida, Takaya Kubota, Mitsuru Shiozaki, and Takeshi Fujino. "Black-Box Adversarial Attack against Deep Neural Network Classifier Utilizing Quantized Probability Output." Journal of Signal Processing 24, no. 4 (July 15, 2020): 145–48. http://dx.doi.org/10.2299/jsp.24.145.


Dissertations on the topic "Black-Box Classifier"

1. Mena, Roldán José. "Modelling Uncertainty in Black-box Classification Systems." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670763.

Abstract:
Currently, thanks to the Big Data boom, the excellent results obtained by deep learning models and the strong digital transformation experienced over the last years, many companies have decided to incorporate machine learning models into their systems. Some companies have detected this opportunity and are making a portfolio of artificial intelligence services available to third parties in the form of application programming interfaces (APIs). Subsequently, developers include calls to these APIs to incorporate AI functionalities in their products. Although it is an option that saves time and resources, it is true that, in most cases, these APIs are delivered in the form of black boxes, the details of which are unknown to their clients. The complexity of such products typically leads to a lack of control and knowledge of the internal components, which, in turn, can lead to potential uncontrolled risks. Therefore, it is necessary to develop methods capable of evaluating the performance of these black boxes when applied to a specific application. In this work, we present a robust uncertainty-based method for evaluating the performance of both probabilistic and categorical classification black-box models, in particular APIs, that enriches the predictions obtained with an uncertainty score. This uncertainty score enables the detection of inputs with very confident but erroneous predictions while protecting against out of distribution data points when deploying the model in a productive setting. In the first part of the thesis, we develop a thorough revision of the concept of uncertainty, focusing on the uncertainty of classification systems. We review the existing related literature, describing the different approaches for modelling this uncertainty, its application to different use cases and some of its desirable properties. Next, we introduce the proposed method for modelling uncertainty in black-box settings. Moreover, in the last chapters of the thesis, we showcase the method applied to different domains, including NLP and computer vision problems. Finally, we include two real-life applications of the method: classification of overqualification in job descriptions and readability assessment of texts.
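
As a very rough illustration of the kind of score the thesis argues for, one can attach a predictive-entropy value to each probability vector returned by a black-box API; this sketch assumes the API exposes class probabilities and is not the thesis' actual uncertainty model.

```python
# Predictive entropy as a simple uncertainty score for black-box API outputs.
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0)
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

score = predictive_entropy([0.02, 0.95, 0.03])   # low entropy: confident prediction
needs_review = score > 0.5                       # threshold tuned on validation data
```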

2. Olofsson, Nina. "A Machine Learning Ensemble Approach to Churn Prediction : Developing and Comparing Local Explanation Models on Top of a Black-Box Classifier." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210565.

Abstract:
Churn prediction methods are widely used in Customer Relationship Management and have proven to be valuable for retaining customers. To obtain a high predictive performance, recent studies rely on increasingly complex machine learning methods, such as ensemble or hybrid models. However, the more complex a model is, the more difficult it becomes to understand how decisions are actually made. Previous studies on machine learning interpretability have used a global perspective for understanding black-box models. This study explores the use of local explanation models for explaining the individual predictions of a Random Forest ensemble model. The churn prediction was studied on the users of Tink – a finance app. This thesis aims to take local explanations one step further by making comparisons between churn indicators of different user groups. Three sets of groups were created based on differences in three user features. The importance scores of all globally found churn indicators were then computed for each group with the help of local explanation models. The results showed that the groups did not have any significant differences regarding the globally most important churn indicators. Instead, differences were found for globally less important churn indicators, concerning the type of information that users stored in the app. In addition to comparing churn indicators between user groups, the result of this study was a well-performing Random Forest ensemble model with the ability of explaining the reason behind churn predictions for individual users. The model proved to be significantly better than a number of simpler models, with an average AUC of 0.93.
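
A minimal sketch of the kind of local explanation used in the thesis, assuming a tabular feature matrix `X_train`, labels `y_train`, a list `feature_names`, and a single `user_row` (all hypothetical); the thesis' grouping and comparison of churn indicators is not reproduced here.

```python
# Local explanation of a Random Forest churn model with LIME (illustrative sketch).
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                 class_names=["stays", "churns"], mode="classification")
exp = explainer.explain_instance(user_row, clf.predict_proba, num_features=10)
print(exp.as_list())   # per-user churn indicators and their local weights
```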

3. Neves, Maria Inês Lourenço das. "Opening the black-box of artificial intelligence predictions on clinical decision support systems." Master's thesis, 2021. http://hdl.handle.net/10362/126699.

Abstract:
Cardiovascular diseases are the leading global death cause. Their treatment and prevention rely on electrocardiogram interpretation, which is dependent on the physician’s variability. Subjectiveness is intrinsic to electrocardiogram interpretation and hence, prone to errors. To assist physicians in making precise and thoughtful decisions, artificial intelligence is being deployed to develop models that can interpret extensive datasets and provide accurate decisions. However, the lack of interpretability of most machine learning models stands as one of the drawbacks of their deployment, particularly in the medical domain. Furthermore, most of the currently deployed explainable artificial intelligence methods assume independence between features, which means temporal independence when dealing with time series. The inherent characteristic of time series cannot be ignored as it carries importance for the human decision making process. This dissertation focuses on the explanation of heartbeat classification using several adaptations of state-of-the-art model-agnostic methods, to locally explain time series classification. To address the explanation of time series classifiers, a preliminary conceptual framework is proposed, and the use of the derivative is suggested as a complement to add temporal dependency between samples. The results were validated on an extensive public dataset, first through the 1-D Jaccard’s index, which consists of the comparison of the subsequences extracted from an interpretable model and the explanation methods used, and secondly through the decrease in performance, to evaluate whether the explanation fits the model’s behaviour. To assess models with distinct internal logic, the validation was conducted on a more transparent model and a more opaque one in both binary and multiclass situations. The results show the promising use of including the signal’s derivative to introduce temporal dependency between samples in the explanations, for models with simpler internal logic.
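
One plausible reading of the 1-D Jaccard index mentioned above, sketched as set overlap between the time indices flagged by two explanations; this is an illustration, not the dissertation's exact definition.

```python
# Jaccard overlap between two explanations expressed as sets of time indices.
def jaccard_1d(indices_a, indices_b):
    a, b = set(indices_a), set(indices_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Samples 10-19 flagged by the interpretable model vs. 15-24 by the explainer:
print(jaccard_1d(range(10, 20), range(15, 25)))   # 5 / 15 = 0.33...
```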

Book chapters on the topic "Black-Box Classifier"

1. Liu, Xinghan, and Emiliano Lorini. "A Logic of “Black Box” Classifier Systems." In Logic, Language, Information, and Computation, 158–74. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15298-6_10.

2. Ali, Abdullah, and Birhanu Eshete. "Best-Effort Adversarial Approximation of Black-Box Malware Classifiers." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 318–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63086-7_18.

3. Panigutti, Cecilia, Riccardo Guidotti, Anna Monreale, and Dino Pedreschi. "Explaining Multi-label Black-Box Classifiers for Health Applications." In Precision Health and Medicine, 97–110. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-24409-5_9.

4. Jung, Hyungsik, Youngrock Oh, Jeonghyung Park, and Min Soo Kim. "Jointly Optimize Positive and Negative Saliencies for Black Box Classifiers." In Pattern Recognition. ICPR International Workshops and Challenges, 76–89. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68796-0_6.

5. Lampridis, Orestis, Riccardo Guidotti, and Salvatore Ruggieri. "Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars." In Discovery Science, 357–73. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61527-7_24.

Abstract:
We present xspells, a model-agnostic local approach for explaining the decisions of a black box model for sentiment classification of short texts. The explanations provided consist of a set of exemplar sentences and a set of counter-exemplar sentences. The former are examples classified by the black box with the same label as the text to explain. The latter are examples classified with a different label (a form of counter-factuals). Both are close in meaning to the text to explain, and both are meaningful sentences – albeit they are synthetically generated. xspells generates neighbors of the text to explain in a latent space using Variational Autoencoders for encoding text and decoding latent instances. A decision tree is learned from randomly generated neighbors, and used to drive the selection of the exemplars and counter-exemplars. We report experiments on two datasets showing that xspells outperforms the well-known lime method in terms of quality of explanations, fidelity, and usefulness, and that it is comparable to it in terms of stability.

6. Vijayaraghavan, Prashanth, and Deb Roy. "Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model." In Machine Learning and Knowledge Discovery in Databases, 711–26. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46147-8_43.

7. Brandl, Julius, Nicolas Breinl, Maximilian Demmler, Lukas Hartmann, Jörg Hähner, and Anthony Stein. "Reducing Search Space of Genetic Algorithms for Fast Black Box Attacks on Image Classifiers." In KI 2019: Advances in Artificial Intelligence, 115–22. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30179-8_9.

8. Rosenberg, Ishai, Asaf Shabtai, Lior Rokach, and Yuval Elovici. "Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers." In Research in Attacks, Intrusions, and Defenses, 490–510. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00470-5_23.

9. Uzunova, Hristina, Jan Ehrhardt, Timo Kepp, and Heinz Handels. "Abstract: Interpretable Explanations of Black Box Classifiers Applied on Medical Images by Meaningful Perturbations Using Variational Autoencoders." In Informatik aktuell, 197. Wiesbaden: Springer Fachmedien Wiesbaden, 2019. http://dx.doi.org/10.1007/978-3-658-25326-4_42.

10. Rabold, Johannes, Michael Siebers, and Ute Schmid. "Explaining Black-Box Classifiers with ILP – Empowering LIME with Aleph to Approximate Non-linear Decisions with Relational Rules." In Inductive Logic Programming, 105–17. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99960-9_7.


Conference papers on the topic "Black-Box Classifier"

1. Alufaisan, Yasmeen, Murat Kantarcioglu, and Yan Zhou. "Detecting Discrimination in a Black-Box Classifier." In 2016 IEEE 2nd International Conference on Collaboration and Internet Computing (CIC). IEEE, 2016. http://dx.doi.org/10.1109/cic.2016.051.

2. Gitiaux, Xavier, and Huzefa Rangwala. "mdfa: Multi-Differential Fairness Auditor for Black Box Classifiers." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/814.

Abstract:
Machine learning algorithms are increasingly involved in sensitive decision-making processes with adversarial implications on individuals. This paper presents a new tool, mdfa, that identifies the characteristics of the victims of a classifier's discrimination. We measure discrimination as a violation of multi-differential fairness. Multi-differential fairness is a guarantee that a black box classifier's outcomes do not leak information on the sensitive attributes of a small group of individuals. We reduce the problem of identifying worst-case violations to matching distributions and predicting where sensitive attributes and classifier's outcomes coincide. We apply mdfa to a recidivism risk assessment classifier widely used in the United States and demonstrate that for individuals with little criminal history, identified African-Americans are three times more likely to be considered at high risk of violent recidivism than similar non-African-Americans.

3. Lam, Jonathan, Pengrui Quan, Jiamin Xu, Jeya Vikranth Jeyakumar, and Mani Srivastava. "Hard-Label Black-Box Adversarial Attack on Deep Electrocardiogram Classifier." In SenSys '20: The 18th ACM Conference on Embedded Networked Sensor Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3417312.3431827.

4. Yu, Jia ao, and Lei Peng. "Black-box Attacks on DNN Classifier Based on Fuzzy Adversarial Examples." In 2020 IEEE 5th International Conference on Signal and Image Processing (ICSIP). IEEE, 2020. http://dx.doi.org/10.1109/icsip49896.2020.9339329.

5. Santos, Samara Silva, Marcos Antonio Alves, Leonardo Augusto Ferreira, and Frederico Gadelha Guimarães. "PDTX: A novel local explainer based on the Perceptron Decision Tree." In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-50.

Abstract:
Artificial Intelligence (AI) approaches that achieve good results and generalization are often opaque models, and the decision-maker has no clear explanation about the final classification. As a result, there is an increasing demand for Explainable AI (XAI) models, whose main goal is to provide understandable solutions for human beings and to elucidate the relationship between the features and the black-box model. In this paper, we introduce a novel explainer method, named PDTX, based on the Perceptron Decision Tree (PDT). The evolutionary algorithm jSO is employed to fit the weights of the PDT to approximate the predictions of the black-box model. Then, it is possible to extract valuable information that explains the behavior of the machine learning method. The PDTX was tested on 10 different datasets from a public repository as an explainer for three classifiers: Multi-Layer Perceptron, Random Forest and Support Vector Machine. Decision-Tree and LIME were used as baselines for comparison. The results showed promising performance in the majority of the experiments, achieving an average accuracy of 87.34%, against 64.23% from DT and 37.44% from LIME. PDTX can be used to explain black-box classifiers for local instances and is model-agnostic.

6. Oliveira-Junior, Robinson A. A. de. "Credit scoring development in the light of the new Brazilian General Data Protection Law." In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/kdmile.2021.17462.

Abstract:
With the advent of the new Brazilian General Data Protection Law (LGPD), which establishes the right to an explanation of automated decisions, the use of models that are not interpretable by human beings, known as black boxes, for the purposes of credit risk assessment may remain unfeasible. Thus, three different methods commonly applied to credit scoring – logistic regression, decision tree, and support vector machine (SVM) – were adjusted to an anonymized sample of a consumer credit portfolio from a credit union. Their results were compared and the adequacy of the explanation achieved for each classifier was assessed. Particularly for the SVM, which generated a black box model, a local interpretation method – the SHapley Additive exPlanation (SHAP) – was incorporated, enabling this machine learning classifier to fulfill the requirements imposed by the new LGPD, in equivalence to the inherent comprehensibility of the white box models.
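
For orientation, a hedged sketch of model-agnostic SHAP values for an SVM scorer follows; `X_train`, `y_train`, and `X_test` are assumed to exist, and the paper's actual credit-scoring setup and feature semantics are not reproduced.

```python
# KernelSHAP explanations for an SVM credit-scoring model (illustrative sketch).
import shap
from sklearn.svm import SVC

svm = SVC(probability=True).fit(X_train, y_train)
background = shap.sample(X_train, 100)                 # summarize the training data
explainer = shap.KernelExplainer(svm.predict_proba, background)
shap_values = explainer.shap_values(X_test[:5], nsamples=200)
# per-feature contributions to each applicant's predicted class probabilities
# (list vs. array layout of shap_values depends on the shap version)
```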

7. Sooksatra, Korn, Pablo Rivas, and Bikram Khanal. "On Adversarial Examples for Text Classification By Perturbing Latent Representations." In LatinX in AI at Neural Information Processing Systems Conference 2022. Journal of LatinX in AI Research, 2022. http://dx.doi.org/10.52591/lxai202211284.

Abstract:
Recently, with the advancement of deep learning, several applications in text classification have advanced significantly. However, this improvement comes with a cost because deep learning is vulnerable to adversarial examples. This weakness indicates that deep learning is not very robust. Fortunately, the input of a text classifier is discrete. Hence, this can protect the classifier from state-of-the-art attacks. Nonetheless, previous works have generated black-box attacks that successfully manipulate the discrete values of the input to find adversarial examples. Therefore, instead of changing the discrete values, we transform the input into its embedding vector containing real values to perform the state-of-the-art white-box attacks. Then, we convert the perturbed embedding vector back into a text and name it an adversarial example. In summary, we create a framework that measures the robustness of a text classifier by using the gradients of the classifier.
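
A rough sketch of the "perturb the embedding, then map back to tokens" idea described above; the embedding matrix, gradient, and tokenization are placeholders, and this is not the authors' framework.

```python
# Perturb token embeddings with a white-box step, then project back to real tokens.
import torch

def embedding_attack_step(embedding_matrix, token_ids, grad_wrt_embeddings, eps=0.5):
    """embedding_matrix: (vocab, dim); token_ids: (seq,); grad: (seq, dim)."""
    perturbed = embedding_matrix[token_ids] + eps * grad_wrt_embeddings.sign()
    dists = torch.cdist(perturbed, embedding_matrix)     # (seq, vocab)
    return dists.argmin(dim=1)                           # nearest-token adversarial text
```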

8. Iwasawa, Yusuke, Kotaro Nakayama, Ikuko Yairi, and Yutaka Matsuo. "Privacy Issues Regarding the Application of DNNs to Activity-Recognition using Wearables and Its Countermeasures by Use of Adversarial Training." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/268.

Abstract:
Deep neural networks have been successfully applied to activity recognition with wearables in terms of recognition performance. However, the black-box nature of neural networks could lead to privacy concerns. Namely, it is generally hard to anticipate what neural networks learn from data, so they may unintentionally learn features that are highly discriminative of user information, which increases the risk of information disclosure. In this study, we analyzed the features learned by conventional deep neural networks when applied to data of wearables to confirm this phenomenon. Based on the results of our analysis, we propose the use of an adversarial training framework to suppress the risk of sensitive/unintended information disclosure. Our proposed model considers both an adversarial user classifier and a regular activity-classifier during training, which allows the model to learn representations that help the classifier to distinguish the activities but which, at the same time, prevents it from accessing user-discriminative information. This paper provides an empirical validation of the privacy issue and efficacy of the proposed method using three activity recognition tasks based on data of wearables. The empirical validation shows that our proposed method suppresses the concerns without any significant performance degradation, compared to conventional deep nets on all three tasks.

9. Radulovic, Nedeljko, Albert Bifet, and Fabian Suchanek. "Confident Interpretations of Black Box Classifiers." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9534234.

10. Zhang, Yu, Kun Shao, Junan Yang, and Hui Liu. "Black-Box Universal Adversarial Attack on Text Classifiers." In 2021 2nd Asia Conference on Computers and Communications (ACCC). IEEE, 2021. http://dx.doi.org/10.1109/accc54619.2021.00007.
