A selection of scholarly literature on the topic "Black-Box Classifier"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Black-Box Classifier".
Next to each work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in your preferred citation style: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a .pdf file and read its abstract online, provided the corresponding details are available in the metadata.
Journal articles on the topic "Black-Box Classifier"
Lee, Hansoo, and Sungshin Kim. "Black-Box Classifier Interpretation Using Decision Tree and Fuzzy Logic-Based Classifier Implementation." International Journal of Fuzzy Logic and Intelligent Systems 16, no. 1 (March 31, 2016): 27–35. http://dx.doi.org/10.5391/ijfis.2016.16.1.27.
Rajabi, Arezoo, Mahdieh Abbasi, Rakesh B. Bobba, and Kimia Tajik. "Adversarial Images Against Super-Resolution Convolutional Neural Networks for Free." Proceedings on Privacy Enhancing Technologies 2022, no. 3 (July 2022): 120–39. http://dx.doi.org/10.56553/popets-2022-0065.
Ji, Disi, Robert L. Logan, Padhraic Smyth, and Mark Steyvers. "Active Bayesian Assessment of Black-Box Classifiers." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7935–44. http://dx.doi.org/10.1609/aaai.v35i9.16968.
Tran, Thien Q., Kazuto Fukuchi, Youhei Akimoto, and Jun Sakuma. "Unsupervised Causal Binary Concepts Discovery with VAE for Black-Box Model Explanation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9614–22. http://dx.doi.org/10.1609/aaai.v36i9.21195.
Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks." Applied Sciences 10, no. 20 (October 14, 2020): 7168. http://dx.doi.org/10.3390/app10207168.
Lou, Chenlu, and Xiang Pan. "Detect Black Box Signals with Enhanced Spectrum and Support Vector Classifier." Journal of Physics: Conference Series 1438 (January 2020): 012003. http://dx.doi.org/10.1088/1742-6596/1438/1/012003.
Chen, Pengpeng, Hailong Sun, Yongqiang Yang, and Zhijun Chen. "Adversarial Learning from Crowds." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5304–12. http://dx.doi.org/10.1609/aaai.v36i5.20467.
Mahmood, Kaleel, Deniz Gurevin, Marten van Dijk, and Phuoung Ha Nguyen. "Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples." Entropy 23, no. 10 (October 18, 2021): 1359. http://dx.doi.org/10.3390/e23101359.
Hartono, Pitoyo. "A transparent cancer classifier." Health Informatics Journal 26, no. 1 (December 31, 2018): 190–204. http://dx.doi.org/10.1177/1460458218817800.
Masuda, Haruki, Tsunato Nakai, Kota Yoshida, Takaya Kubota, Mitsuru Shiozaki, and Takeshi Fujino. "Black-Box Adversarial Attack against Deep Neural Network Classifier Utilizing Quantized Probability Output." Journal of Signal Processing 24, no. 4 (July 15, 2020): 145–48. http://dx.doi.org/10.2299/jsp.24.145.
Повний текст джерелаДисертації з теми "Black-Box Classifier"
Mena, Roldán José. "Modelling Uncertainty in Black-box Classification Systems." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670763.
Повний текст джерелаLa tesis propone un método para el cálculo de la incertidumbre asociada a las predicciones de APIs o librerías externas de sistemas de clasificación.
Olofsson, Nina. "A Machine Learning Ensemble Approach to Churn Prediction : Developing and Comparing Local Explanation Models on Top of a Black-Box Classifier." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210565.
Повний текст джерелаMetoder för att prediktera utträde är vanliga inom Customer Relationship Management och har visat sig vara värdefulla när det kommer till att behålla kunder. För att kunna prediktera utträde med så hög säkerhet som möjligt har den senasteforskningen fokuserat på alltmer komplexa maskininlärningsmodeller, såsom ensembler och hybridmodeller. En konsekvens av att ha alltmer komplexa modellerär dock att det blir svårare och svårare att förstå hur en viss modell har kommitfram till ett visst beslut. Tidigare studier inom maskininlärningsinterpretering har haft ett globalt perspektiv för att förklara svårförståeliga modeller. Denna studieutforskar lokala förklaringsmodeller för att förklara individuella beslut av en ensemblemodell känd som 'Random Forest'. Prediktionen av utträde studeras påanvändarna av Tink – en finansapp. Syftet med denna studie är att ta lokala förklaringsmodeller ett steg längre genomatt göra jämförelser av indikatorer för utträde mellan olika användargrupper. Totalt undersöktes tre par av grupper som påvisade skillnader i tre olika variabler. Sedan användes lokala förklaringsmodeller till att beräkna hur viktiga alla globaltfunna indikatorer för utträde var för respektive grupp. Resultaten visade att detinte fanns några signifikanta skillnader mellan grupperna gällande huvudindikatorerna för utträde. Istället visade resultaten skillnader i mindre viktiga indikatorer som hade att göra med den typ av information som lagras av användarna i appen. Förutom att undersöka skillnader i indikatorer för utträde resulterade dennastudie i en välfungerande modell för att prediktera utträde med förmågan attförklara individuella beslut. Random Forest-modellen visade sig vara signifikantbättre än ett antal enklare modeller, med ett AUC-värde på 0.93.
Neves, Maria Inês Lourenço das. "Opening the black-box of artificial intelligence predictions on clinical decision support systems." Master's thesis, 2021. http://hdl.handle.net/10362/126699.
Повний текст джерелаAs doenças cardiovasculares são, a nível mundial, a principal causa de morte e o seu tratamento e prevenção baseiam-se na interpretação do electrocardiograma. A interpretação do electrocardiograma, feita por médicos, é intrinsecamente subjectiva e, portanto, sujeita a erros. De modo a apoiar a decisão dos médicos, a inteligência artificial está a ser usada para desenvolver modelos com a capacidade de interpretar extensos conjuntos de dados e fornecer decisões precisas. No entanto, a falta de interpretabilidade da maioria dos modelos de aprendizagem automática é uma das desvantagens do recurso à mesma, principalmente em contexto clínico. Adicionalmente, a maioria dos métodos inteligência artifical explicável assumem independência entre amostras, o que implica a assunção de independência temporal ao lidar com séries temporais. A característica inerente das séries temporais não pode ser ignorada, uma vez que apresenta importância para o processo de tomada de decisão humana. Esta dissertação baseia-se em inteligência artificial explicável para tornar inteligível a classificação de batimentos cardíacos, através da utilização de várias adaptações de métodos agnósticos do estado-da-arte. Para abordar a explicação dos classificadores de séries temporais, propõe-se uma taxonomia preliminar, e o uso da derivada como um complemento para adicionar dependência temporal entre as amostras. Os resultados foram validados para um conjunto extenso de dados públicos, por meio do índice de Jaccard em 1-D, com a comparação das subsequências extraídas de um modelo interpretável e os métodos inteligência artificial explicável utilizados, e a análise de qualidade, para avaliar se a explicação se adequa ao comportamento do modelo. 
De modo a avaliar modelos com lógicas internas distintas, a validação foi realizada usando, por um lado, um modelo mais transparente e, por outro, um mais opaco, tanto numa situação de classificação binária como numa situação de classificação multiclasse. Os resultados mostram o uso promissor da inclusão da derivada do sinal para introduzir dependência temporal entre as amostras nas explicações fornecidas, para modelos com lógica interna mais simples.
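The 1-D Jaccard index mentioned above compares subsequences highlighted by different explainers. A minimal sketch for interval-shaped relevance regions (the exact formulation in the dissertation may differ):

```python
def jaccard_1d(a, b):
    """Jaccard index between two half-open index intervals [start, end),
    e.g. two subsequences of a heartbeat flagged as relevant by two
    different explanation methods."""
    (a0, a1), (b0, b1) = a, b
    inter = max(0, min(a1, b1) - max(a0, b0))  # overlap length
    union = (a1 - a0) + (b1 - b0) - inter      # combined length
    return inter / union if union else 0.0

print(round(jaccard_1d((0, 10), (5, 15)), 3))  # 5 / 15 -> 0.333
```

A value of 1.0 means the two explanations highlight exactly the same region; 0.0 means they do not overlap at all.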
Book chapters on the topic "Black-Box Classifier"
Liu, Xinghan, and Emiliano Lorini. "A Logic of “Black Box” Classifier Systems." In Logic, Language, Information, and Computation, 158–74. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15298-6_10.
Ali, Abdullah, and Birhanu Eshete. "Best-Effort Adversarial Approximation of Black-Box Malware Classifiers." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 318–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63086-7_18.
Panigutti, Cecilia, Riccardo Guidotti, Anna Monreale, and Dino Pedreschi. "Explaining Multi-label Black-Box Classifiers for Health Applications." In Precision Health and Medicine, 97–110. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-24409-5_9.
Jung, Hyungsik, Youngrock Oh, Jeonghyung Park, and Min Soo Kim. "Jointly Optimize Positive and Negative Saliencies for Black Box Classifiers." In Pattern Recognition. ICPR International Workshops and Challenges, 76–89. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68796-0_6.
Lampridis, Orestis, Riccardo Guidotti, and Salvatore Ruggieri. "Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars." In Discovery Science, 357–73. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61527-7_24.
Vijayaraghavan, Prashanth, and Deb Roy. "Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model." In Machine Learning and Knowledge Discovery in Databases, 711–26. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46147-8_43.
Brandl, Julius, Nicolas Breinl, Maximilian Demmler, Lukas Hartmann, Jörg Hähner, and Anthony Stein. "Reducing Search Space of Genetic Algorithms for Fast Black Box Attacks on Image Classifiers." In KI 2019: Advances in Artificial Intelligence, 115–22. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30179-8_9.
Rosenberg, Ishai, Asaf Shabtai, Lior Rokach, and Yuval Elovici. "Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers." In Research in Attacks, Intrusions, and Defenses, 490–510. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00470-5_23.
Uzunova, Hristina, Jan Ehrhardt, Timo Kepp, and Heinz Handels. "Abstract: Interpretable Explanations of Black Box Classifiers Applied on Medical Images by Meaningful Perturbations Using Variational Autoencoders." In Informatik aktuell, 197. Wiesbaden: Springer Fachmedien Wiesbaden, 2019. http://dx.doi.org/10.1007/978-3-658-25326-4_42.
Rabold, Johannes, Michael Siebers, and Ute Schmid. "Explaining Black-Box Classifiers with ILP – Empowering LIME with Aleph to Approximate Non-linear Decisions with Relational Rules." In Inductive Logic Programming, 105–17. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99960-9_7.
Повний текст джерелаТези доповідей конференцій з теми "Black-Box Classifier"
Alufaisan, Yasmeen, Murat Kantarcioglu, and Yan Zhou. "Detecting Discrimination in a Black-Box Classifier." In 2016 IEEE 2nd International Conference on Collaboration and Internet Computing (CIC). IEEE, 2016. http://dx.doi.org/10.1109/cic.2016.051.
Gitiaux, Xavier, and Huzefa Rangwala. "mdfa: Multi-Differential Fairness Auditor for Black Box Classifiers." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/814.
Lam, Jonathan, Pengrui Quan, Jiamin Xu, Jeya Vikranth Jeyakumar, and Mani Srivastava. "Hard-Label Black-Box Adversarial Attack on Deep Electrocardiogram Classifier." In SenSys '20: The 18th ACM Conference on Embedded Networked Sensor Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3417312.3431827.
Yu, Jia ao, and Lei Peng. "Black-box Attacks on DNN Classifier Based on Fuzzy Adversarial Examples." In 2020 IEEE 5th International Conference on Signal and Image Processing (ICSIP). IEEE, 2020. http://dx.doi.org/10.1109/icsip49896.2020.9339329.
Santos, Samara Silva, Marcos Antonio Alves, Leonardo Augusto Ferreira, and Frederico Gadelha Guimarães. "PDTX: A novel local explainer based on the Perceptron Decision Tree." In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-50.
Oliveira-Junior, Robinson A. A. de. "Credit scoring development in the light of the new Brazilian General Data Protection Law." In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/kdmile.2021.17462.
Sooksatra, Korn, Pablo Rivas, and Bikram Khanal. "On Adversarial Examples for Text Classification By Perturbing Latent Representations." In LatinX in AI at Neural Information Processing Systems Conference 2022. Journal of LatinX in AI Research, 2022. http://dx.doi.org/10.52591/lxai202211284.
Iwasawa, Yusuke, Kotaro Nakayama, Ikuko Yairi, and Yutaka Matsuo. "Privacy Issues Regarding the Application of DNNs to Activity-Recognition using Wearables and Its Countermeasures by Use of Adversarial Training." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/268.
Radulovic, Nedeljko, Albert Bifet, and Fabian Suchanek. "Confident Interpretations of Black Box Classifiers." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9534234.
Zhang, Yu, Kun Shao, Junan Yang, and Hui Liu. "Black-Box Universal Adversarial Attack on Text Classifiers." In 2021 2nd Asia Conference on Computers and Communications (ACCC). IEEE, 2021. http://dx.doi.org/10.1109/accc54619.2021.00007.