Doctoral dissertations on the topic "Black-box learning"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 47 doctoral dissertations for your research on the topic "Black-box learning".
Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, whenever the relevant details are available in the metadata.
Browse doctoral dissertations from a wide range of disciplines and compile your bibliography correctly.
Hussain, Jabbar. "Deep Learning Black Box Problem". Thesis, Uppsala universitet, Institutionen för informatik och media, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-393479.
Full text available
Kamp, Michael [Verfasser]. "Black-Box Parallelization for Machine Learning / Michael Kamp". Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1200020057/34.
Full text available
Verì, Daniele. "Empirical Model Learning for Constrained Black Box Optimization". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25704/.
Full text available
Rowan, Adriaan. "Unravelling black box machine learning methods using biplots". Master's thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/31124.
Full text available
Mena Roldán, José. "Modelling Uncertainty in Black-box Classification Systems". Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670763.
Full text available
The thesis proposes a method for computing the uncertainty associated with the predictions of classification systems exposed as external APIs or libraries.
Siqueira Gomes, Hugo. "Meta learning for population-based algorithms in black-box optimization". Master's thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/68764.
Full text available
Optimization problems appear in almost any scientific field. However, the laborious process of designing a suitable optimizer may still lead to an unsuccessful outcome. Perhaps the most ambitious question in optimization is how we can design optimizers flexible enough to adapt to a vast number of scenarios while reaching state-of-the-art performance. In this work, we aim to give a potential answer to this question by investigating how to meta-learn population-based optimizers. We motivate and describe a common structure for most population-based algorithms, which presents principles for general adaptation. From this structure we derive a meta-learning framework based on a partially observable Markov decision process (POMDP). Our conceptual formulation provides a general methodology to learn the optimizer algorithm itself, framed as a meta-learning or learning-to-optimize problem using black-box benchmarking datasets to train efficient general-purpose optimizers. We estimate a meta-loss training function based on stochastic algorithms' performance. Our experimental analysis indicates that this new meta-loss function encourages the learned algorithm to be sample-efficient and robust to premature convergence. Besides, we show that our approach can alter an algorithm's search behavior to fit easily into a new context and to be sample-efficient compared to state-of-the-art algorithms, such as CMA-ES.
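As an editorial aside, the learning-to-optimize setup this abstract describes can be sketched in a few lines of Python: a population-based optimizer exposes a behavioral parameter, and a meta-level search picks the value that minimizes a meta-loss averaged over black-box benchmark functions. The benchmarks, toy optimizer, and grid search below are illustrative stand-ins for the thesis's POMDP/reinforcement-learning formulation, not its implementation:

```python
# Minimal sketch: meta-learning one behavioral parameter of a population-based
# optimizer by minimizing a meta-loss over a set of black-box benchmarks.
import numpy as np

rng = np.random.default_rng(0)

def evolve(f, sigma, dim=5, pop=20, iters=50):
    """A toy (mu, lambda)-style optimizer; sigma is the learnable behavior."""
    mean = rng.normal(size=dim)
    for _ in range(iters):
        cand = mean + sigma * rng.normal(size=(pop, dim))
        fit = np.apply_along_axis(f, 1, cand)
        mean = cand[np.argsort(fit)[:pop // 4]].mean(axis=0)  # recombine elites
    return f(mean)

# Black-box benchmark set standing in for a BBOB-like training distribution.
benchmarks = [lambda x: np.sum(x ** 2),
              lambda x: np.sum(np.abs(x)) + np.prod(np.abs(x)),
              lambda x: 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))]

def meta_loss(sigma):
    # Performance of the configured optimizer, averaged over tasks and repetitions.
    return np.mean([evolve(f, sigma) for f in benchmarks for _ in range(3)])

# Meta-optimize sigma itself (a grid stands in for the POMDP/RL machinery).
sigmas = np.linspace(0.05, 2.0, 20)
best = min(sigmas, key=meta_loss)
print(f"meta-learned step size: {best:.2f}")
```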
Sun, Michael (Michael Z.). "Local approximations of deep learning models for black-box adversarial attacks". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121687.
Full text available
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 45-47).
We study the problem of generating adversarial examples for image classifiers in the black-box setting (when the model is available only as an oracle). We unify two seemingly orthogonal and concurrent lines of work in black-box adversarial generation: query-based attacks and substitute models. In particular, we reinterpret adversarial transferability as a strong gradient prior. Based on this unification, we develop a method for integrating model-based priors into the generation of black-box attacks. The resulting algorithms significantly improve upon the current state-of-the-art in black-box adversarial attacks across a wide range of threat models.
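The unification of query-based attacks and substitute models described in the abstract can be illustrated with a toy sketch: estimate the victim's gradient from queries (NES-style finite differences) and mix in a substitute model's exact gradient as a prior. The linear "victim", the surrogate, and the mixing weight below are invented for illustration and are not the thesis's algorithm:

```python
# Sketch: query-based gradient estimation with a substitute-model gradient prior.
import numpy as np

rng = np.random.default_rng(1)
w_victim = rng.normal(size=20)                            # hidden from the attacker
w_surrogate = w_victim + rng.normal(scale=0.5, size=20)   # imperfect substitute model

def oracle_loss(x):
    # The attacker only has query access to this value.
    return float(w_victim @ x)

def estimate_gradient(x, queries=10, delta=0.01, prior_weight=0.5):
    """NES-style finite differences, with the surrogate gradient mixed in as a prior."""
    g = np.zeros_like(x)
    for _ in range(queries):
        u = rng.normal(size=x.shape)
        g += (oracle_loss(x + delta * u) - oracle_loss(x - delta * u)) / (2 * delta) * u
    g /= queries
    prior = w_surrogate  # exact gradient of the (linear) substitute model
    return prior_weight * prior + (1 - prior_weight) * g

x = rng.normal(size=20)
for _ in range(25):                   # gradient descent on the oracle loss
    x = x - 0.1 * estimate_gradient(x)
    x = np.clip(x, -3, 3)             # stay inside an l_inf-style constraint set
print("final oracle loss:", oracle_loss(x))
```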
Belkhir, Nacim. "Per Instance Algorithm Configuration for Continuous Black Box Optimization". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS455/document.
Full text available
This PhD thesis focuses on automated algorithm configuration, which aims at finding the best parameter setting for a given problem or class of problems. The algorithm configuration problem thus amounts to a meta-optimization problem in the space of parameters, whose meta-objective is the performance measure of the algorithm at hand with a given parameter configuration. However, in the continuous domain, such a method can only be empirically assessed at the cost of running the algorithm on some problem instances. More recent approaches rely on a description of problems in some feature space, and try to learn a mapping from this feature space onto the space of parameter configurations of the algorithm at hand. Along these lines, this PhD thesis focuses on Per Instance Algorithm Configuration (PIAC) for solving continuous black-box optimization problems, where only a limited budget of computations is available. We first survey evolutionary algorithms for continuous optimization, with a focus on the two algorithms that we have used as target algorithms for PIAC: DE and CMA-ES. Next, we review the state of the art of algorithm configuration approaches, and the different features that have been proposed in the literature to describe continuous black-box optimization problems. We then introduce a general methodology to empirically study PIAC for the continuous domain, so that all the components of PIAC can be explored in real-world conditions. To this end, we also introduce a new continuous black-box test bench, distinct from the well-known BBOB benchmark, composed of several multi-dimensional test functions with different problem properties, gathered from the literature. The methodology is finally applied to two evolutionary algorithms. First we use Differential Evolution as the target algorithm and explore all the components of PIAC so as to empirically assess the best ones. Second, based on the results on DE, we empirically investigate PIAC with Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as the target algorithm. Both use cases empirically validate the proposed methodology on the new black-box test bench for dimensions up to 100.
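A minimal sketch of the PIAC idea, with invented data: problem-instance features (think ELA-style descriptors) are paired with the best parameter values of the target algorithm found offline (here, DE's F and CR), an off-the-shelf regressor learns the mapping, and a configuration is then predicted for an unseen instance:

```python
# Sketch of Per Instance Algorithm Configuration as a supervised mapping
# from problem features to algorithm parameters. Data is fictitious.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Pretend training archive: instance features paired with empirically best
# DE parameters (F, CR) discovered offline for each instance.
features = rng.uniform(size=(200, 6))
best_params = np.column_stack([
    0.4 + 0.5 * features[:, 0],   # fictitious relation: F depends on feature 0
    0.1 + 0.8 * features[:, 3],   # fictitious relation: CR depends on feature 3
]) + rng.normal(scale=0.02, size=(200, 2))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features, best_params)

new_instance = rng.uniform(size=(1, 6))   # features of an unseen problem instance
F, CR = model.predict(new_instance)[0]
print(f"predicted DE configuration: F={F:.2f}, CR={CR:.2f}")
```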
Repetto, Marco. "Black-box supervised learning and empirical assessment: new perspectives in credit risk modeling". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/402366.
Full text available
Recent highly performant Machine Learning algorithms are compelling but opaque, so it is often hard to understand how they arrive at their predictions, giving rise to interpretability issues. Such issues are particularly relevant in supervised learning, where such black-box models are not easily understandable by the stakeholders involved. A growing body of work focuses on making Machine Learning, particularly Deep Learning models, more interpretable. The currently proposed approaches rely on post-hoc interpretation, using methods such as saliency mapping and partial dependencies. Despite the advances that have been made, interpretability is still an active area of research, and there is no silver-bullet solution. Moreover, in high-stakes decision-making, post-hoc interpretability may be sub-optimal. An example is the field of enterprise credit risk modeling, where classification models discriminate between good and bad borrowers, and lenders can use these models to deny loan requests. Loan denial can be especially harmful when the borrower cannot appeal or have the decision explained and grounded in fundamentals. Therefore, in such cases, it is crucial to understand why these models produce a given output and to steer the learning process toward predictions based on fundamentals. This dissertation focuses on the concept of Interpretable Machine Learning, with particular attention to the context of credit risk modeling. In particular, the dissertation revolves around three topics: model-agnostic interpretability, post-hoc interpretation in credit risk, and interpretability-driven learning. More specifically, the first chapter is a guided introduction to the model-agnostic techniques shaping today's landscape of Machine Learning and their implementations. The second chapter focuses on an empirical analysis of the credit risk of Italian Small and Medium Enterprises. It proposes an analytical pipeline in which post-hoc interpretability plays a crucial role in finding the relevant underpinnings that drive a firm into bankruptcy. The third and last chapter proposes a novel multicriteria knowledge-injection methodology. The methodology is based on double backpropagation and can improve model performance, especially in the case of scarce data. The essential advantage of this methodology is that it allows the decision maker to impose prior knowledge at the beginning of the learning process, yielding predictions that align with the fundamentals.
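The knowledge injection via double backpropagation mentioned in the third chapter can be sketched as a gradient penalty: the loss adds a term on the model's input gradients that punishes violations of a prior (here, a monotonicity assumption invented for illustration), and training backpropagates through those gradients a second time. Data, architecture, and the specific prior below are assumptions, not the dissertation's setup:

```python
# Sketch: knowledge injection via double backpropagation. Assumed prior:
# higher leverage (feature 0) should not lower predicted default risk.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(5, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
bce = torch.nn.BCEWithLogitsLoss()

X = torch.randn(256, 5)
y = (X[:, 0] + 0.5 * torch.randn(256) > 0).float().unsqueeze(1)  # toy default labels

for _ in range(200):
    X.requires_grad_(True)
    logits = net(X)
    # First backward pass: gradients of the prediction w.r.t. the inputs.
    grad_x, = torch.autograd.grad(logits.sum(), X, create_graph=True)
    # Penalize negative risk-gradient w.r.t. leverage (violates the prior).
    knowledge_penalty = torch.relu(-grad_x[:, 0]).mean()
    loss = bce(logits, y) + 1.0 * knowledge_penalty
    opt.zero_grad()
    loss.backward()   # second backward pass flows through grad_x
    opt.step()
    X = X.detach()
print("final loss:", float(loss))
```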
Viklund, Joel. "Explaining the output of a black box model and a white box model: an illustrative comparison". Thesis, Uppsala universitet, Filosofiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420889.
Full text available
Rentschler, Tobias [Verfasser]. "Explainable machine learning in soil mapping : Peeking into the black box / Tobias Rentschler". Tübingen : Universitätsbibliothek Tübingen, 2021. http://d-nb.info/1236994000/34.
Full text available
Cazzaro, Lorenzo <1997>. "AMEBA: An Adaptive Approach to the Black-Box Evasion of Machine Learning Models". Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/19980.
Full text available
Löfström, Helena. "Time to Open the Black Box : Explaining the Predictions of Text Classification". Thesis, Högskolan i Borås, Akademin för bibliotek, information, pedagogik och IT, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-14194.
Full text available
Karim, Abdul. "Molecular toxicity prediction using deep learning". Thesis, Griffith University, 2021. http://hdl.handle.net/10072/406981.
Full text available
Thesis (PhD Doctorate), Doctor of Philosophy (PhD), School of Info & Comm Tech, Science, Environment, Engineering and Technology.
Corinaldesi, Marianna. "Explainable AI: tassonomia e analisi di modelli spiegabili per il Machine Learning". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.
Find full text
Olofsson, Nina. "A Machine Learning Ensemble Approach to Churn Prediction : Developing and Comparing Local Explanation Models on Top of a Black-Box Classifier". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210565.
Full text available
Methods for predicting churn are common within Customer Relationship Management and have proven valuable for retaining customers. To predict churn as accurately as possible, recent research has focused on increasingly complex machine learning models, such as ensembles and hybrid models. A consequence of ever more complex models, however, is that it becomes harder and harder to understand how a given model arrived at a given decision. Previous studies of machine learning interpretability have taken a global perspective on explaining hard-to-understand models. This study explores local explanation models for explaining individual decisions of an ensemble model known as Random Forest. Churn prediction is studied on the users of Tink, a finance app. The purpose of this study is to take local explanation models one step further by comparing churn indicators across different user groups. In total, three pairs of groups were examined, differing in three different variables. Local explanation models were then used to compute how important all globally identified churn indicators were for each group. The results showed no significant differences between the groups regarding the main churn indicators. Instead, the results showed differences in less important indicators related to the type of information users store in the app. Besides examining differences in churn indicators, this study produced a well-performing churn prediction model able to explain individual decisions. The Random Forest model proved significantly better than a number of simpler models, with an AUC value of 0.93.
Kovaleva, Svetlana. "Entrepreneurial Behavior is Still a Black Box. Three Essays on How Entrepreneurial Learning and Perceptions Can Influence Entrepreneurial Behavior and Firm Performance". Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/369012.
Full text available
Kovaleva, Svetlana. "Entrepreneurial Behavior is Still a Black Box. Three Essays on How Entrepreneurial Learning and Perceptions Can Influence Entrepreneurial Behavior and Firm Performance". Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1476/1/Kovaleva_Doctoral_thesis.pdf.
Full text available
Eastwood, Clare. "Unpacking the black box of voice therapy: Exploration of motor learning and gestural components used in the treatment of muscle tension voice disorder". Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25759.
Full text available
Karlsson, Linda. "Opening up the 'black box' of Competence Development Implementation : - How the process of Competence Development Implementation is structured in the Swedish debt-collection industry". Thesis, Högskolan i Halmstad, Sektionen för ekonomi och teknik (SET), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-23522.
Full text available
Malmberg, Jacob, Marcus Nystad Öhman, and Alexandra Hotti. "Implementing Machine Learning in the Credit Process of a Learning Organization While Maintaining Transparency Using LIME". Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232579.
Full text available
To assess whether a company's credit limit should be changed, a financial institution writes a memo containing text and financial data. This memo is then reviewed by a credit committee, which decides whether the limit should change. To streamline this process, this report used machine learning instead of a credit committee to decide whether the limit should be changed. Since most machine learning algorithms are black boxes, the LIME framework was used to find the most important drivers behind the classification. The results of this study show that credit memos can be classified with high accuracy and that LIME can show which part of a memo had the greatest impact on the classification. The implication is that the credit process can be improved by machine learning without losing transparency. However, machine learning may disrupt learning processes within the organization, so the introduction of these algorithms should be weighed against the importance of preserving and developing knowledge within the organization.
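A minimal sketch of this kind of pipeline, using scikit-learn and the lime package on invented memo snippets (the study's data and model are not public), shows how a local explanation points at the tokens driving a single classification:

```python
# Sketch: classify credit memos, then explain one prediction locally with LIME.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

memos = ["revenue stable, low leverage, limit increase justified",
         "covenant breach and declining cash flow, reduce exposure",
         "strong collateral and profitability, raise the limit",
         "late payments observed, liquidity deteriorating, lower limit"]
labels = [1, 0, 1, 0]   # 1 = raise limit, 0 = lower limit (invented)

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(memos, labels)

explainer = LimeTextExplainer(class_names=["lower", "raise"])
explanation = explainer.explain_instance(memos[0], pipeline.predict_proba, num_features=4)
print(explanation.as_list())   # tokens with their local weights
```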
O'Shea, Amanda Jane. "Exploring the black box : a multi-case study of assessment for learning in mathematics and the development of autonomy with 9-10 year old children". Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709287.
Full text available
Lapuschkin, Sebastian [Verfasser], Klaus-Robert [Akademischer Betreuer] [Gutachter] Müller, Thomas [Gutachter] Wiegand, and Jose C. [Gutachter] Principe. "Opening the machine learning black box with Layer-wise Relevance Propagation / Sebastian Lapuschkin ; Gutachter: Klaus-Robert Müller, Thomas Wiegand, Jose C. Principe ; Betreuer: Klaus-Robert Müller". Berlin : Technische Universität Berlin, 2019. http://d-nb.info/1177139251/34.
Full text available
Auernhammer, Katja [Verfasser], Felix [Akademischer Betreuer] Freiling, Ramin [Akademischer Betreuer] Tavakoli Kolagari, Felix [Gutachter] Freiling, Ramin [Gutachter] Tavakoli Kolagari, and Dominique [Gutachter] Schröder. "Mask-based Black-box Attacks on Safety-Critical Systems that Use Machine Learning / Katja Auernhammer ; Gutachter: Felix Freiling, Ramin Tavakoli Kolagari, Dominique Schröder ; Betreuer: Felix Freiling, Ramin Tavakoli Kolagari". Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2021. http://d-nb.info/1238358292/34.
Full text available
Beillevaire, Marc. "Inside the Black Box: How to Explain Individual Predictions of a Machine Learning Model : How to automatically generate insights on predictive model outputs, and gain a better understanding on how the model predicts each individual data point". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229667.
Full text available
Machine learning models are becoming more and more powerful and accurate, but their good predictions often come at the cost of high complexity. Depending on the situation, such a lack of interpretability can be an important and blocking problem, especially when the user needs to trust the model in order to make a decision based on its prediction. For example, an insurance company may use a machine learning algorithm to detect fraud, but it wants to be sure the model is based on meaningful variables before actually taking action and investigating a given individual. This thesis describes and explains several explanation methods, on many datasets of text and numerical types, on classification and regression problems.
Torres Padilla, Juan Pablo. "Inductive Program Synthesis with a Type System". Thesis, Uppsala universitet, Informationssystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385282.
Full text available
Lindström, Sofia, Sebastian Edemalm, and Erik Reinholdsson. "Marketers are Watching You : An exploration of AI in relation to marketing, existential threats, and opportunities". Thesis, Jönköping University, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-52744.
Full text available
Truong, Nghi Khue Dinh. "A web-based programming environment for novice programmers". Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16471/1/Nghi_Truong_Thesis.pdf.
Full text available
Truong, Nghi Khue Dinh. "A web-based programming environment for novice programmers". Queensland University of Technology, 2007. http://eprints.qut.edu.au/16471/.
Full text available
Irfan, Muhammad Naeem. "Analyse et optimisation d'algorithmes pour l'inférence de modèles de composants logiciels". PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00767894.
Full text available
Pešán, Michele. "Modelování zvukových signálů pomocí neuronových sítí". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442569.
Full text available
Bonantini, Andrea. "Analisi di dati e sviluppo di modelli predittivi per sistemi di saldatura". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24664/.
Full text available
Dubois, Amaury. "Optimisation et apprentissage de modèles biologiques : application à lirrigation [sic l'irrigation] de pomme de terre". Thesis, Littoral, 2020. http://www.theses.fr/2020DUNK0560.
Full text available
The subject of this PhD concerns one of the LISIC themes: modelling and simulation of complex systems, as well as optimization and machine learning for agronomy. The objectives of the thesis are to answer the questions of irrigation management of the potato crop and the development of decision support tools for farmers. The choice of this crop is motivated by its important share in the Hauts-de-France region. The manuscript is divided into three parts. The first part deals with continuous multimodal optimization in a black-box context. This is followed by a presentation of a methodology for the automatic calibration of biological model parameters through reformulation into a black-box multimodal optimization problem. The relevance of using inverse analysis as a methodology for automatic parameterisation of large models is then demonstrated. The second part presents two new algorithms, UCB Random with Decreasing Step-size and UCT Random with Decreasing Step-size. These algorithms are designed for continuous multimodal black-box optimization, where the choice of the position of the initial local search is assisted by a reinforcement learning algorithm. The results show that these algorithms perform better than (Quasi) Random with Decreasing Step-size algorithms. Finally, the last part focuses on machine learning principles and methods. A reformulation of the problem of predicting soil water content at one-week intervals into a supervised learning problem has enabled the development of a new decision support tool to respond to the problem of crop management.
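The "bandit-assisted restarts with decreasing step size" idea can be sketched as follows: a UCB rule chooses which region of the search space seeds the next local search, and the local step size shrinks with the restart count. Everything below (objective, regions, descent loop) is an illustrative stand-in for the thesis's UCB/UCT Random with Decreasing Step-size algorithms:

```python
# Sketch: UCB bandit over search regions + local descent with decreasing step size.
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.sin(5 * x) + 0.3 * x ** 2   # toy multimodal 1-D objective

regions = np.linspace(-3, 3, 7)              # bandit arms = region centers
counts, values = np.zeros(7), np.zeros(7)

best_x, best_f = 0.0, f(0.0)
for t in range(1, 60):
    # Negated mean (we minimize f) plus the usual exploration bonus; unvisited
    # arms get an effectively infinite bonus and are tried first.
    ucb = -values / np.maximum(counts, 1) + np.sqrt(2 * np.log(t) / np.maximum(counts, 1e-9))
    arm = int(np.argmax(ucb))
    x, step = regions[arm] + rng.normal(scale=0.3), 1.0 / np.sqrt(t)  # decreasing step
    for _ in range(20):                      # crude local random descent
        cand = x + step * rng.normal()
        if f(cand) < f(x):
            x = cand
    counts[arm] += 1
    values[arm] += f(x)
    if f(x) < best_f:
        best_x, best_f = x, f(x)
print(f"best found: f({best_x:.3f}) = {best_f:.3f}")
```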
Santos, João André Agostinho dos. "Residential mortgage default risk estimation: cracking the machine learning black box". Master's thesis, 2021. http://hdl.handle.net/10362/122851.
Full text available
Vorm, Eric Stephen. "Into the Black Box: Designing for Transparency in Artificial Intelligence". Diss., 2019. http://hdl.handle.net/1805/21600.
Full text available
The rapid infusion of artificial intelligence into everyday technologies means that consumers are likely to interact with intelligent systems that provide suggestions and recommendations on a daily basis in the very near future. While these technologies promise much, current issues of low transparency create a high potential to confuse end-users, limiting the market viability of these technologies. While efforts are underway to make machine learning models more transparent, HCI currently lacks an understanding of how these model-generated explanations should best translate into the practicalities of system design. To address this gap, my research took a pragmatic approach to improving system transparency for end-users. Through a series of three studies, I investigated the need for and value of transparency to end-users, and explored methods to improve system designs to accomplish greater transparency in intelligent systems offering recommendations. My research resulted in a summarized taxonomy that outlines a variety of motivations for why users ask questions of intelligent systems; useful for considering the type and category of information users might appreciate when interacting with AI-based recommendations. I also developed a categorization of explanation types, known as explanation vectors, organized into groups that correspond to user knowledge goals. Explanation vectors provide system designers options for delivering explanations of system processes beyond those of basic explainability. I developed a detailed user typology, a four-factor categorization of the predominant attitudes and opinion schemes of everyday users interacting with AI-based recommendations; useful for understanding the range of user sentiment towards AI-based recommender features, and possibly for tailoring interface design by user type. Lastly, I developed and tested an evaluation method known as the System Transparency Evaluation Method (STEv), which allows real-world systems and prototypes to be evaluated and improved through a low-cost query method. Results from this dissertation offer concrete direction to interaction designers as to how model-generated explanations might manifest in the design of interfaces that are more transparent to end users. These studies provide a framework and methodology complementary to existing HCI evaluation methods, and lay the groundwork upon which other research into improving system transparency might build.
Balayan, Vladimir. "Human-Interpretable Explanations for Black-Box Machine Learning Models: An Application to Fraud Detection". Master's thesis, 2020. http://hdl.handle.net/10362/130774.
Full text available
Machine Learning (ML) has increasingly been used to help humans make high-stakes decisions in a wide range of areas, from politics to criminal justice, education, healthcare, and financial services. However, it is very difficult for humans to understand the reason behind an ML model's decision, which undermines trust in the system. The field of Explainable Artificial Intelligence (XAI) emerged to tackle this problem, aiming to develop methods that make "black boxes" more interpretable, though still without a major breakthrough. Moreover, the most popular explanation methods — LIME and SHAP — produce very low-level explanations, of limited use to people without ML expertise. This work was developed at Feedzai, the fintech that uses ML to prevent financial crime. One of Feedzai's products is a case management application used by fraud analysts. These are domain experts trained to look for suspicious evidence in financial transactions, but since they lack ML knowledge, current XAI methods do not meet their information needs. To address this, we present JOEL, a neural-network-based framework that jointly learns the decision-making task and the associated explanations. JOEL is aimed at domain experts without deep technical ML knowledge, providing high-level insights into the model's predictions that closely resemble the experts' own reasoning. Furthermore, by collecting feedback from certified experts (human teaching), we promote continuous, higher-quality explanations. Finally, we resort to semantic mappings between legacy systems and domain taxonomies to automatically annotate a dataset, overcoming the absence of human concept-based annotations. We validate JOEL empirically on a real-world fraud detection dataset at Feedzai. We show that JOEL can generalize the explanations learned on the initial dataset and that human teaching is able to improve the quality of the predicted explanations.
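The jointly learned decision-and-explanation setup can be illustrated with a small multi-task network: a shared trunk feeds one head that predicts the fraud decision and another that predicts human-level explanation concepts, trained with a joint loss. The architecture, concept set, and data below are invented for illustration and are not Feedzai's system:

```python
# Sketch: multi-task learning of a decision and its explanation concepts.
import torch

torch.manual_seed(0)
N_FEATURES, N_CONCEPTS = 30, 5   # e.g., "high velocity", "odd hour", ... (assumed)

trunk = torch.nn.Sequential(torch.nn.Linear(N_FEATURES, 64), torch.nn.ReLU())
concept_head = torch.nn.Linear(64, N_CONCEPTS)   # explanation concepts (multi-label)
decision_head = torch.nn.Linear(64, 1)           # fraud / not fraud

X = torch.randn(512, N_FEATURES)
concepts = (torch.randn(512, N_CONCEPTS) > 0.8).float()       # synthetic annotations
fraud = (concepts.sum(dim=1, keepdim=True) > 1).float()       # synthetic decision labels

params = list(trunk.parameters()) + list(concept_head.parameters()) + list(decision_head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

for _ in range(300):
    h = trunk(X)
    # Joint loss: decision task + concept (explanation) task share the trunk.
    loss = bce(decision_head(h), fraud) + bce(concept_head(h), concepts)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("joint training loss:", float(loss))
```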
Huang, Cong-Ren (黃琮仁). "The Study of Black-box SQL Injection Security Detection Mechanisms Based on Machine Learning". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/7thwhz.
Full text available
National Kaohsiung First University of Science and Technology
Master's program, Department of Information Management
Academic year: 106 (ROC calendar; 2017)
With the increasing emphasis on information security, financial industries are more willing to have security inspections of their websites. Black-box testing can be divided into automated software testing and manual testing. Automated testing checks against the weakness-policy databases preinstalled by the vendors; it cannot find security problems precisely when the network environment is protected by a web application firewall or an intrusion detection system, so the testing report may contain misdetections or miss real problems in the system. Manual testing generates reports that depend on the tester's professional ability and the limited time available. In this thesis, we design a black-box testing mechanism for detecting SQL injection based on machine learning. Our result improves on the drawbacks of automated testing and provides high scalability and high accuracy.
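One plausible reading of "black-box SQL injection detection based on machine learning" is a classifier over features of server responses to probe payloads; the sketch below uses invented features and labels purely to illustrate that framing, and is not the thesis's mechanism:

```python
# Sketch: classify probe responses as injectable / not injectable.
from sklearn.ensemble import RandomForestClassifier

# Per-probe features (assumed): [response length delta, HTTP 500 flag,
# SQL error keyword flag, response time delta in seconds];
# label 1 means the probed parameter proved injectable.
X = [[1240, 1, 1, 0.02],
     [  12, 0, 0, 0.01],
     [ 980, 0, 1, 0.03],
     [   5, 0, 0, 0.00],
     [1500, 1, 1, 4.90],   # time-based blind injection: large delay
     [   8, 0, 0, 0.01]]
y = [1, 0, 1, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
probe = [[1100, 0, 1, 0.02]]
print("injectable probability:", clf.predict_proba(probe)[0][1])
```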
Lee, Bo-Yin (李柏穎). "The Adjustment of Control Parameters for a Black-Box Machine by Deep Reinforcement Learning". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/xk4dz6.
Full text available
National Sun Yat-sen University
Graduate program, Department of Electrical Engineering
Academic year: 107 (ROC calendar; 2018)
The development of Artificial Intelligence is advancing day by day, and all kinds of industries look forward to introducing this technology, which can increase production value. With the advent of Industry 4.0, smart manufacturing has become an important project in automated production: it lets us analyze the needs of different products and consider production strategies to raise product quality and reduce labor costs. In this study, we propose an adjustment method for the control parameters of a black-box machine based on deep reinforcement learning. We use supervised learning to train a neural-network model of the black-box machine that simulates the machine's characteristics. Next, the agent interacts with this model, guided by a reward function that encodes the goal the task is expected to accomplish. The method then uses inversion of the neural network to adjust the machine's parameters. Furthermore, when adjusting these parameters we consider not only the current state but also the trajectory of the state variables when making decisions. This makes the process more efficient and achieves the desired output with fewer adjustments.
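The two-stage scheme in the abstract (a supervised surrogate of the black-box machine, then gradient-based inversion of the surrogate to find control parameters) can be sketched as follows, with an invented "machine" standing in for the real equipment:

```python
# Sketch: (1) fit a neural surrogate of a black-box machine by supervised
# learning; (2) freeze it and adjust control parameters by gradient descent
# ("inversion") toward a target output. The machine function is invented.
import torch

torch.manual_seed(0)
machine = lambda p: (p ** 2).sum(dim=1, keepdim=True) + 0.1 * p.prod(dim=1, keepdim=True)

# Stage 1: supervised surrogate of the black-box machine.
P = torch.rand(1024, 3) * 2 - 1
surrogate = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(500):
    loss = torch.nn.functional.mse_loss(surrogate(P), machine(P))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: freeze the surrogate, optimize the control parameters themselves.
target = torch.tensor([[0.3]])
params = torch.zeros(1, 3, requires_grad=True)
opt_p = torch.optim.Adam([params], lr=5e-2)
for _ in range(200):
    gap = torch.nn.functional.mse_loss(surrogate(params), target)
    opt_p.zero_grad()
    gap.backward()
    opt_p.step()
print("suggested parameters:", params.detach().numpy(),
      "machine output reached:", float(machine(params.detach())))
```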
Curia, Francesco. "Explainable clinical decision support system: opening black-box meta-learner algorithm expert's based". Doctoral thesis, 2021. http://hdl.handle.net/11573/1538472.
Full text available
Pandita, Piyush. "Bayesian Optimal Design of Experiments for Expensive Black-box Functions under Uncertainty". Thesis, 2019.
Find full text
Neves, Maria Inês Lourenço das. "Opening the black-box of artificial intelligence predictions on clinical decision support systems". Master's thesis, 2021. http://hdl.handle.net/10362/126699.
Full text available
Cardiovascular diseases are the leading cause of death worldwide, and their treatment and prevention rely on electrocardiogram interpretation. ECG interpretation by physicians is inherently subjective and therefore prone to error. To support physicians' decisions, artificial intelligence is being used to develop models capable of interpreting large datasets and providing accurate decisions. However, the lack of interpretability of most machine learning models is one of the drawbacks of relying on them, particularly in a clinical setting. Additionally, most explainable AI methods assume independence between samples, which implies assuming temporal independence when dealing with time series. The inherent characteristics of time series cannot be ignored, as they matter to the human decision-making process. This dissertation builds on explainable AI to make heartbeat classification intelligible, using several adaptations of state-of-the-art model-agnostic methods. To address the explanation of time-series classifiers, a preliminary taxonomy is proposed, together with the use of the derivative as a complement to add temporal dependence between samples. The results were validated on an extensive public dataset, using the 1-D Jaccard index to compare the subsequences extracted from an interpretable model with those produced by the explainable AI methods used, and a quality analysis to assess whether the explanation fits the model's behavior. To evaluate models with distinct internal logic, validation was performed with a more transparent model on the one hand and a more opaque one on the other, in both binary and multiclass classification settings. The results show the promise of including the signal's derivative to introduce temporal dependence between samples in the provided explanations, for models with simpler internal logic.
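The derivative complement described above can be sketched simply: augment each heartbeat window with its first difference, so that any sample-wise attribution computed on the augmented features can also credit temporal changes between neighboring samples. Data and classifier below are synthetic stand-ins, not the dissertation's setup:

```python
# Sketch: add a first-difference channel to time-series windows before
# training/explaining a classifier, so explanations can reflect temporal change.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 100)
normal = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(200, 100))
ectopic = np.sin(2 * np.pi * t + 0.8) + 0.1 * rng.normal(size=(200, 100))
beats = np.vstack([normal, ectopic])
labels = np.array([0] * 200 + [1] * 200)

derivative = np.diff(beats, axis=1, prepend=beats[:, :1])  # temporal-change channel
X = np.hstack([beats, derivative])                          # amplitude + slope features

clf = GradientBoostingClassifier().fit(X, labels)
# Any sample-wise attribution method applied to X now also scores the slope
# features, i.e., explanations can point at *changes* between samples.
print("train accuracy:", clf.score(X, labels))
```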
Gvozdetska, Nataliia. "Transfer Learning for Multi-surrogate-model Optimization". 2020. https://tud.qucosa.de/id/qucosa%3A73313.
Full text available
Baasch, Gaby. "Identification of thermal building properties using gray box and deep learning methods". Thesis, 2020. http://hdl.handle.net/1828/12585.
Full text available
Repický, Jakub. "Evoluční algoritmy a aktivní učení". Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-355988.
Full text available
Engster, David. "Local- and Cluster Weighted Modeling for Prediction and State Estimation of Nonlinear Dynamical Systems". Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B4FD-1.
Full text available
Dittmar, Jörg. "Modellierung dynamischer Prozesse mit radialen Basisfunktionen". Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B4DD-9.
Full text available