A ready-made bibliography on the topic "Black-box learning"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Black-box learning".

Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in ".pdf" format and read its abstract online, whenever such details are available in the metadata.

Journal articles on the topic "Black-box learning"

1. Nax, Heinrich H., Maxwell N. Burton-Chellew, Stuart A. West, and H. Peyton Young. "Learning in a black box". Journal of Economic Behavior & Organization 127 (July 2016): 1–15. http://dx.doi.org/10.1016/j.jebo.2016.04.006.

2. Battaile, Bennett. "Black-box electronics and passive learning". Physics Today 67, no. 2 (February 2014): 11. http://dx.doi.org/10.1063/pt.3.2258.

3. Hess, Karl. "Black-box electronics and passive learning". Physics Today 67, no. 2 (February 2014): 11–12. http://dx.doi.org/10.1063/pt.3.2259.

4. Katrutsa, Alexandr, Talgat Daulbaev, and Ivan Oseledets. "Black-box learning of multigrid parameters". Journal of Computational and Applied Mathematics 368 (April 2020): 112524. http://dx.doi.org/10.1016/j.cam.2019.112524.

5. The Lancet Respiratory Medicine. "Opening the black box of machine learning". Lancet Respiratory Medicine 6, no. 11 (November 2018): 801. http://dx.doi.org/10.1016/s2213-2600(18)30425-9.

6. Rudnick, Abraham. "The Black Box Myth". International Journal of Extreme Automation and Connectivity in Healthcare 1, no. 1 (January 2019): 1–3. http://dx.doi.org/10.4018/ijeach.2019010101.

Abstract:
Artificial intelligence (AI) and its correlates, such as machine and deep learning, are changing health care, where complex matters such as comorbidity call for dynamic decision-making. Yet some people argue for extreme caution, referring to AI and its correlates as a black box. This brief article uses philosophy and science to address the black-box argument about knowledge as a myth, concluding that the argument is misleading because it ignores a fundamental tenet of science: that no empirical knowledge is certain and that scientific facts, as well as methods, often change. Instead, control of the technology of AI and its correlates has to be addressed to mitigate unexpected negative consequences.

7. Pintelas, Emmanuel, Ioannis E. Livieris, and Panagiotis Pintelas. "A Grey-Box Ensemble Model Exploiting Black-Box Accuracy and White-Box Intrinsic Interpretability". Algorithms 13, no. 1 (January 5, 2020): 17. http://dx.doi.org/10.3390/a13010017.

Abstract:
Machine learning has emerged as a key factor in many technological and scientific advances and applications. Much research has been devoted to developing high-performance machine learning models that make very accurate predictions and decisions across a wide range of applications. Nevertheless, we still seek to understand and explain how these models work and make decisions. Explainability and interpretability are significant issues in machine learning, since in most real-world problems it is considered essential to understand and explain a model's prediction mechanism in order to trust it and make decisions on critical issues. In this study, we developed a Grey-Box model based on a semi-supervised methodology utilizing a self-training framework. The main objective of this work is the development of a machine learning model that is both interpretable and accurate, although this is a complex and challenging task. The proposed model was evaluated on a variety of real-world datasets from the crucial application domains of education, finance, and medicine. Our results demonstrate the efficiency of the proposed model: it performs comparably to a Black-Box and considerably outperforms single White-Box models, while remaining as interpretable as a White-Box model.

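The grey-box construction described in the abstract above can be illustrated with a minimal self-training sketch: an accurate black-box model pseudo-labels unlabeled data, and an interpretable white-box model is then trained on the enlarged set. This is an illustrative reading of the general technique only, not the authors' pipeline; the dataset, confidence threshold, and model choices below are arbitrary assumptions.

```python
# Self-training grey-box sketch (illustrative only): a black-box model
# pseudo-labels unlabeled data, then an interpretable white-box model is
# trained on the labeled + confidently pseudo-labeled points.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
labeled, unlabeled = slice(0, 100), slice(100, 600)

# 1. Fit the black-box on the small labeled set.
black_box = RandomForestClassifier(n_estimators=100, random_state=0)
black_box.fit(X[labeled], y[labeled])

# 2. Pseudo-label unlabeled points where the black-box is confident.
proba = black_box.predict_proba(X[unlabeled])
confident = proba.max(axis=1) >= 0.8
X_aug = np.vstack([X[labeled], X[unlabeled][confident]])
y_aug = np.concatenate([y[labeled], proba.argmax(axis=1)[confident]])

# 3. Fit a shallow, inspectable white-box on the enlarged training set.
white_box = DecisionTreeClassifier(max_depth=4, random_state=0)
white_box.fit(X_aug, y_aug)
print(round(white_box.score(X, y), 3))
```

The white-box tree can then be inspected directly (e.g. via `sklearn.tree.export_text`), which is the interpretability pay-off the grey-box idea trades some accuracy for.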
8. Kirsch, Louis, Sebastian Flennerhag, Hado van Hasselt, Abram Friesen, Junhyuk Oh, and Yutian Chen. "Introducing Symmetries to Black Box Meta Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7202–10. http://dx.doi.org/10.1609/aaai.v36i7.20681.

Abstract:
Meta reinforcement learning (RL) attempts to discover new RL algorithms automatically from environment interaction. In so-called black-box approaches, the policy and the learning algorithm are jointly represented by a single neural network. These methods are very flexible, but they tend to underperform compared to human-engineered RL algorithms in terms of generalisation to new, unseen environments. In this paper, we explore the role of symmetries in meta-generalisation. We show that a recent successful meta RL approach that meta-learns an objective for backpropagation-based learning exhibits certain symmetries (specifically the reuse of the learning rule, and invariance to input and output permutations) that are not present in typical black-box meta RL systems. We hypothesise that these symmetries can play an important role in meta-generalisation. Building off recent work in black-box supervised meta learning, we develop a black-box meta RL system that exhibits these same symmetries. We show through careful experimentation that incorporating these symmetries can lead to algorithms with a greater ability to generalise to unseen action & observation spaces, tasks, and environments.

9. Taub, Simon, and Oleg S. Pianykh. "An alternative to the black box: Strategy learning". PLOS ONE 17, no. 3 (March 18, 2022): e0264485. http://dx.doi.org/10.1371/journal.pone.0264485.

Abstract:
In virtually any practical field or application, discovering and implementing near-optimal decision strategies is essential for achieving desired outcomes. Workflow planning is one of the most common and important problems of this kind, as sub-optimal decision-making may create bottlenecks and delays that decrease efficiency and increase costs. Recently, machine learning has been used to attack this problem, but unfortunately, most proposed solutions are “black box” algorithms with underlying logic unclear to humans. This makes them hard to implement and impossible to trust, significantly limiting their practical use. In this work, we propose an alternative approach: using machine learning to generate optimal, comprehensible strategies which can be understood and used by humans directly. Through three common decision-making problems found in scheduling, we demonstrate the implementation and feasibility of this approach, as well as its great potential to attain near-optimal results.

10. Hargreaves, Eleanore. "Assessment for learning? Thinking outside the (black) box". Cambridge Journal of Education 35, no. 2 (June 2005): 213–24. http://dx.doi.org/10.1080/03057640500146880.

Doctoral dissertations on the topic "Black-box learning"

1. Hussain, Jabbar. "Deep Learning Black Box Problem". Thesis, Uppsala universitet, Institutionen för informatik och media, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-393479.

Abstract:
The application of neural networks in deep learning is growing rapidly due to their ability to outperform other machine learning algorithms on many kinds of problems. One big disadvantage of deep neural networks, however, is that the internal logic by which they reach a desired output is not understandable or explainable. This behavior of deep neural networks is known as the "black box". It leads to the first research question: how prevalent has the black-box problem been in the research literature during a specific period of time? Black-box problems are usually addressed by so-called rule extraction, so the second research question is: what rule-extraction methods have been proposed to solve such problems? To answer these questions, a systematic literature review was conducted, collecting data on the topics of the black box and rule extraction. Printed and online articles published in highly ranked journals and conference proceedings were selected to investigate and answer the research questions; the unit of analysis was a set of journal and conference articles on these topics. The results show a gradually increasing interest in black-box problems over time, mainly driven by new technological developments. The thesis also provides an overview of the different methodological approaches used in rule-extraction methods.

2. Kamp, Michael. "Black-Box Parallelization for Machine Learning". Bonn: Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1200020057/34.

3. Verì, Daniele. "Empirical Model Learning for Constrained Black Box Optimization". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25704/.

Abstract:
Black-box optimization is a branch of global optimization consisting of a family of methods for minimizing or maximizing an objective function without exploiting gradient, linearity, or convexity information. Moreover, the objective is often expensive to query in time or resources, so the goal is to get as close as possible to the optimum in as few iterations as possible. Empirical Model Learning (EML) is a methodology for merging machine learning with optimization techniques such as Constraint Programming and Mixed Integer Linear Programming by extracting decision models from data. This work aims to close the gap between Empirical Model Learning optimization and black-box optimization methods (which have a strong literature) via active learning. At each iteration of the optimization loop, an ML model is fitted on the data points and embedded in a prescriptive model using EML. The encoded model is then enriched with domain-specific constraints and optimized to select the next point to query and add to the collection of samples.

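The active-learning loop described in the abstract above can be sketched generically: fit a surrogate model on the evaluated points, optimize the surrogate to pick the next query, evaluate the black box there, and repeat. The EML step of embedding the model in a constraint or MIP solver is replaced here by plain random candidate search; the objective, model, and budgets below are illustrative assumptions, not the thesis's setup.

```python
# Generic surrogate-assisted black-box optimization loop (sketch).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def objective(x):                      # the "expensive" black box (known here for demo)
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.RandomState(0)
X = rng.uniform(0, 1, size=(10, 2))    # initial design of experiments
y = np.array([objective(x) for x in X])

for _ in range(20):
    # Fit a surrogate on all evaluated points.
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    # "Optimize" the surrogate by scoring random candidates
    # (EML would instead embed the model in a CP/MIP model here).
    cand = rng.uniform(0, 1, size=(500, 2))
    x_next = cand[np.argmin(model.predict(cand))]
    # Query the black box at the chosen point and extend the sample set.
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print(X[np.argmin(y)], y.min())
```

The loop never discards information: each query enlarges the training set of the surrogate, so the model of the objective sharpens around the promising region.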
4. Rowan, Adriaan. "Unravelling black box machine learning methods using biplots". Master's thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/31124.

Abstract:
Following the development of new mathematical techniques, the improvement of computer processing power, and the increased availability of possible explanatory variables, the financial services industry is moving toward new machine learning methods, such as neural networks, and away from older methods such as generalised linear models. However, their use is currently limited because they are seen as "black box" models, which give predictions without justification and are therefore not understood and cannot be trusted. The goal of this dissertation is to expand the theory and use of biplots to visualise the impact of the various input factors on the output of the machine learning black box. Biplots are used because they give an optimal two-dimensional representation of the data set on which the machine learning model is based. The biplot allows every point on the biplot plane to be converted back to the original dimensions, in the same format as is used by the machine learning model. This allows the output of the model to be represented by colour-coding each point on the biplot plane according to the output of an independently calibrated machine learning model. The interaction of the changing prediction probabilities, represented by the coloured output, with the data points, variable axes, and category-level points on the biplot allows the machine learning model to be interpreted both globally and locally. By visualising the models and their predictions, this dissertation aims to remove the stigma of calling non-linear models "black box" models and encourage their wider application in the financial services industry.

5. Mena Roldán, José. "Modelling Uncertainty in Black-box Classification Systems". Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670763.

Abstract:
Currently, thanks to the Big Data boom, the excellent results obtained by deep learning models, and the strong digital transformation experienced over the last years, many companies have decided to incorporate machine learning models into their systems. Some companies have detected this opportunity and are making a portfolio of artificial intelligence services available to third parties in the form of application programming interfaces (APIs). Developers then include calls to these APIs to incorporate AI functionalities in their products. Although this option saves time and resources, in most cases these APIs are exposed as black boxes whose details are unknown to their clients. The complexity of such products typically leads to a lack of control and knowledge of the internal components, which, in turn, can lead to potentially uncontrolled risks. It is therefore necessary to develop methods capable of evaluating the performance of these black boxes when applied to a specific application. In this work, we present a robust uncertainty-based method for evaluating the performance of both probabilistic and categorical classification black-box models, in particular APIs, that enriches the predictions obtained with an uncertainty score. This uncertainty score enables the detection of inputs with very confident but erroneous predictions, while protecting against out-of-distribution data points when deploying the model in a production setting. In the first part of the thesis, we develop a thorough revision of the concept of uncertainty, focusing on the uncertainty of classification systems. We review the existing related literature, describing the different approaches for modelling this uncertainty, its application to different use cases, and some of its desirable properties. Next, we introduce the proposed method for modelling uncertainty in black-box settings. In the last chapters of the thesis, we showcase the method applied to different domains, including NLP and computer vision problems. Finally, we include two real-life applications of the method: classification of overqualification in job descriptions and readability assessment of texts.

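A minimal version of the kind of uncertainty score discussed above, for a black box that returns class probabilities, is normalized predictive entropy. This is a common baseline measure, not the specific method developed in the thesis.

```python
# Normalized predictive entropy as a simple uncertainty score for a
# black-box classifier that exposes only class probabilities.
import numpy as np

def uncertainty(probs, eps=1e-12):
    """Return predictive entropy scaled to [0, 1]: 0 = fully confident,
    1 = the uniform (maximally uncertain) distribution."""
    p = np.asarray(probs, dtype=float)
    h = -np.sum(p * np.log(p + eps))
    return h / np.log(len(p))

print(uncertainty([0.98, 0.01, 0.01]))   # low: the API is confident
print(uncertainty([1/3, 1/3, 1/3]))      # high: near 1.0
```

Thresholding such a score lets a client of an opaque API flag inputs whose predictions should not be trusted, without any access to the model's internals.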
6. Siqueira Gomes, Hugo. "Meta learning for population-based algorithms in black-box optimization". Master's thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/68764.

Abstract:
Optimization problems appear in almost any scientific field. However, the laborious process of designing a suitable optimizer may lead to an unsuccessful outcome. Perhaps the most ambitious question in optimization is how we can design optimizers flexible enough to adapt to a vast number of scenarios while reaching state-of-the-art performance. In this work, we aim to give a potential answer to this question by investigating how to meta-learn population-based optimizers. We motivate and describe a common structure for most population-based algorithms, which presents principles for general adaptation. From this structure we derive a meta-learning framework based on a partially observable Markov decision process (POMDP). Our conceptual formulation provides a general methodology to learn the optimizer algorithm itself, framed as a meta-learning or learning-to-optimize problem that uses black-box benchmarking datasets to train efficient general-purpose optimizers. We estimate a meta-loss training function based on the performance of stochastic algorithms. Our experimental analysis indicates that this new meta-loss function encourages the learned algorithm to be sample efficient and robust to premature convergence. In addition, we show that our approach can alter an algorithm's search behavior to fit easily into a new context and be sample efficient compared to state-of-the-art algorithms such as CMA-ES.

7. Sun, Michael (Michael Z.). "Local approximations of deep learning models for black-box adversarial attacks". Thesis (M. Eng.), Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. https://hdl.handle.net/1721.1/121687.

Abstract:
We study the problem of generating adversarial examples for image classifiers in the black-box setting (when the model is available only as an oracle). We unify two seemingly orthogonal and concurrent lines of work in black-box adversarial generation: query-based attacks and substitute models. In particular, we reinterpret adversarial transferability as a strong gradient prior. Based on this unification, we develop a method for integrating model-based priors into the generation of black-box attacks. The resulting algorithms significantly improve upon the current state-of-the-art in black-box adversarial attacks across a wide range of threat models.

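The query-based side of black-box attacks mentioned in the abstract can be sketched with NES-style gradient estimation: the model is an oracle returning only loss values, and its gradient is estimated from antithetic Gaussian queries, then followed with signed steps. The toy quadratic loss below stands in for a real classifier's loss; the thesis's integration of model-based priors is not shown.

```python
# NES-style black-box gradient estimation (sketch): estimate the gradient
# of a loss from value-only oracle queries, then take signed steps, as in
# many query-based adversarial attacks.
import numpy as np

def oracle_loss(x):                    # black box: returns only a loss value
    return float(np.sum(x ** 2))

def nes_gradient(x, sigma=0.1, n=50, rng=None):
    """Antithetic Gaussian sampling estimate of grad oracle_loss at x."""
    rng = rng or np.random.RandomState(0)
    g = np.zeros_like(x)
    for _ in range(n):
        u = rng.randn(*x.shape)
        g += (oracle_loss(x + sigma * u) - oracle_loss(x - sigma * u)) * u
    return g / (2 * sigma * n)

x = np.full(5, 1.0)                    # starting point (stand-in for an image)
for _ in range(100):
    x -= 0.05 * np.sign(nes_gradient(x))   # signed step, FGSM/PGD-style
print(oracle_loss(x))
```

For this quadratic oracle the antithetic difference is exactly proportional to the directional derivative, so the estimator recovers the true gradient direction despite never seeing it.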
8. Belkhir, Nacim. "Per Instance Algorithm Configuration for Continuous Black Box Optimization". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS455/document.

Abstract:
This PhD thesis focuses on automated algorithm configuration, which aims at finding the best parameter setting for a given problem or class of problems. The algorithm configuration problem thus amounts to a meta-optimization problem in the space of parameters, whose meta-objective is the performance measure of the algorithm at hand with a given parameter configuration. However, in the continuous domain, such a method can only be empirically assessed at the cost of running the algorithm on some problem instances. More recent approaches rely on a description of problems in some feature space and try to learn a mapping from this feature space onto the space of parameter configurations of the algorithm at hand. Along these lines, this PhD thesis focuses on Per Instance Algorithm Configuration (PIAC) for solving continuous black-box optimization problems, where only a limited budget of function evaluations is available. We first survey evolutionary algorithms for continuous optimization, with a focus on the two algorithms we used as target algorithms for PIAC: DE and CMA-ES. Next, we review the state of the art of algorithm configuration approaches and the different features that have been proposed in the literature to describe continuous black-box optimization problems. We then introduce a general methodology to empirically study PIAC for the continuous domain, so that all the components of PIAC can be explored in real-world conditions. To this end, we also introduce a new continuous black-box test bench, distinct from the famous BBOB benchmark, composed of several multi-dimensional test functions with different problem properties, gathered from the literature. The methodology is finally applied to two EAs. First, we use Differential Evolution as the target algorithm and explore all the components of PIAC in order to empirically assess the best ones. Second, based on the results for DE, we empirically investigate PIAC with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as the target algorithm. Both use cases empirically validate the proposed methodology on the new black-box test bench for dimensions up to 100.

9. Repetto, Marco. "Black-box supervised learning and empirical assessment: new perspectives in credit risk modeling". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/402366.

Abstract:
Recent highly performant machine learning algorithms are compelling but opaque, so it is often hard to understand how they arrive at their predictions, giving rise to interpretability issues. Such issues are particularly relevant in supervised learning, where black-box models are not easily understandable by the stakeholders involved. A growing body of work focuses on making machine learning models, particularly deep learning models, more interpretable. Currently proposed approaches rely on post-hoc interpretation, using methods such as saliency mapping and partial dependencies. Despite the advances that have been made, interpretability is still an active area of research, and there is no silver-bullet solution. Moreover, in high-stakes decision-making, post-hoc interpretability may be sub-optimal. An example is the field of enterprise credit risk modeling, where classification models discriminate between good and bad borrowers, and lenders can use these models to deny loan requests. Loan denial can be especially harmful when the borrower cannot appeal or have the decision explained and grounded in fundamentals. In such cases it is therefore crucial to understand why these models produce a given output and to steer the learning process toward predictions based on fundamentals. This dissertation focuses on the concept of Interpretable Machine Learning, with particular attention to the context of credit risk modeling. In particular, the dissertation revolves around three topics: model-agnostic interpretability, post-hoc interpretation in credit risk, and interpretability-driven learning. More specifically, the first chapter is a guided introduction to the model-agnostic techniques shaping today's landscape of machine learning and to their implementations. The second chapter focuses on an empirical analysis of the credit risk of Italian small and medium enterprises, proposing an analytical pipeline in which post-hoc interpretability plays a crucial role in finding the relevant underpinnings that drive a firm into bankruptcy. The third and last paper proposes a novel multicriteria knowledge-injection methodology, based on double backpropagation, which can improve model performance, especially when data are scarce. The essential advantage of the methodology is that it allows the decision maker to impose prior knowledge at the beginning of the learning process, producing predictions that align with the fundamentals.

10. Joel, Viklund. "Explaining the output of a black box model and a white box model: an illustrative comparison". Thesis, Uppsala universitet, Filosofiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420889.

Abstract:
The thesis investigates how one should determine the appropriate transparency of an information processing system from a receiver perspective. Past research has suggested that a model should be maximally transparent for what are labeled "high-stakes decisions". Instead of motivating the choice of a model's transparency by the non-rigorous criterion that the model contributes to a high-stakes decision, this thesis explores an alternative method: let the transparency depend on how well an explanation of the model's output satisfies the purpose of an explanation. As a result, we do not have to ask whether it is a high-stakes decision; we should instead make sure the model is sufficiently transparent to provide an explanation that satisfies the expressed purpose of an explanation.

Books on the topic "Black-box learning"

1. Assessment Reform Group and University of Cambridge, Faculty of Education, eds. Assessment for learning: Beyond the black box. [Cambridge?]: Assessment Reform Group, 1999.

2. Pardalos, Panos M., Varvara Rasskazova, and Michael N. Vrahatis, eds. Black Box Optimization, Machine Learning, and No-Free Lunch Theorems. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66515-9.

3

Nashat, Bidjan, and World Bank, eds. The black box of governmental learning: The learning spiral -- a concept to organize learning in governments. Washington, D.C.: World Bank, 2010.
4

King's College London, Department of Education and Professional Studies, ed. Working inside the black box: Assessment for learning in the classroom. London: nferNelson, 2002.
5

Black, P. J., and King's College London, Department of Education and Professional Studies, eds. Working inside the black box: Assessment for learning in the classroom. London: Department of Education and Professional Studies, King's College London, 2002.
6

Russell, David W. The BOXES Methodology: Black Box Dynamic Control. London: Springer London, 2012.

7

Black, Paul. Working inside the black box: An assessment for learning in the classroom. London: Department of Education and Professional Studies, Kings College, 2002.

8

Cox, Margaret J., and King's College London, Department of Education and Professional Studies, eds. Information and communication technology inside the black box: Assessment for learning in the ICT classroom. London: NferNelson, 2007.
9

English Inside the Black Box: Assessment for Learning in the English Classroom. GL Assessment, 2006.
10

Pardalos, P. M. Black Box Optimization, Machine Learning, and No-Free Lunch Theorems. Springer International Publishing AG, 2022.


Book chapters on the topic "Black-box learning"

1

Howard, Sarah, Kate Thompson, and Abelardo Pardo. "Opening the black box". In Learning Analytics in the Classroom, 152–64. Abingdon, Oxon; New York, NY: Routledge, 2018. http://dx.doi.org/10.4324/9781351113038-10.
2

Dinov, Ivo D. "Black Box Machine Learning Methods". In The Springer Series in Applied Machine Learning, 341–83. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-17483-4_6.
3

Sudmann, Andreas. "On Computer creativity. Machine learning and the arts of artificial intelligences". In The Black Box Book, 264–80. Brno: Masaryk University Press, 2022. http://dx.doi.org/10.5817/cz.muni.m280-0225-2022-11.
4

Fournier-Viger, Philippe, Mehdi Najjar, André Mayers, and Roger Nkambou. "From Black-Box Learning Objects to Glass-Box Learning Objects". In Intelligent Tutoring Systems, 258–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11774303_26.
5

TV, Vishnu, Pankaj Malhotra, Jyoti Narwariya, Lovekesh Vig, and Gautam Shroff. "Meta-Learning for Black-Box Optimization". In Machine Learning and Knowledge Discovery in Databases, 366–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46147-8_22.
6

Archetti, F., A. Candelieri, B. G. Galuzzi, and R. Perego. "Learning Enabled Constrained Black-Box Optimization". In Black Box Optimization, Machine Learning, and No-Free Lunch Theorems, 1–33. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66515-9_1.
7

Kampakis, Stylianos. "Machine Learning: Inside the Black Box". In Predicting the Unknown, 113–31. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9505-2_8.
8

Stachowiak-Szymczak, Katarzyna. "Interpreting: Different Approaches Towards the ‘Black Box’". In Second Language Learning and Teaching, 1–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-19443-7_1.
9

Cai, Jinghui, Boyang Wang, Xiangfeng Wang, and Bo Jin. "Accelerate Black-Box Attack with White-Box Prior Knowledge". In Intelligence Science and Big Data Engineering. Big Data and Machine Learning, 394–405. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36204-1_33.
10

Kuri-Morales, Angel Fernando. "Removing the Black-Box from Machine Learning". In Lecture Notes in Computer Science, 36–46. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33783-3_4.

Conference papers on the topic "Black-box learning"

1

Gao, Jingyue, Xiting Wang, Yasha Wang, Yulan Yan, and Xing Xie. "Learning Groupwise Explanations for Black-Box Models". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/330.

Abstract:
We study two user demands that are important during the exploitation of explanations in practice: 1) understanding the overall model behavior faithfully with limited cognitive load and 2) predicting the model behavior accurately on unseen instances. We illustrate that the two user demands correspond to two major sub-processes in the human cognitive process and propose a unified framework to fulfill them simultaneously. Given a local explanation method, our framework jointly 1) learns a limited number of groupwise explanations that interpret the model behavior on most instances with high fidelity and 2) specifies the region where each explanation applies. Experiments on six datasets demonstrate the effectiveness of our method.
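The two demands this framework fulfills — a small set of high-fidelity groupwise explanations plus the region where each applies — can be imitated by clustering local attribution vectors. The sketch below is a loose stand-in, not the authors' method: the synthetic "explanations" and the tiny k-means routine are invented for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's framework): summarise many local
# explanations with a few "groupwise" ones by clustering local
# attribution vectors; each cluster centroid is the group explanation,
# and cluster membership defines the region where it applies.

rng = np.random.default_rng(1)

# Fake local explanations: two behavioural regimes of a black-box model.
expl = np.vstack([
    rng.normal(loc=[1.0, 0.0], scale=0.1, size=(50, 2)),
    rng.normal(loc=[0.0, 1.0], scale=0.1, size=(50, 2)),
])

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means; returns (centroids, labels)."""
    r = np.random.default_rng(seed)
    centroids = points[r.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

group_expl, regions = kmeans(expl, k=2)   # 2 groupwise explanations + regions
```

Each row of `group_expl` summarises how the model behaves on one region, and `regions` tells a user which summary to consult for a given instance; the real method additionally optimises fidelity to the black box rather than plain Euclidean distance.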
2

Papernot, Nicolas, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. "Practical Black-Box Attacks against Machine Learning". In ASIA CCS '17: ACM Asia Conference on Computer and Communications Security. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3052973.3053009.
3

Wajahat, Muhammad, Anshul Gandhi, Alexei Karve, and Andrzej Kochut. "Using machine learning for black-box autoscaling". In 2016 Seventh International Green and Sustainable Computing Conference (IGSC). IEEE, 2016. http://dx.doi.org/10.1109/igcc.2016.7892598.
4

Aggarwal, Aniya, Pranay Lohia, Seema Nagar, Kuntal Dey, and Diptikalyan Saha. "Black box fairness testing of machine learning models". In ESEC/FSE '19: 27th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3338906.3338937.
5

Rasouli, Peyman, and Ingrid Chieh Yu. "Explainable Debugger for Black-box Machine Learning Models". In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533944.
6

Pengcheng, Li, Jinfeng Yi, and Lijun Zhang. "Query-Efficient Black-Box Attack by Active Learning". In 2018 IEEE International Conference on Data Mining (ICDM). IEEE, 2018. http://dx.doi.org/10.1109/icdm.2018.00159.
7

Nikoloska, Ivana, and Osvaldo Simeone. "Bayesian Active Meta-Learning for Black-Box Optimization". In 2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC). IEEE, 2022. http://dx.doi.org/10.1109/spawc51304.2022.9833993.
8

Fu, Junjie, Jian Sun, and Gang Wang. "Boosting Black-Box Adversarial Attacks with Meta Learning". In 2022 41st Chinese Control Conference (CCC). IEEE, 2022. http://dx.doi.org/10.23919/ccc55666.2022.9901576.
9

Huang, Chen, Shuangfei Zhai, Pengsheng Guo, and Josh Susskind. "MetricOpt: Learning to Optimize Black-Box Evaluation Metrics". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00024.
10

Han, Gyojin, Jaehyun Choi, Haeil Lee, and Junmo Kim. "Reinforcement Learning-Based Black-Box Model Inversion Attacks". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01964.

Reports on the topic "Black-box learning"

1

Zhang, Guannan, Matt Bement, and Hoang Tran. Final Report on Field Work Proposal ERKJ358: Black-Box Training for Scientific Machine Learning Models. Office of Scientific and Technical Information (OSTI), December 2022. http://dx.doi.org/10.2172/1905375.
2

Hauzenberger, Niko, Florian Huber, Gary Koop, and James Mitchell. Bayesian modeling of time-varying parameters using regression trees. Federal Reserve Bank of Cleveland, January 2023. http://dx.doi.org/10.26509/frbc-wp-202305.

Abstract:
In light of widespread evidence of parameter instability in macroeconomic models, many time-varying parameter (TVP) models have been proposed. This paper proposes a nonparametric TVP-VAR model using Bayesian additive regression trees (BART). The novelty of this model stems from the fact that the law of motion driving the parameters is treated nonparametrically. This leads to great flexibility in the nature and extent of parameter change, both in the conditional mean and in the conditional variance. In contrast to other nonparametric and machine learning methods that are black box, inference using our model is straightforward because, in treating the parameters rather than the variables nonparametrically, the model remains conditionally linear in the mean. Parsimony is achieved through adopting nonparametric factor structures and use of shrinkage priors. In an application to US macroeconomic data, we illustrate the use of our model in tracking both the evolving nature of the Phillips curve and how the effects of business cycle shocks on inflationary measures vary nonlinearly with movements in uncertainty.
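The core idea — treating the law of motion of the parameters nonparametrically — can be shown in miniature without BART. The sketch below is an invented toy, not the paper's model: a regression coefficient that switches at an unknown date is recovered with a depth-one "tree", i.e. an exhaustive search over split points with a separate OLS slope on each side.

```python
import numpy as np

# Toy illustration (not the paper's BART machinery): a regression whose
# coefficient beta_t switches at an unknown date. We recover the
# time-varying coefficient with a depth-one "tree": search every split
# of the sample and fit a separate OLS slope on each side.

rng = np.random.default_rng(2)
T = 200
x = rng.normal(size=T)
beta = np.where(np.arange(T) < 120, 1.0, -0.5)    # true regime change at t=120
y = beta * x + 0.1 * rng.normal(size=T)

def slope(xs, ys):
    """OLS slope through the origin."""
    return float(xs @ ys / (xs @ xs))

best = None
for s in range(10, T - 10):                       # candidate split dates
    b1, b2 = slope(x[:s], y[:s]), slope(x[s:], y[s:])
    resid = np.concatenate([y[:s] - b1 * x[:s], y[s:] - b2 * x[s:]])
    sse = float(resid @ resid)
    if best is None or sse < best[0]:
        best = (sse, s, b1, b2)

sse, split, beta_pre, beta_post = best
```

A full BART treatment averages many such trees under a Bayesian prior; the single split here only conveys why tree-based laws of motion can capture abrupt parameter change that a linear TVP specification would smooth over.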
