Academic literature on the topic 'Black-box learning algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Black-box learning algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Black-box learning algorithm"

1

Hwangbo, Jemin, Christian Gehring, Hannes Sommer, Roland Siegwart, and Jonas Buchli. "Policy Learning with an Efficient Black-Box Optimization Algorithm." International Journal of Humanoid Robotics 12, no. 03 (September 2015): 1550029. http://dx.doi.org/10.1142/s0219843615500292.

Full text
Abstract:
Robotic learning on real hardware requires an efficient algorithm that minimizes the number of trials needed to learn an optimal policy. Prolonged use of hardware causes wear and tear on the system and demands more attention from an operator. To this end, we present a novel black-box optimization algorithm, Reward Optimization with Compact Kernels and fast natural gradient regression (ROCK⋆). Our algorithm immediately updates knowledge after a single trial and is able to extrapolate in a controlled manner. These features make fast and safe learning on real hardware possible. The performance of our method is evaluated with standard benchmark functions that are commonly used to test optimization algorithms. We also present three different robotic optimization examples using ROCK⋆. The first robotic example is on a simulated robot arm, the second is on a real articulated legged system, and the third is on a simulated quadruped robot with 12 actuated joints. ROCK⋆ outperforms the current state-of-the-art algorithms in all tasks, sometimes even by an order of magnitude.
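ROCK⋆ itself is not reproduced here, but the abstract's central idea, a black-box optimizer that incorporates every single trial immediately, can be illustrated with a far simpler relative. The sketch below is a minimal (1+1) evolution strategy on the sphere benchmark; the function name, step-size constants, and benchmark are illustrative assumptions, not the paper's algorithm.

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=200, seed=0):
    """Minimal (1+1) evolution strategy: a black-box optimizer that,
    like the single-trial-update idea above, uses the outcome of every
    evaluation immediately (a rough 1/5th-success step-size rule)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(cand)
        if fc < fx:          # success: keep candidate, widen the step
            x, fx = cand, fc
            sigma *= 1.5
        else:                # failure: shrink the step
            sigma *= 0.87
    return x, fx

# Sphere function: a classic benchmark with minimum 0 at the origin.
sphere = lambda v: sum(t * t for t in v)
best, val = one_plus_one_es(sphere, [3.0, -2.0])
```

Each candidate costs exactly one function evaluation, which is the property that matters when, as the abstract notes, every trial wears out real hardware.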
2

Kirsch, Louis, Sebastian Flennerhag, Hado van Hasselt, Abram Friesen, Junhyuk Oh, and Yutian Chen. "Introducing Symmetries to Black Box Meta Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7202–10. http://dx.doi.org/10.1609/aaai.v36i7.20681.

Full text
Abstract:
Meta reinforcement learning (RL) attempts to discover new RL algorithms automatically from environment interaction. In so-called black-box approaches, the policy and the learning algorithm are jointly represented by a single neural network. These methods are very flexible, but they tend to underperform compared to human-engineered RL algorithms in terms of generalisation to new, unseen environments. In this paper, we explore the role of symmetries in meta-generalisation. We show that a recent successful meta RL approach that meta-learns an objective for backpropagation-based learning exhibits certain symmetries (specifically the reuse of the learning rule, and invariance to input and output permutations) that are not present in typical black-box meta RL systems. We hypothesise that these symmetries can play an important role in meta-generalisation. Building off recent work in black-box supervised meta learning, we develop a black-box meta RL system that exhibits these same symmetries. We show through careful experimentation that incorporating these symmetries can lead to algorithms with a greater ability to generalise to unseen action & observation spaces, tasks, and environments.
3

Xiang, Fengtao, Jiahui Xu, Wanpeng Zhang, and Weidong Wang. "A Distributed Biased Boundary Attack Method in Black-Box Attack." Applied Sciences 11, no. 21 (November 8, 2021): 10479. http://dx.doi.org/10.3390/app112110479.

Full text
Abstract:
Adversarial samples threaten the effectiveness of machine learning (ML) models and algorithms in many applications. In particular, black-box attack methods are quite close to actual scenarios. Research on black-box attack methods and the generation of adversarial samples helps to discover the defects of machine learning models and can strengthen the robustness of machine learning algorithms. However, such methods require frequent queries, which makes them less efficient. This paper makes improvements in the initial generation of adversarial examples and in the search for the most effective ones. Besides, it is found that some indicators can be used to detect attacks, which is a new foundation compared with our previous studies. Firstly, the paper proposes an algorithm to generate initial adversarial samples with a smaller L2 norm; secondly, a combination of particle swarm optimization (PSO) and the biased boundary adversarial attack (BBA), named PSO-BBA, is proposed. Experiments are conducted on ImageNet, and PSO-BBA is compared with the baseline method. The experimental results show that: (1) a distributed framework for adversarial attack methods is proposed; (2) the proposed initial point selection method effectively reduces the number of queries; (3) compared to the original BBA, the proposed PSO-BBA algorithm accelerates convergence and improves attack accuracy; (4) the improved PSO-BBA algorithm performs well on both targeted and non-targeted attacks; (5) the mean structural similarity (MSSIM) can be used as an indicator of adversarial attacks.
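The PSO ingredient the abstract combines with the boundary attack can be sketched generically. The snippet below is a bare-bones particle swarm minimizer, not the paper's PSO-BBA: the bounds, swarm size, and inertia/attraction constants are illustrative assumptions, and the attack-specific objective is replaced by a plain test function.

```python
import random

def pso_minimize(f, dim, n=15, iters=60, seed=1):
    """Bare-bones particle swarm optimization: each particle is pulled
    toward its own best position (pbest) and the swarm's best (gbest)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.4, 1.4      # inertia and attraction weights
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

gbest, gval = pso_minimize(lambda p: sum(x * x for x in p), dim=3)
```

In the paper's setting the objective would instead score candidate perturbations at the decision boundary of the attacked classifier, with each evaluation costing one black-box query.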
4

Liu, Yanhe, Michael Afnan, Vincent Conitzer, Cynthia Rudin, Abhishek Mishra, Julian Savulescu, and Masoud Afnan. "Embryo Selection by “Black-Box” Artificial Intelligence: The Ethical and Epistemic Considerations." Fertility & Reproduction 04, no. 03n04 (September 2022): 147. http://dx.doi.org/10.1142/s2661318222740590.

Full text
Abstract:
Background: The combination of time-lapse imaging and artificial intelligence (AI) offers novel potential for embryo assessment by allowing a vast quantity of image data to be analysed via machine learning. Most algorithms developed to date have used neural networks which are uninterpretable (“black-box”) and cannot be understood by doctors, embryologists and patients, which raises ethical and epistemic concerns for embryo selection in a clinical setting. Aim: This study aims to discuss ethical and epistemic considerations surrounding clinical implementation of “black-box” based embryo selection algorithms. Method: A scoping review was performed by evaluating publications reporting “black-box” embryo selection algorithms. Potential ethical and epistemic issues were identified and discussed. Results: No randomised controlled trial was identified in the literature evaluating clinical effectiveness of “black-box” embryo selection algorithms. Several ethical and epistemic concerns were identified. Potential ethical issues included (1) lack of randomised controlled trials, (2) impact on the shared decision-making process in embryo selection between clinicians and patients, (3) misrepresentation of patient values due to hidden reasoning process in “black-box” algorithms, (4) social impacts if algorithm subsequently proven to be biased, and (5) unclear responsibility when algorithm makes obviously poor choices of embryos. Potential epistemic issues included (1) information asymmetries between algorithm developers and doctors, embryologists and patients; (2) risk of biased prediction due to data selection during training process; (3) inability to troubleshoot for data training purposes due to limited interpretability; and (4) the economics of buying into commercial proprietary add-ons. Conclusion: There are significant epistemic and ethical concerns with “black-box” embryo selection. 
No published randomised controlled trial is available to support its clinical implementation. AI embryo selection in general, however, is potentially useful, but must be done carefully and transparently. Interpretable AI would be the preferred alternative, as it causes fewer issues.
5

Bausch, Johannes. "Fast Black-Box Quantum State Preparation." Quantum 6 (August 4, 2022): 773. http://dx.doi.org/10.22331/q-2022-08-04-773.

Full text
Abstract:
Quantum state preparation is an important ingredient for other higher-level quantum algorithms, such as Hamiltonian simulation, or for loading distributions into a quantum device to be used e.g. in the context of optimization tasks such as machine learning. Starting with a generic "black box" method devised by Grover in 2000, which employs amplitude amplification to load coefficients calculated by an oracle, there has been a long series of results and improvements with various additional conditions on the amplitudes to be loaded, culminating in Sanders et al.'s work which avoids almost all arithmetic during the preparation stage. In this work, we construct an optimized black box state loading scheme with which various important sets of coefficients can be loaded significantly faster than in O(N) rounds of amplitude amplification, up to only O(1) many. We achieve this with two variants of our algorithm. The first employs a modification of the oracle from Sanders et al., which requires fewer ancillas (log2 g vs. g+2 in the bit precision g), and fewer non-Clifford operations per amplitude amplification round within the context of our algorithm. The second utilizes the same oracle, but at slightly increased cost in terms of ancillas (g + log2 g) and non-Clifford operations per amplification round. As the number of amplitude amplification rounds enters as a multiplicative factor, our black box state loading scheme yields an up to exponential speedup as compared to prior methods. This speedup translates beyond the black box case.
6

Mike, Koby, and Orit Hazzan. "Machine Learning for Non-Majors: A White Box Approach." Statistics Education Research Journal 21, no. 2 (July 4, 2022): 10. http://dx.doi.org/10.52041/serj.v21i2.45.

Full text
Abstract:
Data science is a new field of research, with growing interest in recent years, that focuses on extracting knowledge and value from data. New data science education programs, which are being launched at a growing rate, are designed for multiple levels, beginning with elementary school pupils. Machine learning is an important element of data science that requires an extensive background in mathematics. While it is possible to teach the principles of machine learning as a black box, it might be difficult to improve algorithm performance without a white box understanding of the underlying learning algorithms. In this paper, we suggest pedagogical methods to support white box understanding of machine learning algorithms for learners who lack the needed graduate-level mathematics, particularly high school computer science pupils.
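The white-box/black-box contrast the abstract draws can be made concrete: instead of calling an opaque library routine, a learner can watch every update of a learning algorithm. The sketch below fits a line by gradient descent on squared error; the data, learning rate, and epoch count are illustrative choices, not taken from the paper.

```python
def fit_line(xs, ys, lr=0.01, epochs=2000):
    """White-box view of learning: fit y = w*x + b by gradient descent
    on mean squared error, with every parameter update visible."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # gradients of mean squared error with respect to w and b
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

w, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])   # data drawn from y = 2x + 1
```

The same fit obtained from a one-line library call would be a black box; here the error signal and the two gradient formulas are the whole story, which is the kind of transparency the pedagogical methods above aim for.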
7

García, Javier, Roberto Iglesias, Miguel A. Rodríguez, and Carlos V. Regueiro. "Directed Exploration in Black-Box Optimization for Multi-Objective Reinforcement Learning." International Journal of Information Technology & Decision Making 18, no. 03 (May 2019): 1045–82. http://dx.doi.org/10.1142/s0219622019500093.

Full text
Abstract:
Usually, real-world problems involve the optimization of multiple, possibly conflicting, objectives. These problems may be addressed by Multi-objective Reinforcement learning (MORL) techniques. MORL is a generalization of standard Reinforcement Learning (RL) where the single reward signal is extended to multiple signals, in particular, one for each objective. MORL is the process of learning policies that optimize multiple objectives simultaneously. In these problems, the use of directional/gradient information can be useful to guide the exploration to better and better behaviors. However, traditional policy-gradient approaches have two main drawbacks: they require the use of a batch of episodes to properly estimate the gradient information (reducing in this way the learning speed), and they use stochastic policies which could have a disastrous impact on the safety of the learning system. In this paper, we present a novel population-based MORL algorithm for problems in which the underlying objectives are reasonably smooth. It presents two main characteristics: fast computation of the gradient information for each objective through the use of neighboring solutions, and the use of this information to carry out a geometric partition of the search space and thus direct the exploration to promising areas. Finally, the algorithm is evaluated and compared to policy gradient MORL algorithms on different multi-objective problems: the water reservoir and the biped walking problem (the latter both on simulation and on a real robot).
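The paper's idea of recovering directional information from neighbouring solutions, one gradient per objective, rather than from batches of episodes, can be approximated crudely with finite differences. The helper below is a generic sketch under that assumption; the two quadratic objectives are invented for illustration.

```python
def fd_gradients(objectives, x, eps=1e-5):
    """Estimate one gradient per objective by central finite differences,
    i.e. from function values at neighbouring solutions of x."""
    grads = []
    for f in objectives:
        g = []
        for i in range(len(x)):
            hi = x[:]; hi[i] += eps
            lo = x[:]; lo[i] -= eps
            g.append((f(hi) - f(lo)) / (2 * eps))
        grads.append(g)
    return grads

# two smooth, conflicting objectives of a 2-D decision vector
f1 = lambda v: (v[0] - 1) ** 2 + v[1] ** 2     # pulls toward (1, 0)
f2 = lambda v: v[0] ** 2 + (v[1] + 2) ** 2     # pulls toward (0, -2)
g1, g2 = fd_gradients([f1, f2], [0.0, 0.0])
```

With per-objective gradients in hand, the population can be steered toward regions where the objectives' descent directions partially agree, which is the kind of geometric partitioning of the search space the abstract describes.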
8

Mayr, Franz, Sergio Yovine, and Ramiro Visca. "Property Checking with Interpretable Error Characterization for Recurrent Neural Networks." Machine Learning and Knowledge Extraction 3, no. 1 (February 12, 2021): 205–27. http://dx.doi.org/10.3390/make3010010.

Full text
Abstract:
This paper presents a novel on-the-fly, black-box, property-checking-through-learning approach as a means for verifying requirements of recurrent neural networks (RNN) in the context of sequence classification. Our technique builds on a tool for learning probably approximately correct (PAC) deterministic finite automata (DFA). The sequence classifier inside the black-box consists of a Boolean combination of several components, including the RNN under analysis together with the requirements to be checked, possibly modeled as RNNs themselves. On the one hand, if the output of the algorithm is an empty DFA, there is a proven upper bound (as a function of the algorithm parameters) on the probability of the language of the black-box being nonempty. This implies the property probably holds on the RNN with probabilistic guarantees. On the other hand, if the DFA is nonempty, it is certain that the language of the black-box is nonempty. This entails that the RNN does not satisfy the requirement for sure. In this case, the output automaton serves as an explicit and interpretable characterization of the error. Our approach does not rely on a specific property specification formalism and is capable of handling nonregular languages as well. Besides, it neither explicitly builds individual representations of any of the components of the black-box nor resorts to any external decision procedure for verification. This paper also improves previous theoretical results regarding the probabilistic guarantees of the underlying learning algorithm.
9

Anđelić, Nikola, Ivan Lorencin, Matko Glučina, and Zlatan Car. "Mean Phase Voltages and Duty Cycles Estimation of a Three-Phase Inverter in a Drive System Using Machine Learning Algorithms." Electronics 11, no. 16 (August 21, 2022): 2623. http://dx.doi.org/10.3390/electronics11162623.

Full text
Abstract:
To achieve an accurate, efficient, and high dynamic control performance of electric motor drives, precise phase voltage information is required. However, measuring the phase voltages of electrical motor drives online is expensive and potentially contains measurement errors, so they are estimated by inverter models. In this paper, the idea is to investigate if various machine learning (ML) algorithms could be used to estimate the mean phase voltages and duty cycles of the black-box inverter model and black-box inverter compensation scheme with high accuracy using a publicly available dataset. Initially, nine ML algorithms were trained and tested using default parameters. Then, a randomized hyper-parameter search was developed and implemented alongside a 5-fold cross-validation procedure on each ML algorithm to find the hyper-parameters that achieve high estimation accuracy on both the training and testing parts of the dataset. Based on the obtained estimation accuracies, eight of the nine ML algorithms were chosen and used to build the stacking ensemble. The best mean estimation accuracy values achieved with the stacking ensemble on the black-box inverter model are R² = 0.9998, MAE = 1.03, and RMSE = 1.54, and in the case of the black-box inverter compensation scheme R² = 0.9991, MAE = 0.0042, and RMSE = 0.0063, respectively.
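The stacking idea in the abstract, fit several base estimators, then let a meta-step decide how to combine them, can be sketched in miniature. Everything below is an illustrative toy: two hand-rolled base models (a global mean and a least-squares line), a validation split, and a grid-searched blend weight standing in for the meta-learner.

```python
import statistics

def stack_predict(train, val, x_new):
    """Toy stacking ensemble: fit two base estimators on `train`, then
    choose a blend weight on `val` (the meta step) and predict."""
    xs, ys = zip(*train)
    # base model 1: global mean; base model 2: least-squares line
    mean_y = statistics.fmean(ys)
    xbar, ybar = statistics.fmean(xs), mean_y
    slope = (sum((x - xbar) * (y - ybar) for x, y in train)
             / sum((x - xbar) ** 2 for x in xs))
    line = lambda x: ybar + slope * (x - xbar)
    base = [lambda x: mean_y, line]
    # meta step: pick the blend weight alpha minimizing validation MSE
    def mse(alpha):
        return statistics.fmean(
            (alpha * base[0](x) + (1 - alpha) * base[1](x) - y) ** 2
            for x, y in val)
    alpha = min((a / 20 for a in range(21)), key=mse)
    return alpha * base[0](x_new) + (1 - alpha) * base[1](x_new)

# data drawn from y = 2x: the line model should win the meta step
pred = stack_predict([(0, 0), (1, 2), (2, 4)], [(3, 6), (4, 8)], 5)
```

The paper's pipeline replaces the two toy models with eight tuned ML regressors and the grid search with a trained meta-estimator, but the division of labour, base fits on training folds, combination weights chosen on held-out data, is the same.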
10

Veugen, Thijs, Bart Kamphorst, and Michiel Marcus. "Privacy-Preserving Contrastive Explanations with Local Foil Trees." Cryptography 6, no. 4 (October 28, 2022): 54. http://dx.doi.org/10.3390/cryptography6040054.

Full text
Abstract:
We present the first algorithm that combines privacy-preserving technologies and state-of-the-art explainable AI to enable privacy-friendly explanations of black-box AI models. We provide a secure algorithm for contrastive explanations of black-box machine learning models that securely trains and uses local foil trees. Our work shows that the quality of these explanations can be upheld whilst ensuring the privacy of both the training data and the model itself.
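Setting the privacy layer aside, the foil-tree idea behind the paper, explain a black-box prediction by a small local model that separates the predicted ("fact") class from an alternative ("foil"), can be sketched as follows. The sampling scheme, the depth-1 "tree" (a single feature threshold), and the hypothetical black box are all illustrative assumptions.

```python
import random

def contrastive_rule(black_box, instance, n=400, seed=2):
    """Sample points near `instance`, query the black-box label, and fit
    a depth-1 'tree' -- one feature threshold -- separating the fact
    class from everything else (the foil)."""
    rng = random.Random(seed)
    fact = black_box(instance)
    pts = [[v + rng.gauss(0, 1.0) for v in instance] for _ in range(n)]
    labels = [black_box(p) == fact for p in pts]
    best = None  # (misclassified, feature index, threshold)
    for f in range(len(instance)):
        for t in sorted({round(p[f], 1) for p in pts}):
            err = sum((p[f] <= t) != same for p, same in zip(pts, labels))
            err = min(err, n - err)   # either side may be the fact side
            if best is None or err < best[0]:
                best = (err, f, t)
    return best  # smallest-error single-split explanation

# hypothetical black box: the class depends only on the first feature
bb = lambda p: "A" if p[0] < 0 else "B"
err, feature, threshold = contrastive_rule(bb, [-1.5, 0.3])
```

A real foil tree grows deeper splits and phrases the result contrastively ("class A rather than B because feature 0 is below ~0"); the paper's contribution is doing all of this under secure multi-party computation so neither the model nor the data is revealed.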

Dissertations / Theses on the topic "Black-box learning algorithm"

1

Belkhir, Nacim. "Per Instance Algorithm Configuration for Continuous Black Box Optimization." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS455/document.

Full text
Abstract:
This PhD thesis focuses on automated algorithm configuration, which aims at finding the best parameter setting for a given problem or class of problems. The algorithm configuration problem thus amounts to a meta-optimization problem in the space of parameters, whose meta-objective is the performance measure of the given algorithm with a given parameter configuration. However, in the continuous domain, such a method can only be empirically assessed at the cost of running the algorithm on some problem instances. More recent approaches rely on a description of problems in some feature space and try to learn a mapping from this feature space onto the space of parameter configurations of the algorithm at hand. Along these lines, this PhD thesis focuses on Per Instance Algorithm Configuration (PIAC) for solving continuous black-box optimization problems, where only a limited budget of function evaluations is available. We first survey evolutionary algorithms for continuous optimization, with a focus on the two algorithms that we have used as target algorithms for PIAC, DE and CMA-ES. Next, we review the state of the art of algorithm configuration approaches, and the different features that have been proposed in the literature to describe continuous black-box optimization problems. We then introduce a general methodology to empirically study PIAC for the continuous domain, so that all the components of PIAC can be explored in real-world conditions. To this end, we also introduce a new continuous black-box test bench, distinct from the famous BBOB benchmark, that is composed of several multi-dimensional test functions with different problem properties, gathered from the literature. The methodology is finally applied to two EAs. First, we use Differential Evolution as the target algorithm and explore all the components of PIAC, so that we can empirically assess the best ones. 
Second, based on the results on DE, we empirically investigate PIAC with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as the target algorithm. Both use cases empirically validate the proposed methodology on the new black-box test bench for dimensions up to 100.
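The feature-space-to-configuration mapping at the heart of PIAC can be caricatured in a few lines. The sketch below uses nearest-neighbour lookup over past (problem features, best configuration) pairs; the feature vectors and configurations are hypothetical, and a learned regressor would replace the lookup in the thesis's actual setting.

```python
def piac_choose(history, features):
    """Per-instance algorithm configuration, maximally simplified:
    return the configuration of the nearest known problem in feature
    space (squared Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    _, config = min(history, key=lambda h: dist(h[0], features))
    return config

# hypothetical feature vectors: (dimension, estimated multimodality)
history = [
    ((2, 0.1), {"algo": "DE", "pop": 20}),
    ((40, 0.9), {"algo": "CMA-ES", "sigma": 0.3}),
]
cfg = piac_choose(history, (35, 0.8))
```

The interesting questions the thesis studies, which problem features are informative, and how well the mapping generalizes to unseen instances, live entirely inside what this toy compresses into a distance computation.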
2

Siqueira, Gomes Hugo. "Meta learning for population-based algorithms in black-box optimization." Master's thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/68764.

Full text
Abstract:
Optimization problems appear in almost any scientific field. However, the laborious process to design a suitable optimizer may lead to an unsuccessful outcome. Perhaps the most ambitious question in optimization is how we can design optimizers that can be flexible enough to adapt to a vast number of scenarios while at the same time reaching state-of-the-art performance. In this work, we aim to give a potential answer to this question by investigating how to metalearn population-based optimizers. We motivate and describe a common structure for most population-based algorithms, which present principles for general adaptation. This structure can derive a meta-learning framework based on a Partially observable Markov decision process (POMDP). Our conceptual formulation provides a general methodology to learn the optimizer algorithm itself, framed as a meta-learning or learning-to-optimize problem using black-box benchmarking datasets to train efficient general-purpose optimizers. We estimate a meta-loss training function based on stochastic algorithms’ performance. Our experimental analysis indicates that this new meta-loss function encourages the learned algorithm to be sample efficient and robust to premature convergence. Besides, we show that our approach can alter an algorithm’s search behavior to fit easily in a new context and be sample efficient compared to state-of-the-art algorithms, such as CMA-ES.
3

CURIA, FRANCESCO. "Explainable clinical decision support system: opening black-box meta-learner algorithm expert's based." Doctoral thesis, 2021. http://hdl.handle.net/11573/1538472.

Full text
Abstract:
Mathematical optimization methods are the basic mathematical tools of all artificial intelligence theory. In the fields of machine learning and deep learning, the examples from which algorithms learn (training data) are used by sophisticated cost functions, which can have solutions in closed form or through approximations. The interpretability of the models used, and their relative transparency as opposed to the opacity of black-boxes, is related to how the algorithm learns, and this occurs through the optimization and minimization of the errors that the machine makes in the learning process. In particular, the present work introduces a new method for determining the weights in an ensemble model, supervised and unsupervised, based on the well-known Analytic Hierarchy Process (AHP) method. This method is based on the concept that behind the choice of the different possible algorithms to be used in a machine learning problem, there is an expert who controls the decision-making process. The expert assigns a complexity score to each algorithm (based on the concept of the complexity-interpretability trade-off), through which the weight with which each model contributes to the training and prediction phases is determined. In addition, different methods are presented to evaluate the performance of these algorithms and to explain how each feature in the model contributes to the prediction of the outputs. The interpretability techniques used in machine learning are also combined with the introduced AHP-based method in the context of clinical decision support systems, in order to make the (black-box) algorithms and their results interpretable and explainable, so that clinical decision-makers can take controlled decisions, together with the concept of the "right to explanation" introduced by the legislator, because decision-makers bear civil and legal responsibility for their choices in the clinical field when these are based on systems that make use of artificial intelligence.
Not least, the central point is the interaction between the expert who controls the algorithm construction process and the domain expert, in this case the clinical one. Three applications on real data are implemented with the methods known in the literature and with those proposed in this work: one application concerns cervical cancer, another the problem related to diabetes, and the last focuses on a specific pathology developed by HIV-infected individuals. All applications are supported by plots, tables, and explanations of the results, implemented through Python libraries. The main case study of this thesis, regarding HIV-infected individuals, concerns an unsupervised ensemble-type problem, in which a series of clustering algorithms are used on a set of features and in turn produce an output that is used again as a set of meta-features to provide a set of labels for each given cluster. The meta-features and labels obtained by choosing the best algorithm are used to train a logistic regression meta-learner, which in turn is used, through some explainability methods, to provide the value of the contribution that each algorithm has had in the training phase. The use of logistic regression as a meta-learner classifier is motivated by the fact that it provides appreciable results, and also by the easy explainability of the estimated coefficients.
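The AHP step the abstract leans on, turning an expert's pairwise "model A is k times preferable to model B" judgments into normalized ensemble weights, has a standard approximation via row geometric means. The sketch below implements that approximation; the 3-model judgment matrix is a hypothetical example, not taken from the thesis.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights by the row geometric-mean
    method: take the geometric mean of each row of the reciprocal
    pairwise-comparison matrix, then normalize to sum to 1."""
    gm = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# hypothetical judgment matrix (reciprocal by construction): model 0 is
# 3x preferable to model 1 and 5x preferable to model 2, and so on.
m = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
weights = ahp_weights(m)
```

In the thesis's setting the "preferability" judgments encode the complexity-interpretability trade-off, and the resulting weights determine how much each algorithm contributes to the ensemble's training and prediction.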
4

Repický, Jakub. "Evoluční algoritmy a aktivní učení." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-355988.

Full text
Abstract:
Title: Evolutionary Algorithms and Active Learning. Author: Jakub Repický. Department: Department of Theoretical Computer Science and Mathematical Logic. Thesis supervisor: doc. RNDr. Ing. Martin Holeňa, CSc., Institute of Computer Science, Czech Academy of Sciences. Abstract (translated from Slovak): Evaluation of the objective function in continuous optimization tasks often dominates the computational cost of the algorithm. This holds especially for black-box functions, i.e., functions whose analytical description is not known and which are evaluated empirically. Many authors have addressed the topic of accelerating black-box optimization with surrogate models of the objective function. The goal of this thesis is to evaluate several methods that connect surrogate models based on Gaussian processes (GPs) with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Gaussian processes enable active learning, in which the points to be evaluated are selected with the aim of improving the accuracy of the model. Traditional GP-based surrogate models include the Metamodel-Assisted Evolution Strategy (MA-ES) and the Gaussian Process Optimization Procedure (GPOP). For the purposes of this thesis, both approaches were re-implemented and for the first time evaluated on the Black-Box...

Books on the topic "Black-box learning algorithm"

1

Russell, David W. The BOXES Methodology: Black Box Dynamic Control. London: Springer London, 2012.

Find full text
2

Russell, David W. The BOXES Methodology: Black Box Dynamic Control. Springer, 2014.

Find full text
3

Russell, David W. The BOXES Methodology: Black Box Dynamic Control. Springer, 2012.

Find full text
4

Russell, David W. BOXES Methodology Second Edition: Black Box Control of Ill-Defined Systems. Springer International Publishing AG, 2022.

Find full text

Book chapters on the topic "Black-box learning algorithm"

1

He, Yaodong, and Shiu Yin Yuen. "Black Box Algorithm Selection by Convolutional Neural Network." In Machine Learning, Optimization, and Data Science, 264–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64580-9_23.

Full text
2

Neele, Thomas, and Matteo Sammartino. "Compositional Automata Learning of Synchronous Systems." In Fundamental Approaches to Software Engineering, 47–66. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_3.

Full text
Abstract:
Automata learning is a technique to infer an automaton model of a black-box system via queries to the system. In recent years it has found widespread use both in industry and academia, as it enables formal verification when no model is available or when it is too complex to create one manually. In this paper we consider the problem of learning the individual components of a black-box synchronous system, assuming we can only query the whole system. We introduce a compositional learning approach in which several learners cooperate, each aiming to learn one of the components. Our experiments show that, in many cases, our approach requires significantly fewer queries than a widely-used non-compositional algorithm such as L*.
3

Cowley, Benjamin Ultan, Darryl Charles, Gerit Pfuhl, and Anna-Mari Rusanen. "Artificial Intelligence in Education as a Rawlsian Massively Multiplayer Game: A Thought Experiment on AI Ethics." In AI in Learning: Designing the Future, 297–316. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09687-7_18.

Abstract:
In this chapter, we reflect on the deployment of artificial intelligence (AI) as a pedagogical and educational instrument and the challenges that arise to ensure transparency and fairness to staff and students. We describe a thought experiment: 'simulation of AI in education as a massively multiplayer social online game' (AIEd-MMOG). Here, all actors (humans, institutions, AI agents and algorithms) are required to conform to the definition of a player. Models of player behaviour that 'understand' the game space provide an application programming interface for typical algorithms, e.g. deep learning neural nets or reinforcement learning agents, to interact with humans and the game space. The definition of 'player' is a role designed to maximise protection and benefit for human players during interaction with AI. The concept of benefit maximisation is formally defined as a Rawlsian justice game, played within the AIEd-MMOG to facilitate transparency and trust of the algorithms involved, without requiring algorithm-specific technical solutions to, e.g., 'peek inside the black box'. Our thought experiment for an AIEd-MMOG simulation suggests solutions for the well-known challenges of explainable AI and distributive justice.
4

Baniecki, Hubert, Wojciech Kretowicz, and Przemyslaw Biecek. "Fooling Partial Dependence via Data Poisoning." In Machine Learning and Knowledge Discovery in Databases, 121–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26409-2_8.

Abstract:
Many methods have been developed to understand complex predictive models, and high expectations are placed on post-hoc model explainability. It turns out that such explanations are neither robust nor trustworthy, and they can be fooled. This paper presents techniques for attacking Partial Dependence (plots, profiles, PDP), which are among the most popular methods of explaining any predictive model trained on tabular data. We showcase that PD can be manipulated in an adversarial manner, which is alarming, especially in financial or medical applications where auditability has become a must-have trait supporting black-box machine learning. The fooling is performed via poisoning the data to bend and shift explanations in the desired direction using genetic and gradient algorithms. We believe this to be the first work using a genetic algorithm for manipulating explanations, which is transferable as it generalizes both ways: in a model-agnostic and an explanation-agnostic manner.
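The partial dependence profile being attacked above has a simple empirical definition worth keeping in mind: PD at a value v is the model's prediction averaged over the data with the explained feature clamped to v. A minimal model-agnostic sketch (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """Empirical partial dependence of a black-box `predict` on one feature.

    For each grid value v, clamp column `feature` to v for every row of X
    and average the predictions: PD(v) = mean_i predict(x_i with x[feature] := v).
    """
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v          # clamp the explained feature
        pd_values.append(predict(Xv).mean())
    return np.array(pd_values)

# Toy check on a linear model f(x) = 2*x0 + x1: the PD curve over x0
# should be 2*v plus the data mean of x1.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
predict = lambda Z: 2.0 * Z[:, 0] + Z[:, 1]
grid = np.array([0.0, 1.0, 2.0])
pd = partial_dependence(predict, X, feature=0, grid=grid)
```

Because each PD point is an average over the (possibly poisoned) dataset, shifting a few rows shifts the whole curve, which is exactly the attack surface the paper exploits.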
5

Klein, Alexander. "Challenges of Model Predictive Control in a Black Box Environment." In Reinforcement Learning Algorithms: Analysis and Applications, 177–87. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-41188-6_15.

6

Coello, Carlos A. Coello, Silvia González Brambila, Josué Figueroa Gamboa, and Ma Guadalupe Castillo Tapia. "Multi-Objective Evolutionary Algorithms: Past, Present, and Future." In Black Box Optimization, Machine Learning, and No-Free Lunch Theorems, 137–62. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66515-9_5.

7

Bastani, Osbert, Jeevana Priya Inala, and Armando Solar-Lezama. "Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis." In xxAI - Beyond Explainable AI, 207–28. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_11.

Abstract:
Reinforcement learning is a promising strategy for automatically training policies for challenging control tasks. However, state-of-the-art deep reinforcement learning algorithms focus on training deep neural network (DNN) policies, which are black box models that are hard to interpret and reason about. In this chapter, we describe recent progress towards learning policies in the form of programs. Compared to DNNs, such programmatic policies are significantly more interpretable, easier to formally verify, and more robust. We give an overview of algorithms designed to learn programmatic policies, and describe several case studies demonstrating their various advantages.
8

Bartz-Beielstein, Thomas, Frederik Rehbach, and Margarita Rebolledo. "Tuning Algorithms for Stochastic Black-Box Optimization: State of the Art and Future Perspectives." In Black Box Optimization, Machine Learning, and No-Free Lunch Theorems, 67–108. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66515-9_3.

9

Vidovic, Marina M. C., Nico Görnitz, Klaus-Robert Müller, Gunnar Rätsch, and Marius Kloft. "Opening the Black Box: Revealing Interpretable Sequence Motifs in Kernel-Based Learning Algorithms." In Machine Learning and Knowledge Discovery in Databases, 137–53. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23525-7_9.

10

Schneider, Lennart, Lennart Schäpermeier, Raphael Patrick Prager, Bernd Bischl, Heike Trautmann, and Pascal Kerschke. "HPO × ELA: Investigating Hyperparameter Optimization Landscapes by Means of Exploratory Landscape Analysis." In Lecture Notes in Computer Science, 575–89. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14714-2_40.

Abstract:
Hyperparameter optimization (HPO) is a key component of machine learning models for achieving peak predictive performance. While numerous methods and algorithms for HPO have been proposed over the last years, little progress has been made in illuminating and examining the actual structure of these black-box optimization problems. Exploratory landscape analysis (ELA) subsumes a set of techniques that can be used to gain knowledge about properties of unknown optimization problems. In this paper, we evaluate the performance of five different black-box optimizers on 30 HPO problems, which consist of two-, three- and five-dimensional continuous search spaces of the XGBoost learner trained on 10 different data sets. This is contrasted with the performance of the same optimizers evaluated on 360 problem instances from the black-box optimization benchmark (BBOB). We then compute ELA features on the HPO and BBOB problems and examine similarities and differences. A cluster analysis of the HPO and BBOB problems in ELA feature space allows us to identify how the HPO problems compare to the BBOB problems on a structural meta-level. We identify a subset of BBOB problems that are close to the HPO problems in ELA feature space and show that optimizer performance is comparably similar on these two sets of benchmark problems. We highlight open challenges of ELA for HPO and discuss potential directions of future research and applications.

Conference papers on the topic "Black-box learning algorithm"

1

Cohen, Itay, Roi Fogler, and Doron Peled. "A Reinforcement-Learning Style Algorithm for Black Box Automata." In 2022 20th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE). IEEE, 2022. http://dx.doi.org/10.1109/memocode57689.2022.9954382.

2

Zhao, Mengchen, Bo An, Wei Gao, and Teng Zhang. "Efficient Label Contamination Attacks Against Black-Box Learning Models." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/551.

Abstract:
Label contamination attack (LCA) is an important type of data poisoning attack where an attacker manipulates the labels of training data to make the learned model beneficial to the attacker. Existing work on LCA assumes that the attacker has full knowledge of the victim learning model, whereas the victim model is usually a black-box to the attacker. In this paper, we develop a Projected Gradient Ascent (PGA) algorithm to compute LCAs on a family of empirical risk minimizations and show that an attack on one victim model can also be effective on other victim models. This makes it possible for the attacker to design an attack against a substitute model and transfer it to a black-box victim model. Based on the observation of this transferability, we develop a defense algorithm to identify the data points that are most likely to be attacked. Empirical studies show that PGA significantly outperforms existing baselines and that linear learning models are better substitute models than nonlinear ones.
3

Gajane, Pratik, Peter Auer, and Ronald Ortner. "Autonomous Exploration for Navigating in MDPs Using Blackbox RL Algorithms." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/413.

Abstract:
We consider the problem of navigating in a Markov decision process where extrinsic rewards are either absent or ignored. In this setting, the objective is to learn policies to reach all the states that are reachable within a given number of steps (in expectation) from a starting state. We introduce a novel meta-algorithm which can use any online reinforcement learning algorithm (with appropriate regret guarantees) as a black-box. Our algorithm demonstrates a method for transforming the output of online algorithms to a batch setting. We prove an upper bound on the sample complexity of our algorithm in terms of the regret bound of the used black-box RL algorithm. Furthermore, we provide experimental results to validate the effectiveness of our algorithm and correctness of our theoretical results.
4

Liu, Fei-Yu, Zi-Niu Li, and Chao Qian. "Self-Guided Evolution Strategies with Historical Estimated Gradients." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/205.

Abstract:
Evolution Strategies (ES) are a class of black-box optimization algorithms and have been widely applied to solve problems, e.g., in reinforcement learning (RL), where the true gradient is unavailable. ES estimate the gradient of an objective function with respect to the parameters by randomly sampling search directions and evaluating parameter perturbations in these directions. However, the gradient estimator of ES tends to have a high variance for high-dimensional optimization, thus requiring a large number of samples and making ES inefficient. In this paper, we propose a new ES algorithm SGES, which utilizes historical estimated gradients to construct a low-dimensional subspace for sampling search directions, and adjusts the importance of this subspace adaptively. We prove that the variance of the gradient estimator of SGES can be much smaller than that of Vanilla ES; meanwhile, its bias can be well bounded. Empirical results on benchmark black-box functions and a set of popular RL tasks exhibit the superior performance of SGES over state-of-the-art ES algorithms.
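The vanilla gradient estimator that SGES improves upon can be sketched in a few lines: perturb the parameters along random Gaussian directions, evaluate the black-box objective, and average. This sketch uses antithetic (mirrored) sampling, a common variance-reduction choice; the names and hyperparameters are illustrative, not the authors' implementation:

```python
import numpy as np

def es_gradient(f, theta, sigma=0.1, n_pairs=32, rng=None):
    """Antithetic Monte-Carlo gradient estimate of a black-box objective f.

    Averages (f(theta + sigma*eps) - f(theta - sigma*eps)) * eps / (2*sigma)
    over random Gaussian search directions eps.
    """
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(theta)
    for _ in range(n_pairs):
        eps = rng.standard_normal(theta.shape)
        grad += (f(theta + sigma * eps) - f(theta - sigma * eps)) * eps
    return grad / (2.0 * sigma * n_pairs)

# Gradient ascent on a toy objective f(x) = -||x||^2 (maximum at the origin),
# using only black-box evaluations of f.
rng = np.random.default_rng(0)
f = lambda x: -np.sum(x ** 2)
theta = np.array([1.0, -2.0])
for _ in range(200):
    theta = theta + 0.05 * es_gradient(f, theta, sigma=0.1, n_pairs=32, rng=rng)
```

The variance of this estimator grows with the parameter dimension, which is precisely the weakness that motivates SGES's low-dimensional sampling subspace built from historical estimated gradients.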
5

Sabbatini, Federico, and Roberta Calegari. "Symbolic Knowledge Extraction from Opaque Machine Learning Predictors: GridREx & PEDRO." In 19th International Conference on Principles of Knowledge Representation and Reasoning {KR-2022}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/kr.2022/57.

Abstract:
Procedures aimed at explaining the outcomes and behaviour of opaque predictors are becoming more and more essential as machine learning (ML) black-box (BB) models pervade a wide variety of fields, in particular critical ones such as medicine or finance, where it is not possible to make decisions on the basis of a blind automatic prediction. A growing number of methods designed to overcome this BB limitation is present in the literature; however, some ML tasks, e.g. regression and clustering, are nearly or completely neglected. Furthermore, existing techniques may not be applicable in complex real-world scenarios, or they can affect the output predictions with undesired artefacts. In this paper we present the design and the implementation of GridREx, a pedagogical algorithm to extract knowledge from black-box regressors, along with PEDRO, an optimisation procedure to automate the GridREx hyper-parameter tuning phase with better results than manual tuning. We also report the results of our experiments applying GridREx and PEDRO in real case scenarios, including an assessment of GridREx's performance against other similar state-of-the-art techniques as benchmarks. GridREx proved able to give more concise explanations with higher fidelity and predictive capabilities.
6

Santos, Samara Silva, Marcos Antonio Alves, Leonardo Augusto Ferreira, and Frederico Gadelha Guimarães. "PDTX: A novel local explainer based on the Perceptron Decision Tree." In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-50.

Abstract:
Artificial Intelligence (AI) approaches that achieve good results and generalization are often opaque models, and the decision-maker has no clear explanation of the final classification. As a result, there is an increasing demand for Explainable AI (XAI) models, whose main goal is to provide understandable solutions for human beings and to elucidate the relationship between the features and the black-box model. In this paper, we introduce a novel explainer method, named PDTX, based on the Perceptron Decision Tree (PDT). The evolutionary algorithm jSO is employed to fit the weights of the PDT to approximate the predictions of the black-box model. Then, it is possible to extract valuable information that explains the behavior of the machine learning method. The PDTX was tested on 10 different datasets from a public repository as an explainer for three classifiers: Multi-Layer Perceptron, Random Forest and Support Vector Machine. Decision Tree and LIME were used as baselines for comparison. The results showed promising performance in the majority of the experiments, achieving 87.34% average accuracy, against 64.23% for DT and 37.44% for LIME. The PDTX can be used for black-box classifier explanations on local instances, and it is model-agnostic.
7

Russell, David W. "On the Control of Dynamically Unstable Systems Using a Self Organizing Black Box Controller." In ASME 7th Biennial Conference on Engineering Systems Design and Analysis. ASMEDC, 2004. http://dx.doi.org/10.1115/esda2004-58290.

Abstract:
Many systems are difficult to control by conventional means because of the complexity of the very fabric of their being. Some systems perform very well under some conditions and then burst into wild, maybe even chaotic, oscillations for no apparent reason. Such systems exist in bioreactors, electro-plating and other application domains. In these cases a model may not exist that can be trusted to accurately replicate the dynamics of the real-world system. BOXES is a well known methodology that learns to perform control maneuvers for dynamic systems with only cursory a priori knowledge of the mathematics of the system model. A limiting factor in the BOXES algorithm has always been the assignment of appropriate boundaries to subdivide each state variable into regions. In addition to suggesting a method of alleviating this weakness, the paper shows that the accumulated statistical data in near neighboring states may be a powerful agent in accelerating learning, and may eventually provide a possible evolution to self-organization.
8

Abba, S. I., Sagir Jibrin Kawu, Hamza Sabo Maccido, S. M. Lawan, Gafai Najashi, and Abdullahi Yusuf Sada. "Short-term load demand forecasting using nonlinear dynamic grey-black-box and kernel optimization models: a new generation learning algorithm." In 2021 1st International Conference on Multidisciplinary Engineering and Applied Science (ICMEAS). IEEE, 2021. http://dx.doi.org/10.1109/icmeas52683.2021.9692314.

9

Heidari, Hoda, and Andreas Krause. "Preventing Disparate Treatment in Sequential Decision Making." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/311.

Abstract:
We study fairness in sequential decision making environments, where at each time step a learning algorithm receives data corresponding to a new individual (e.g. a new job application) and must make an irrevocable decision about him/her (e.g. whether to hire the applicant) based on observations made so far. In order to prevent cases of disparate treatment, our time-dependent notion of fairness requires algorithmic decisions to be consistent: if two individuals are similar in the feature space and arrive during the same time epoch, the algorithm must assign them to similar outcomes. We propose a general framework for post-processing predictions made by a black-box learning model, that guarantees the resulting sequence of outcomes is consistent. We show theoretically that imposing consistency will not significantly slow down learning. Our experiments on two real-world data sets illustrate and confirm this finding in practice.
10

Zhao, Jiangjiang, Zhuoran Wang, and Fangchun Yang. "Genetic Prompt Search via Exploiting Language Model Probabilities." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/588.

Abstract:
Prompt tuning for large-scale pretrained language models (PLMs) has shown remarkable potential, especially in low-resource scenarios such as few-shot learning. Moreover, derivative-free optimisation (DFO) techniques make it possible to tune prompts for a black-box PLM to better fit downstream tasks. However, there are usually preconditions to apply existing DFO-based prompt tuning methods, e.g. the backbone PLM needs to provide extra APIs so that hidden states (and/or embedding vectors) can be injected into it as continuous prompts, or carefully designed (discrete) manual prompts need to be available beforehand, serving as the initial states of the tuning algorithm. To waive such preconditions and make DFO-based prompt tuning ready for general use, this paper introduces a novel genetic algorithm (GA) that evolves from empty prompts, and uses the predictive probabilities derived from the backbone PLM(s) on the basis of a (few-shot) training set to guide the token selection process during prompt mutations. Experimental results on diverse benchmark datasets show that the proposed precondition-free method significantly outperforms the existing DFO-style counterparts that require preconditions, including black-box tuning, genetic prompt search and gradient-free instructional prompt search.