Journal articles on the topic 'Black-box learning'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Black-box learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Nax, Heinrich H., Maxwell N. Burton-Chellew, Stuart A. West, and H. Peyton Young. "Learning in a black box." Journal of Economic Behavior & Organization 127 (July 2016): 1–15. http://dx.doi.org/10.1016/j.jebo.2016.04.006.

2

Battaile, Bennett. "Black-box electronics and passive learning." Physics Today 67, no. 2 (February 2014): 11. http://dx.doi.org/10.1063/pt.3.2258.

3

Hess, Karl. "Black-box electronics and passive learning." Physics Today 67, no. 2 (February 2014): 11–12. http://dx.doi.org/10.1063/pt.3.2259.

4

Katrutsa, Alexandr, Talgat Daulbaev, and Ivan Oseledets. "Black-box learning of multigrid parameters." Journal of Computational and Applied Mathematics 368 (April 2020): 112524. http://dx.doi.org/10.1016/j.cam.2019.112524.

5

The Lancet Respiratory Medicine. "Opening the black box of machine learning." Lancet Respiratory Medicine 6, no. 11 (November 2018): 801. http://dx.doi.org/10.1016/s2213-2600(18)30425-9.

6

Rudnick, Abraham. "The Black Box Myth." International Journal of Extreme Automation and Connectivity in Healthcare 1, no. 1 (January 2019): 1–3. http://dx.doi.org/10.4018/ijeach.2019010101.

Abstract:
Artificial intelligence (AI) and its correlates, such as machine and deep learning, are changing health care, where complex matters such as comorbidity call for dynamic decision-making. Yet, some people argue for extreme caution, referring to AI and its correlates as a black box. This brief article uses philosophy and science to address the black box argument about knowledge as a myth, concluding that this argument is misleading as it ignores a fundamental tenet of science, i.e., that no empirical knowledge is certain, and that scientific facts – as well as methods – often change. Instead, control of the technology of AI and its correlates has to be addressed to mitigate unexpected negative consequences.
7

Pintelas, Emmanuel, Ioannis E. Livieris, and Panagiotis Pintelas. "A Grey-Box Ensemble Model Exploiting Black-Box Accuracy and White-Box Intrinsic Interpretability." Algorithms 13, no. 1 (January 5, 2020): 17. http://dx.doi.org/10.3390/a13010017.

Abstract:
Machine learning has emerged as a key factor in many technological and scientific advances and applications. Much research has been devoted to developing high performance machine learning models, which are able to make very accurate predictions and decisions on a wide range of applications. Nevertheless, we still seek to understand and explain how these models work and make decisions. Explainability and interpretability in machine learning is a significant issue, since in most real-world problems it is considered essential to understand and explain the model’s prediction mechanism in order to trust it and make decisions on critical issues. In this study, we developed a Grey-Box model based on semi-supervised methodology utilizing a self-training framework. The main objective of this work is the development of a machine learning model that is both interpretable and accurate, although this is a complex and challenging task. The proposed model was evaluated on a variety of real-world datasets from the crucial application domains of education, finance and medicine. Our results demonstrate the efficiency of the proposed model, which performs comparably to a Black-Box model and considerably outperforms single White-Box models, while remaining as interpretable as a White-Box model.
8

Kirsch, Louis, Sebastian Flennerhag, Hado van Hasselt, Abram Friesen, Junhyuk Oh, and Yutian Chen. "Introducing Symmetries to Black Box Meta Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7202–10. http://dx.doi.org/10.1609/aaai.v36i7.20681.

Abstract:
Meta reinforcement learning (RL) attempts to discover new RL algorithms automatically from environment interaction. In so-called black-box approaches, the policy and the learning algorithm are jointly represented by a single neural network. These methods are very flexible, but they tend to underperform compared to human-engineered RL algorithms in terms of generalisation to new, unseen environments. In this paper, we explore the role of symmetries in meta-generalisation. We show that a recent successful meta RL approach that meta-learns an objective for backpropagation-based learning exhibits certain symmetries (specifically the reuse of the learning rule, and invariance to input and output permutations) that are not present in typical black-box meta RL systems. We hypothesise that these symmetries can play an important role in meta-generalisation. Building off recent work in black-box supervised meta learning, we develop a black-box meta RL system that exhibits these same symmetries. We show through careful experimentation that incorporating these symmetries can lead to algorithms with a greater ability to generalise to unseen action & observation spaces, tasks, and environments.
9

Taub, Simon, and Oleg S. Pianykh. "An alternative to the black box: Strategy learning." PLOS ONE 17, no. 3 (March 18, 2022): e0264485. http://dx.doi.org/10.1371/journal.pone.0264485.

Abstract:
In virtually any practical field or application, discovering and implementing near-optimal decision strategies is essential for achieving desired outcomes. Workflow planning is one of the most common and important problems of this kind, as sub-optimal decision-making may create bottlenecks and delays that decrease efficiency and increase costs. Recently, machine learning has been used to attack this problem, but unfortunately, most proposed solutions are “black box” algorithms with underlying logic unclear to humans. This makes them hard to implement and impossible to trust, significantly limiting their practical use. In this work, we propose an alternative approach: using machine learning to generate optimal, comprehensible strategies which can be understood and used by humans directly. Through three common decision-making problems found in scheduling, we demonstrate the implementation and feasibility of this approach, as well as its great potential to attain near-optimal results.
10

Hargreaves, Eleanore. "Assessment for learning? Thinking outside the (black) box." Cambridge Journal of Education 35, no. 2 (June 2005): 213–24. http://dx.doi.org/10.1080/03057640500146880.

11

TUMER, KAGAN, and ADRIAN AGOGINO. "MULTIAGENT LEARNING FOR BLACK BOX SYSTEM REWARD FUNCTIONS." Advances in Complex Systems 12, no. 04n05 (August 2009): 475–92. http://dx.doi.org/10.1142/s0219525909002295.

Abstract:
In large, distributed systems composed of adaptive and interactive components (agents), ensuring the coordination among the agents so that the system achieves certain performance objectives is a challenging proposition. The key difficulty to overcome in such systems is one of credit assignment: How to apportion credit (or blame) to a particular agent based on the performance of the entire system. In this paper, we show how this problem can be solved in general for a large class of reward functions whose analytical form may be unknown (hence "black box" reward). This method combines the salient features of global solutions (e.g. "team games") which are broadly applicable but provide poor solutions in large problems with those of local solutions (e.g. "difference rewards") which learn quickly, but can be computationally burdensome. We introduce two estimates for local rewards for a class of problems where the mapping from the agent actions to system reward functions can be decomposed into a linear combination of nonlinear functions of the agents' actions. We test our method's performance on a distributed marketing problem and an air traffic flow management problem and show a 44% performance improvement over team games and a speedup of order n for difference rewards (for an n agent system).
12

Baird, Jo-Anne. "Does the learning happen inside the black box?" Assessment in Education: Principles, Policy & Practice 18, no. 4 (November 2011): 343–45. http://dx.doi.org/10.1080/0969594x.2011.614857.

13

Eshel, Neir, Ju Tian, and Naoshige Uchida. "Opening the black box: dopamine, predictions, and learning." Trends in Cognitive Sciences 17, no. 9 (September 2013): 430–31. http://dx.doi.org/10.1016/j.tics.2013.06.010.

14

García, Raquel M. Crespo, Abelardo Pardo, Carlos Delgado Kloos, Katja Niemann, Maren Scheffel, and Martin Wolpers. "Peeking into the black box: visualising learning activities." International Journal of Technology Enhanced Learning 4, no. 1/2 (2012): 99. http://dx.doi.org/10.1504/ijtel.2012.048313.

15

Fung, Pak L., Martha A. Zaidan, Hilkka Timonen, Jarkko V. Niemi, Anu Kousa, Joel Kuula, Krista Luoma, et al. "Evaluation of white-box versus black-box machine learning models in estimating ambient black carbon concentration." Journal of Aerosol Science 152 (February 2021): 105694. http://dx.doi.org/10.1016/j.jaerosci.2020.105694.

16

Sharma, Shubham, and Usha Lenka. "How organizations learn: models uncovering the black box." Development and Learning in Organizations: An International Journal 33, no. 1 (January 7, 2019): 20–23. http://dx.doi.org/10.1108/dlo-01-2018-0008.

Abstract:
Purpose As contemporary organizations’ focus shifts from knowledge orientation to learning orientation, this paper aims to articulate the need for models that describe the learning process in organizations. The tendency to simply assume that organizations learn, without the support of any tangible framework or model, highlights this need. The paper presents limitations of two prevalent themes of organizational learning, i.e. learning by adapting to environmental disturbances and learning from organizational members. Design/methodology/approach Based on the literature review on organizational learning, studies that depict the mechanism of organizational learning were selected. These were grouped into two categories: one that focuses on how organizations learn from their environment and the other on how organizations learn from their members. Findings This paper suggests the need for developing models and frameworks that eloquently describe the learning process in organizations. The literature focuses on organizational learning from individuals and adapting to the environment. Organizations tend to attribute the cause of failure to environmental shocks. Then, instead of the environment being a source of learning, it becomes a cause of failure. If individuals are the agents through which an organization learns, how their tacit knowledge becomes institutionalized in organizational memory is unknown. Originality/value This paper is a retrospective view on organizational learning. It attempts to question the black box of organizational learning, i.e. how the learning of individuals is transferred to organizational memory, or simply put, how the organizational learning mechanism works. There is a dearth of studies that address this question; it has simply been assumed that somehow organizations do learn, but how?
17

Nayyar, Rashmeet Kaur, Pulkit Verma, and Siddharth Srivastava. "Differential Assessment of Black-Box AI Agents." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9868–76. http://dx.doi.org/10.1609/aaai.v36i9.21223.

Abstract:
Much of the research on learning symbolic models of AI agents focuses on agents with stationary models. This assumption fails to hold in settings where the agent's capabilities may change as a result of learning, adaptation, or other post-deployment modifications. Efficient assessment of agents in such settings is critical for learning the true capabilities of an AI system and for ensuring its safe usage. In this work, we propose a novel approach to differentially assess black-box AI agents that have drifted from their previously known models. As a starting point, we consider the fully observable and deterministic setting. We leverage sparse observations of the drifted agent's current behavior and knowledge of its initial model to generate an active querying policy that selectively queries the agent and computes an updated model of its functionality. Empirical evaluation shows that our approach is much more efficient than re-learning the agent model from scratch. We also show that the cost of differential assessment using our method is proportional to the amount of drift in the agent's functionality.
18

Price, W. Nicholson. "Big data and black-box medical algorithms." Science Translational Medicine 10, no. 471 (December 12, 2018): eaao5333. http://dx.doi.org/10.1126/scitranslmed.aao5333.

19

Burton-Chellew, Maxwell N., and Stuart A. West. "The Black Box as a Control for Payoff-Based Learning in Economic Games." Games 13, no. 6 (November 16, 2022): 76. http://dx.doi.org/10.3390/g13060076.

Abstract:
The black box method was developed as an “asocial control” to allow for payoff-based learning while eliminating social responses in repeated public goods games. Players are told they must decide how many virtual coins they want to input into a virtual black box that will provide uncertain returns. However, in truth, they are playing with each other in a repeated social game. By “black boxing” the game’s social aspects and payoff structure, the method creates a population of self-interested but ignorant or confused individuals that must learn the game’s payoffs. This low-information environment, stripped of social concerns, provides an alternative, empirically derived null hypothesis for testing social behaviours, as opposed to the theoretical predictions of rational self-interested agents (Homo economicus). However, a potential problem is that participants can unwittingly affect the learning of other participants. Here, we test a solution to this problem in a range of public goods games by making participants interact, unknowingly, with simulated players (“computerised black box”). We find no significant differences in rates of learning between the original and the computerised black box, therefore either method can be used to investigate learning in games. These results, along with the fact that simulated agents can be programmed to behave in different ways, mean that the computerised black box has great potential for complementing studies of how individuals and groups learn under different environments in social dilemmas.
20

Thakkar, Pooja. "Drug Classification using Black-box models and Interpretability." International Journal for Research in Applied Science and Engineering Technology 9, no. 9 (September 30, 2021): 1518–29. http://dx.doi.org/10.22214/ijraset.2021.38203.

Abstract:
The focus of this study is on drug categorization utilizing machine learning models, as well as interpretability utilizing LIME and SHAP to get a thorough understanding of the ML models. To do this, the researchers used machine learning models such as random forest, decision tree, and logistic regression to classify drugs. Then, using LIME and SHAP, they determined whether these models were interpretable, which allowed them to better understand their results. It may be stated at the conclusion of this paper that LIME and SHAP can be utilized to gain insight into a machine learning model and to determine which attribute is accountable for the divergence in the outcomes. According to the LIME and SHAP results, it is also found that the Random Forest and Decision Tree ML models are the best models to employ for drug classification, with Na to K and BP being the most significant characteristics for drug classification. Keywords: Machine Learning, Black-box models, LIME, SHAP, Decision Tree
21

Mahya, Parisa, and Johannes Fürnkranz. "An Empirical Comparison of Interpretable Models to Post-Hoc Explanations." AI 4, no. 2 (May 19, 2023): 426–36. http://dx.doi.org/10.3390/ai4020023.

Abstract:
Recently, some effort went into explaining intransparent and black-box models, such as deep neural networks or random forests. So-called model-agnostic methods typically approximate the prediction of the intransparent black-box model with an interpretable model, without considering any specifics of the black-box model itself. It is a valid question whether direct learning of interpretable white-box models should not be preferred over post-hoc approximations of intransparent and black-box models. In this paper, we report the results of an empirical study, which compares post-hoc explanations and interpretable models on several datasets for rule-based and feature-based interpretable models. The results seem to underline that often directly learned interpretable models approximate the black-box models at least as well as their post-hoc surrogates, even though the former do not have direct access to the black-box model.
22

Çallı, Erdi, Keelin Murphy, Ernst T. Scholten, Steven Schalekamp, and Bram van Ginneken. "Explainable emphysema detection on chest radiographs with deep learning." PLOS ONE 17, no. 7 (July 28, 2022): e0267539. http://dx.doi.org/10.1371/journal.pone.0267539.

Abstract:
We propose a deep learning system to automatically detect four explainable emphysema signs on frontal and lateral chest radiographs. Frontal and lateral chest radiographs from 3000 studies were retrospectively collected. Two radiologists annotated these with 4 radiological signs of pulmonary emphysema identified from the literature. A patient with ≥2 of these signs present is considered emphysema positive. Using separate deep learning systems for frontal and lateral images we predict the presence of each of the four visual signs and use these to determine emphysema positivity. The ROC and AUC results on a set of 422 held-out cases, labeled by both radiologists, are reported. Comparison with a black-box model which predicts emphysema without the use of explainable visual features is made on the annotations from both radiologists, as well as the subset that they agreed on. DeLong’s test is used to compare with the black-box model ROC and McNemar’s test to compare with radiologist performance. In 422 test cases, emphysema positivity was predicted with AUCs of 0.924 and 0.946 using the reference standard from each radiologist separately. Setting model sensitivity equivalent to that of the second radiologist, our model has a comparable specificity (p = 0.880 and p = 0.143 for each radiologist respectively). Our method is comparable with the black-box model with AUCs of 0.915 (p = 0.407) and 0.935 (p = 0.291), respectively. On the 370 cases where both radiologists agreed (53 positives), our model achieves an AUC of 0.981, again comparable to the black-box model AUC of 0.972 (p = 0.289). Our proposed method can predict emphysema positivity on chest radiographs as well as a radiologist or a comparable black-box method. It additionally produces labels for four visual signs to ensure the explainability of the result. The dataset is publicly available at https://doi.org/10.5281/zenodo.6373392.
23

Ath Thaariq, Guruh Ihda Alfi, Budi Nugroho, and Faisal Muttaqin. "PENGUJIAN EQUIVALENCE PARTITIONS PADA E-LEARNING ILMU UPN "VETERAN" JAWA TIMUR." Prosiding Seminar Nasional Informatika Bela Negara 2 (November 25, 2021): 44–47. http://dx.doi.org/10.33005/santika.v2i0.101.

Abstract:
Software testing is one of the processes in building an information system, used to find differences between the required components and the components present in the system. Testing has various methods and techniques; one of them is the black box testing method, and one technique within black box testing is equivalence partitions. In this study, testing was performed on the "Ilmu" e-learning website of UPN "Veteran" Jawa Timur using the black box method with the equivalence partitions technique, with the aim of finding out whether there are errors in this website. The study covered 2 forms with 8 fields, and 23 equivalence partitions tests were carried out by supplying each field with valid data and random (invalid) data. All 23 tests produced the expected results. From these tests it can be concluded that the "Ilmu" e-learning website of UPN "Veteran" Jawa Timur has a sound system according to equivalence partitions testing. This also shows that the black box testing method with the equivalence partitions technique is a suitable form of testing for finding errors in a system.
24

Traub, Simon, and Oleg S. Pianykh. "Correction: An alternative to the black box: Strategy learning." PLOS ONE 17, no. 6 (June 21, 2022): e0270441. http://dx.doi.org/10.1371/journal.pone.0270441.

25

Yu, Mengran, and Shiliang Sun. "Natural Black-Box Adversarial Examples against Deep Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8936–44. http://dx.doi.org/10.1609/aaai.v36i8.20876.

Abstract:
Black-box attacks in deep reinforcement learning usually retrain substitute policies to mimic behaviors of target policies as well as craft adversarial examples, and attack the target policies with these transferable adversarial examples. However, the transferability of adversarial examples is not always guaranteed. Moreover, current methods of crafting adversarial examples only utilize simple pixel space metrics which neglect semantics in the whole images, and thus generate unnatural adversarial examples. To address these problems, we propose an advRL-GAN framework to directly generate semantically natural adversarial examples in the black-box setting, bypassing the transferability requirement of adversarial examples. It formalizes the black-box attack as a reinforcement learning (RL) agent, which explores natural and aggressive adversarial examples with generative adversarial networks and the feedback of target agents. To the best of our knowledge, it is the first RL-based adversarial attack on a deep RL agent. Experimental results on multiple environments demonstrate the effectiveness of advRL-GAN in terms of reward reductions and magnitudes of perturbations, and validate the sparse and targeted property of adversarial perturbations through visualization.
26

Qin, Zengyi, Dawei Sun, and Chuchu Fan. "Sablas: Learning Safe Control for Black-Box Dynamical Systems." IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 1928–35. http://dx.doi.org/10.1109/lra.2022.3142743.

27

Martínez Ramírez, Marco A., Emmanouil Benetos, and Joshua D. Reiss. "Deep Learning for Black-Box Modeling of Audio Effects." Applied Sciences 10, no. 2 (January 16, 2020): 638. http://dx.doi.org/10.3390/app10020638.

Abstract:
Virtual analog modeling of audio effects consists of emulating the sound of an audio processor reference device. This digital simulation is normally done by designing mathematical models of these systems. It is often difficult because it seeks to accurately model all components within the effect unit, which usually contains various nonlinearities and time-varying components. Most existing methods for audio effects modeling are either simplified or optimized to a very specific circuit or type of audio effect and cannot be efficiently translated to other types of audio effects. Recently, deep neural networks have been explored as black-box modeling strategies to solve this task, i.e., by using only input–output measurements. We analyse different state-of-the-art deep learning models based on convolutional and recurrent neural networks, feedforward WaveNet architectures and we also introduce a new model based on the combination of the aforementioned models. Through objective perceptual-based metrics and subjective listening tests we explore the performance of these models when modeling various analog audio effects. Thus, we show virtual analog models of nonlinear effects, such as a tube preamplifier; nonlinear effects with memory, such as a transistor-based limiter and nonlinear time-varying effects, such as the rotating horn and rotating woofer of a Leslie speaker cabinet.
28

Hwangbo, Jemin, Christian Gehring, Hannes Sommer, Roland Siegwart, and Jonas Buchli. "Policy Learning with an Efficient Black-Box Optimization Algorithm." International Journal of Humanoid Robotics 12, no. 03 (September 2015): 1550029. http://dx.doi.org/10.1142/s0219843615500292.

Abstract:
Robotic learning on real hardware requires an efficient algorithm which minimizes the number of trials needed to learn an optimal policy. Prolonged use of hardware causes wear and tear on the system and demands more attention from an operator. To this end, we present a novel black-box optimization algorithm, Reward Optimization with Compact Kernels and fast natural gradient regression (ROCK⋆). Our algorithm immediately updates knowledge after a single trial and is able to extrapolate in a controlled manner. These features make fast and safe learning on real hardware possible. The performance of our method is evaluated with standard benchmark functions that are commonly used to test optimization algorithms. We also present three different robotic optimization examples using ROCK⋆. The first robotic example is on a simulated robot arm, the second is on a real articulated legged system, and the third is on a simulated quadruped robot with 12 actuated joints. ROCK⋆ outperforms the current state-of-the-art algorithms in all tasks sometimes even by an order of magnitude.
29

Hang, Jie, Keji Han, Hui Chen, and Yun Li. "Ensemble adversarial black-box attacks against deep learning systems." Pattern Recognition 101 (May 2020): 107184. http://dx.doi.org/10.1016/j.patcog.2019.107184.

30

Hsu, William, and Joann G. Elmore. "Shining Light Into the Black Box of Machine Learning." JNCI: Journal of the National Cancer Institute 111, no. 9 (January 10, 2019): 877–79. http://dx.doi.org/10.1093/jnci/djy226.

31

Tenne, Yoel. "Machine–Learning in Optimization of Expensive Black–Box Functions." International Journal of Applied Mathematics and Computer Science 27, no. 1 (March 28, 2017): 105–18. http://dx.doi.org/10.1515/amcs-2017-0008.

Abstract:
Abstract Modern engineering design optimization often uses computer simulations to evaluate candidate designs. For some of these designs the simulation can fail for an unknown reason, which in turn may hamper the optimization process. To handle such scenarios more effectively, this study proposes the integration of classifiers, borrowed from the domain of machine learning, into the optimization process. Several implementations of the proposed approach are described. An extensive set of numerical experiments shows that the proposed approach improves search effectiveness.
32

Azodi, Christina B., Jiliang Tang, and Shin-Han Shiu. "Opening the Black Box: Interpretable Machine Learning for Geneticists." Trends in Genetics 36, no. 6 (June 2020): 442–55. http://dx.doi.org/10.1016/j.tig.2020.03.005.

33

Kuze, Naomi, Keiichiro Seno, and Toshimitsu Ushio. "Learning-based black box checking for k-safety hyperproperties." Engineering Applications of Artificial Intelligence 126 (November 2023): 107029. http://dx.doi.org/10.1016/j.engappai.2023.107029.

34

Yulistyanti, Dwi, Tri Yani Akhirina, Thomas Afrizal, Aulia Paramita, and Naely Farkhatin. "Testing Learning Media for English Learning Applications Using BlackBox Testing Based on Equivalence Partitions." Scope : Journal of English Language Teaching 6, no. 2 (April 22, 2022): 73. http://dx.doi.org/10.30998/scope.v6i2.12845.

Abstract:
Testing an application is very important: it shows whether the application runs properly or still contains errors, which can then be fixed optimally. In this test, the black box testing technique was used. Black box testing is a system for testing an application by examining each menu to see whether it behaves as desired or still needs improvement. The application tested is the Duolingo application, which has a high rating on the Play Store and is used by more than 12 million smartphone users. The black box method has various techniques, and the one used here is Equivalence Partitions. The Equivalence Partitions technique is a test based on entering data into each form of the system or application under test. In each menu, the input data on each form is tested according to its function to determine whether it is valid or not. Performing this testing makes the application better; the application must be of good quality, with each function working as intended, so that the many smartphone users who want to use the application find it easy to use.
35

Xie, Xianwei, Baozhi Sun, Xiaohe Li, Tobias Olsson, Neda Maleki, and Fredrik Ahlgren. "Fuel Consumption Prediction Models Based on Machine Learning and Mathematical Methods." Journal of Marine Science and Engineering 11, no. 4 (March 29, 2023): 738. http://dx.doi.org/10.3390/jmse11040738.

Abstract:
An accurate fuel consumption prediction model is the basis for ship navigation status analysis, energy conservation, and emission reduction. In this study, we develop a black-box model based on machine learning and a white-box model based on mathematical methods to predict ship fuel consumption rates. We also apply the Kwon formula as a data preprocessing cleaning method for the black-box model that can eliminate the data generated during the acceleration and deceleration process. The ship model test data and the regression methods are employed to evaluate the accuracy of the models. Furthermore, we use the predicted correlation between fuel consumption rates and speed under simulated conditions for model performance validation. We also discuss applying the data-cleaning method in the preprocessing of the black-box model. The results demonstrate that this method is feasible and can support the performance of the fuel consumption model in a broad and dense distribution of noise data in data collected from real ships. We improved the error to 4% of the white-box model and the R2 to 0.9977 and 0.9922 of the XGBoost and RF models, respectively. After applying the Kwon cleaning method, the value of R2 also can reach 0.9954, which can provide decision support for the operation of shipping companies.
APA, Harvard, Vancouver, ISO, and other styles
36

ROCHA, ANDERSON, JOÃO PAULO PAPA, and LUIS A. A. MEIRA. "HOW FAR DO WE GET USING MACHINE LEARNING BLACK-BOXES?" International Journal of Pattern Recognition and Artificial Intelligence 26, no. 02 (March 2012): 1261001. http://dx.doi.org/10.1142/s0218001412610010.

Full text
Abstract:
With several good research groups actively working on machine learning (ML) approaches, we now have the concept of self-contained machine learning solutions that oftentimes work out of the box, leading to the concept of ML black-boxes. Although it is important to have such black-boxes helping researchers to deal with several problems nowadays, they come with an inherent problem that is increasingly evident: we have observed that researchers and students are progressively relying on ML black-boxes and, usually, achieving results without knowing the machinery of the classifiers. In this regard, this paper discusses the use of machine learning black-boxes and poses the question of how far we can get using these out-of-the-box solutions instead of going deeper into the machinery of the classifiers. The paper focuses on three aspects of classifiers: (1) the way they compare examples in the feature space; (2) the impact of using features with variable dimensionality; and (3) the impact of using binary classifiers to solve a multi-class problem. We show how knowledge about the classifier's machinery can improve the results well beyond out-of-the-box machine learning solutions.
APA, Harvard, Vancouver, ISO, and other styles
37

Yu, Wen, and Francisco Vega. "Nonlinear system modeling using the takagi-sugeno fuzzy model and long-short term memory cells." Journal of Intelligent & Fuzzy Systems 39, no. 3 (October 7, 2020): 4547–56. http://dx.doi.org/10.3233/jifs-200491.

Full text
Abstract:
The data-driven black-box or gray-box models like neural networks and fuzzy systems have some disadvantages, such as high and uncertain dimensions and a complex learning process. In this paper, we combine the Takagi-Sugeno fuzzy model with long-short term memory cells to overcome these disadvantages. This novel model takes advantage of the interpretability of the fuzzy system and the good approximation ability of the long-short term memory cell. We propose a fast and stable learning algorithm for this model. Comparisons with other similar black-box and grey-box models are made, in order to observe the advantages of the proposal.
APA, Harvard, Vancouver, ISO, and other styles
38

Candelieri, Antonio, Riccardo Perego, Ilaria Giordani, Andrea Ponti, and Francesco Archetti. "Modelling human active search in optimizing black-box functions." Soft Computing 24, no. 23 (October 24, 2020): 17771–85. http://dx.doi.org/10.1007/s00500-020-05398-2.

Full text
Abstract:
Modelling human function learning has been the subject of intense research in cognitive sciences. The topic is relevant in black-box optimization where information about the objective and/or constraints is not available and must be learned through function evaluations. In this paper, we focus on the relation between the behaviour of humans searching for the maximum and the probabilistic model used in Bayesian optimization. As surrogate models of the unknown function, both Gaussian processes and random forest have been considered: the Bayesian learning paradigm is central in the development of active learning approaches balancing exploration/exploitation in uncertain conditions towards effective generalization in large decision spaces. In this paper, we analyse experimentally how Bayesian optimization compares to humans searching for the maximum of an unknown 2D function. A set of controlled experiments with 60 subjects, using both surrogate models, confirm that Bayesian optimization provides a general model to represent individual patterns of active learning in humans.
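The exploration/exploitation balance mentioned in the abstract is typically encoded in an acquisition function. As a minimal sketch, not taken from the paper, the expected improvement acquisition can be computed from the posterior mean and standard deviation that a surrogate (Gaussian process or random forest) would supply at a candidate point:

```python
import math

def expected_improvement(mu, sigma, best_so_far):
    """Expected improvement (maximization) at a point where the surrogate
    predicts mean mu and standard deviation sigma."""
    if sigma <= 0.0:
        return max(mu - best_so_far, 0.0)   # no uncertainty: pure exploitation
    z = (mu - best_so_far) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (mu - best_so_far) * cdf + sigma * pdf

# A candidate with high uncertainty can score well even with a lower mean:
print(expected_improvement(1.0, 0.5, 0.9))   # close to the incumbent, some explore value
print(expected_improvement(0.8, 2.0, 0.9))   # below the incumbent, mostly explore value
```

Maximizing this quantity over candidate points is what drives the surrogate-guided search that the paper compares against human search behaviour.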
APA, Harvard, Vancouver, ISO, and other styles
39

Mayr, Franz, Sergio Yovine, and Ramiro Visca. "Property Checking with Interpretable Error Characterization for Recurrent Neural Networks." Machine Learning and Knowledge Extraction 3, no. 1 (February 12, 2021): 205–27. http://dx.doi.org/10.3390/make3010010.

Full text
Abstract:
This paper presents a novel on-the-fly, black-box, property-checking through learning approach as a means for verifying requirements of recurrent neural networks (RNN) in the context of sequence classification. Our technique builds on a tool for learning probably approximately correct (PAC) deterministic finite automata (DFA). The sequence classifier inside the black-box consists of a Boolean combination of several components, including the RNN under analysis together with requirements to be checked, possibly modeled as RNN themselves. On one hand, if the output of the algorithm is an empty DFA, there is a proven upper bound (as a function of the algorithm parameters) on the probability of the language of the black-box being nonempty. This implies the property probably holds on the RNN with probabilistic guarantees. On the other, if the DFA is nonempty, it is certain that the language of the black-box is nonempty. This entails that the RNN does not satisfy the requirement for sure. In this case, the output automaton serves as an explicit and interpretable characterization of the error. Our approach does not rely on a specific property specification formalism and is capable of handling nonregular languages as well. Besides, it neither explicitly builds individual representations of any of the components of the black-box nor resorts to any external decision procedure for verification. This paper also improves previous theoretical results regarding the probabilistic guarantees of the underlying learning algorithm.
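The verdict in the approach above hinges on whether the learned DFA's language is empty, which reduces to reachability of an accepting state. A minimal emptiness check, written here as a generic sketch rather than the paper's implementation, looks like:

```python
from collections import deque

def dfa_language_is_empty(start, accepting, transitions):
    """Return True iff no accepting state is reachable from the start state.
    transitions: dict mapping state -> dict of symbol -> next state."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state in accepting:
            return False            # reachable accepting state: language nonempty
        for nxt in transitions.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Toy error automaton: q0 --a--> q1 (accepting), so the language is nonempty
# and any path to q1 witnesses a violating behaviour.
trans = {"q0": {"a": "q1", "b": "q0"}, "q1": {"a": "q1"}}
print(dfa_language_is_empty("q0", {"q1"}, trans))
```

A nonempty language corresponds to the "requirement certainly violated" case, with the automaton itself serving as the interpretable error characterization.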
APA, Harvard, Vancouver, ISO, and other styles
40

Percy, William, and Kevin Dow. "The Coaching Black Box: Risk Mitigation during Change Management." Journal of Risk and Financial Management 14, no. 8 (July 27, 2021): 344. http://dx.doi.org/10.3390/jrfm14080344.

Full text
Abstract:
A case study of strategic renewal in the Chinese education market, this paper explores a non-directive coaching model and its impact on risk mitigation, knowledge exchange and innovation in strategic renewal through the application of multi-tiered coaching and manager coaches. Through an ethnographic action research methodology, we ask "Can coaching mitigate organisational risk and increase the likelihood of positive outcomes in change management?" and "Can managers, acting as internal coaches, increase knowledge socialisation and mitigate risk in the change management process?" The paper finds that there is no inherent failure rate in the change management process and that a strategic management approach can mitigate risk, freeing managers and organisations to create the collaborative environments that support organisational learning and strategic renewal, thus moving beyond a narrative of failure to one of strategic empowerment and a strategic management approach to risk mitigation. We conclude that a data-driven approach to organisational learning and Professional Learning Communities helps teams to ask the right questions and to mitigate risk by better aligning the organisation to its strategic reality, exploiting organisational learning to achieve competitive advantage and ensuring that systems and processes continue to match the emerging strategic reality.
APA, Harvard, Vancouver, ISO, and other styles
41

Iyer, Padmavathi, and Amirreza Masoumzadeh. "Learning Relationship-Based Access Control Policies from Black-Box Systems." ACM Transactions on Privacy and Security 25, no. 3 (August 31, 2022): 1–36. http://dx.doi.org/10.1145/3517121.

Full text
Abstract:
Access control policies are crucial in securing data in information systems. Unfortunately, often times, such policies are poorly documented, and gaps between their specification and implementation prevent the system users, and even its developers, from understanding the overall enforced policy of a system. To tackle this problem, we propose the first of its kind systematic approach for learning the enforced authorizations from a target system by interacting with and observing it as a black box. The black-box view of the target system provides the advantage of learning its overall access control policy without dealing with its internal design complexities. Furthermore, compared to the previous literature on policy mining and policy inference, we avoid exhaustive exploration of the authorization space by minimizing our observations. We focus on learning relationship-based access control (ReBAC) policy, and show how we can construct a deterministic finite automaton (DFA) to formally characterize such an enforced policy. We theoretically analyze our proposed learning approach by studying its termination, correctness, and complexity. Furthermore, we conduct extensive experimental analysis based on realistic application scenarios to establish its cost, quality of learning, and scalability in practice.
APA, Harvard, Vancouver, ISO, and other styles
42

Park, Charles, Claire Wu, and Glenn Regehr. "Shining a Light Into the Black Box of Group Learning." Academic Medicine 95, no. 6 (June 2020): 919–24. http://dx.doi.org/10.1097/acm.0000000000003099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

YILDIRIM, Özen, and Safiye BİLİCAN DEMİR. "Inside the black box: do teachers practice assessment as learning?" International Journal of Assessment Tools in Education 9, Special Issue (November 29, 2022): 46–71. http://dx.doi.org/10.21449/ijate.1132923.

Full text
Abstract:
The conceptual development of assessment literature in recent years has been remarkable. One of the latest concepts to have emerged in parallel with this development is Assessment as Learning (AsL). This study investigated how AsL pertains to classroom practices within its conceptual framework by examining teacher reports. Case study design, a qualitative research method, was used to collect detailed information about in-class teacher practices. The teachers were interviewed with semi-structured interview forms and the data obtained were then analyzed using content analysis. The results revealed that in-class teacher practices were incapable of supporting AsL and promoting self-regulated behaviors and that many of the activities conducted in class were teacher-centered. Teachers did not apply self-assessment or peer-assessment practices, and the feedback they gave to students was mainly based on measurement scores. The researchers discussed the results in relation to the relevant literature and offered some suggestions for applying AsL in practice.
APA, Harvard, Vancouver, ISO, and other styles
44

Amato, Domenico, Salvatore Calderaro, Giosué Lo Bosco, Riccardo Rizzo, and Filippo Vella. "Metric Learning in Histopathological Image Classification: Opening the Black Box." Sensors 23, no. 13 (June 28, 2023): 6003. http://dx.doi.org/10.3390/s23136003.

Full text
Abstract:
The application of machine learning techniques to histopathology images enables advances in the field, providing valuable tools that can speed up and facilitate the diagnosis process. The classification of these images is a relevant aid for physicians who have to process a large number of images in long and repetitive tasks. This work proposes the adoption of metric learning that, beyond the task of classifying images, can provide additional information able to support the decision of the classification system. In particular, triplet networks have been employed to create a representation in the embedding space that gathers together images of the same class while tending to separate images with different labels. The obtained representation shows an evident separation of the classes with the possibility of evaluating the similarity and the dissimilarity among input images according to distance criteria. The model has been tested on the BreakHis dataset, a widely used reference dataset that collects breast cancer images with eight pathology labels and four magnification levels. Our proposed classification model achieves relevant performance at the patient level, with the advantage of providing interpretable information for the obtained results, a specific feature missed by all the recent methodologies proposed for the same purpose.
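The triplet objective the abstract relies on can be stated compactly: pull an anchor towards a same-class "positive" embedding and push it at least a margin away from a different-class "negative". A minimal, framework-free sketch (toy 2D embeddings, not the paper's network) of the standard triplet margin loss:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss: zero once the negative is at least
    `margin` farther from the anchor than the positive is."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# Well-separated embedding: positive is close, negative is far -> zero loss.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [3.0, 0.0]))
# Badly arranged embedding: negative closer than positive -> positive loss.
print(triplet_loss([0.0, 0.0], [2.0, 0.0], [0.5, 0.0]))
```

Training a network to drive this loss to zero is what produces the class-separated embedding space in which distances become interpretable similarity scores.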
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, Sijia, Parikshit Ram, Deepak Vijaykeerthy, Djallel Bouneffouf, Gregory Bramble, Horst Samulowitz, Dakuo Wang, Andrew Conn, and Alexander Gray. "An ADMM Based Framework for AutoML Pipeline Configuration." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4892–99. http://dx.doi.org/10.1609/aaai.v34i04.5926.

Full text
Abstract:
We study the AutoML problem of automatically configuring machine learning pipelines by jointly selecting algorithms and their appropriate hyper-parameters for all steps in supervised learning pipelines. This black-box (gradient-free) optimization with mixed integer & continuous variables is a challenging problem. We propose a novel AutoML scheme by leveraging the alternating direction method of multipliers (ADMM). The proposed framework is able to (i) decompose the optimization problem into easier sub-problems that have a reduced number of variables and circumvent the challenge of mixed variable categories, and (ii) incorporate black-box constraints alongside the black-box optimization objective. We empirically evaluate the flexibility (in utilizing existing AutoML techniques), effectiveness (against open source AutoML toolkits), and unique capability (of executing AutoML with practically motivated black-box constraints) of our proposed scheme on a collection of binary classification data sets from UCI ML & OpenML repositories. We observe that on an average our framework provides significant gains in comparison to other AutoML frameworks (Auto-sklearn & TPOT), highlighting the practical advantages of this framework.
APA, Harvard, Vancouver, ISO, and other styles
46

Letsoin, Sri Murniani Angelina. "DESAIN DAN IMPLEMENTASI MOBILE LEARNING (M-LEARNING) GEOMETRY AND BULDING FLAT (GBF)." MUSTEK ANIM HA 7, no. 1 (August 6, 2018): 57–68. http://dx.doi.org/10.35724/mustek.v7i1.1500.

Full text
Abstract:
Mobile Learning is a learning model that exploits the capabilities of information and communication technology and can be used to support hybrid learning activities. A preliminary study conducted at SMPN Buti, Merauke Regency, found that 85.45% of students described the mathematics topics of geometry and plane shapes as somewhat difficult, 7.27% as difficult, and 7.27% as very difficult, while smartphone use was dominated by chatting, internet browsing, and entertainment. This presents an opportunity to develop an m-learning application, particularly on the Android platform, to support the learning of geometry and solid shapes. Data were collected through a literature study, observation, questionnaires, and interviews. The Prototype method was used for software development; the system was designed using the UML modeling language and flowcharts, and implemented in Java with the Eclipse IDE. The system was tested using three methods: Black Box testing, questionnaires, and a T-test. The outcome of this research is the Geometry and Building Flat (GBF) application, which can be used as a supporting medium for learning mathematics on solid and plane shapes. Keywords: M-learning, Geometry, Waterfall, Black box, T-test
APA, Harvard, Vancouver, ISO, and other styles
47

Cretu, Andrei. "Learning the Ashby Box: an experiment in second order cybernetic modeling." Kybernetes 49, no. 8 (November 23, 2019): 2073–90. http://dx.doi.org/10.1108/k-06-2019-0439.

Full text
Abstract:
Purpose: W. Ross Ashby’s elementary non-trivial machine, known in the cybernetic literature as the “Ashby Box,” has been described as the prototypical example of a black box system. As far as it can be ascertained from Ashby’s journal, the intended purpose of this device may have been to exemplify the environment where an “artificial brain” may operate. This paper describes the construction of an elementary observer/controller for the class of systems exemplified by the Ashby Box – variable structure black box systems with parallel input. Design/methodology/approach: Starting from a formalization of the second-order assumptions implicit in the design of the Ashby Box, the observer/controller system is synthesized from the ground up, in a strictly system-theoretic setting, without recourse to disciplinary metaphors or current theories of learning and cognition, based mainly on guidance from Heinz von Foerster’s theory of self-organizing systems and W. Ross Ashby’s own insights into adaptive systems. Findings: Achieving and maintaining control of the Ashby Box requires a non-trivial observer system able to use the results of its interactions with the non-trivial machine to autonomously construct, deconstruct and reconstruct its own function. The algorithm and the dynamical model of the Ashby Box observer developed in this paper define the basic specifications of a general purpose, unsupervised learning architecture able to accomplish this task. Originality/value: The problem exemplified by the Ashby Box is fundamental and goes to the roots of cybernetic theory; second-order cybernetics offers an adequate foundation for the mathematical modeling of this problem.
APA, Harvard, Vancouver, ISO, and other styles
48

Bausch, Johannes. "Fast Black-Box Quantum State Preparation." Quantum 6 (August 4, 2022): 773. http://dx.doi.org/10.22331/q-2022-08-04-773.

Full text
Abstract:
Quantum state preparation is an important ingredient for other higher-level quantum algorithms, such as Hamiltonian simulation, or for loading distributions into a quantum device to be used e.g. in the context of optimization tasks such as machine learning. Starting with a generic "black box" method devised by Grover in 2000, which employs amplitude amplification to load coefficients calculated by an oracle, there has been a long series of results and improvements with various additional conditions on the amplitudes to be loaded, culminating in Sanders et al.'s work which avoids almost all arithmetic during the preparation stage. In this work, we construct an optimized black box state loading scheme with which various important sets of coefficients can be loaded significantly faster than in O(N) rounds of amplitude amplification, up to only O(1) many. We achieve this with two variants of our algorithm. The first employs a modification of the oracle from Sanders et al., which requires fewer ancillas (log₂ g vs. g + 2 in the bit precision g), and fewer non-Clifford operations per amplitude amplification round within the context of our algorithm. The second utilizes the same oracle, but at slightly increased cost in terms of ancillas (g + log₂ g) and non-Clifford operations per amplification round. As the number of amplitude amplification rounds enters as a multiplicative factor, our black box state loading scheme yields an up to exponential speedup as compared to prior methods. This speedup translates beyond the black box case.
APA, Harvard, Vancouver, ISO, and other styles
49

Arora, Sanjeev. "Opening the Black Box of Deep Learning: Some Lessons and Take-aways." ACM SIGMETRICS Performance Evaluation Review 49, no. 1 (June 22, 2022): 1. http://dx.doi.org/10.1145/3543516.3453910.

Full text
Abstract:
Deep learning has rapidly come to dominate AI and machine learning in the past decade. These successes have come despite deep learning largely being a "black box." A small subdiscipline has grown up trying to derive better understanding of the underlying mathematical properties. Via a tour d'horizon of recent theoretical analyses of deep learning in some concrete settings, we illustrate how the black box view can miss out on (or even be wrong about) special phenomena going on during training. These phenomena are also not captured by the training objective. We argue that understanding such phenomena via mathematical understanding will be crucial for enabling the full range of future applications.
APA, Harvard, Vancouver, ISO, and other styles
50

Gadzinski, Gregory, and Alessio Castello. "Combining white box models, black box machines and human interventions for interpretable decision strategies." Judgment and Decision Making 17, no. 3 (May 2022): 598–627. http://dx.doi.org/10.1017/s1930297500003594.

Full text
Abstract:
Granting a short-term loan is a critical decision. A great deal of research has concerned the prediction of credit default, notably through Machine Learning (ML) algorithms. However, given that their black-box nature has sometimes led to unwanted outcomes, comprehensibility in ML guided decision-making strategies has become more important. In many domains, transparency and accountability are no longer optional. In this article, instead of opposing white-box against black-box models, we use a multi-step procedure that combines the Fast and Frugal Tree (FFT) methodology of Martignon et al. (2005) and Phillips et al. (2017) with the extraction of post-hoc explainable information from ensemble ML models. New interpretable models are then built thanks to the inclusion of explainable ML outputs chosen by human intervention. Our methodology improves significantly the accuracy of the FFT predictions while preserving their explainable nature. We apply our approach to a dataset of short-term loans granted to borrowers in the UK, and show how complex machine learning can challenge simpler machines and help decision makers.
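A fast-and-frugal tree, as used in the methodology above, checks one cue at a time and allows an immediate exit decision at every level. The sketch below illustrates the structure only; the cue names, thresholds, and decisions are invented for illustration and are not taken from the paper's UK loan dataset.

```python
# Hedged sketch of a fast-and-frugal tree (FFT) for a loan decision:
# each cue is inspected in order, and most cues can exit immediately.
# All cues and thresholds here are hypothetical.

def fft_grant_loan(applicant):
    # Cue 1: a prior default immediately exits to "reject".
    if applicant["prior_default"]:
        return "reject"
    # Cue 2: sufficient income immediately exits to "grant".
    if applicant["monthly_income"] >= 2000:
        return "grant"
    # Final cue: remaining cases decided by debt ratio.
    return "grant" if applicant["debt_ratio"] < 0.4 else "reject"

print(fft_grant_loan({"prior_default": False, "monthly_income": 2500, "debt_ratio": 0.6}))
```

The appeal of this structure is that every decision is traceable to a single short cue chain, which is exactly the explainability the paper preserves while borrowing cue choices from ensemble ML outputs.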
APA, Harvard, Vancouver, ISO, and other styles