Dissertations on the topic "Black-box learning"


Browse the top 47 dissertations for your research on the topic "Black-box learning".


1

Hussain, Jabbar. "Deep Learning Black Box Problem." Thesis, Uppsala universitet, Institutionen för informatik och media, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-393479.

Full text of the source
Abstract:
The application of neural networks in deep learning is growing rapidly because they outperform other machine learning algorithms on many kinds of problems. One major disadvantage of deep neural networks, however, is that the internal logic by which they reach a desired output is neither understandable nor explainable. This behavior is known as the "black box" problem, which leads to the first research question: how prevalent is the black box problem in the research literature during a specific period of time? Black box problems are usually addressed by so-called rule extraction, which motivates the second research question: what rule extraction methods have been proposed to solve such problems? To answer these questions, a systematic literature review was conducted, collecting data related to the topics of the black box and rule extraction. Printed and online articles published in high-ranking journals and conference proceedings were selected as the unit of analysis. The results show that interest in black box problems has gradually increased over time, mainly because of new technological developments. The thesis also provides an overview of the different methodological approaches used in rule extraction methods.
2

Kamp, Michael [Verfasser]. "Black-Box Parallelization for Machine Learning / Michael Kamp." Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1200020057/34.

Full text of the source
3

Verì, Daniele. "Empirical Model Learning for Constrained Black Box Optimization." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25704/.

Full text of the source
Abstract:
Black box optimization is a branch of global optimization comprising methods that minimize or maximize an objective function without exploiting gradient, linearity, or convexity information. Moreover, the objective often requires a significant amount of time or resources to query a single point, so the goal is to get as close as possible to the optimum in as few iterations as possible. Empirical Model Learning (EML) is a methodology for merging machine learning with optimization techniques such as Constraint Programming and Mixed Integer Linear Programming by extracting decision models from data. This work aims to close the gap between EML optimization and black box optimization methods (which have a strong literature) via active learning. At each iteration of the optimization loop, an ML model is fitted on the data points and embedded in a prescriptive model using EML. The encoded model is then enriched with domain-specific constraints and optimized to select the next point to query and add to the collection of samples.
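The loop the abstract describes (fit a model on the queried points, embed it in an optimizable program, solve it to pick the next query) can be illustrated with a deliberately minimal sketch. A polynomial surrogate minimized over a grid stands in for the EML step of embedding a trained model into a CP/MILP program; the toy objective, budgets, and all names are assumptions for illustration only:

```python
import numpy as np

def black_box(x):
    # Expensive objective to optimize; here a cheap stand-in for illustration.
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

rng = np.random.default_rng(0)
X = list(rng.uniform(0, 1, 4))     # initial design points
y = [black_box(x) for x in X]

grid = np.linspace(0, 1, 201)      # feasible region (domain constraints live here)
for _ in range(10):
    coeffs = np.polyfit(X, y, deg=3)       # fit the surrogate "ML model"
    surrogate = np.polyval(coeffs, grid)   # "embed" it and evaluate over the feasible set
    x_next = grid[int(np.argmin(surrogate))]
    X.append(x_next)
    y.append(black_box(x_next))            # query the expensive objective

best = min(y)
```

The real EML pipeline encodes the trained model as solver constraints; the grid argmin here merely plays that role on a one-dimensional feasible set.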
4

Rowan, Adriaan. "Unravelling black box machine learning methods using biplots." Master's thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/31124.

Full text of the source
Abstract:
Following the development of new mathematical techniques, the improvement of computer processing power and the increased availability of possible explanatory variables, the financial services industry is moving toward new machine learning methods, such as neural networks, and away from older methods such as generalised linear models. Their use is currently limited, however, because they are seen as "black box" models, which give predictions without justifications and therefore cannot be understood or trusted. The goal of this dissertation is to expand on the theory and use of biplots to visualise the impact of the various input factors on the output of the machine learning black box. Biplots are used because they give an optimal two-dimensional representation of the data set on which the machine learning model is based. The biplot allows every point on the biplot plane to be converted back to the original dimensions, in the same format as is used by the machine learning model. This allows the output of the model to be represented by colour-coding each point on the biplot plane according to the output of an independently calibrated machine learning model. The interaction of the changing prediction probabilities, represented by the coloured output, with the data points, variable axes and category level points on the biplot allows the machine learning model to be interpreted both globally and locally. By visualising the models and their predictions, this dissertation aims to remove the stigma of calling non-linear models "black box" models and to encourage their wider application in the financial services industry.
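The core mechanism (every point of the two-dimensional biplot plane maps back to the original input dimensions, so the plane can be colour-coded by a separately trained model) can be sketched with a plain PCA plane. The synthetic data and the linear stand-in "black box" are assumptions for illustration, not the dissertation's models:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                       # data in p = 5 dimensions
model = lambda Z: (Z @ np.ones(5) > 0).astype(int)  # stand-in black-box classifier

# The "biplot" plane: top-2 right singular vectors of the centred data (PCA).
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
plane = Vt[:2]                                      # (2, 5) projection basis

# Colour-code the plane: map each 2-D grid point back to 5-D, then query the model.
g = np.linspace(-3, 3, 50)
grid2d = np.array([[a, b] for a in g for b in g])   # points on the biplot plane
grid5d = grid2d @ plane + mu                        # back to the model's input space
colors = model(grid5d)                              # class label per plane point
```

Plotting `grid2d` coloured by `colors`, with the data projected onto the same plane, gives the kind of local-and-global view of the classifier the abstract describes.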
5

Mena, Roldán José. "Modelling Uncertainty in Black-box Classification Systems." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670763.

Full text of the source
Abstract:
Currently, thanks to the Big Data boom, the excellent results obtained by deep learning models and the strong digital transformation of recent years, many companies have decided to incorporate machine learning models into their systems. Some companies have spotted this opportunity and are making a portfolio of artificial intelligence services available to third parties in the form of application programming interfaces (APIs). Developers then include calls to these APIs to incorporate AI functionality in their products. Although this option saves time and resources, in most cases these APIs are offered as black boxes whose details are unknown to their clients. The complexity of such products typically leads to a lack of control over and knowledge of the internal components, which in turn can lead to potentially uncontrolled risks. It is therefore necessary to develop methods capable of evaluating the performance of these black boxes when applied to a specific application. In this work, we present a robust uncertainty-based method for evaluating the performance of both probabilistic and categorical classification black-box models, in particular APIs, that enriches the predictions obtained with an uncertainty score. This uncertainty score enables the detection of inputs with very confident but erroneous predictions, while protecting against out-of-distribution data points when deploying the model in a production setting. In the first part of the thesis, we develop a thorough revision of the concept of uncertainty, focusing on the uncertainty of classification systems. We review the existing related literature, describing the different approaches for modelling this uncertainty, its application to different use cases and some of its desirable properties. Next, we introduce the proposed method for modelling uncertainty in black-box settings.
Moreover, in the last chapters of the thesis, we showcase the method applied to different domains, including NLP and computer vision problems. Finally, we include two real-life applications of the method: classification of overqualification in job descriptions and readability assessment of texts.
In short, the thesis proposes a method for computing the uncertainty associated with the predictions of external classification APIs or libraries.
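The thesis's uncertainty modelling is more elaborate than this, but the basic idea of enriching a black-box classifier's output with an uncertainty score can be illustrated with predictive entropy, a simple stand-in measure; the probability vectors below are hypothetical API responses:

```python
import numpy as np

def uncertainty_score(probs):
    """Shannon entropy of a predicted class distribution, normalised to [0, 1].

    `probs` is the probability vector returned by a black-box classifier
    (e.g. an external API); higher scores flag predictions to distrust.
    """
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    entropy = -np.sum(p * np.log(p))
    return float(entropy / np.log(len(p)))  # 0 = fully confident, 1 = uniform

confident = uncertainty_score([0.98, 0.01, 0.01])  # sharp prediction, low score
uniform = uncertainty_score([1 / 3, 1 / 3, 1 / 3])  # no information, score 1
```

A production wrapper would threshold such a score to route low-confidence inputs to a human or a fallback model.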
6

Siqueira, Gomes Hugo. "Meta learning for population-based algorithms in black-box optimization." Master's thesis, Université Laval, 2021. http://hdl.handle.net/20.500.11794/68764.

Full text of the source
Abstract:
Optimization problems appear in almost every scientific field. However, the laborious process of designing a suitable optimizer may lead to an unsuccessful outcome. Perhaps the most ambitious question in optimization is how we can design optimizers flexible enough to adapt to a vast number of scenarios while reaching state-of-the-art performance. In this work, we aim to give a potential answer to this question by investigating how to meta-learn population-based optimizers. We motivate and describe a common structure for most population-based algorithms, which presents principles for general adaptation. From this structure we derive a meta-learning framework based on a partially observable Markov decision process (POMDP). Our conceptual formulation provides a general methodology to learn the optimizer algorithm itself, framed as a meta-learning or learning-to-optimize problem that uses black-box benchmarking datasets to train efficient general-purpose optimizers. We estimate a meta-loss training function based on the performance of stochastic algorithms. Our experimental analysis indicates that this new meta-loss function encourages the learned algorithm to be sample-efficient and robust to premature convergence. In addition, we show that our approach can alter an algorithm's search behavior to fit easily into a new context and be sample-efficient compared with state-of-the-art algorithms such as CMA-ES.
7

Sun, Michael(Michael Z. ). "Local approximations of deep learning models for black-box adversarial attacks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121687.

Full text of the source
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 45-47).
We study the problem of generating adversarial examples for image classifiers in the black-box setting (when the model is available only as an oracle). We unify two seemingly orthogonal and concurrent lines of work in black-box adversarial generation: query-based attacks and substitute models. In particular, we reinterpret adversarial transferability as a strong gradient prior. Based on this unification, we develop a method for integrating model-based priors into the generation of black-box attacks. The resulting algorithms significantly improve upon the current state-of-the-art in black-box adversarial attacks across a wide range of threat models.
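Query-based black-box attacks of the kind this thesis builds on typically estimate gradients from oracle queries alone. A minimal sketch of such an estimator (antithetic Gaussian smoothing, as in NES-style attacks, applied to a toy loss rather than a real classifier; all names and constants are assumptions for illustration) is:

```python
import numpy as np

def nes_gradient(loss, x, sigma=0.001, n=50, rng=None):
    """Estimate the gradient of a query-only loss via antithetic Gaussian smoothing."""
    if rng is None:
        rng = np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n):
        u = rng.standard_normal(x.shape)
        # Two oracle queries per direction; their difference approximates a
        # directional derivative along u.
        g += (loss(x + sigma * u) - loss(x - sigma * u)) * u
    return g / (2 * sigma * n)

# Toy "model loss" whose true gradient (2x) is known, so the estimate can be checked.
loss = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 0.5])
g_est = nes_gradient(loss, x)        # should approximate 2 * x
x_adv = x + 0.1 * np.sign(g_est)     # one FGSM-like ascent step on the estimate
```

The thesis's contribution is to bias such query-based estimators with a surrogate-model gradient prior; the sketch above shows only the query-based half.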
8

Belkhir, Nacim. "Per Instance Algorithm Configuration for Continuous Black Box Optimization." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS455/document.

Full text of the source
Abstract:
This PhD thesis focuses on automated algorithm configuration, which aims at finding the best parameter setting for a given problem or class of problems. The algorithm configuration problem thus amounts to a meta-optimization problem in the space of parameters, whose meta-objective is the performance measure of the algorithm at hand with a given parameter configuration. In the continuous domain, however, such a method can only be assessed empirically at the cost of running the algorithm on some problem instances. More recent approaches rely on a description of problems in some feature space and try to learn a mapping from this feature space onto the space of parameter configurations of the algorithm at hand. Along these lines, this thesis focuses on Per Instance Algorithm Configuration (PIAC) for solving continuous black box optimization problems, where only a limited budget of function evaluations is available. We first survey evolutionary algorithms for continuous optimization, with a focus on the two algorithms we used as target algorithms for PIAC: DE and CMA-ES. Next, we review the state of the art of algorithm configuration approaches and the different features that have been proposed in the literature to describe continuous black box optimization problems. We then introduce a general methodology to empirically study PIAC in the continuous domain, so that all components of PIAC can be explored in real-world conditions. To this end, we also introduce a new continuous black box test bench, distinct from the well-known BBOB benchmark, composed of several multi-dimensional test functions with different problem properties, gathered from the literature. The methodology is finally applied to two EAs. First, we use Differential Evolution as the target algorithm and explore all components of PIAC in order to empirically assess the best. Second, based on the results for DE, we empirically investigate PIAC with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as the target algorithm. Both use cases empirically validate the proposed methodology on the new black box test bench for dimensions up to 100.
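The PIAC mapping from a feature description of an instance to a parameter configuration can be sketched in its simplest form, a nearest-neighbour lookup. The landscape features, the (F, CR) values, and the offline tuning data are all invented for illustration and DE-flavoured only by analogy:

```python
import numpy as np

# Hypothetical training data from an offline tuning phase: per-instance
# landscape features -> best-found DE parameters (F, CR).
features = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
best_config = np.array([[0.5, 0.9], [0.8, 0.2], [0.6, 0.6]])

def piac_config(instance_features):
    """Per-instance configuration: reuse the config of the nearest known instance."""
    d = np.linalg.norm(features - np.asarray(instance_features), axis=1)
    return best_config[int(np.argmin(d))]

cfg = piac_config([0.85, 0.15])   # close to the first training instance
```

Real PIAC systems replace the lookup with a learned regression or ranking model over far richer feature sets, but the interface (features in, configuration out) is the same.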
9

REPETTO, MARCO. "Black-box supervised learning and empirical assessment: new perspectives in credit risk modeling." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/402366.

Full text of the source
Abstract:
Recent highly performant machine learning algorithms are compelling but opaque, so it is often hard to understand how they arrive at their predictions, giving rise to interpretability issues. Such issues are particularly relevant in supervised learning, where black-box models are not easily understandable by the stakeholders involved. A growing body of work focuses on making machine learning, and particularly deep learning models, more interpretable. Currently proposed approaches rely on post-hoc interpretation, using methods such as saliency mapping and partial dependencies. Despite the advances that have been made, interpretability is still an active area of research, and there is no silver-bullet solution. Moreover, in high-stakes decision-making, post-hoc interpretability may be sub-optimal. An example is the field of enterprise credit risk modeling, where classification models discriminate between good and bad borrowers, and lenders can use these models to deny loan requests. Loan denial can be especially harmful when the borrower cannot appeal or have the decision explained and grounded in fundamentals. In such cases it is therefore crucial to understand why these models produce a given output and to steer the learning process toward predictions based on fundamentals. This dissertation focuses on the concept of Interpretable Machine Learning, with particular attention to the context of credit risk modeling. It revolves around three topics: model-agnostic interpretability, post-hoc interpretation in credit risk, and interpretability-driven learning. More specifically, the first chapter is a guided introduction to the model-agnostic techniques shaping today's landscape of machine learning and their implementations. The second chapter presents an empirical analysis of the credit risk of Italian small and medium enterprises, proposing an analytical pipeline in which post-hoc interpretability plays a crucial role in finding the relevant underpinnings that drive a firm into bankruptcy. The third and last chapter proposes a novel multicriteria knowledge injection methodology based on double backpropagation, which can improve model performance, especially when data are scarce. The essential advantage of this methodology is that it allows decision makers to impose their prior knowledge at the beginning of the learning process, producing predictions that align with the fundamentals.
10

Joel, Viklund. "Explaining the output of a black box model and a white box model: an illustrative comparison." Thesis, Uppsala universitet, Filosofiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420889.

Full text of the source
Abstract:
The thesis investigates how one should determine the appropriate transparency of an information processing system from a receiver perspective. Past research has suggested that a model should be maximally transparent for what are labeled "high-stakes decisions". Instead of motivating the choice of a model's transparency on the non-rigorous criterion that the model contributes to a high-stakes decision, this thesis explores an alternative method: let the transparency depend on how well an explanation of the model's output satisfies the purpose of an explanation. As a result, we need not ask whether a decision is high-stakes; we should instead make sure the model is sufficiently transparent to provide an explanation that satisfies the expressed purpose of an explanation.
11

Rentschler, Tobias [Verfasser]. "Explainable machine learning in soil mapping : Peeking into the black box / Tobias Rentschler." Tübingen : Universitätsbibliothek Tübingen, 2021. http://d-nb.info/1236994000/34.

Full text of the source
12

Cazzaro, Lorenzo <1997>. "AMEBA: An Adaptive Approach to the Black-Box Evasion of Machine Learning Models." Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/19980.

Full text of the source
Abstract:
Machine learning (ML) models are vulnerable to evasion attacks, where the attacker adds an almost imperceptible perturbation to a correctly classified instance so as to induce misclassification. In the black-box setting, where the attacker has only query access to the target model, traditional attack strategies exploit a property known as transferability, i.e., the empirical observation that evasion attacks often generalize across different models. The attacker can thus rely on the following two-step strategy: (i) query the target model to learn how to train a surrogate model approximating it; and (ii) craft evasion attacks against the surrogate model, hoping that they "transfer" to the target model. Since the two phases are assumed to be strictly separated, this strategy is sub-optimal and under-approximates the possible actions of a real attacker. In this thesis we present AMEBA, the first adaptive approach to the black-box evasion of machine learning models. We describe a reduction from the two-step evasion problem to the multi-armed bandit (MAB) problem, which allows us to exploit the Thompson sampling algorithm to define AMEBA. As a result, AMEBA infers the best alternation of actions for surrogate model training and evasion attack crafting. We use multiple datasets and ML models to compare the two attack strategies. Our experiments show that AMEBA outperforms the traditional two-step attack strategy and is well suited for practical use.
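AMEBA's use of Thompson sampling to alternate between surrogate training and attack crafting can be illustrated with a plain Beta-Bernoulli two-armed bandit; the simulated reward probabilities below are assumptions for illustration, not numbers from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)
# Two actions, as in the two-step attack: 0 = query/train surrogate, 1 = craft attack.
# Hypothetical per-action success probabilities, unknown to the bandit.
true_p = [0.3, 0.7]
wins = np.ones(2)     # Beta(1, 1) priors on each action's success rate
losses = np.ones(2)

choices = []
for _ in range(500):
    theta = rng.beta(wins, losses)        # sample a plausible success rate per arm
    arm = int(np.argmax(theta))           # play the most promising sample
    reward = rng.random() < true_p[arm]   # observe a (simulated) success/failure
    wins[arm] += reward
    losses[arm] += 1 - reward
    choices.append(arm)

frac_best = np.mean(np.array(choices[-100:]) == 1)  # late-round preference for arm 1
```

In AMEBA the "reward" of an action is derived from attack progress rather than a fixed coin flip, but the posterior-sampling alternation is the same mechanism.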
13

Löfström, Helena. "Time to Open the Black Box : Explaining the Predictions of Text Classification." Thesis, Högskolan i Borås, Akademin för bibliotek, information, pedagogik och IT, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-14194.

Full text of the source
Abstract:
The purpose of this thesis has been to evaluate whether a new instance-based explanation method, called Automatic Instance Text Classification Explanator (AITCE), could provide researchers with insights about the predictions of automatic text classification, and with decision support for documents requiring human classification, making it possible for researchers who normally use manual classification to save time and money while maintaining quality. In the study, AITCE was implemented and applied to the predictions of a black box classifier. The evaluation was performed at two levels: at the instance level, where a group of three senior researchers who use human classification in their research evaluated the results from AITCE from an expert view; and at the model level, where a group of 24 non-experts evaluated the characteristics of the classes. The evaluations indicate that AITCE produces insights about which words most strongly affect the prediction. The research also suggests that the quality of an automatic text classification may increase through interaction between the user and the classifier in situations with uncertain predictions.
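The kind of output AITCE provides, which words most strongly affect a prediction, can be approximated with a simple leave-one-word-out probe of a black-box scorer. The toy classifier and its scores are invented for illustration and are not the thesis's method:

```python
def word_importance(predict_proba, text, target_class=1):
    """Leave-one-word-out importance for a black-box text classifier.

    `predict_proba(text) -> float` is treated as an opaque scoring function;
    a word's importance is how much the target-class score drops without it.
    """
    words = text.split()
    base = predict_proba(text)
    scores = {}
    for i, w in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores[w] = base - predict_proba(reduced)
    return scores

# Toy black box: scores only the presence of the word "refund".
clf = lambda t: 0.9 if "refund" in t.split() else 0.1
scores = word_importance(clf, "please refund my order")
```

With a real classifier the same probe surfaces the words an expert would inspect when deciding whether to trust an uncertain prediction.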
14

Karim, Abdul. "Molecular toxicity prediction using deep learning." Thesis, Griffith University, 2021. http://hdl.handle.net/10072/406981.

Full text of the source
Abstract:
In this thesis, we address the black-box nature of deep learning models for molecular toxicity prediction and propose methods for aggregating various chemical features to improve accuracy. An ideal toxicity prediction model is characterized by high accuracy, the ability to handle diverse descriptors/features, ease of training, and interpretability. Considering these attributes, in the first quarter of this thesis we present a novel hybrid framework based on decision trees (DT) and shallow neural networks (SNN). This method paves a path to feature interpretability while enhancing accuracy by selecting only the relevant features for model training. Using this approach, the run-time complexity of the developed toxicity model is substantially reduced. The idea is to create a contextual adaptation of the models by hybridizing them with decision trees to enhance both feature interpretability and accuracy. In the later quarters of this thesis, we argue for effective aggregation of chemical knowledge about molecules in toxicity prediction. Molecules are represented in various data formats, each of which has its own specific role in predicting molecular activities. We propose various deep learning ensemble approaches to effectively aggregate information from different chemical features. We have applied these methods to quantitative and qualitative molecular toxicity prediction problems and have obtained new state-of-the-art accuracy improvements with respect to existing deep learning methods. Our ensembling methods also prove helpful in making the model's predictions robust over a range of performance metrics for toxicity prediction.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
15

Corinaldesi, Marianna. "Explainable AI: tassonomia e analisi di modelli spiegabili per il Machine Learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Знайти повний текст джерела
Анотація:
La complessità dei modelli di Deep Learning ha permesso di ottenere risultati sbalorditivi in termini di accuratezza. Tale complessità è data sia dalla struttura non lineare e multistrato delle reti neurali profonde, sia dal loro elevato numero di parametri calcolati. Tuttavia, questo causa grandi difficoltà nello spiegare il processo decisionale di una rete neurale, che in alcuni contesti è però essenziale. Di fatto, per permettere l’accesso alle tecnologie di Deep Learning e Machine Learning anche ai settori critici - ovvero quei settori in cui le decisioni hanno un peso importante, quali l’ambito medico, economico, politico, giudiziario e così via - è necessario che le predizioni dei modelli siano avvalorate da una spiegazione. L’ Explainable AI (XAI) è il campo di studi che si occupa di sviluppare metodi per fornire spiegazioni alle decisioni effettuate da un modello predittivo. Questo lavoro di tesi raccoglie, organizza ed esamina i principali studi dei ricercatori di XAI in modo da facilitare l’avvicinamento a questa disciplina in rapido sviluppo. Si spiegherà a cosa, a chi e quando serve XAI; sarà mostrata la tassonomia degli attuali metodi utilizzati; si descriveranno e analizzeranno i limiti di alcuni tra gli algoritmi di maggior successo: tecniche basate sul gradiente ascendente sull’input, Deconvolutional Neural Network, CAM e Grad-CAM, LIME, SHAP; si discuterà brevemente dei metodi di valutazione di un modello XAI; si mostrerà il confronto tra l’ allenamento basato sul campionamento nello spazio latente e l’allenamento basato sul calcolo o stima del likelihood; si indicheranno tre librerie open-source di rilievo per la programmazione di modelli spiegabili.
APA, Harvard, Vancouver, ISO and other styles
16

Olofsson, Nina. "A Machine Learning Ensemble Approach to Churn Prediction : Developing and Comparing Local Explanation Models on Top of a Black-Box Classifier." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210565.

Full text of the source
Abstract:
Churn prediction methods are widely used in Customer Relationship Management and have proven valuable for retaining customers. To obtain high predictive performance, recent studies rely on increasingly complex machine learning methods, such as ensemble or hybrid models. However, the more complex a model is, the more difficult it becomes to understand how decisions are actually made. Previous studies on machine learning interpretability have taken a global perspective on understanding black-box models. This study explores the use of local explanation models for explaining the individual predictions of a Random Forest ensemble model. Churn prediction was studied on the users of Tink, a finance app. This thesis aims to take local explanations one step further by comparing churn indicators across different user groups. Three sets of groups were created based on differences in three user features. The importance scores of all globally found churn indicators were then computed for each group with the help of local explanation models. The results showed no significant differences between the groups regarding the globally most important churn indicators. Instead, differences were found for globally less important churn indicators, concerning the type of information that users stored in the app. In addition to comparing churn indicators between user groups, the result of this study was a well-performing Random Forest ensemble model able to explain the reasons behind churn predictions for individual users. The model proved significantly better than a number of simpler models, with an average AUC of 0.93.
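The AUC of 0.93 reported above is the probability that the model ranks a randomly chosen churner above a randomly chosen non-churner. A minimal stdlib sketch of that computation via the Mann-Whitney statistic (the labels and scores below are made up for illustration, not data from the thesis):

```python
def roc_auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a correct ranking (Mann-Whitney formulation)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical churn scores: label 1 = churned, 0 = stayed
labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.5, 0.1]
print(roc_auc(labels, scores))  # 0.75
```

Library implementations such as scikit-learn's `roc_auc_score` compute the same quantity more efficiently by sorting rather than comparing all pairs.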
APA, Harvard, Vancouver, ISO and other styles
17

Kovaleva, Svetlana. "Entrepreneurial Behavior is Still a Black Box. Three Essays on How Entrepreneurial Learning and Perceptions Can Influence Entrepreneurial Behavior and Firm Performance." Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/369012.

Full text of the source
Abstract:
Nowadays, entrepreneurship receives a great deal of attention in fields such as economics, sociology, finance, and public policy. Furthermore, the European Union and national governments have implemented several policy interventions aimed at encouraging new firm formation, and entrepreneurial education is now reinforced in schools, colleges, and universities. Nevertheless, entrepreneurship remains a black box. Making everyday decisions on firm organization and management is a complex process, which depends on how entrepreneurs perceive the environment and their own entrepreneurial abilities. These perceptions influence firm behavior, which can be represented as a combination of different actions. The main goal of this doctoral thesis is to examine how entrepreneurial perceptions and learning influence entrepreneurs' preferences for certain actions and, thus, how they affect firm performance. The first essay aims to understand whether the effectiveness of policy is altered by the behavioral assumption that entrepreneurs are overconfident about their entrepreneurial abilities and tend to be over-optimistic in evaluating future prospects. The essay applies an agent-based model that is a modified version of the financial fragility model of Delli Gatti et al. (2005). The simulation results suggest that misperceptions of entrepreneurial abilities influence policy outcomes. The main purpose of the second essay is to reveal how entrepreneurs' perceptions of the competitive environment influence their preferences for competitive strategies. Competitive advantages of firms are defined on the basis of Porter's (1980) model of generic strategies: differentiation and cost leadership. The results of the analysis suggest that a perceived threat of competition pushes firms to take action; the preferences for actions are explained by available resources such as human capital.
The third essay aims to evaluate the impact of capital grants given to microenterprises operating in the Province of Trento, Italy, in 2009 and 2010. It empirically illustrates how the lack of restrictions on the number of possible subsidy requests, together with fixed eligibility criteria, induced subsidy-seeking behavior in firms. The results of the econometric analysis suggest that the subsidies were not able to improve firm performance or increase firm size in 2011. However, a positive effect of the subsidies on the propensity to invest in training and in marketing and advertising in 2012 was detected.
APA, Harvard, Vancouver, ISO and other styles
18

Kovaleva, Svetlana. "Entrepreneurial Behavior is Still a Black Box. Three Essays on How Entrepreneurial Learning and Perceptions Can Influence Entrepreneurial Behavior and Firm Performance." Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1476/1/Kovaleva_Doctoral_thesis.pdf.

Full text of the source
Abstract:
Nowadays, entrepreneurship receives a great deal of attention in fields such as economics, sociology, finance, and public policy. Furthermore, the European Union and national governments have implemented several policy interventions aimed at encouraging new firm formation, and entrepreneurial education is now reinforced in schools, colleges, and universities. Nevertheless, entrepreneurship remains a black box. Making everyday decisions on firm organization and management is a complex process, which depends on how entrepreneurs perceive the environment and their own entrepreneurial abilities. These perceptions influence firm behavior, which can be represented as a combination of different actions. The main goal of this doctoral thesis is to examine how entrepreneurial perceptions and learning influence entrepreneurs' preferences for certain actions and, thus, how they affect firm performance. The first essay aims to understand whether the effectiveness of policy is altered by the behavioral assumption that entrepreneurs are overconfident about their entrepreneurial abilities and tend to be over-optimistic in evaluating future prospects. The essay applies an agent-based model that is a modified version of the financial fragility model of Delli Gatti et al. (2005). The simulation results suggest that misperceptions of entrepreneurial abilities influence policy outcomes. The main purpose of the second essay is to reveal how entrepreneurs' perceptions of the competitive environment influence their preferences for competitive strategies. Competitive advantages of firms are defined on the basis of Porter's (1980) model of generic strategies: differentiation and cost leadership. The results of the analysis suggest that a perceived threat of competition pushes firms to take action; the preferences for actions are explained by available resources such as human capital.
The third essay aims to evaluate the impact of capital grants given to microenterprises operating in the Province of Trento, Italy, in 2009 and 2010. It empirically illustrates how the lack of restrictions on the number of possible subsidy requests, together with fixed eligibility criteria, induced subsidy-seeking behavior in firms. The results of the econometric analysis suggest that the subsidies were not able to improve firm performance or increase firm size in 2011. However, a positive effect of the subsidies on the propensity to invest in training and in marketing and advertising in 2012 was detected.
APA, Harvard, Vancouver, ISO and other styles
19

Eastwood, Clare. "Unpacking the black box of voice therapy: Exploration of motor learning and gestural components used in the treatment of muscle tension voice disorder." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25759.

Full text of the source
Abstract:
Voice therapy is the recommended care for Muscle Tension Voice Disorder (MTVD). For optimal care, SLPs should base their decisions on the three elements of Evidence Based Practice (E3BP), i.e. research, clinical judgement and client factors. However, SLPs working in voice report using an uneven mix of these elements, and that limited high-quality evidence is a barrier to E3BP (Chan, et al., 2013). A systematic review of voice therapy for MTVD was conducted, showing a positive therapeutic effect. However, methodological limitations prevented strong conclusions about voice therapy; foremost among the problems was a lack of therapy content description. Based on the argument that research and clinical practice will not advance without disaggregating voice therapy, two studies were designed to illuminate the contents of the 'black box' of voice therapy. Both studies used data from six video-recorded voice therapy sessions: two consecutive sessions for MTVD from each of three SLP-client pairs. SLP behaviour during the videos was analysed via deductive content analysis using two frameworks. The first framework (MLCF-modified) was based on the Motor Learning Classification Framework (MLCF) (Madill et al., 2019) and consisted of ten motor learning (ML) variables. The second framework (GFFCS-modified) was based on Kong's (2015) gesture form and function classification system and consisted of eight gesture form and eight gesture function categories. The results of the two studies were strikingly similar. SLPs used all categories of the MLCF-modified and GFFCS-modified. The rate of SLP ML variables and gestures was high, and the distribution of types was similar across consecutive sessions and SLPs. This suggests that SLPs communicate large amounts of information during voice therapy and questions the extent to which SLPs modify therapy according to patient need.
Unpacking the 'black box' of voice therapy is a complex project, but one that will ultimately advance voice therapy and, hopefully, lead to improved voice care for MTVD.
APA, Harvard, Vancouver, ISO and other styles
20

Karlsson, Linda. "Opening up the 'black box' of Competence Development Implementation : - How the process of Competence Development Implementation is structured in the Swedish debt-collection industry." Thesis, Högskolan i Halmstad, Sektionen för ekonomi och teknik (SET), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-23522.

Full text of the source
Abstract:
In spite of the need for organisations to develop competencies among their employees as a source of competitive advantage, and in spite of previous research efforts to find out what contributes to it and what its effects are, the process of Competence Development (CD) implementation is still a 'black box' whose internal linkages are unknown. Previous research has also noted a lack of empirically based studies in organisations, and the purpose of this dissertation is therefore to explore the process of CD implementation as perceived by employees within the debt-collection industry of Sweden. A case study of a Swedish debt-collection company was conducted, with data collected through interviews with employees and managers, in order to find out how the process of CD implementation is structured. To investigate the internal linkages in the process, an extensive literature review was performed in the field of CD and used to develop a conceptual model showing how the various stages interact and depend on each other in building competence among employees. The model was then tested empirically, and the findings suggest that the CD implementation was structured mostly in line with the model, although adjustments had to be made. The findings suggest that in the process of CD implementation, the conceptualisation of CD plans and the selection of participants are conducted in one integrated step rather than two distinct steps, as suggested in previous literature, and that performance management and reflection-and-evaluation are conducted more or less simultaneously rather than in two steps. Furthermore, this study suggests that it is the organisation's responsibility to provide a foundation, opportunities and resources that enable CD, while the employees themselves set the standard for how much they will take advantage of it.
Therefore, this study argues that if employees can have input and influence on each stage of the process, better outcomes will result, since the process will be aligned with their personal and professional objectives. Up to this point, the process of CD implementation has been a 'black box': a mechanism that generates a certain level of output but whose internal workings are unknown. It is important to open up that box and understand how CD operates to produce superior performance for an organisation. The findings of this study help to bridge that gap and are useful for managers implementing Human Resource practices that aim to develop competencies among the company's workforce in order to achieve better performance.
APA, Harvard, Vancouver, ISO and other styles
21

Malmberg, Jacob, Öhman Marcus Nystad, and Alexandra Hotti. "Implementing Machine Learning in the Credit Process of a Learning Organization While Maintaining Transparency Using LIME." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232579.

Full text of the source
Abstract:
To determine whether a credit limit for a corporate client should be changed, a financial institution writes a memo (PM) containing text and financial data, which is then assessed by a credit committee that decides whether to increase the limit or not. To make this process more efficient, machine learning algorithms were used to classify the credit PMs instead of a committee. Since most machine learning algorithms are black boxes, the LIME framework was used to find the most important features driving the classification. The results of this study show that credit memos can be classified with high accuracy and that LIME can indicate which parts of a memo had the biggest impact. This implies that the credit process could be improved by utilizing machine learning while maintaining transparency. However, machine learning may disrupt learning processes within the organization, so the introduction of these algorithms should be weighed against the importance of preserving and developing knowledge within the organization.
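LIME fits a weighted linear surrogate model around perturbed copies of one input to explain a single black-box prediction. A much simpler occlusion-style variant of the same idea can be sketched in a few lines: score each word of a memo by how much the prediction changes when that word is removed. The toy `black_box` scorer below is a hypothetical stand-in for the credit-memo classifier, not the model from the thesis:

```python
def black_box(words):
    """Hypothetical stand-in classifier: probability a credit memo is
    flagged, driven by the share of risk-related terms it contains."""
    risky = {"default", "overdue", "loss"}
    return sum(w in risky for w in words) / max(len(words), 1)

def word_importance(words, predict):
    """Occlusion attribution: prediction drop when a word is removed.
    Removing a word removes every occurrence of it."""
    base = predict(words)
    return {w: base - predict([v for v in words if v != w])
            for w in set(words)}

memo = "client reports overdue payments and expected loss".split()
scores = word_importance(memo, black_box)
# risk terms get positive importance; neutral words score at or below zero
```

Unlike this sketch, LIME samples many simultaneous perturbations and weighs them by proximity to the original input, which makes its attributions more robust to feature interactions.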
APA, Harvard, Vancouver, ISO and other styles
22

O'Shea, Amanda Jane. "Exploring the black box : a multi-case study of assessment for learning in mathematics and the development of autonomy with 9-10 year old children." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709287.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
23

Lapuschkin, Sebastian [Verfasser], Klaus-Robert [Akademischer Betreuer] [Gutachter] Müller, Thomas [Gutachter] Wiegand, and Jose C. [Gutachter] Principe. "Opening the machine learning black box with Layer-wise Relevance Propagation / Sebastian Lapuschkin ; Gutachter: Klaus-Robert Müller, Thomas Wiegand, Jose C. Principe ; Betreuer: Klaus-Robert Müller." Berlin : Technische Universität Berlin, 2019. http://d-nb.info/1177139251/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
24

Lapuschkin, Sebastian [Verfasser], Klaus-Robert [Akademischer Betreuer] [Gutachter] Müller, Thomas [Gutachter] Wiegand, and Jose C. [Gutachter] Principe. "Opening the machine learning black box with Layer-wise Relevance Propagation / Sebastian Lapuschkin ; Gutachter: Klaus-Robert Müller, Thomas Wiegand, Jose C. Principe ; Betreuer: Klaus-Robert Müller." Berlin : Technische Universität Berlin, 2019. http://d-nb.info/1177139251/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
25

Auernhammer, Katja [Verfasser], Felix [Akademischer Betreuer] Freiling, Kolagari Ramin [Akademischer Betreuer] Tavakoli, Felix [Gutachter] Freiling, Kolagari Ramin [Gutachter] Tavakoli, and Dominique [Gutachter] Schröder. "Mask-based Black-box Attacks on Safety-Critical Systems that Use Machine Learning / Katja Auernhammer ; Gutachter: Felix Freiling, Ramin Tavakoli Kolagari, Dominique Schröder ; Felix Freiling, Ramin Tavakoli Kolagari." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2021. http://d-nb.info/1238358292/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
26

Beillevaire, Marc. "Inside the Black Box: How to Explain Individual Predictions of a Machine Learning Model : How to automatically generate insights on predictive model outputs, and gain a better understanding on how the model predicts each individual data point." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229667.

Full text of the source
Abstract:
Machine learning models are becoming more and more powerful and accurate, but their good predictions usually come with high complexity. Depending on the situation, such a lack of interpretability can be an important, blocking issue. This is especially the case when trust is needed on the user side in order to take a decision based on the model's prediction. For instance, when an insurance company uses a machine learning algorithm to detect fraudsters, the company will want to be sure the model relies on meaningful variables before actually taking action and investigating a particular individual. In this thesis, several explanation methods are described and compared on multiple datasets (text and numerical), on classification and regression problems.
APA, Harvard, Vancouver, ISO and other styles
27

Torres, Padilla Juan Pablo. "Inductive Program Synthesis with a Type System." Thesis, Uppsala universitet, Informationssystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385282.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
28

Lindström, Sofia, Sebastian Edemalm, and Erik Reinholdsson. "Marketers are Watching You : An exploration of AI in relation to marketing, existential threats, and opportunities." Thesis, Jönköping University, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-52744.

Full text of the source
Abstract:
Background: With the ever-changing demands and needs of customers, companies face enormous pressure to deliver the right value, on time, in the right way and in the proper manner. To realize the full potential of Artificial Intelligence (AI), a careful plan and method need to be established for development and deployment when incorporating the technology into marketing. Technology is evolving at a rapid pace, and AI can be found in a variety of applications. AI in marketing can provide valuable data clusterization and insights for personalized recommendations, customer segmentation, or advertising optimization.  Problem: To date, only a few studies have kept pace with the rapid development of AI, which presents an opportunity for marketers. Amid this hype, companies push for speedy implementation and can forget that the technology comes with risks and threats. "The problem is that everybody has unconscious biases and people embed their own biases into technology" (Kantayya, 2021). Although machines can deliver personalized numerical information, they cannot deliver new solutions such as products and services, nor classify different outputs with a cognitive mindset, which can lead to biased results. The objective of this research is to use the information and insights gathered from experts in engineering and marketing to gain a holistic view of the current and future capabilities of AI in marketing.  Purpose: The focus of this bachelor thesis is to provide additional insights regarding Artificial Intelligence in relation to marketing, taking into consideration bias, personalization, the black box, and other possible implications of AI systems, also referred to as the dark side. To fulfil this objective, qualitative interviews were conducted with practitioners and employees in different roles within the fields of AI and marketing.
The paper focuses on concepts, theories, secondary data and interviews, which are discussed further and open opportunities for future research.  Method: A qualitative research design was applied, and 12 structured interviews were conducted with people who have knowledge of and experience with AI, marketing, or both.  Results: The study elucidates the potential and pitfalls of Artificial Intelligence in marketing. The findings suggest that a mixture of human intervention and technology is needed to counteract the misperceptions, bias and manipulation the technology can introduce. The conclusion then presents the important cognitive and emotional skills that humans possess and that AI currently lacks. This study identifies several key areas of both opportunity and risk. The opportunities include delivering new, unique, personalized content to a mass audience at lightning-quick speed; the risks of giving machines permission to make human decisions, presented as the dark side, include the filter bubble, bias, manipulation, fear of job losses, and a lack of transparency creating the black-box phenomenon. This research is therefore of particular interest to marketing managers considering how AI could be used, both from an opportunity perspective and with regard to the possible risks.
APA, Harvard, Vancouver, ISO and other styles
29

Truong, Nghi Khue Dinh. "A web-based programming environment for novice programmers." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16471/1/Nghi_Truong_Thesis.pdf.

Full text of the source
Abstract:
Learning to program is acknowledged to be difficult; programming is a complex intellectual activity and cannot be learnt without practice. Research has shown that first year IT students presently struggle with setting up compilers, learning how to use a programming editor and understanding abstract programming concepts. Large introductory class sizes pose a great challenge for instructors in providing timely, individualised feedback and guidance for students when they do their practice. This research investigates the problems and identifies solutions. An interactive and constructive web-based programming environment is designed to help beginning students learn to program in high-level, object-oriented programming languages such as Java and C#. The environment eliminates common starting hurdles for novice programmers and gives them the opportunity to successfully produce working programs at the earliest stage of their study. The environment allows students to undertake programming exercises anytime, anywhere, by "filling in the gaps" of a partial computer program presented in a web page, and enables them to receive guidance in getting their programs to compile and run. Feedback on quality and correctness is provided through a program analysis framework. Students learn by doing, receiving feedback and reflecting - all through the web. A key novel aspect of the environment is its capability in supporting small "fill in the gap" programming exercises. This type of exercise places a stronger emphasis on developing students' reading and code comprehension skills than the traditional approach of writing a complete program from scratch. It allows students to concentrate on critical dimensions of the problem to be solved and reduces the complexity of writing programs.
APA, Harvard, Vancouver, ISO and other styles
30

Truong, Nghi Khue Dinh. "A web-based programming environment for novice programmers." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16471/.

Full text of the source
Abstract:
Learning to program is acknowledged to be difficult; programming is a complex intellectual activity and cannot be learnt without practice. Research has shown that first year IT students presently struggle with setting up compilers, learning how to use a programming editor and understanding abstract programming concepts. Large introductory class sizes pose a great challenge for instructors in providing timely, individualised feedback and guidance for students when they do their practice. This research investigates the problems and identifies solutions. An interactive and constructive web-based programming environment is designed to help beginning students learn to program in high-level, object-oriented programming languages such as Java and C#. The environment eliminates common starting hurdles for novice programmers and gives them the opportunity to successfully produce working programs at the earliest stage of their study. The environment allows students to undertake programming exercises anytime, anywhere, by "filling in the gaps" of a partial computer program presented in a web page, and enables them to receive guidance in getting their programs to compile and run. Feedback on quality and correctness is provided through a program analysis framework. Students learn by doing, receiving feedback and reflecting - all through the web. A key novel aspect of the environment is its capability in supporting small "fill in the gap" programming exercises. This type of exercise places a stronger emphasis on developing students' reading and code comprehension skills than the traditional approach of writing a complete program from scratch. It allows students to concentrate on critical dimensions of the problem to be solved and reduces the complexity of writing programs.
APA, Harvard, Vancouver, ISO and other styles
31

Irfan, Muhammad Naeem. "Analyse et optimisation d'algorithmes pour l'inférence de modèles de composants logiciels." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00767894.

Full text of the source
Abstract:
Commercial off-the-shelf components (COTS) are used for rapid, cost-effective software development. It is important to test how components behave in their new environment. For third-party software, the components' source code, specifications and complete models are not available; in the literature, such systems are called "black-box" components. Their behaviour can be checked with black-box testing techniques such as regression testing, random testing or model-based testing. The latter requires a model representing the expected behaviour of the system under test (SUT). This model contains a set of inputs, the SUT's behaviour when stimulated by those inputs, and the state the system is in. For black-box systems, models can be extracted from execution traces, from available documentation, or from expert knowledge, and then used to guide testing. Model-inference techniques extract structural and behavioural information from an application and present it as a formal model, so the learned abstract model is consistent with the software's behaviour. However, learned models are rarely complete, and it is difficult to compute the number of tests needed to learn a model completely and precisely. This thesis proposes an analysis and improvements of the Mealy version of the L* inference algorithm [Angluin 87], aiming to reduce the number of tests required to learn models. The Mealy version of L* needs two kinds of test: the first builds models from the system's outputs, while the second checks the correctness of the models obtained.
The algorithm records the system's answers in a so-called observation table. Processing a counterexample can require sending a large number of queries to the system; this thesis addresses that problem and proposes a technique that processes counterexamples efficiently. We also observe that learning a model does not require filling these tables completely, and we propose a learning algorithm that avoids asking such superfluous queries. In some cases, searching for counterexamples to learn a model can be expensive; we propose a method that learns models without requesting or processing counterexamples. This can add many columns to the observation table, but in the end not all queries need to be sent: the technique asks only the necessary ones. These contributions reduce the number of tests needed to learn software models, thereby improving the worst-case complexity. We present the extensions made to the RALT tool to implement these algorithms; they are then validated on examples such as buffers, vending machines, mutual-exclusion protocols and schedulers.
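In the Mealy version of L*, each cell of the observation table holds the output the black box produces for a row prefix concatenated with a column suffix; two prefixes whose rows differ must lead to different states. A minimal sketch of those output queries (the two-state machine below is an invented example, not one of the thesis case studies):

```python
# invented black-box Mealy machine: states q0/q1, inputs "a"/"b",
# each transition maps (state, input) -> (next_state, output)
TRANS = {("q0", "a"): ("q1", "x"), ("q0", "b"): ("q0", "y"),
         ("q1", "a"): ("q0", "y"), ("q1", "b"): ("q1", "x")}

def output_query(word):
    """Output query: the last output symbol emitted while running `word`."""
    state, out = "q0", ""
    for sym in word:
        state, out = TRANS[(state, sym)]
    return out

def table_row(prefix, suffixes):
    """One observation-table row: outputs for prefix + each suffix."""
    return tuple(output_query(prefix + s) for s in suffixes)

suffixes = ("a", "b")
# the rows for the empty prefix and for "a" differ, so the learner can
# distinguish the two underlying states and must add a second state
print(table_row("", suffixes), table_row("a", suffixes))
```

Every cell costs one query to the SUT, which is why the thesis's focus on avoiding superfluous queries and expensive counterexample processing matters in practice.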
APA, Harvard, Vancouver, ISO, and other styles
32

Pešán, Michele. "Modelování zvukových signálů pomocí neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442569.

Full text of the source
Abstract:
Neural networks based on the WaveNet architecture, and networks using recurrent layers, are currently used both for human speech synthesis and for "black-box" modelling of audio-processing systems such as modulation effects and nonlinear distortion units. The student's task is to summarize the current state of knowledge on using neural networks for modelling audio signals. The student will further implement one of the neural network models in Python and use it to train and then simulate an arbitrary effect or audio-processing system. Within the semester project, write the theoretical part of the thesis, create an audio database for training the neural network, and implement one of the network structures for modelling the audio signal. Over recent years, neural networks have been used more and more across virtually the whole spectrum of scientific fields. Neural networks based on the WaveNet architecture, and networks using recurrent layers, are currently applied in a wide range of uses, including human speech synthesis and "black-box" modelling of audio systems that process an audio signal (modulation effects, nonlinear distortion units, etc.). This thesis aims to provide an introduction to neural networks, to explain the basic concepts and mechanisms of the field, to describe the use of neural networks in modelling audio systems, and to apply this knowledge to implement neural networks for modelling an arbitrary effect or audio-processing device.
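As a minimal illustration of black-box modelling of an audio effect, far simpler than the WaveNet and recurrent models discussed above, a static nonlinearity can be fitted from input/output samples alone. The tanh "device" and the polynomial order here are assumptions of the sketch, not the networks used in the thesis.

```python
import numpy as np

# Treat an unknown distortion unit as a black box: we only observe
# input/output sample pairs. Here the "device" is a tanh waveshaper,
# a stand-in for real measurements.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 2000)          # dry input signal samples
y = np.tanh(2.0 * x)                      # measured "wet" output

# Fit a polynomial as a simple static black-box model of the device.
coeffs = np.polyfit(x, y, deg=7)
model = np.poly1d(coeffs)

# Evaluate the model on a held-out grid.
grid = np.linspace(-0.9, 0.9, 100)
err = np.max(np.abs(model(grid) - np.tanh(2.0 * grid)))
```

A memoryless fit like this cannot capture the time-dependent behaviour (filters, modulation) that motivates WaveNet-style and recurrent architectures, which is precisely why those models are used instead.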
APA, Harvard, Vancouver, ISO, and other styles
33

Bonantini, Andrea. "Analisi di dati e sviluppo di modelli predittivi per sistemi di saldatura." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24664/.

Full text of the source
Abstract:
This thesis aims to predict the length of the electric arc that forms in the MIG/MAG welding process when a consumable electrode (the wire) is brought to a suitable distance from the component to be welded. In this specific case, the wire is made of an aluminium-magnesium alloy. In particular, this work presents the impact of physical quantities such as voltage, current, and the feed speed of the melted wire during the welding process, and how they influence the arc length. More precisely, predictive models were built to forecast the arc length from these quantities, following two distinct approaches: black-box and knowledge-driven. Chapter one provides an overview of the state of the art of MIG/MAG welding, introducing the Cebora Group, the data acquisition procedure, and the physical model currently used to compute the arc length. Chapter two presents the data analysis and explains the experimental decisions taken to handle and understand the data; this chapter also assesses the accuracy of Cebora's model by comparing its predictions with the real data. Chapter three is more operational and presents the first neural networks built, which follow a black-box approach and include some manipulations of the current signal. Chapter four shifts the attention to the role of the voltage, with new networks built under a different, knowledge-driven approach. Chapter five draws the conclusions, examining the strengths and weaknesses of the best models obtained.
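A black-box regression of arc length on the measured quantities can be sketched with ordinary least squares on synthetic data. The linear relation and every coefficient below are invented placeholders; they are not the Cebora data or the neural networks developed in the thesis.

```python
import numpy as np

# Synthetic stand-in data: arc length assumed to grow with voltage and
# fall with current and wire-feed speed (coefficients are made up).
rng = np.random.default_rng(1)
n = 500
voltage = rng.uniform(18, 30, n)        # V
current = rng.uniform(80, 200, n)       # A
wire_speed = rng.uniform(2, 10, n)      # m/min
arc_len = (0.9 * voltage - 0.02 * current - 0.3 * wire_speed
           + rng.normal(0, 0.1, n))     # "measured" target with noise

# Black-box linear model: least squares on the three inputs plus a bias.
X = np.column_stack([voltage, current, wire_speed, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, arc_len, rcond=None)
```

A knowledge-driven variant would instead constrain the model form using the physics of the arc, which is the contrast the thesis explores.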
APA, Harvard, Vancouver, ISO, and other styles
34

Dubois, Amaury. "Optimisation et apprentissage de modèles biologiques : application à lirrigation [sic l'irrigation] de pomme de terre." Thesis, Littoral, 2020. http://www.theses.fr/2020DUNK0560.

Full text of the source
Abstract:
The subject of this PhD concerns one of the LISIC research themes: modelling and simulation of complex systems, as well as optimization and machine learning for agronomy. The objectives of the thesis are to answer the questions of irrigation management of the potato crop through the development of decision-support tools for farmers. The choice of this crop is motivated by its important share in the Hauts-de-France region. The manuscript is divided into three parts. The first part deals with continuous multimodal optimization in a black-box context, followed by a presentation of a methodology for the automatic calibration of biological model parameters through reformulation into a single-objective, multimodal, black-box continuous optimization problem. The relevance of inverse analysis as a methodology for the automatic parameterisation of large models is then demonstrated. The second part presents two new algorithms, UCB Random with Decreasing Step-size and UCT Random with Decreasing Step-size. These are continuous multimodal black-box optimization algorithms in which the choice of the initial position of the local search is assisted by a reinforcement learning algorithm. The results show that these algorithms outperform the state-of-the-art (Quasi) Random with Decreasing Step-size algorithms. Finally, the last part focuses on machine learning principles and methods. Reformulating the problem of predicting soil water content one week ahead as a supervised learning problem has enabled the development of a new decision-support tool to address the problem of crop management.
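The family of baselines the thesis compares against can be sketched as "random restarts followed by a local search with a geometrically shrinking step". The test function, schedule, and budget below are arbitrary; this is not the UCB/UCT variant proposed in the work, which chooses the restart positions with a reinforcement learning agent instead of uniformly.

```python
import math
import random

def f(x):
    """A multimodal 1-D test function (illustrative)."""
    return math.sin(5 * x) + 0.1 * (x - 2) ** 2

def random_decreasing_step(f, lo, hi, restarts=20, iters=200, seed=0):
    """Random restarts, each followed by a local search whose step shrinks."""
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(restarts):
        x = rng.uniform(lo, hi)          # uniform restart position
        y = f(x)
        step = (hi - lo) / 4
        for _ in range(iters):
            cand = min(hi, max(lo, x + rng.uniform(-step, step)))
            cy = f(cand)
            if cy < y:                   # keep only improving moves
                x, y = cand, cy
            step *= 0.97                 # decreasing step size
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

best_x, best_y = random_decreasing_step(f, 0.0, 4.0)
```

Because the step shrinks geometrically, early iterations can hop between basins while late iterations refine the best one found.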
APA, Harvard, Vancouver, ISO, and other styles
35

Santos, João André Agostinho dos. "Residential mortgage default risk estimation: cracking the machine learning black box." Master's thesis, 2021. http://hdl.handle.net/10362/122851.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
36

Vorm, Eric Stephen. "Into the Black Box: Designing for Transparency in Artificial Intelligence." Diss., 2019. http://hdl.handle.net/1805/21600.

Full text of the source
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
The rapid infusion of artificial intelligence into everyday technologies means that consumers are likely to interact with intelligent systems that provide suggestions and recommendations on a daily basis in the very near future. While these technologies promise much, current issues in low transparency create high potential to confuse end-users, limiting the market viability of these technologies. While efforts are underway to make machine learning models more transparent, HCI currently lacks an understanding of how these model-generated explanations should best translate into the practicalities of system design. To address this gap, my research took a pragmatic approach to improving system transparency for end-users. Through a series of three studies, I investigated the need and value of transparency to end-users, and explored methods to improve system designs to accomplish greater transparency in intelligent systems offering recommendations. My research resulted in a summarized taxonomy that outlines a variety of motivations for why users ask questions of intelligent systems; useful for considering the type and category of information users might appreciate when interacting with AI-based recommendations. I also developed a categorization of explanation types, known as explanation vectors, that is organized into groups that correspond to user knowledge goals. Explanation vectors provide system designers options for delivering explanations of system processes beyond those of basic explainability. I developed a detailed user typology, which is a four-factor categorization of the predominant attitudes and opinion schemes of everyday users interacting with AI-based recommendations; useful to understand the range of user sentiment towards AI-based recommender features, and possibly useful for tailoring interface design by user type. 
Lastly, I developed and tested an evaluation method known as the System Transparency Evaluation Method (STEv), which allows for real-world systems and prototypes to be evaluated and improved through a low-cost query method. Results from this dissertation offer concrete direction to interaction designers as to how these results might manifest in the design of interfaces that are more transparent to end users. These studies provide a framework and methodology that is complementary to existing HCI evaluation methods, and lay the groundwork upon which other research into improving system transparency might build.
APA, Harvard, Vancouver, ISO, and other styles
37

Balayan, Vladimir. "Human-Interpretable Explanations for Black-Box Machine Learning Models: An Application to Fraud Detection." Master's thesis, 2020. http://hdl.handle.net/10362/130774.

Full text of the source
Abstract:
Machine Learning (ML) has been increasingly used to aid humans making high-stakes decisions in a wide range of areas, from public policy to criminal justice, education, healthcare, or financial services. However, it is very hard for humans to grasp the rationale behind every ML model’s prediction, hindering trust in the system. The field of Explainable Artificial Intelligence (XAI) emerged to tackle this problem, aiming to research and develop methods to make those “black-boxes” more interpretable, but there is still no major breakthrough. Additionally, the most popular explanation methods — LIME and SHAP — produce very low-level feature attribution explanations, being of limited usefulness to personas without any ML knowledge. This work was developed at Feedzai, a fintech company that uses ML to prevent financial crime. One of the main Feedzai products is a case management application used by fraud analysts to review suspicious financial transactions flagged by the ML models. Fraud analysts are domain experts trained to look for suspicious evidence in transactions but they do not have ML knowledge, and consequently, current XAI methods do not suit their information needs. To address this, we present JOEL, a neural network-based framework to jointly learn a decision-making task and associated domain knowledge explanations. JOEL is tailored to human-in-the-loop domain experts that lack deep technical ML knowledge, providing high-level insights about the model’s predictions that very much resemble the experts’ own reasoning. Moreover, by collecting the domain feedback from a pool of certified experts (human teaching), we promote seamless and better quality explanations. Lastly, we resort to semantic mappings between legacy expert systems and domain taxonomies to automatically annotate a bootstrap training set, overcoming the absence of concept-based human annotations. We validate JOEL empirically on a real-world fraud detection dataset, at Feedzai. 
We show that JOEL can generalize the explanations from the bootstrap dataset. Furthermore, obtained results indicate that human teaching is able to further improve the explanations prediction quality.
APA, Harvard, Vancouver, ISO, and other styles
38

Huang, Cong-Ren, and 黃琮仁. "The Study of Black-box SQL Injection Security Detection Mechanisms Based on Machine Learning." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/7thwhz.

Full text of the source
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Master's Program, Department of Information Management
106
With the increasing emphasis on information security, financial institutions are more willing to have their websites inspected for security. Black-box testing can be divided into automated software testing and manual testing. Automated testing inspects against the weakness-policy databases preinstalled by vendors; it cannot find security problems precisely when the network environment is protected by a web application firewall or an intrusion-detection system, so the resulting report may contain false positives or miss real problems in the system. Manual testing produces reports whose quality depends on the tester's professional ability and the limited time available. In this thesis, we design a black-box testing mechanism for detecting SQL injection based on machine learning. Our result improves on the drawbacks of automated testing and provides the advantages of high scalability and high accuracy.
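The kind of lexical evidence such a detector might consume can be sketched with a toy, hand-scored feature extractor. The token list, features, weights, and threshold below are invented placeholders standing in for a trained model; they are not the mechanism used in the thesis.

```python
import re

# A few lexical markers commonly associated with SQL injection payloads.
SQL_TOKENS = ["select", "union", "insert", "drop", "or 1=1", "--", "/*", "xp_"]

def sqli_features(payload):
    """Lexical features often fed to SQL-injection classifiers (illustrative)."""
    p = payload.lower()
    return {
        "quotes": p.count("'") + p.count('"'),
        "keywords": sum(tok in p for tok in SQL_TOKENS),
        "tautology": 1 if re.search(
            r"or\s+['\"]?\d+['\"]?\s*=\s*['\"]?\d+", p) else 0,
    }

def looks_like_injection(payload, threshold=2):
    """Toy linear scoring rule standing in for a trained classifier."""
    f = sqli_features(payload)
    score = f["quotes"] + 2 * f["keywords"] + 3 * f["tautology"]
    return score >= threshold
```

A real machine-learning pipeline would learn the weights from labeled payloads instead of hand-setting them, which is what allows it to generalize past a fixed signature database.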
APA, Harvard, Vancouver, ISO, and other styles
39

Lee, Bo-Yin, and 李柏穎. "The Adjustment of Control Parameters for a Black-Box Machine by Deep Reinforcement Learning." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/xk4dz6.

Full text of the source
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
107
Artificial intelligence is developing rapidly, and all kinds of industries look forward to introducing this technology, which can increase production value. With the advent of Industry 4.0, smart manufacturing has become an important project in automated production: we can analyze the needs of different products and consider their production strategies to raise product quality and reduce labor cost. In this study, we propose an adjustment method for the control parameters of a black-box machine based on deep reinforcement learning. We use supervised learning to train a neural-network model of the black-box machine that simulates its characteristics. Next, the agent interacts with this model, guided by a reward function that encodes the goal the mission is expected to accomplish, and this guidance drives the inversion of the neural networks to adjust the parameters of the machine. Furthermore, in adjusting these parameters we consider not only the current state but also the trajectory of the state variables when making decisions. This method makes the process more efficient and achieves the desired output with fewer adjustments.
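The "inversion of neural networks" idea, holding a trained surrogate fixed and descending the error gradient with respect to its inputs, can be sketched as follows. The surrogate's architecture, weights, and target value are invented for the sketch, and no state-trajectory information is included.

```python
import numpy as np

rng = np.random.default_rng(2)

# A fixed, "already-trained" surrogate of the machine (weights are made up):
# one hidden tanh layer mapping 3 control parameters to 1 output.
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
w2, b2 = rng.normal(size=8), 0.0

def surrogate(x):
    h = np.tanh(W1 @ x + b1)
    return w2 @ h + b2

def invert(target, steps=5000, lr=0.01):
    """Gradient descent on the *input* of the frozen surrogate so that
    its output reaches the target value."""
    x = np.zeros(3)
    for _ in range(steps):
        h = np.tanh(W1 @ x + b1)
        y = w2 @ h + b2
        # d/dx of (y - target)^2, using dy/dx = W1.T @ (w2 * (1 - h^2))
        grad = 2 * (y - target) * (W1.T @ (w2 * (1 - h ** 2)))
        x -= lr * grad
    return x

target = 0.5
x_star = invert(target)
err = abs(surrogate(x_star) - target)
```

In the thesis's setting the reward function, rather than a fixed scalar target, supplies the direction in which the inversion adjusts the machine parameters.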
APA, Harvard, Vancouver, ISO, and other styles
40

CURIA, FRANCESCO. "Explainable clinical decision support system: opening black-box meta-learner algorithm expert's based." Doctoral thesis, 2021. http://hdl.handle.net/11573/1538472.

Full text of the source
Abstract:
Mathematical optimization methods are the basic mathematical tools of all artificial intelligence theory. In the fields of machine learning and deep learning, the examples from which algorithms learn (training data) are used by sophisticated cost functions, which can have solutions in closed form or through approximations. The interpretability of the models used, and their relative transparency as opposed to the opacity of black boxes, is related to how the algorithm learns, and this occurs through the optimization and minimization of the errors that the machine makes in the learning process. In particular, the present work introduces a new method for determining the weights in an ensemble model, supervised and unsupervised, based on the well-known Analytic Hierarchy Process (AHP). This method is based on the concept that behind the choice of the different possible algorithms to be used in a machine learning problem, there is an expert who controls the decision-making process. The expert assigns a complexity score to each algorithm (based on the concept of the complexity-interpretability trade-off), from which the weight with which each model contributes to the training and prediction phases is determined. In addition, different methods are presented to evaluate the performance of these algorithms and to explain how each feature in the model contributes to the prediction of the outputs. The interpretability techniques used in machine learning are also combined with the AHP-based method in the context of clinical decision support systems, in order to make the (black-box) algorithms and their results interpretable and explainable, so that clinical decision-makers can take controlled decisions, in line with the "right to explanation" introduced by the legislator, because decision-makers bear civil and legal responsibility for their choices in the clinical field when those choices are based on systems that make use of artificial intelligence.
No less important is the interaction between the expert who controls the algorithm construction process and the domain expert, in this case the clinician. Three applications on real data are implemented with the methods known in the literature and with those proposed in this work: one application concerns cervical cancer, another the problem related to diabetes, and the last focuses on a specific pathology developed by HIV-infected individuals. All applications are supported by plots, tables, and explanations of the results, implemented through Python libraries. The main case study of this thesis, regarding HIV-infected individuals, concerns an unsupervised ensemble-type problem, in which a series of clustering algorithms are applied to a set of features and in turn produce an output that is used again as a set of meta-features to provide a label for each cluster. The meta-features and labels obtained by choosing the best algorithm are used to train a logistic regression meta-learner, which in turn is used, through some explainability methods, to provide the value of the contribution that each algorithm had in the training phase. The use of logistic regression as the meta-learner classifier is motivated by the fact that it provides appreciable results and by the easy explainability of its estimated coefficients.
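The AHP weighting step can be sketched as follows: a reciprocal pairwise comparison matrix encoding the expert's judgments is reduced to weights via its principal eigenvector, computed here by power iteration. The 3x3 judgment matrix is an invented example, not one from the thesis.

```python
import numpy as np

def ahp_weights(pairwise, iters=100):
    """Principal-eigenvector weights from a pairwise comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):          # power iteration, renormalized to sum 1
        w = A @ w
        w /= w.sum()
    return w

# Hypothetical expert judgments on three candidate algorithms:
# entry (i, j) says how strongly algorithm i is preferred over j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
w = ahp_weights(A)
```

The resulting weights (roughly 0.65, 0.23, 0.12 for this matrix) would then scale each model's contribution to the ensemble's training and prediction phases.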
APA, Harvard, Vancouver, ISO, and other styles
41

(6561242), Piyush Pandita. "BAYESIAN OPTIMAL DESIGN OF EXPERIMENTS FOR EXPENSIVE BLACK-BOX FUNCTIONS UNDER UNCERTAINTY." Thesis, 2019.

Find the full text of the source
Abstract:
Researchers and scientists across various areas face the perennial challenge of selecting experimental conditions or inputs for computer simulations in order to achieve promising results. The aim of conducting these experiments could be to study the production of a material that has great applicability. One might also be interested in accurately modeling and analyzing a simulation of a physical process through a high-fidelity computer code. The presence of noise in the experimental observations or simulator outputs, called aleatory uncertainty, is usually accompanied by a limited amount of data due to budget constraints, which gives rise to what is known as epistemic uncertainty. This problem of designing experiments with a limited number of allowable experiments or simulations under aleatory and epistemic uncertainty needs to be treated in a Bayesian way. The aim of this thesis is to extend the state of the art in Bayesian optimal design of experiments, where one can optimize and infer statistics of the expensive experimental observation(s) or simulation output(s) under uncertainty.
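The core loop of Bayesian experimental design, fitting a probabilistic surrogate to the few runs performed so far and then picking the next input by an acquisition criterion, can be sketched with a small Gaussian-process model and expected improvement. The kernel, length-scale, and the quadratic "expensive" function are all assumptions of the sketch.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel matrix between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at the test points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    alpha = np.linalg.solve(K, y_train)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.clip(np.diag(rbf(x_test, x_test) - Ks.T @ v), 1e-12, None)
    return mu, var

def expected_improvement(mu, var, y_best):
    """EI for minimization under a Gaussian posterior."""
    sd = np.sqrt(var)
    z = (y_best - mu) / sd
    Phi = np.array([0.5 * (1 + erf(zi / sqrt(2))) for zi in z])
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (y_best - mu) * Phi + sd * phi

# Expensive black-box (illustrative): minimize on [0, 1] with 3 runs so far.
f = lambda x: (x - 0.3) ** 2
x_train = np.array([0.0, 0.5, 1.0])
y_train = f(x_train)
grid = np.linspace(0, 1, 101)
mu, var = gp_posterior(x_train, y_train, grid)
ei = expected_improvement(mu, var, y_train.min())
x_next = grid[np.argmax(ei)]
```

Each iteration would evaluate f(x_next), append the result to the training set, and refit, spending the limited budget where improvement is most probable.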
APA, Harvard, Vancouver, ISO, and other styles
42

Neves, Maria Inês Lourenço das. "Opening the black-box of artificial intelligence predictions on clinical decision support systems." Master's thesis, 2021. http://hdl.handle.net/10362/126699.

Full text of the source
Abstract:
Cardiovascular diseases are the leading global cause of death. Their treatment and prevention rely on electrocardiogram interpretation, which is subject to the physician's variability. Subjectiveness is intrinsic to electrocardiogram interpretation and hence prone to errors. To assist physicians in making precise and thoughtful decisions, artificial intelligence is being deployed to develop models that can interpret extensive datasets and provide accurate decisions. However, the lack of interpretability of most machine learning models stands as one of the drawbacks of their deployment, particularly in the medical domain. Furthermore, most of the currently deployed explainable artificial intelligence methods assume independence between features, which means temporal independence when dealing with time series. This inherent characteristic of time series cannot be ignored, as it carries importance for the human decision-making process. This dissertation focuses on the explanation of heartbeat classification using several adaptations of state-of-the-art model-agnostic methods to locally explain time series classification. To address the explanation of time series classifiers, a preliminary conceptual framework is proposed, and the use of the derivative is suggested as a complement to add temporal dependency between samples. The results were validated on an extensive public dataset, first through the 1-D Jaccard index, which compares the subsequences extracted from an interpretable model and from the explanation methods used, and secondly through the decrease in performance, to evaluate whether the explanation fits the model's behaviour. To assess models with distinct internal logic, the validation was conducted on a more transparent model and a more opaque one, in both binary and multiclass settings.
The results show the promising use of including the signal's derivative to introduce temporal dependency between samples in the explanations, for models with simpler internal logic.
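The 1-D Jaccard index used for validation compares, as sets of time indices, the subsequences highlighted by the explanation method and by the interpretable model. A minimal sketch (the segment boundaries are invented):

```python
def jaccard_1d(seg_a, seg_b, length):
    """1-D Jaccard index between two sets of highlighted time indices.
    Segments are (start, end) pairs, end exclusive, over a series of
    `length` samples."""
    def to_set(segments):
        idx = set()
        for start, end in segments:
            idx.update(range(max(0, start), min(length, end)))
        return idx
    a, b = to_set(seg_a), to_set(seg_b)
    if not a and not b:
        return 1.0                      # both empty: perfect agreement
    return len(a & b) / len(a | b)

# Explanation highlights samples 10-30, interpretable model highlights 20-40:
score = jaccard_1d([(10, 30)], [(20, 40)], length=100)
```

A score of 1 means the two subsequences coincide exactly; 0 means they are disjoint, so here the 10-sample overlap out of 30 covered samples yields 1/3.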
APA, Harvard, Vancouver, ISO, and other styles
43

Gvozdetska, Nataliia. "Transfer Learning for Multi-surrogate-model Optimization." 2020. https://tud.qucosa.de/id/qucosa%3A73313.

Full text of the source
Abstract:
Surrogate-model-based optimization is widely used to solve black-box optimization problems if the evaluation of a target system is expensive. However, when the optimization budget is limited to a single or several evaluations, surrogate-model-based optimization may not perform well due to the lack of knowledge about the search space. In this case, transfer learning helps to get a good optimization result due to the usage of experience from the previous optimization runs. And if the budget is not strictly limited, transfer learning is capable of improving the final results of black-box optimization. The recent work in surrogate-model-based optimization showed that using multiple surrogates (i.e., applying multi-surrogate-model optimization) can be extremely efficient in complex search spaces. The main assumption of this thesis suggests that transfer learning can further improve the quality of multi-surrogate-model optimization. However, to the best of our knowledge, there exist no approaches to transfer learning in the multi-surrogate-model context yet. In this thesis, we propose an approach to transfer learning for multi-surrogate-model optimization. It encompasses an improved method of defining the expediency of knowledge transfer, adapted multi-surrogate-model recommendation, multi-task learning parameter tuning, and few-shot learning techniques. We evaluated the proposed approach with a set of algorithm selection and parameter setting problems, comprising mathematical functions optimization and the traveling salesman problem, as well as random forest hyperparameter tuning over OpenML datasets. 
The evaluation shows that the proposed approach helps to improve the quality delivered by multi-surrogate-model optimization and ensures good optimization results even under a strictly limited budget.
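One simple way to operationalize the "expediency of knowledge transfer" is to compare normalized meta-features of the new problem against archived experiments and transfer only from sufficiently close ones. The meta-features, experiment names, and distance threshold below are invented stand-ins for the similarity variability point the thesis studies.

```python
import math

def normalized(rows):
    """Min-max normalize each meta-feature column to [0, 1]."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(r, lo, hi)] for r in rows]

def expedient_sources(current, archive, threshold=0.25):
    """Past experiments whose normalized meta-features lie within a
    Euclidean distance `threshold` of the current problem's."""
    names = list(archive)
    rows = normalized([current] + [archive[n] for n in names])
    cur, rest = rows[0], rows[1:]
    return [n for n, r in zip(names, rest) if math.dist(cur, r) <= threshold]

# Meta-features of past tuning runs: (dimensionality, multimodality, budget).
archive = {
    "rastrigin_10d": [10, 0.90, 200],
    "sphere_2d":     [2, 0.10, 50],
    "tsp_berlin52":  [52, 0.80, 500],
}
current = [12, 0.85, 220]           # the new optimization problem
sources = expedient_sources(current, archive)
```

Only the experiments that pass this filter would then feed the model recommendation, multi-task learning, and few-shot steps; normalization matters, since raw budgets would otherwise dominate the distance.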
APA, Harvard, Vancouver, ISO, and other styles
44

Baasch, Gaby. "Identification of thermal building properties using gray box and deep learning methods." Thesis, 2020. http://hdl.handle.net/1828/12585.

Full text of the source
Abstract:
Enterprising technologies and policies that focus on energy reduction in buildings are paramount to achieving global carbon emissions targets. Energy retrofits, building stock modelling, heating, ventilation, and air conditioning (HVAC) upgrades, and demand-side management all present high-leverage opportunities in this regard. Advances in computing, data science, and machine learning can be leveraged to enhance these methods and thus to expedite energy reduction in buildings, but challenges such as lack of data, limited model generalizability and reliability, and un-reproducible studies have resulted in restricted industry adoption. In this thesis, rigorous and reproducible studies are designed to evaluate the benefits and limitations of state-of-the-art machine learning and statistical techniques for high-impact applications, with an emphasis on addressing the challenges listed above. The scope of this work includes calibration of physics-based building models and supervised deep learning, both of which are used to estimate building properties from real and synthetic data.
• Original grey-box methods are developed to characterize physical thermal properties (RC and RK) from real-world measurement data.
• The novel application of supervised deep learning for thermal property estimation and HVAC systems identification is shown to achieve state-of-the-art performance (root mean squared error of 0.089 and 87% validation accuracy, respectively).
• A rigorous empirical review is conducted to assess which types of gray- and black-box models are most suitable for practical application. The scope of the review is wider than previous studies, and the conclusions suggest a re-framing of research priorities for future work.
• Modern interpretability techniques are used to provide unique insight into the learning behaviour of the black-box methods.
Overall, this body of work provides a critical appraisal of new and existing data-driven approaches for thermal property estimation in buildings. It provides valuable and novel insight into barriers to widespread adoption of these techniques and suggests pathways forward. Performance benchmarks, open-source model code and a parametrically generated, synthetic dataset are provided to support further research and to encourage industry adoption of the approaches. This lays the necessary groundwork for the accelerated adoption of data-driven models for thermal property identification in buildings.
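The grey-box identification idea described in this abstract can be illustrated with a minimal sketch: simulate a hypothetical first-order resistance-capacitance (RC) zone model and recover R and C from noisy temperature measurements by nonlinear least squares. All model equations, parameter values, and variable names below are illustrative assumptions, not the thesis's actual methods or data.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(R, C, T0, T_out, Q, dt):
    """First-order RC zone model, C * dT/dt = (T_out - T) / R + Q,
    integrated exactly over each step (inputs held piecewise constant)."""
    a = np.exp(-dt / (R * C))
    T = np.empty(len(T_out))
    T[0] = T0
    for k in range(len(T_out) - 1):
        T_ss = T_out[k] + R * Q[k]          # steady state for this step
        T[k + 1] = T_ss + (T[k] - T_ss) * a
    return T

rng = np.random.default_rng(0)
dt, n = 300.0, 200                          # 5-minute samples
T_out = 5.0 + 2.0 * np.sin(np.linspace(0, 4 * np.pi, n))
Q = 800.0 * (rng.random(n) > 0.5)           # on/off heater power [W]
R_true, C_true = 0.01, 5e6                  # "unknown" ground truth
T_meas = simulate(R_true, C_true, 20.0, T_out, Q, dt)
T_meas = T_meas + rng.normal(0.0, 0.02, n)  # sensor noise

# Identify R [K/W] and C [J/K] from the measured trajectory.
fit = least_squares(
    lambda p: simulate(p[0], p[1], 20.0, T_out, Q, dt) - T_meas,
    x0=[0.05, 1e6],
    bounds=([1e-4, 1e4], [1.0, 1e9]),
    x_scale=[1e-2, 1e6],                    # parameters differ by orders of magnitude
)
R_hat, C_hat = fit.x
```

Exact per-step integration (rather than forward Euler) keeps the simulation stable for any positive R and C that the optimizer probes; the `x_scale` argument compensates for the very different magnitudes of the two parameters.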
Graduate
APA, Harvard, Vancouver, ISO, and other styles
45

Repický, Jakub. "Evoluční algoritmy a aktivní učení." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-355988.

Full text of the source
Abstract:
Title: Evoluční algoritmy a aktivní učení (Evolutionary Algorithms and Active Learning). Author: Jakub Repický. Department: Department of Theoretical Computer Science and Mathematical Logic. Supervisor: doc. RNDr. Ing. Martin Holeňa, CSc., Institute of Computer Science, Czech Academy of Sciences. Abstract: Evaluating the objective function in continuous optimization tasks often dominates the computational cost of the algorithm. This is especially true for black-box functions, i.e., functions whose analytical description is unknown and which are evaluated empirically. Accelerating black-box optimization with surrogate models of the objective function has been studied by many authors. The goal of this thesis is to evaluate several methods that combine surrogate models based on Gaussian processes (GP) with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Gaussian processes enable active learning, in which the points to evaluate are selected with the aim of improving the accuracy of the model. Traditional GP-based surrogate models include the Metamodel-Assisted Evolution Strategy (MA-ES) and the Gaussian Process Optimization Procedure (GPOP). For the purposes of this thesis, both approaches were re-implemented and evaluated for the first time on the framework Black-Box...
APA, Harvard, Vancouver, ISO, and other styles
46

Engster, David. "Local- and Cluster Weighted Modeling for Prediction and State Estimation of Nonlinear Dynamical Systems." Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B4FD-1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
47

Dittmar, Jörg. "Modellierung dynamischer Prozesse mit radialen Basisfunktionen." Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B4DD-9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
