
Dissertations / Theses on the topic 'black box'



Consult the top 50 dissertations / theses for your research on the topic 'black box'.

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Shuler, Ryan N. "Black box /." Online version of thesis, 2009. http://hdl.handle.net/1850/9735.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Longo-Capobianco, Samuel John. "Black box [1]." Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1585583128274337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bausch, Amanda L. "Black Box Warning." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Arlt, Stephan [Verfasser], and Andreas [Akademischer Betreuer] Podelski. "Program analysis and black-box GUI testing = Program Analysis und Black-box GUI Testing." Freiburg : Universität, 2014. http://d-nb.info/1123479232/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wetzel, Matthias. "Document driven black box testing." [S.l. : s.n.], 2004. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB11144258.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Oliveira, Ivo Fagundes David de. "Optimal black-box sequential searching." Universidade Federal de Minas Gerais, 2013. http://hdl.handle.net/1843/EABA-98VHPQ.

Full text
Abstract:
This dissertation constructs root-searching and maximum-searching algorithms that are optimal in a statistical sense and compares the statistically optimal strategies to the already known mini-maximal strategies. In order to construct the so-called statistical method, new results in the field of probability, capable of determining the probability of f(x) = y over a pre-determined set of functions, are presented.
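The mini-maximal strategy that the abstract contrasts with the statistical one is, for root searching, ordinary bisection: each query halves the interval known to bracket the root, whatever the function does. A minimal sketch (an illustration, not the dissertation's own code):

```python
def bisect_root(f, lo, hi, tol=1e-9):
    """Minimax-optimal black-box root search: each evaluation halves the
    bracketing interval regardless of f's shape."""
    assert f(lo) * f(hi) <= 0, "root must be bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:   # sign change in the left half
            hi = mid
        else:                     # otherwise the root is in the right half
            lo = mid
    return (lo + hi) / 2.0

root = bisect_root(lambda x: x * x - 2.0, 0.0, 2.0)  # converges to sqrt(2)
```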
APA, Harvard, Vancouver, ISO, and other styles
7

Torregrosa, Rivero Daniel. "Black-box interactive translation prediction." Doctoral thesis, Universidad de Alicante, 2018. http://hdl.handle.net/10045/77110.

Full text
Abstract:
In today's globalized world, in which many societies are also inherently multilingual, translation and interpretation between languages require a notable effort because of their volume. Various assistive technologies exist to ease these translation tasks, among them interactive translation prediction (ITP), a modality in which the translator types the translation while the system offers suggestions predicting the next words to be typed. State-of-the-art approaches to ITP follow a glass-box approach: they are tightly coupled to a machine translation system (often a statistical one) that they use to generate the suggestions, and therefore inherit the limitations of the underlying machine translation system. This thesis develops a new black-box approach in which any bilingual resource (not only machine translation systems, but also other resources such as translation memories, dictionaries, glossaries, etc.) can be used to generate the suggestions.
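The black-box idea can be illustrated with a toy suggester: given the words the translator has typed so far, any bilingual resource that matches the prefix can propose the next word. The tiny translation memory, the sentences, and the `suggest` helper below are invented for illustration, not taken from the thesis:

```python
def suggest(prefix_words, translation_memory):
    """Black-box ITP sketch: propose the next word by matching the typed
    prefix against any bilingual resource (here, a toy translation memory)."""
    for target in translation_memory:
        words = target.split()
        if words[:len(prefix_words)] == prefix_words and len(words) > len(prefix_words):
            return words[len(prefix_words)]
    return None

# Hypothetical target-side entries of a translation memory.
tm = ["the black box records flight data", "the black cat sleeps"]
nxt = suggest(["the", "black", "box"], tm)  # -> "records"
```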
APA, Harvard, Vancouver, ISO, and other styles
8

Hussain, Jabbar. "Deep Learning Black Box Problem." Thesis, Uppsala universitet, Institutionen för informatik och media, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-393479.

Full text
Abstract:
The application of neural networks in deep learning is growing rapidly because they outperform other machine learning algorithms on many kinds of problems. One big disadvantage of deep neural networks, however, is that the internal logic by which they reach a desired output is neither understandable nor explainable. This behaviour of deep neural networks is known as the "black box" problem. This leads to the first research question: how prevalent was the black box problem in the research literature during a specific period of time? Black box problems are usually addressed by so-called rule extraction, which motivates the second research question: what rule extraction methods have been proposed to solve such problems? To answer the research questions, a systematic literature review was conducted, collecting data on the topics of the black box and rule extraction. Printed and online articles published in highly ranked journals and conference proceedings were selected to investigate and answer the research questions; the unit of analysis was this set of journal and conference articles. The results show a gradually increasing interest in black box problems over time, driven mainly by new technological developments. The thesis also provides an overview of the different methodological approaches used in rule extraction methods.
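As a rough illustration of what rule extraction means, one can query an opaque classifier on sample inputs and search for the simplest interpretable rule that mimics it. The linear `black_box` and the single-threshold rule below are invented toy stand-ins, not any specific method surveyed in the thesis:

```python
import random

def black_box(x):
    """Opaque model: its internal logic is hidden from the extractor."""
    return 1 if 0.3 * x[0] + 0.7 * x[1] > 0.5 else 0

def extract_rule(samples):
    """Find the single-feature threshold rule that best mimics the box:
    an extremely simple surrogate in place of a full rule-extraction method."""
    labels = [black_box(x) for x in samples]
    best = (None, None, -1)  # (feature index, threshold, correct count)
    for f in (0, 1):
        for t in sorted({x[f] for x in samples}):
            acc = sum((x[f] > t) == bool(y) for x, y in zip(samples, labels))
            if acc > best[2]:
                best = (f, t, acc)
    return best

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
feature, threshold, hits = extract_rule(pts)
```

Here the extracted rule "predict 1 when feature 1 exceeds a threshold" recovers the box's dominant feature, since that feature carries most of the weight.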
APA, Harvard, Vancouver, ISO, and other styles
9

Francescato, Riccardo <1993>. "Efficient Black-box JTAG Discovery." Master's Degree Thesis, Università Ca' Foscari Venezia, 2018. http://hdl.handle.net/10579/12266.

Full text
Abstract:
Embedded devices represent the most widespread form of computing device in the world. Almost every consumer product manufactured in the last decades contains an embedded system, e.g., refrigerators, smart bulbs, activity trackers, smart watches and washing machines. These computing devices are also used in safety- and security-critical systems, e.g., autonomous cars, cryptographic tokens, avionics and alarm systems. Often, manufacturers do not give much consideration to the attack surface offered by low-level interfaces such as JTAG. In the last decade, the JTAG port has been used by the research community to demonstrate a number of attacks and reverse engineering techniques. Therefore, finding and identifying the JTAG port of a device or a de-soldered integrated circuit (IC) can be the first step of a successful attack. In this work we analyse the design of the JTAG port and develop methods and algorithms for locating it. Specifically, we cover the following topics: i) an introduction to the problem and related attacks; ii) a general description of the JTAG port and its functions; iii) an analysis of the problem and the naive solution; iv) an efficient algorithm based on 4-state GPIOs; v) a randomized algorithm using 4-state GPIOs; vi) an overview of the problem and of search methods used on PCBs; vii) conclusions and suggestions for proficient use.
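The naive search mentioned in the abstract can be sketched as follows: enumerate ordered assignments of candidate pins to the four mandatory JTAG signals and probe each one. The `probe` function here merely simulates a device (a real implementation would bit-bang the candidate pins and shift out the ID register); the pin numbers and IDCODE value are made up:

```python
from itertools import permutations

# Simulated target: only the correct (TCK, TMS, TDI, TDO) assignment
# returns a plausible IDCODE; every other assignment reads as all-ones.
TRUE_PINS = (3, 0, 5, 2)
IDCODE = 0x4BA00477  # hypothetical value for illustration

def probe(tck, tms, tdi, tdo):
    """Stand-in for driving the candidate pins and reading the data register."""
    return IDCODE if (tck, tms, tdi, tdo) == TRUE_PINS else 0xFFFFFFFF

def find_jtag(pins):
    """Naive discovery: try every ordered 4-pin assignment, O(n!/(n-4)!) probes."""
    for cand in permutations(pins, 4):
        if probe(*cand) != 0xFFFFFFFF:
            return cand
    return None

found = find_jtag(range(6))
```

The 4-state GPIO algorithms in the thesis exist precisely to beat this factorial probe count.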
APA, Harvard, Vancouver, ISO, and other styles
10

Eriksson, Josephine, and Sophie Fredén. "The opening of the black box." Thesis, Halmstad University, School of Business and Engineering (SET), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2549.

Full text
Abstract:

The purpose of this thesis was to open up the black-boxed TMT process by examining the interaction between TMT members using cognitive and demographic diversity variables, and to see how organisational performance could be affected by the process. By opening the process, a model of it was developed that can be tested in further research. The major findings are that some aspects stand out: the CEO and the functional responsibilities influence the process. Further, integration within the TMT is not high, so upper echelon theory should not be applied without caution in studies where composition is related to organisational performance, as these aspects have been shown to influence performance in different ways. Functional responsibility has been shown to create subgroups that practise problem solving and decision making more frequently than the TMT as a whole, and hence also communicate more.

APA, Harvard, Vancouver, ISO, and other styles
11

Stolz, Nolan Ryan. "The touch : a black box opera /." view abstract or download file of text, 2006. http://proquest.umi.com/pqdweb?did=1394663551&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Venger, Adam. "Black-box analýza zabezpečení Wi-Fi." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445533.

Full text
Abstract:
The devices we rely on every day are becoming ever more complex and use ever more complex protocols. One of these protocols is Wi-Fi. With growing complexity, the potential for implementation errors grows as well. This thesis examines the Wi-Fi protocol and the use of fuzz testing to generate semi-valid inputs that could expose vulnerabilities in devices. Special attention was devoted to testing Wi-Fi on the ESP32 and ESP32-S2. The results of the work are a fuzzer suitable for testing any Wi-Fi device, a monitoring tool specific to the ESP32, and a set of test programs for the ESP32. The tool did not reveal any potential vulnerabilities.
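The core of the semi-valid input generation described above can be sketched as a mutation step: keep enough of a valid frame intact that the parser accepts it, then randomize a few bytes after that point. This is a simplified illustration only; real 802.11 frame structure and the ESP32 harness are out of scope:

```python
import random

def mutate_frame(frame, keep=4, flips=2, rng=random):
    """Semi-valid mutation: preserve the first `keep` header bytes so the
    receiver's parser accepts the frame, randomize a few bytes after them."""
    out = bytearray(frame)
    for _ in range(flips):
        i = rng.randrange(keep, len(out))
        out[i] = rng.randrange(256)
    return bytes(out)

rng = random.Random(1)
seed_frame = bytes(range(16))  # stand-in for a captured valid frame
cases = [mutate_frame(seed_frame, rng=rng) for _ in range(100)]
```

Each test case would then be transmitted to the target while a monitor watches for crashes or hangs.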
APA, Harvard, Vancouver, ISO, and other styles
13

Hosein, Anesa. "Students' approaches to mathematical tasks using software as a black-box, glass-box or open-box." Thesis, Open University, 2009. http://oro.open.ac.uk/22482/.

Full text
Abstract:
Three mathematical software modes are investigated in this thesis: black-box software showing no mathematical steps; glass-box software showing the intermediate mathematical steps; and open-box software showing and allowing interaction at the intermediate mathematical steps. The glass-box and open-box software modes are often recommended over the black-box software to help understanding, but there is limited research comparing all three. This research investigated students' performance and their approaches to solving three mathematical task types when assigned to the software boxes. Three approaches that students may undertake when solving the tasks were investigated: students' processing levels, their software exploration and their self-explanations. The effect of mathematics confidence on students' approaches and performance was also considered. Thirty-eight students were randomly assigned to one of the software boxes in an experimental design where all audio and video data were collected via a web-conference remote observation method. The students were asked to think aloud whilst they solved three task types. The three task types were classified based on the level of conceptual and procedural knowledge needed for solving: mechanical tasks required procedural knowledge; interpretive tasks required conceptual knowledge; and constructive tasks required both conceptual and procedural knowledge. The results indicated that the relationship between students' approaches and performance varied with the software box. Students using the black-box software explored more for the constructive tasks than the students in the glass-box and open-box software. These black-box software students also performed better on the constructive tasks, particularly those with higher mathematics confidence. The open-box software appeared to encourage more mathematical explanations whilst the glass-box software encouraged more real-life explanations.
Mathematically confident students were best able to appropriate the black-box software for their conceptual understanding. The glass-box or open-box software appeared to be useful for helping students with procedural understanding and familiarity with mathematical terms.
APA, Harvard, Vancouver, ISO, and other styles
14

Mena, Roldán José. "Modelling Uncertainty in Black-box Classification Systems." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670763.

Full text
Abstract:
Currently, thanks to the Big Data boom, the excellent results obtained by deep learning models and the strong digital transformation experienced over the last years, many companies have decided to incorporate machine learning models into their systems. Some companies have detected this opportunity and are making a portfolio of artificial intelligence services available to third parties in the form of application programming interfaces (APIs). Developers then include calls to these APIs to incorporate AI functionalities in their products. Although this option saves time and resources, in most cases these APIs are offered as black boxes whose details are unknown to their clients. The complexity of such products typically leads to a lack of control over, and knowledge of, the internal components, which in turn can lead to potential uncontrolled risks. Therefore, it is necessary to develop methods capable of evaluating the performance of these black boxes when applied to a specific application. In this work, we present a robust uncertainty-based method for evaluating the performance of both probabilistic and categorical classification black-box models, in particular APIs, that enriches the predictions obtained with an uncertainty score. This uncertainty score enables the detection of inputs with very confident but erroneous predictions, while protecting against out-of-distribution data points when deploying the model in a production setting. In the first part of the thesis, we develop a thorough revision of the concept of uncertainty, focusing on the uncertainty of classification systems. We review the existing related literature, describing the different approaches for modelling this uncertainty, its application to different use cases and some of its desirable properties. Next, we introduce the proposed method for modelling uncertainty in black-box settings.
Moreover, in the last chapters of the thesis, we showcase the method applied to different domains, including NLP and computer vision problems. Finally, we include two real-life applications of the method: classification of overqualification in job descriptions and readability assessment of texts.
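One common way to turn a black-box API's probability vector into an uncertainty score (a generic illustration, not the specific estimator developed in the thesis) is predictive entropy:

```python
import math

def predictive_entropy(probs):
    """Uncertainty score for a black-box classifier's output distribution:
    0 for a one-hot (fully confident) prediction, log(K) for a uniform one."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = predictive_entropy([0.98, 0.01, 0.01])   # low uncertainty
uncertain = predictive_entropy([1 / 3, 1 / 3, 1 / 3])  # maximal for K = 3
```

High-entropy predictions can then be routed to a fallback (e.g., human review) instead of being trusted blindly.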
APA, Harvard, Vancouver, ISO, and other styles
15

Kell, Stephen Roger. "Black-box composition of mismatched software components." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Anthony, Tim. "On the Topic of Unconstrained Black-Box Optimization with Application to Pre-Hospital Care in Sweden : Unconstrained Black-Box Optimization." Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185718.

Full text
Abstract:
In this thesis, the theory and application of black-box optimization methods are explored. More specifically, we looked at two families of algorithms: descent methods and response surface methods (closely related to trust region methods). We also looked at possibilities for using a dimension reduction technique called active subspace, which utilizes sampled gradients. This dimension reduction technique can make the descent methods more suitable for high-dimensional problems, and it turned out to be most effective when the data have a ridge-like structure. Finally, the optimization methods were used on a real-world problem in the context of pre-hospital care, where the objective is to minimize the ambulance response times in the municipality of Umea by changing the positions of the ambulances. Before applying the methods to the real-world ambulance problem, a simulation study was performed on synthetic data, aiming at finding the strengths and weaknesses of the different models when applied to different test functions at different levels of noise. The results showed that we could improve the ambulance response times across several different performance metrics compared to the response times of the current ambulance positions. This indicates that there exist adjustments that can benefit the pre-hospital care in the municipality of Umea. However, since the models in this thesis find local rather than global optima, there might still exist even better ambulance positions that could improve the response times further.
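The simplest member of the derivative-free descent family discussed above is random local search, which only ever queries the black-box objective. A minimal sketch with an invented toy objective (not the ambulance model):

```python
import random

def random_local_search(f, x0, step=0.5, iters=500, seed=0):
    """Derivative-free descent: propose a random perturbation and keep it
    only if the (black-box) objective improves."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:  # accept only improvements
            x, fx = cand, fc
    return x, fx

# Toy stand-in for a "response time" objective, minimized at (1, -2).
obj = lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2
best, val = random_local_search(obj, [5.0, 5.0])
```

Response surface methods improve on this by fitting a local model of f and optimizing the model instead of sampling blindly.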
APA, Harvard, Vancouver, ISO, and other styles
17

Bruns, Morgan Chase. "Propagation of Imprecise Probabilities through Black Box Models." Thesis, Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10553.

Full text
Abstract:
From the decision-based design perspective, decision making is the critical element of the design process. All practical decision making occurs under some degree of uncertainty. Subjective expected utility theory is a well-established method for decision making under uncertainty; however, it assumes that the decision maker (DM) can express his or her beliefs as precise probability distributions. For many reasons, both practical and theoretical, it can be beneficial to relax this assumption of precision. One possible means for avoiding this assumption is the use of imprecise probabilities. Imprecise probabilities are more expressive of uncertainty than precise probabilities, but they are also more computationally cumbersome. Probability Bounds Analysis (PBA) is a compromise between the expressivity of imprecise probabilities and the computational ease of modeling beliefs with precise probabilities. In order for PBA to be implemented in engineering design, it is necessary to develop appropriate computational methods for propagating probability boxes (p-boxes) through black box engineering models. This thesis examines the range of applicability of current methods for p-box propagation and proposes three alternative methods. These methods are applied to the solution of three successively complex numerical examples.
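The propagation problem can be illustrated in its simplest form: bounding a black-box model's output over an interval-valued input by dense sampling, which is exact at the endpoints for monotone models. Full PBA propagates bounds on whole CDFs rather than plain intervals; this sketch only conveys the flavor:

```python
def propagate_interval(model, lo, hi, samples=1000):
    """Bound a black-box model's output over an input interval by dense
    sampling (exact for monotone models, where the endpoints suffice)."""
    xs = [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
    ys = [model(x) for x in xs]
    return min(ys), max(ys)

# Non-monotone example: x^2 on [-1, 2] attains its minimum inside the interval.
out_lo, out_hi = propagate_interval(lambda x: x * x, -1.0, 2.0)
```

A p-box generalizes this: instead of one interval, every probability level of the input CDF carries an interval, and each must be pushed through the model.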
APA, Harvard, Vancouver, ISO, and other styles
18

Yalcinkaya, Sukru. "Black Box Groups And Related Group Theoretic Constructions." Phd thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608546/index.pdf.

Full text
Abstract:
The present thesis aims to develop an analogy between the methods for recognizing a black box group and the classification of the finite simple groups. We propose a uniform approach for recognizing simple groups of Lie type which can be viewed as the computational version of the classification of the finite simple groups. Similar to the inductive argument on centralizers of involutions which plays a crucial role in the classification project, our approach is based on a recursive construction of the centralizers of involutions in black box groups. We present an algorithm which constructs a long root SL_2(q)-subgroup in a finite simple group of Lie type of odd characteristic p extended possibly by a p-group. Following this construction, we take Aschbacher's "Classical Involution Theorem" as a model in the final recognition algorithm and propose an algorithm which constructs all root SL_2(q)-subgroups corresponding to the nodes in the extended Dynkin diagram; that is, our approach is the construction of the extended Curtis-Phan-Tits presentation of the finite simple groups of Lie type of odd characteristic, which further yields the construction of all subsystem subgroups that can be read from the extended Dynkin diagram. In this thesis, we present this algorithm for the groups PSL_n(q) and PSU_n(q). We also present an algorithm which determines whether the p-core (or "unipotent radical") O_p(G) of a black box group G is trivial or not, where G/O_p(G) is a finite simple classical group of Lie type of odd characteristic p, answering a well-known question of Babai and Shalev. The algorithms presented in this thesis have been implemented extensively in the computer algebra system GAP.
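A standard elementary step in black-box group algorithms of this kind is producing an involution: if a random element g has even order 2k, then g^k is an involution. A sketch using a small permutation group as a stand-in for the black-box oracle (the group, seed, and helper names are illustrative choices, not the thesis's code):

```python
import itertools
import random

def element_order(g, identity, mul):
    """Order of g via repeated multiplication (fine for tiny groups)."""
    x, n = g, 1
    while x != identity:
        x = mul(x, g)
        n += 1
    return n

def find_involution(random_element, identity, mul, tries=100, seed=7):
    """If a random g has even order 2k, then g^k is an involution."""
    rng = random.Random(seed)
    for _ in range(tries):
        g = random_element(rng)
        n = element_order(g, identity, mul)
        if n % 2 == 0:
            t, k = g, n // 2
            for _ in range(k - 1):
                t = mul(t, g)  # t = g^k
            return t
    return None

# "Black-box" access to S_4: opaque elements plus a multiplication oracle.
PERMS = list(itertools.permutations(range(4)))
IDENT = tuple(range(4))
mul = lambda p, q: tuple(p[q[i]] for i in range(4))
inv = find_involution(lambda rng: rng.choice(PERMS), IDENT, mul)
```

In a genuine black-box group, computing the order itself needs care (one only has an exponent bound), which is part of what makes these algorithms nontrivial.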
APA, Harvard, Vancouver, ISO, and other styles
19

Fellenius, Gustaf. "Huset som pussel : En black box vid Slussen." Thesis, KTH, Arkitektur, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-30589.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Hörmann, Wolfgang, and Josef Leydold. "Black-Box Algorithms for Sampling from Continuous Distributions." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2006. http://epub.wu.ac.at/1042/1/document.pdf.

Full text
Abstract:
For generating non-uniform random variates, black-box algorithms are powerful tools that allow drawing samples from large classes of distributions. We give an overview of the design principles of such methods and show that they have advantages compared to specialized algorithms even for standard distributions, e.g., the marginal generation times are fast and depend mainly on the chosen method and not on the distribution. Moreover these methods are suitable for specialized tasks like sampling from truncated distributions and variance reduction techniques. We also present a library called UNU.RAN that provides an interface to a portable implementation of such methods. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
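The simplest black-box sampler in this spirit is rejection sampling, which needs only point evaluations of a bounded density. It is far cruder than the automatic methods implemented in UNU.RAN, but it shows the interface such algorithms assume:

```python
import random

def rejection_sample(pdf, lo, hi, pdf_max, n, seed=42):
    """Black-box sampler: draws from any bounded density on [lo, hi]
    using only point evaluations of a (possibly unnormalized) pdf."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)
        if rng.uniform(0, pdf_max) <= pdf(x):  # accept with prob pdf(x)/pdf_max
            out.append(x)
    return out

# Triangle density on [0, 1]: pdf(x) = 2x, maximum 2; true mean is 2/3.
xs = rejection_sample(lambda x: 2 * x, 0.0, 1.0, 2.0, 5000)
mean = sum(xs) / len(xs)
```

Black-box generators like those in UNU.RAN replace the flat envelope with automatically constructed hats, which is what makes their marginal generation times nearly distribution-independent.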
APA, Harvard, Vancouver, ISO, and other styles
21

Aydal, Emine Gokce. "Model Based Robustness Testing of Black box Systems." Thesis, University of York, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.507672.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kamp, Michael [Verfasser]. "Black-Box Parallelization for Machine Learning / Michael Kamp." Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1200020057/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

ROSSIGNOLI, DOMENICO. "DEMOCRACY, INSTITUTIONS AND GROWTH: EXPLORING THE BLACK BOX." Doctoral thesis, Università Cattolica del Sacro Cuore, 2013. http://hdl.handle.net/10280/1870.

Full text
Abstract:
Economic and political science literature shows a wide consensus about the positive effect of property rights, contract-enforcing arrangements and, more generally, economic institutions on long-run growth. Conversely, the linkage between democracy and growth remains unclear and is not conclusively supported by empirical research. This work attempts to reconcile the stylized facts about democracy and growth, which evidence a long-run "synergic success" between the two, with the theoretical and empirical literature. After thoroughly surveying the relevant literature on the topic, this study claims that the effect of democracy on long-run growth is indirect, channelled by means of institutions. To test this hypothesis, the thesis provides an original analytical framework applied to a panel of 194 countries over the period 1961-2010, adopting a System-GMM estimation technique and a wide range of robustness controls. The results suggest that democracy is positively related to "better" (namely more growth-enhancing) institutions, especially economic institutions and the rule of law. Hence, the findings suggest that the overall effect on growth is positive, indirect and channelled by institutions. However, since the results are not completely conclusive, further investigation is suggested into additional determinants of democracy that potentially affect its pro-growth effect.
APA, Harvard, Vancouver, ISO, and other styles
24

Estrada, Vargas Ana Paula, and Vargas Ana Paula Estrada. "Black-Box identification of automated discrete event systems." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00846194.

Full text
Abstract:
This thesis deals with the identification of automated discrete event systems (DES) operating in an industrial context. In particular, the work focuses on systems composed of a plant and a programmable logic controller (PLC) operating in a closed loop; the identification consists in obtaining an approximate model, expressed in interpreted Petri nets (IPN), from the observed behaviour given in the form of a single sequence of input-output vectors of the PLC. First, an overview of previous works on identification of DES is presented, as well as a comparative study of the main recent approaches on the matter. Then the addressed problem is stated; important technological characteristics of automated systems and PLCs are detailed. Such characteristics must be considered in solving the identification problem, but they cannot be handled by previous identification techniques. The main contribution of this thesis is the creation of two complementary identification methods. The first method allows constructing systematically an IPN model from a single input-output sequence representing the observable behaviour of the DES. The obtained IPN models describe in detail the evolution of inputs and outputs during the system operation. The second method has been conceived for addressing large and complex industrial DES; it is based on a statistical approach yielding compact and expressive IPN models. It consists of two stages: the first one obtains, from the input-output sequence, the reactive part of the model, composed of observable places and transitions; the second stage builds the non-observable part of the model, including places that ensure the reproduction of the observed input-output sequence. The proposed methods, based on polynomial-time algorithms, have been implemented in software tools, which have been tested with input-output sequences obtained from real systems in operation. The tools are described and their application is illustrated through two case studies.
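The raw material of such identification can be sketched in a few lines: scan the single input-output sequence and collect the distinct transitions it exhibits. The trace and its encoding below are invented toy data; constructing an actual IPN model from these observed transitions is the hard part the thesis addresses:

```python
def identify_transitions(io_sequence):
    """Collect the distinct transitions observed in a single sequence of
    input-output vectors: the starting point for building a model."""
    transitions = set()
    for prev, nxt in zip(io_sequence, io_sequence[1:]):
        if prev != nxt:  # only record actual changes of the IO vector
            transitions.add((prev, nxt))
    return transitions

# Toy observed PLC trace: (input bits, output bits) at each scan cycle.
trace = [("00", "0"), ("10", "0"), ("10", "1"), ("00", "1"), ("00", "0")]
model = identify_transitions(trace)
```

Each observed transition would become a candidate transition of the identified Petri net, with places added afterwards to reproduce the sequence.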
APA, Harvard, Vancouver, ISO, and other styles
25

Amin, Gaurav Shirish. "Investing in hedge funds : analysing the 'black box'." Thesis, University of Reading, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Liem, Rhea Patricia. "Surrogate modeling for large-scale black-box systems." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41559.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 105-110).
This research introduces a systematic method to reduce the complexity of large-scale black-box systems for which the governing equations are unavailable. For such systems, surrogate models are critical for many applications, such as Monte Carlo simulations; however, existing surrogate modeling methods often are not applicable, particularly when the dimension of the input space is very high. In this research, we develop a systematic approach to represent the high-dimensional input space of a large-scale system by a smaller set of inputs. This collection of representatives is called a multi-agent collective, forming a surrogate model with which an inexpensive computation replaces the original complex task. The mathematical criteria used to derive the collective aim to avoid overlapping of characteristics between representatives, in order to achieve an effective surrogate model and avoid redundancies. The surrogate modeling method is demonstrated on a flight inventory that contains flight data corresponding to 82 aircraft types. Ten aircraft types are selected by the method to represent the full flight inventory for the computation of fuel burn estimates, yielding an error between outputs from the surrogate and full models of just 2.08%. The ten representative aircraft types are selected by first aggregating similar aircraft types together into agents, and then selecting a representative aircraft type for each agent. In assessing the similarity between aircraft types, the characteristic of each aircraft type is determined from available flight data instead of solving the fuel burn computation model, which makes the assessment procedure inexpensive. Aggregation criteria are specified to quantify the similarity between aircraft types and a stringency, which controls the tradeoff between the two competing objectives in the modeling: the number of representatives and the estimation error. The surrogate modeling results are compared to a model obtained via manual aggregation; that is, the aggregation of aircraft types is done based on engineering judgment. The surrogate model derived using the systematic approach yields fewer representatives in the collective, giving a surrogate model with lower computational cost while achieving better accuracy. Further, the systematic approach eliminates the subjectivity that is inherent in the manual aggregation method. The surrogate model is also applied to other flight inventories, yielding errors of similar magnitude to the case when the reference flight inventory is considered.
by Rhea Patricia Liem.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
27

Verì, Daniele. "Empirical Model Learning for Constrained Black Box Optimization." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25704/.

Full text
Abstract:
Black box optimization is a field of global optimization comprising a family of methods intended to minimize or maximize an objective function that does not allow the exploitation of gradient, linearity or convexity information. Besides that, the objective is often a problem that requires a significant amount of time/resources to query a point, and thus the goal is to get as close as possible to the optimum in as few iterations as possible. Empirical Model Learning is a methodology for merging Machine Learning and optimization techniques like Constraint Programming and Mixed Integer Linear Programming by extracting decision models from the data. This work aims to close the gap between Empirical Model Learning optimization and Black Box optimization methods (which have a strong literature) via active learning. At each iteration of the optimization loop a ML model is fitted on the data points and embedded in a prescriptive model using the EML. The encoded model is then enriched with domain-specific constraints and optimized, selecting the next point to query and add to the collection of samples.
APA, Harvard, Vancouver, ISO, and other styles
28

Gatumu, Michael. "Directors' values: A glimpse into the black box." Thesis, Australian Catholic University, 2015. https://acuresearchbank.acu.edu.au/download/1ab8d120b663dfd716ad9dd067cc9e57af5933184c005c97a9265748740ccfdb/5008733/201610_Michael_Gatumu.pdf.

Full text
Abstract:
Boards have an important role in contemporary society. In fact, in some cases boards govern firms that are wealthier and more powerful than the governments of the countries in which they operate. In recent years, the impact on society of the failures of some large corporations has led governments to prescribe the role of directors. However, by virtue of context, personality or values, no degree of prescription will overcome the fact that there will always be many shades of grey. In other words, different directors and boards may come to different decisions about seemingly similar matters due to their personal values. In spite of this, little is known about the behaviour of directors or the values of the directors that govern companies. Their decisions are made behind closed doors, or in the black box. Therefore, a greater understanding of directors’ values and behaviour is argued to be worthy of research and is the focus of this thesis.
APA, Harvard, Vancouver, ISO, and other styles
29

Rowan, Adriaan. "Unravelling black box machine learning methods using biplots." Master's thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/31124.

Full text
Abstract:
Following the development of new mathematical techniques, the improvement of computer processing power and the increased availability of possible explanatory variables, the financial services industry is moving toward the use of new machine learning methods, such as neural networks, and away from older methods such as generalised linear models. However, their use is currently limited because they are seen as "black box" models, which give predictions without justifications and which are therefore not understood and cannot be trusted. The goal of this dissertation is to expand on the theory and use of biplots to visualise the impact of the various input factors on the output of the machine learning black box. Biplots are used because they give an optimal two-dimensional representation of the data set on which the machine learning model is based. The biplot allows every point on the biplot plane to be converted back to the original dimensions – in the same format as is used by the machine learning model. This allows the output of the model to be represented by colour coding each point on the biplot plane according to the output of an independently calibrated machine learning model. The interaction of the changing prediction probabilities – represented by the coloured output – in relation to the data points and the variable axes and category level points represented on the biplot, allows the machine learning model to be globally and locally interpreted. By visualising the models and their predictions, this dissertation aims to remove the stigma of calling non-linear models "black box" models and encourage their wider application in the financial services industry.
APA, Harvard, Vancouver, ISO, and other styles
30

Estrada, Vargas Ana Paula. "Black-Box identification of automated discrete event systems." Thesis, Cachan, Ecole normale supérieure, 2013. http://www.theses.fr/2013DENS0006/document.

Full text
Abstract:
This thesis deals with the identification of automated discrete event systems (DES) operating in an industrial context. In particular, the work focuses on systems composed of a plant and a programmable logic controller (PLC) operating in a closed loop: the identification consists in obtaining an approximate model, expressed in interpreted Petri nets (IPN), from the observed behaviour given in the form of a single sequence of input-output vectors of the PLC. First, an overview of previous works on identification of DES is presented, as well as a comparative study of the main recent approaches on the matter. Then the addressed problem is stated: important technological characteristics of automated systems and PLC are detailed. Such characteristics must be considered in solving the identification problem, but they cannot be handled by previous identification techniques. The main contribution of this thesis is the creation of two complementary identification methods. The first method allows constructing systematically an IPN model from a single input-output sequence representing the observable behaviour of the DES. The obtained IPN models describe in detail the evolution of inputs and outputs during the system operation. The second method has been conceived for addressing large and complex industrial DES: it is based on a statistical approach yielding compact and expressive IPN models. It consists of two stages: the first one obtains, from the input-output sequence, the reactive part of the model, composed of observable places and transitions. The second stage builds the non-observable part of the model, including places that ensure the reproduction of the observed input-output sequence. The proposed methods, based on polynomial-time algorithms, have been implemented in software tools, which have been tested with input-output sequences obtained from real systems in operation. The tools are described and their application is illustrated through two case studies.
APA, Harvard, Vancouver, ISO, and other styles
31

Carter, Brandon M. "Interpreting black-box models through sufficient input subsets." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123008.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 73-77).
Recent progress in machine learning has come at the cost of interpretability, earning the field a reputation of producing opaque, "black-box" models. While deep neural networks are often able to achieve superior predictive accuracy over traditional models, the functions and representations they learn are usually highly nonlinear and difficult to interpret. This lack of interpretability hinders adoption of deep learning methods in fields such as medicine where understanding why a model made a decision is crucial. Existing techniques for explaining the decisions by black-box models are often restricted to either a specific type of predictor or are undesirably sensitive to factors unrelated to the model's decision-making process. In this thesis, we propose sufficient input subsets, minimal subsets of input features whose values form the basis for a model's decision. Our technique can rationalize decisions made by a black-box function on individual inputs and can also explain the basis for misclassifications. Moreover, general principles that globally govern a model's decision-making can be revealed by searching for clusters of such input patterns across many data points. Our approach is conceptually straightforward, entirely model-agnostic, simply implemented using instance-wise backward selection, and able to produce more concise rationales than existing techniques. We demonstrate the utility of our interpretation method on various neural network models trained on text, genomic, and image data.
by Brandon M. Carter.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
32

Söderberg, John. "Black-box modeling of a semi-active motorcycle damper." Thesis, KTH, Reglerteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-57722.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Grosfils, Aline. "First principles and black box modelling of biological systems." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210677.

Full text
Abstract:
Living cells and their components play a key role within biotechnology industry. Cell cultures and their products of interest are used for the design of vaccines as well as in the agro-alimentary field. In order to ensure optimal working of such bioprocesses, the understanding of the complex mechanisms which rule them is fundamental. Mathematical models may be helpful to grasp the biological phenomena which intervene in a bioprocess. Moreover, they allow prediction of system behaviour and are frequently used within engineering tools to ensure, for instance, product quality and reproducibility.

Mathematical models of cell cultures may come in various shapes and be phrased with varying degrees of mathematical formalism. Typically, three main model classes are available to describe the nonlinear dynamic behaviour of such biological systems. They consist of macroscopic models which only describe the main phenomena appearing in a culture. Indeed, a high model complexity may lead to long numerical computation time incompatible with engineering tools like software sensors or controllers. The first model class is composed of the first principles or white box models. They consist of the system of mass balances for the main species (biomass, substrates, and products of interest) involved in a reaction scheme, i.e. a set of irreversible reactions which represent the main biological phenomena occurring in the considered culture. Whereas transport phenomena inside and outside the cell culture are often well known, the reaction scheme and associated kinetics are usually a priori unknown, and require special care for their modelling and identification. The second kind of commonly used models belongs to black box modelling. Black boxes consider the system to be modelled in terms of its input and output characteristics. They consist of mathematical function combinations which do not allow any physical interpretation. They are usually used when no a priori information about the system is available. Finally, hybrid or grey box modelling combines the principles of white and black box models. Typically, a hybrid model uses the available prior knowledge while the reaction scheme and/or the kinetics are replaced by a black box, an Artificial Neural Network for instance.

Among these numerous models, which one has to be used to obtain the best possible representation of a bioprocess? We attempt to answer this question in the first part of this work. On the basis of two simulated bioprocesses and a real experimental one, two model kinds are analysed. First principles models whose reaction scheme and kinetics can be determined thanks to systematic procedures are compared with hybrid model structures where neural networks are used to describe the kinetics or the whole reaction term (i.e. kinetics and reaction scheme). The most common artificial neural networks, the MultiLayer Perceptron and the Radial Basis Function network, are tested. In this work, pure black box modelling is however not considered. Indeed, numerous papers already compare different neural networks with hybrid models. The results of these previous studies converge to the same conclusion: hybrid models, which combine the available prior knowledge with the neural network nonlinear mapping capabilities, provide better results.

From this model comparison and the fact that a physical kinetic model structure may be viewed as a combination of basis functions, much like a neural network, kinetic model structures allowing biological interpretation should be preferred. This is why the second part of this work is dedicated to the improvement of the general kinetic model structure used in the previous study. Indeed, in spite of its good performance (largely due to the associated systematic identification procedure), this kinetic model, which represents activation and/or inhibition effects by every culture component, suffers from some limitations: it does not explicitly address saturation by a culture component. The structure models this kind of behaviour by an inhibition which compensates a strong activation. Note that the generalization of this kinetic model is a challenging task, as physical interpretation has to be improved while a systematic identification procedure has to be maintained.

The last part of this work is devoted to another kind of biological system: proteins. Such macromolecules, which are essential parts of all living organisms and consist of combinations of only 20 different basis molecules called amino acids, are widely used in industry. In order to allow their functioning in non-physiological conditions, industry is open to modifying protein amino acid sequences. However, substitutions of one amino acid by another involve thermodynamic stability changes which may lead to the loss of the protein's biological functionality. Among several theoretical methods predicting stability changes caused by mutations, the PoPMuSiC (Prediction Of Proteins Mutations Stability Changes) program has been developed within the Genomic and Structural Bioinformatics Group of the Université Libre de Bruxelles. This software makes it possible to predict, in silico, changes in thermodynamic stability of a given protein under all possible single-site mutations, either in the whole sequence or in a region specified by the user. However, PoPMuSiC suffers from limitations and should be improved thanks to recently developed techniques of protein stability evaluation, such as the statistical mean force potentials of Dehouck et al. (2006). Our work proposes to enhance the performance of PoPMuSiC by combining the new energy functions of Dehouck et al. (2006) with the well-known artificial neural networks, the MultiLayer Perceptron and the Radial Basis Function network. This time, we attempt to obtain physically interpretable models through an appropriate use of the neural networks.


Doctorat en sciences appliquées
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
34

Zaiser, Stefan Sebastian [Verfasser]. "Mengenbasierte Black-Box-Identifikation linearer Systeme / Stefan Sebastian Zaiser." Ulm : Universität Ulm, 2017. http://d-nb.info/1132713129/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Yates, James W. T. "Black box and mechanistic modelling of electronic nose systems." Thesis, University of Warwick, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Sinha, Aradhana. "Scalable black-box model explainability through low-dimensional visualizations." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113109.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 39-40).
Two methods are proposed to provide visual intuitive explanations for how black-box models work. The first is a projection pursuit-based method that seeks to provide data-point specific explanations. The second is a generalized additive model approach that seeks to explain the model on a more holistic level, enabling users to visualize the contributions across all features at once. Both models incorporate visual and interactive elements designed to create an intuitive understanding of both the logic and limits of the model. Both explanation systems are designed to scale well to large datasets with many data points and many features.
by Aradhana Sinha.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
37

Baran, Ilya 1981. "Adaptive algorithms for problems involving black-box Lipschitz functions." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17934.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 61-62).
Suppose we are given a black-box evaluator (an oracle that returns the function value at a given point) for a Lipschitz function with a known Lipschitz constant. We consider queries that can be answered about the function by using a finite number of black-box evaluations. Specifically, we study the problems of approximating a Lipschitz function, approximately integrating a Lipschitz function, approximately minimizing a Lipschitz function, and computing the winding number of a Lipschitz curve in R² around a point. The goal is to minimize the number of evaluations used for answering a query. Because the complexity of the problem instances varies widely, depending on the actual function, we wish to design adaptive algorithms whose performance is close to the best possible on every problem instance. We give optimally adaptive algorithms for winding number computation and univariate approximation and integration. We also give a near-optimal adaptive algorithm for univariate approximation when the output of function evaluations is corrupted by random noise. For optimization over higher dimensional domains, we prove that good adaptive algorithms are impossible.
by Ilya Baran.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
38

Kugelberg, Ingrid. "Black-Box Modeling and Attitude Control of a Quadcopter." Thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-125649.

Full text
Abstract:
In this thesis, black-box models describing the quadcopter system dynamics for attitude control have been estimated using closed-loop data. A quadcopter is a naturally unstable multiple input multiple output (MIMO) system and is therefore an interesting platform to test and evaluate ideas in system identification and control theory on. The estimated attitude models have been shown to explain the output signals well enough during simulations to properly tune a PID controller for outdoor flight purposes. With data collected in closed loop during outdoor flights, knowledge about the controller and IMU measurements, three decoupled models have been estimated for the angles and angular rates in roll, pitch and yaw. The models for roll and pitch have been forced to have the same model structure and orders since this reflects the geometry of the quadcopter. The models have been validated by simulating the closed-loop system where they could explain the output signals well. The estimated models have then been used to design attitude controllers to stabilize the quadcopter around the hovering state. Three PID controllers have been implemented on the quadcopter and evaluated in simulation before being tested during both indoor and outdoor flights. The controllers have been shown to stabilize the quadcopter with good reference tracking. However, the performance of the pitch controller could be improved further as there have been small oscillations present that may indicate a stronger correlation between the roll and pitch channels than assumed.
APA, Harvard, Vancouver, ISO, and other styles
39

Pósch, Krisztián. "Procedural justice theory and the black box of causality." Thesis, London School of Economics and Political Science (University of London), 2018. http://etheses.lse.ac.uk/3805/.

Full text
Abstract:
This thesis makes a theoretical and a methodological contribution. Theoretically, it tests certain predictions of procedural justice policing, which posits that neutral, fair, and respectful treatment by the police is the cornerstone of fruitful police-public relations, in that procedural justice leads to increased police legitimacy, and that legitimacy engenders societally desirable outcomes, such as citizens’ willingness to cooperate with the police and compliance with the law. Methodologically, it identifies and assesses causal mechanisms using a family of methods developed mostly in the field of epidemiology: causal mediation analysis. The theoretical and methodological aspects of this thesis converge in the investigation of (1) the extent to which procedural justice mediates the impact of contact with the police on police legitimacy and psychological processes (Paper 1), (2) the mediating role of police legitimacy on willingness to cooperate with the police and compliance with the law (Paper 3, Paper 4), and (3) the psychological drivers that channel the impact of procedural justice on police and legal legitimacy (Paper 2). This thesis makes use of a randomised controlled trial (Scottish Community Engagement Trial), four randomised experiments, and one experiment with parallel (encouragement) design on crowdsourced samples from the US and the UK (recruited through Amazon Turk and Prolific Academic). The causal evidence attests to the centrality of procedural justice, which mediates the impact of an encounter with the police on police legitimacy, and influences psychological processes and police legitimacy. Personal sense of power, not social identity, is the causal mediator of the effect of procedural justice on police and legal legitimacy. Finally, different aspects of legitimacy transmit the influence of procedural justice on distinct outcomes, with duty to obey affecting legal compliance and normative alignment affecting willingness to cooperate. 
In sum, most of the causal evidence is congruent with the theory of procedural justice.
APA, Harvard, Vancouver, ISO, and other styles
40

Sayed, Shereef. "Black-Box Fuzzing of the REDHAWK Software Communications Architecture." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/54566.

Full text
Abstract:
As the complexity of software increases, so does the complexity of software testing. This challenge is especially true for modern military communications as radio functionality becomes more digital than analog. The Software Communications Architecture was introduced to manage the increased complexity of software radios. But the challenge of testing software radios still remains. A common methodology of software testing is the unit test. However, unit testing of software assumes that the software under test can be decomposed into its fundamental units of work. The intention of such decomposition is to simplify the problem of identifying the set of test cases needed to demonstrate correct behavior. In practice, large software efforts can rarely be decomposed in simple and obvious ways. In this paper, we introduce the fuzzing methodology of software testing as it applies to software radios. Fuzzing is a methodology that acts only on the inputs of a system and iteratively generates new test cases in order to identify points of failure in the system under test. The REDHAWK implementation of the Software Communications Architecture is employed as the system under test by a fuzzing framework called Peach. Fuzz testing of REDHAWK identified a software bug within the Core Framework, along with a systemic flaw that leaves the system in an invalid state and open to malicious use. It is recommended that a form of Fault Detection be integrated into REDHAWK for collocated processes at a minimum, and distributed processes at best, in order to provide a more fault tolerant system.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
41

Arnedo, Luis. "System Level Black-Box Models for DC-DC Converters." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/29193.

Full text
Abstract:
The aim of this work is to develop a two-port black-box dc-dc converter modeling methodology for system-level simulation and analysis. The models do not require any information about the components, structure, or control parameters of the converter. Instead, all the information needed to build the models is collected from unterminated experimental frequency response function (FRF) measurements performed at the converter power terminals. These transfer functions are known as audiosusceptibility, back current gain, output impedance, and input admittance. The measurements are called unterminated because they do not contain any information about the source and/or the load dynamics. This work provides insights into how the source and the load affect FRF measurements and how to decouple those effects from the measurements. The actual linear time-invariant model is obtained from the experimental FRFs via system identification. Because the two-port model obtained from a set of FRFs is linear, it will be valid in a specific operating region defined by the converter operating conditions. Therefore, to satisfy the need for models valid in a wide operating region, a model structure that combines a family of linear two-port models is proposed. One structure, known as the Wiener structure, is especially useful when the converter nonlinearities are reflected mainly in the steady-state current and voltage values. The other structure is known as a polytopic structure, and it is able to capture nonlinearities that affect the transient and steady-state converter behavior. The models are used for prediction of steady-state and transient behavior of voltages and currents at the converter terminals. In addition, the models are useful for subsystem interaction and small-signal stability assessment of interconnected dc distribution systems comprising commercially available converters.
This work presents for the first time simulation and stability analysis results of a system that combines dc-dc converters from two different manufacturers. All simulation results are compared against experimental results to verify the usefulness of the approach.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
42

Rees, Glyn Owen. "Efficient "black-box" multigrid solvers for convection-dominated problems." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/efficient-blackbox-multigrid-solvers-for-convectiondominated-problems(d49ec3ea-1dc2-4238-b0c1-0688e5944ddd).html.

Full text
Abstract:
The main objective of this project is to develop a "black-box" multigrid preconditioner for the iterative solution of finite element discretisations of the convection-diffusion equation with dominant convection. This equation can be considered a stand-alone scalar problem or part of a more complex system of partial differential equations, such as the Navier-Stokes equations. The project will focus on the stand-alone scalar problem. Multigrid is considered an optimal preconditioner for scalar elliptic problems. This strategy can also be used for convection-diffusion problems; however, an appropriate robust smoother needs to be developed to achieve mesh-independent convergence. The focus of the thesis is on the development of such a smoother. In this context a novel smoother is developed, referred to as the truncated incomplete factorisation (tILU) smoother. In terms of computational complexity and memory requirements, the smoother is considerably less expensive than the standard ILU(0) smoother. At the same time, it exhibits the same robustness as ILU(0) with respect to the problem and discretisation parameters. The new smoother significantly outperforms the standard damped Jacobi smoother and is a competitor to the Gauss-Seidel smoother (and in a number of important cases tILU outperforms the Gauss-Seidel smoother). The new smoother depends on a single parameter (the truncation ratio). The project obtains a default value for this parameter and demonstrates the robust performance of the smoother on a broad range of problems. Therefore, the new smoothing method can be regarded as "black-box". Furthermore, the new smoother does not require any particular ordering of the nodes, which is a prerequisite for many robust smoothers developed for convection-dominated convection-diffusion problems.
To test the effectiveness of the preconditioning methodology, we consider a number of model problems (in both 2D and 3D) including uniform and complex (recirculating) convection fields discretised by uniform, stretched and adaptively refined grids. The new multigrid preconditioner within block preconditioning of the Navier-Stokes equations was also tested. The numerical results gained during the investigation confirm that tILU is a scalable, robust smoother for both geometric and algebraic multigrid. Also, comprehensive tests show that the tILU smoother is a competitive method.
APA, Harvard, Vancouver, ISO, and other styles
43

Klarner, Patricia, Gilbert Probst, and Michael Useem. "Opening the black box: Unpacking board involvement in innovation." Sage, 2019. http://dx.doi.org/10.1177/1476127019839321.

Full text
Abstract:
Corporate governance research suggests that boards of directors play key roles in governing company strategy. Although qualitative research has examined board-management relationships to describe board involvement in strategy, we lack detailed insights into how directors engage with organizational members for governing a complex and long-term issue such as product innovation. Our multiple-case study of four listed pharmaceutical firms reveals a sequential process of board involvement: Directors with deep expertise govern scientific innovation, followed by the full board's involvement in its strategic aspects. The nature of director involvement varies across board levels in terms of the direction (proactive or reactive), timing (regular or spontaneous), and the extent of formality of exchanges between directors and organizational members. Our study contributes to corporate governance research by introducing the concept of board behavioral diversity and by theorizing about the multilevel, structural, and temporal dimensions of board behavior and its relational characteristics.
APA, Harvard, Vancouver, ISO, and other styles
44

Ait, Elhara Ouassim. "Stochastic Black-Box Optimization and Benchmarking in Large Dimensions." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS211/document.

Full text
Abstract:
Because of the generally high computational costs that come with large-scale problems, all the more so with real-world problems, the use of benchmarks is a common practice in algorithm design, algorithm tuning and algorithm choice/evaluation. The question is then what forms these real-world problems take. Answering it is generally hard, due to the variety of these problems and the tediousness of describing each of them. Instead, one can investigate the difficulties commonly encountered when solving continuous optimization problems. Once the difficulties are identified, one can construct relevant benchmark functions that reproduce them and allow assessing the ability of algorithms to solve them. In the case of large-scale benchmarking, it would be natural and convenient to build on the work already done in smaller dimensions and extend it to larger ones. When doing so, we must take the added constraints of a large-scale scenario into account: any part of the benchmark that has to be replaced or adapted for large scales should reproduce, as much as possible, the effects and properties of the original, so that the new benchmarks remain relevant. Ideally, large-scale benchmark functions keep most of the properties of their smaller-dimensional counterparts while remaining reasonably cheap to evaluate. It is common to classify problems, and thus benchmarks, according to the difficulties they present and the properties they possess. In a black-box scenario such information (difficulties, properties, ...) is supposed to be unknown to the algorithm; in a benchmarking setting, however, this classification becomes important, as it makes it possible to identify and understand the shortcomings of a method, and thus to improve it or switch to a more efficient one (provided one makes sure the algorithms do not exploit this knowledge when solving the problems). Hence the importance of identifying the difficulties and properties of the problems in a benchmarking suite and, in our case, preserving them.
Another question that arises particularly when dealing with large-scale problems is the relevance of the decision variables. In a small-dimension problem, it is common for all variables to contribute a fair amount to the fitness value of a solution or, at least, for all variables to need to be optimized in order to reach high-quality solutions. This is not always the case at large scales: with the increasing number of variables, some of them become redundant, or groups of variables can be replaced with smaller groups, since it is increasingly difficult to find a minimalistic representation of a problem. Such a minimalistic representation is sometimes not even desired, for example when it makes the resulting problem more complex and the trade-off with the increase in the number of variables is unfavorable, or when larger numbers of variables and different representations of the same features within the same problem allow a better exploration. This encourages the design of both algorithms and benchmarks for this class of problems, especially if such algorithms can take advantage of the low effective dimensionality of the problems or, in a complete black-box scenario, cheaply test for a low effective dimension and optimize under that assumption. In this thesis, we address three questions that generally arise in stochastic continuous black-box optimization and benchmarking in high dimensions: 1. How to design a cheap and yet efficient step-size adaptation mechanism for evolution strategies? 2. How to construct and generalize low-effective-dimension problems? 3. How to extend a low/medium-dimension benchmark to large dimensions while remaining computationally reasonable and non-trivial, and while preserving the properties of the original problem?
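The second question concerns problems whose fitness depends on only a few directions of the search space. One common way to build such a benchmark function, sketched here as an assumption rather than the construction used in the thesis, is to embed a low-dimensional test function into a high-dimensional space via a random orthonormal projection:

```python
import numpy as np

def low_effective_dim(f, d_eff, big_d, seed=0):
    """Wrap a d_eff-dimensional function f into a big_d-dimensional one
    whose value depends only on a random d_eff-dimensional subspace."""
    rng = np.random.default_rng(seed)
    # orthonormal basis of a random d_eff-dimensional subspace of R^big_d
    basis, _ = np.linalg.qr(rng.standard_normal((big_d, d_eff)))
    return lambda x: f(basis.T @ np.asarray(x, dtype=float))

sphere = lambda z: float(z @ z)
g = low_effective_dim(sphere, d_eff=2, big_d=40)
# g takes 40-dimensional inputs but has effective dimension 2: moving x
# within the 38-dimensional orthogonal complement leaves g unchanged
```

A benchmark built this way preserves the difficulties of the underlying low-dimensional function while letting algorithms be tested on whether they can exploit the low effective dimensionality.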
APA, Harvard, Vancouver, ISO, and other styles
45

Belkhir, Nacim. "Per Instance Algorithm Configuration for Continuous Black Box Optimization." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS455/document.

Full text
Abstract:
This PhD thesis focuses on automated algorithm configuration, which aims at finding the best parameter setting for a given problem or class of problems. The algorithm configuration problem thus amounts to a meta-optimization problem in the space of parameters, whose meta-objective is the performance measure of the given algorithm with a given parameter configuration. However, in the continuous domain, such a method can only be empirically assessed at the cost of running the algorithm on some problem instances. More recent approaches rely on a description of problems in some feature space and try to learn a mapping from this feature space onto the space of parameter configurations of the algorithm at hand. Along these lines, this PhD thesis focuses on Per Instance Algorithm Configuration (PIAC) for solving continuous black-box optimization problems, where only a limited budget of function evaluations is available. We first survey evolutionary algorithms for continuous optimization, with a focus on the two algorithms that we have used as target algorithms for PIAC, DE and CMA-ES. Next, we review the state of the art of algorithm configuration approaches and the different features that have been proposed in the literature to describe continuous black-box optimization problems. We then introduce a general methodology to empirically study PIAC in the continuous domain, so that all the components of PIAC can be explored in real-world conditions. To this end, we also introduce a new continuous black-box test bench, distinct from the famous BBOB benchmark, composed of several multi-dimensional test functions with different problem properties, gathered from the literature. The methodology is finally applied to two EAs. First, we use Differential Evolution as the target algorithm and explore all the components of PIAC, so as to empirically assess the best ones. 
Second, based on the results on DE, we empirically investigate PIAC with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as the target algorithm. Both use cases empirically validate the proposed methodology on the new black-box test bench for dimensions up to 100.
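In its simplest form, the per-instance idea maps the features of a new problem instance to the parameter configuration that worked best on the most similar known instance. A 1-nearest-neighbour baseline (an illustration only; the feature names and parameter values below are hypothetical, and the thesis learns more elaborate empirical performance models) might look like:

```python
import numpy as np

def piac_predict(train_features, train_configs, features):
    """Return the parameter configuration of the training instance
    closest to `features` in feature space (1-NN baseline)."""
    dists = np.linalg.norm(np.asarray(train_features, dtype=float)
                           - np.asarray(features, dtype=float), axis=1)
    return train_configs[int(np.argmin(dists))]

# hypothetical instance features and tuned DE parameters (F, CR)
feats = [[0.1, 0.9], [5.0, 4.2]]
configs = [{"F": 0.5, "CR": 0.9}, {"F": 0.9, "CR": 0.1}]
best = piac_predict(feats, configs, [0.3, 1.1])  # nearest to the first instance
```

The quality of such a mapping depends entirely on how well the chosen features separate problem instances, which is why the feature sets surveyed in the thesis matter.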
APA, Harvard, Vancouver, ISO, and other styles
46

Conley, Natasha. "BARRIERS AND FACILITATORS OF GROWTH IN BLACK ENTREPRENEURIAL VENTURES: THINKING OUTSIDE THE BLACK BOX." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1522882124350055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Joel, Viklund. "Explaining the output of a black box model and a white box model: an illustrative comparison." Thesis, Uppsala universitet, Filosofiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-420889.

Full text
Abstract:
The thesis investigates how one should determine the appropriate transparency of an information processing system from a receiver perspective. Past research has suggested that a model should be maximally transparent for what are labeled "high-stakes decisions". Instead of motivating the choice of a model's transparency by the non-rigorous criterion that the model contributes to a high-stakes decision, this thesis explores an alternative method: let the transparency depend on how well an explanation of the model's output satisfies the purpose of an explanation. As a result, we do not have to ask whether a decision is high-stakes; we should instead make sure the model is sufficiently transparent to provide an explanation that satisfies the expressed purpose of an explanation.
APA, Harvard, Vancouver, ISO, and other styles
48

Borgan, Tom-Rune. "Condition Monitoring : Based on "black-box"- and First Principal Models." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-12610.

Full text
Abstract:
Summary and conclusions: This master's thesis investigates Principal Component Analysis (PCA) methods used in the field of Early Fault and Disturbance Detection (EFDD). Statoil and ABB have, in collaboration, developed an application for EFDD that, among other things, consists of PCA methods. The application is used to run condition monitoring on oil and gas processes and is a software prototype currently being tested in Statoil. To make the method more robust and convincing to the operators, it is desirable to improve the application. This thesis focuses on the Principal Component Analysis (PCA) method and some extensions to it based on Model Based PCA (MBPCA). PCA in its simplest form has some severe restrictions, as it assumes linear and stationary data. The motivation is therefore to see how PCA and extensions based on MBPCA and Nonlinear PCA (NLPCA) methods perform when used on nonlinear data. For PCA to describe a process adequately, a certain amount of data is required, yet in industry the process is often short of instrumentation. A further motivation is therefore to investigate whether missing instrumentation can be replaced by estimates, thereby improving the PCA analysis. Another issue concerning instrumentation is the use of virtual tags: mathematical functions based on already available measurements. The idea builds on process insight; if we know the process well and understand the cause of its nonlinearities, additional nonlinear functions can be incorporated to increase the performance of the PCA method. To verify the factors mentioned above, some of the methods use data from a heat exchanger and a centrifugal pump process, while others use only one of the processes. The conclusions from the work are as follows. Based on the simulations performed, it is evident that using MBPCA does improve the PCA method for fault detection, even if the model does not correspond exactly to the real process. 
When it comes to using virtual tags, the simulations on the centrifugal pump increased the performance of the PCA method. NLPCA, here based on autoassociative neural networks, did not perform as well as MBPCA, but the method is harder to tune and it would therefore be wrong to brush it aside. Using estimates for missing measurements gave only a small improvement.
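The PCA-based detection described above can be illustrated with the squared prediction error (SPE) statistic: fit principal components on normal-operation data, then flag samples whose residual outside the retained subspace is large. A minimal sketch, assuming plain linear PCA (the thesis's EFDD application, alarm thresholds and MBPCA/NLPCA extensions are not reproduced here):

```python
import numpy as np

def fit_spe_monitor(X_train, n_components):
    """Fit PCA on normal-operation samples (rows of X_train) and return
    a function computing the squared prediction error of a new sample."""
    mu = X_train.mean(axis=0)
    # principal directions from the SVD of the centred training data
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    P = Vt[:n_components].T                    # retained loading vectors
    def spe(x):
        r = (x - mu) - P @ (P.T @ (x - mu))    # residual off the PCA subspace
        return float(r @ r)
    return spe

# normal operation: two correlated "sensors", a third that stays flat
X = np.array([[t, t, 0.0] for t in (-2.0, -1.0, 0.0, 1.0, 2.0)])
spe = fit_spe_monitor(X, n_components=1)
s_ok = spe(np.array([3.0, 3.0, 0.0]))    # consistent with the model: SPE near 0
s_bad = spe(np.array([0.0, 0.0, 3.0]))   # off-model behaviour: large SPE
```

In a deployed monitor an alarm threshold for the SPE would be calibrated on the training data; model-based and nonlinear variants replace or augment the linear subspace model above.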
APA, Harvard, Vancouver, ISO, and other styles
49

Martyn, Karen. "Decision-making in a corporate boardroom: Inside the black box." Massey University, 2006. http://hdl.handle.net/10179/986.

Full text
Abstract:
The lack of empirical studies on board process represents a serious knowledge gap in the governance literature. To date there has been little research on how boards actually make decisions, the factors that contribute to effective board decision-making, and what tools and techniques may be used to improve board decision-making. Effective board processes are identified as leading to effective board outputs, and subsequently to more effective organisational outcomes. This study explored the internal factors under the control of the board (or those selecting board members) that contribute to effective board decision-making processes. The perspective of small-group decision-making research was applied to explore board decision-making processes. The three aims of the study were to investigate those factors that directors thought contributed to their board's successful and unsuccessful decision-making; to observe how a board actually makes decisions; and to determine whether training in, and usage of, a normative decision-making methodology (including the use of a reminder role) might improve that board's decision-making process. Data collection included direct, in situ observation of a board; semi-structured interviews with all board directors, the CEO and four executive team members; three surveys; and emotional intelligence testing (MSCEIT). The board was found to use normative decision-making procedures, which appeared to contribute to better decision-making processes and consequently better decision-making outputs. The task intent of acting in the best interest of the company and the relationship intent of trust were found to permeate the board inputs and processes examined during this research. Other input and process variables observed to influence board decision-making were classified as task factors (structure, process, communication) and/or relational factors (relationships, director attributes and emotions). 
Task factors included rational decision-making procedures; clarity of goals and roles; use of external advisors as critical evaluators; quantity and quality of information; consensus decision-making; and post-decision evaluation and learning. Relational factors included homogeneity of directors through careful selection; socialising with management; board norms of a safe environment, supporting the doubtful director and the obligation to share contrary views; adequate business knowledge; emotional intelligence; and commitment. The results of emotional intelligence testing revealed levels sufficient to assist in positive board dynamics. The study results support the application of small-group decision-making research to board process research, and further empirical exploration of board inputs using psychometric measures.
APA, Harvard, Vancouver, ISO, and other styles
50

Brus, Linda. "Recursive black-box identification of nonlinear state-space ODE models." Licentiate thesis, Uppsala : Department of Information Technology, Uppsala University, 2006. http://www.it.uu.se/research/publications/lic/2006-001/.

Full text
APA, Harvard, Vancouver, ISO, and other styles