Academic literature on the topic 'Black Box trees'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Black Box trees.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Black Box trees"

1

Veugen, Thijs, Bart Kamphorst, and Michiel Marcus. "Privacy-Preserving Contrastive Explanations with Local Foil Trees." Cryptography 6, no. 4 (October 28, 2022): 54. http://dx.doi.org/10.3390/cryptography6040054.

Abstract:
We present the first algorithm that combines privacy-preserving technologies and state-of-the-art explainable AI to enable privacy-friendly explanations of black-box AI models. We provide a secure algorithm for contrastive explanations of black-box machine learning models that securely trains and uses local foil trees. Our work shows that the quality of these explanations can be upheld whilst ensuring the privacy of both the training data and the model itself.
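For readers who want to experiment with the underlying idea, the following is a minimal, non-private sketch of a local foil tree: a small decision tree trained on perturbed neighbours of an instance to separate the predicted "fact" class from a user-chosen "foil" class. It is illustrative only (the paper's secure multi-party computation layer is not shown), assumes scikit-learn is available, and uses placeholder data and feature names.

```python
# Minimal, non-private sketch of the foil-tree idea (not the paper's secure code).
# Assumes scikit-learn; data, feature names, and the perturbation scheme are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           n_classes=3, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]
fact = black_box.predict(x0.reshape(1, -1))[0]   # class the black box predicts
foil = (fact + 1) % 3                            # contrast class chosen by the user

# Label a local neighbourhood of x0 as "foil vs. not foil" using the black box.
rng = np.random.default_rng(0)
neighbours = x0 + 0.5 * rng.normal(size=(500, X.shape[1]))
is_foil = (black_box.predict(neighbours) == foil).astype(int)

# The path of the local tree separating fact from foil yields the contrastive explanation.
foil_tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(neighbours, is_foil)
print(export_text(foil_tree, feature_names=[f"feature_{i}" for i in range(6)]))
```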
2

Stone, C., and PE Bacon. "Influence of Insect Herbivory on the Decline of Black Box (Eucalyptus largiflorens)." Australian Journal of Botany 43, no. 6 (1995): 555. http://dx.doi.org/10.1071/bt9950555.

Abstract:
The contribution of insect herbivory to the canopy decline of Eucalyptus largiflorens F.Muell. (black box) was assessed on nine irrigated properties around Deniliquin in southern central New South Wales. Fully expanded leaves less than 1 year old were sampled from 36 mature trees in June 1993 and again in June 1994 after half the trees had been treated with a systemic insecticide in November 1993. Insect herbivory in treated trees fell significantly from 27 to 9%. It also fell, but to a lesser extent (28-19%, P < 0.05), in the untreated trees. The fall in insect herbivory in control trees corresponded to a decrease in rainfall in 1994 when the rainfall was 50% of that for 1993. There was a significant linear relationship between insect herbivory and trunk diameter increment in the untreated trees. There was no consistent relationship between insect herbivory and the visual assessment of crown condition. Although E. largiflorens is described as having both narrow adult and juvenile foliage, adjacent trees in this study differed significantly in their leaf length:breadth ratios. Canopies with a dominance of broader foliage had significantly higher levels of herbivory. Individual trees tended to replace foliage with leaves of similar morphology. It is suggested that this variation in leaf shape may be genetic rather than environmental. If so, landholders could select for trees with narrower foliage which may result in reduced impact of insect herbivory.
3

McTavish, Hayden, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques Chen, Cynthia Rudin, and Margo Seltzer. "Fast Sparse Decision Tree Optimization via Reference Ensembles." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9604–13. http://dx.doi.org/10.1609/aaai.v36i9.21194.

Abstract:
Sparse decision tree optimization has been one of the most fundamental problems in AI since its inception and is a challenge at the core of interpretable machine learning. Sparse decision tree optimization is computationally hard, and despite steady effort since the 1960's, breakthroughs have been made on the problem only within the past few years, primarily on the problem of finding optimal sparse decision trees. However, current state-of-the-art algorithms often require impractical amounts of computation time and memory to find optimal or near-optimal trees for some real-world datasets, particularly those having several continuous-valued features. Given that the search spaces of these decision tree optimization problems are massive, can we practically hope to find a sparse decision tree that competes in accuracy with a black box machine learning model? We address this problem via smart guessing strategies that can be applied to any optimal branch-and-bound-based decision tree algorithm. The guesses come from knowledge gleaned from black box models. We show that by using these guesses, we can reduce the run time by multiple orders of magnitude while providing bounds on how far the resulting trees can deviate from the black box's accuracy and expressive power. Our approach enables guesses about how to bin continuous features, the size of the tree, and lower bounds on the error for the optimal decision tree. Our experiments show that in many cases we can rapidly construct sparse decision trees that match the accuracy of black box models. To summarize: when you are having trouble optimizing, just guess.
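To make the "guessing" step concrete, here is a rough, hedged sketch: split thresholds harvested from a boosted reference ensemble are used to binarize continuous features before a small tree is fit. A plain CART stands in for the paper's branch-and-bound optimal-tree solver, all data are synthetic placeholders, and this is not the authors' implementation.

```python
# Illustrative sketch of binning continuous features with "guesses" from a black-box
# reference ensemble, then fitting a small tree. Assumes scikit-learn; not the paper's code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
ref = GradientBoostingClassifier(n_estimators=50, max_depth=2,
                                 random_state=0).fit(X, y)

# Collect the thresholds the ensemble actually used, per feature.
thresholds = {j: set() for j in range(X.shape[1])}
for est in ref.estimators_.ravel():
    tree = est.tree_
    for f, t in zip(tree.feature, tree.threshold):
        if f >= 0:                        # negative feature indices mark leaf nodes
            thresholds[int(f)].add(float(t))

# Binarize each feature against its guessed thresholds.
cols = [(X[:, j] <= t).astype(int)
        for j in range(X.shape[1]) for t in sorted(thresholds[j])]
X_bin = np.column_stack(cols)

sparse_tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_bin, y)
print("reference ensemble accuracy:", ref.score(X, y))
print("sparse tree accuracy:", sparse_tree.score(X_bin, y))
```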
4

Welchowski, Thomas, Kelly O. Maloney, Richard Mitchell, and Matthias Schmid. "Techniques to Improve Ecological Interpretability of Black-Box Machine Learning Models." Journal of Agricultural, Biological and Environmental Statistics 27, no. 1 (October 28, 2021): 175–97. http://dx.doi.org/10.1007/s13253-021-00479-7.

Abstract:
Statistical modeling of ecological data is often faced with a large number of variables as well as possible nonlinear relationships and higher-order interaction effects. Gradient boosted trees (GBT) have been successful in addressing these issues and have shown a good predictive performance in modeling nonlinear relationships, in particular in classification settings with a categorical response variable. They also tend to be robust against outliers. However, their black-box nature makes it difficult to interpret these models. We introduce several recently developed statistical tools to the environmental research community in order to advance interpretation of these black-box models. To analyze the properties of the tools, we applied gradient boosted trees to investigate biological health of streams within the contiguous USA, as measured by a benthic macroinvertebrate biotic index. Based on these data and a simulation study, we demonstrate the advantages and limitations of partial dependence plots (PDP), individual conditional expectation (ICE) curves and accumulated local effects (ALE) in their ability to identify covariate–response relationships. Additionally, interaction effects were quantified according to interaction strength (IAS) and Friedman’s H² statistic. Interpretable machine learning techniques are useful tools to open the black-box of gradient boosted trees in the environmental sciences. This finding is supported by our case study on the effect of impervious surface on the benthic condition, which agrees with previous results in the literature. Overall, the most important variables were ecoregion, bed stability, watershed area, riparian vegetation and catchment slope. These variables were also present in most identified interaction effects. In conclusion, graphical tools (PDP, ICE, ALE) enable visualization and easier interpretation of GBT but should be supported by analytical statistical measures. Future methodological research is needed to investigate the properties of interaction tests. Supplementary materials accompanying this paper appear on-line.
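For readers who want to try these diagnostics themselves, the short sketch below draws partial dependence and ICE curves for a gradient-boosted model with scikit-learn's built-in tools (ALE requires a separate package and is omitted). It is a generic illustration on synthetic data, not the stream-health analysis from the paper.

```python
# Illustrative sketch: PDP and ICE curves for a gradient-boosted "black box".
# Assumes scikit-learn >= 1.0 and matplotlib; data and feature indices are placeholders.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=500, n_features=5, noise=0.3, random_state=0)
gbt = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays individual ICE curves on the averaged partial dependence.
PartialDependenceDisplay.from_estimator(gbt, X, features=[0, 1], kind="both")
plt.tight_layout()
plt.show()
```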
5

Barbosa, Pedro, Astrid Caldas, and Gaden Robinson. "Host Plant Associations among Species in Two Macrolepidopteran Assemblages." Journal of Entomological Science 38, no. 1 (January 1, 2003): 41–47. http://dx.doi.org/10.18474/0749-8004-38.1.41.

Abstract:
Host plant associations of macrolepidopteran species in assemblages on box elder, Acer negundo L., and black willow, Salix nigra (Marsh), were characterized. Almost 90% of the macrolepidoptera collected on these two riparian tree species of the mid-Atlantic area of the United States were new host records. Larvae of 87 species (and another nine specimens identified to genus) were collected on box elder and black willow. About one-fifth of the species were found exclusively on box elder, one-third exclusively on black willow, and about one-half of the macrolepidopteran species were found on both tree species. Although many macrolepidoptera were found on both tree species, they were not equally abundant on both trees, suggesting a predominantly favored tree species. However, there was no statistically significant asymmetry in host tree species use.
6

Wagers, Steven, Guillermo Castilla, Michelle Filiatrault, and G. Arturo Sanchez-Azofeifa. "Using TLS-Measured Tree Attributes to Estimate Aboveground Biomass in Small Black Spruce Trees." Forests 12, no. 11 (November 4, 2021): 1521. http://dx.doi.org/10.3390/f12111521.

Abstract:
Research Highlights: This study advances the effort to accurately estimate the biomass of trees in peatlands, which cover 13% of Canada’s land surface. Background and Objectives: Trees remove carbon from the atmosphere and store it as biomass. Terrestrial laser scanning (TLS) has become a useful tool for modelling forest structure and estimating the above ground biomass (AGB) of trees. Allometric equations are often used to estimate individual tree AGB as a function of height and diameter at breast height (DBH), but these variables can often be laborious to measure using traditional methods. The main objective of this study was to develop allometric equations using TLS-measured variables and compare their accuracy with that of other widely used equations that rely on DBH. Materials and Methods: The study focusses on small black spruce trees (<5 m) located in peatland ecosystems of the Taiga Plains Ecozone in the Northwest Territories, Canada. Black spruce growing in peatlands are often stunted when compared to upland black spruce and having models specific to them would allow for more precise biomass estimates. One hundred small trees were destructively sampled from 10 plots and the dry weight of each tree was measured in the lab. With this reference data, we fitted biomass models specific to peatland black spruce using DBH, crown diameter, crown area, height, tree volume, and bounding box volume as predictors. Results: Our best models had crown size and height as predictors and outperformed established AGB equations that rely on DBH. Conclusions: Our equations are based on predictors that can be measured from above, and therefore they may enable the plotless creation of accurate biomass reference data for a prominent tree species in a common ecosystem (treed peatlands) in North America’s boreal.
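The kind of allometric fit described can be prototyped in a few lines; the sketch below fits a power-law model AGB = a · crownD^b · height^c with SciPy. The numbers are invented placeholders, not the study's reference data, so the fitted coefficients are purely illustrative.

```python
# Illustrative sketch (not the study's models): fitting a simple allometric biomass
# equation AGB = a * crown_diameter^b * height^c to destructively sampled trees.
# Assumes NumPy and SciPy; all measurements below are made-up placeholders.
import numpy as np
from scipy.optimize import curve_fit

def allometric(X, a, b, c):
    crown_d, height = X
    return a * crown_d**b * height**c

crown_d = np.array([0.4, 0.6, 0.8, 1.0, 1.3, 1.6])   # m, e.g. measured from TLS
height  = np.array([1.1, 1.8, 2.4, 3.0, 3.8, 4.6])   # m, e.g. measured from TLS
agb     = np.array([0.3, 0.9, 1.8, 3.1, 5.6, 9.2])   # kg, lab dry weight

params, _ = curve_fit(allometric, (crown_d, height), agb, p0=(1.0, 1.0, 1.0))
a, b, c = params
print(f"AGB ~ {a:.2f} * crownD^{b:.2f} * height^{c:.2f}")
```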
7

Shahpouri, Saeid, Armin Norouzi, Christopher Hayduk, Reza Rezaei, Mahdi Shahbakhti, and Charles Robert Koch. "Hybrid Machine Learning Approaches and a Systematic Model Selection Process for Predicting Soot Emissions in Compression Ignition Engines." Energies 14, no. 23 (November 24, 2021): 7865. http://dx.doi.org/10.3390/en14237865.

Abstract:
The standards for emissions from diesel engines are becoming more stringent and accurate emission modeling is crucial in order to control the engine to meet these standards. Soot emissions are formed through a complex process and are challenging to model. A comprehensive analysis of diesel engine soot emissions modeling for control applications is presented in this paper. Physical, black-box, and gray-box models are developed for soot emissions prediction. Additionally, different feature sets based on the least absolute shrinkage and selection operator (LASSO) feature selection method and physical knowledge are examined to develop computationally efficient soot models with good precision. The physical model is a virtual engine modeled in GT-Power software that is parameterized using a portion of experimental data. Different machine learning methods, including Regression Tree (RT), Ensemble of Regression Trees (ERT), Support Vector Machines (SVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Bayesian Neural Network (BNN) are used to develop the black-box models. The gray-box models include a combination of the physical and black-box models. A total of five feature sets and eight different machine learning methods are tested. An analysis of the accuracy, training time and test time of the models is performed using the K-means clustering algorithm. It provides a systematic way for categorizing the feature sets and methods based on their performance and selecting the best method for a specific application. According to the analysis, the black-box model consisting of GPR and feature selection by LASSO shows the best performance with test R2 of 0.96. The best gray-box model consists of SVM-based method with physical insight feature set along with LASSO for feature selection with test R2 of 0.97.
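As a rough illustration of the best-performing black-box combination mentioned above (LASSO feature selection feeding a Gaussian process regressor), here is a minimal scikit-learn sketch on synthetic data. It stands in for, and is far simpler than, the paper's engine-data and GT-Power pipeline; all column indices and hyperparameters are placeholders.

```python
# Illustrative sketch: LASSO-based feature selection followed by a Gaussian process
# regression black-box model. Assumes scikit-learn; data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=400, n_features=20, n_informative=6,
                       noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_tr)
lasso = LassoCV(cv=5).fit(scaler.transform(X_tr), y_tr)
keep = np.flatnonzero(lasso.coef_)            # features LASSO did not shrink to zero

gpr = GaussianProcessRegressor(normalize_y=True)
gpr.fit(scaler.transform(X_tr)[:, keep], y_tr)
print("selected features:", keep)
print("test R^2:", round(gpr.score(scaler.transform(X_te)[:, keep], y_te), 3))
```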
8

Wongvibulsin, Shannon, Katherine C. Wu, and Scott L. Zeger. "Improving Clinical Translation of Machine Learning Approaches Through Clinician-Tailored Visual Displays of Black Box Algorithms: Development and Validation." JMIR Medical Informatics 8, no. 6 (June 9, 2020): e15791. http://dx.doi.org/10.2196/15791.

Abstract:
Background: Despite the promise of machine learning (ML) to inform individualized medical care, the clinical utility of ML in medicine has been limited by the minimal interpretability and black box nature of these algorithms. Objective: The study aimed to demonstrate a general and simple framework for generating clinically relevant and interpretable visualizations of black box predictions to aid in the clinical translation of ML. Methods: To obtain improved transparency of ML, simplified models and visual displays can be generated using common methods from clinical practice such as decision trees and effect plots. We illustrated the approach based on postprocessing of ML predictions, in this case random forest predictions, and applied the method to data from the Left Ventricular (LV) Structural Predictors of Sudden Cardiac Death (SCD) Registry for individualized risk prediction of SCD, a leading cause of death. Results: With the LV Structural Predictors of SCD Registry data, SCD risk predictions are obtained from a random forest algorithm that identifies the most important predictors, nonlinearities, and interactions among a large number of variables while naturally accounting for missing data. The black box predictions are postprocessed using classification and regression trees into a clinically relevant and interpretable visualization. The method also quantifies the relative importance of an individual or a combination of predictors. Several risk factors (heart failure hospitalization, cardiac magnetic resonance imaging indices, and serum concentration of systemic inflammation) can be clearly visualized as branch points of a decision tree to discriminate between low-, intermediate-, and high-risk patients. Conclusions: Through a clinically important example, we illustrate a general and simple approach to increase the clinical translation of ML through clinician-tailored visual displays of results from black box algorithms. We illustrate this general model-agnostic framework by applying it to SCD risk prediction. Although we illustrate the methods using SCD prediction with random forest, the methods presented are applicable more broadly to improving the clinical translation of ML, regardless of the specific ML algorithm or clinical application. As any trained predictive model can be summarized in this manner to a prespecified level of precision, we encourage the use of simplified visual displays as an adjunct to the complex predictive model. Overall, this framework can allow clinicians to peek inside the black box and develop a deeper understanding of the most important features from a model to gain trust in the predictions and confidence in applying them to clinical care.
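A bare-bones version of the post-processing step described (fitting a small CART surrogate to the black box's predicted risks rather than to the raw outcomes) looks roughly like the sketch below. It uses synthetic data and scikit-learn and is not the registry analysis or the authors' code.

```python
# Illustrative sketch: distilling black-box random-forest risk predictions into a small,
# readable decision tree. Assumes scikit-learn; data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Post-process: fit a shallow CART to the predicted risks, not the raw labels,
# so the small tree summarizes the black box itself.
risk = rf.predict_proba(X)[:, 1]
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, risk)
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```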
9

Fernando, Denise R., Jonathan P. Lynch, Meredith T. Hanlon, and Alan T. Marshall. "Foliar elemental microprobe data and leaf anatomical traits consistent with drought tolerance in Eucalyptus largiflorens (Myrtaceae)." Australian Journal of Botany 69, no. 4 (2021): 215. http://dx.doi.org/10.1071/bt20170.

Abstract:
In food-productive river basins, ecosystems reliant on natural flows are affected by climate change and water removal. One such example is Australia’s Murray–Darling Basin (MDB), to which the ecologically important black box tree Eucalyptus largiflorens (Myrtaceae) is unique. Little is known about its mineral nutrition and response to flooding. A field study conducted at Hattah Kulkyne National Park on the MDB examined nutrient and Al distribution in mature and young foliage of trees whose status varied with respect to the presence of surface floodwaters. Black box is also of interest due to emerging evidence of its capacity to accumulate high foliar salt concentrations. Here, cryo scanning electron microscopy (SEM) alone, combined with energy dispersive spectroscopy (SEM-EDS) and X-ray fluorescence (XRF) spectroscopy, were applied to evaluate leaf anatomy and elemental patterns at the cellular and whole-leaf levels. Variation in whole-leaf elemental levels across flooded and dry trees aligned with known nutritional fluctuations in this drought-tolerant species reliant on occasional infrequent flooding. The microprobe data provide evidence of drought tolerance by demonstrating that extended conditions of lack of water to trees do not elicit leaf anatomical changes nor changes to leaf cellular storage of these elements. Foliar Na concentrations of ~2000–6000 mg kg-1 DW were found co-localised with Cl in mesophyll and dermal cells of young and mature leaves, suggesting vacuolar salt disposal as a detoxification strategy.
10

Duryea, Mary, George Blakeslee, William Hubbard, and Ricardo Vasquez. "Wind and Trees: A Survey of Homeowners After Hurricane Andrew." Arboriculture & Urban Forestry 22, no. 1 (January 1, 1996): 44–50. http://dx.doi.org/10.48044/jauf.1996.006.

Abstract:
The destructive winds of Hurricane Andrew dramatically changed the urban forest in Dade County, Florida on August 24, 1992. Overnight, the tree canopy was replaced by a landscape of broken, uprooted, defoliated and severely damaged trees. To assist communities in reforestation efforts, scientists at the University of Florida conducted a homeowner survey to determine how different tree species responded to strong winds. Native tree species, such as box leaf stopper, sabal palm, gumbo limbo, and live oak were the best survivors of the winds. Other palms such as areca, cabada, and Alexander were also highly wind resistant. In general, fruit trees such as navel orange, mango, avocado and grapefruit were severely damaged. Black olive, live oak, and gumbo limbo trees that were pruned survived the hurricane better than unpruned trees. Only 18% of all the trees that fell caused property damage. Hurricane-susceptible communities should consider wind resistance as one of their criteria in tree species selection.

Dissertations / Theses on the topic "Black Box trees"

1

Saeed, Umar, and Ansur Mahmood Amjad. "ISTQB : Black Box testing Strategies used in Financial Industry for Functional testing." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3237.

Abstract:
Black box testing techniques are important to test the functionality of the system without knowing its inner detail which makes sure correct, consistent, complete and accurate behavior or function of a system. Black box testing strategies are used to test logical, data or behavioral dependencies, to generate test data and quality of test cases which have potential to guess more defects. Black box testing strategies play pivotal role to detect possible defects in system and can help in successful completion of system according to functionality. The studies of five companies regarding important black box testing strategies are presented in this thesis. This study explores the black box testing techniques which are present in literature and practiced in industry as well. Interview studies are conducted in companies of Pakistan providing solutions to finance industry, which is an attempt to find the usage of these techniques. The advantages and disadvantages of identified Black box testing strategies are discussed, along with it; the comparison of different techniques with respect to most defect guessing, dependencies, sophistication, effort, and cost is presented as well.
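As a tiny illustration of one such black-box technique (boundary value analysis), the sketch below tests a hypothetical transfer-limit rule at and around its boundaries with pytest. The rule, the limits, and the function name are all invented for the example and are not taken from the thesis.

```python
# Illustrative sketch: boundary value analysis, a common black-box testing strategy,
# applied to a hypothetical transfer-limit rule (transfers of 1..10000 are accepted).
# Assumes pytest; the business rule and names are made up.
import pytest

def accept_transfer(amount: int) -> bool:
    """Hypothetical business rule under test."""
    return 1 <= amount <= 10_000

# Exercise the boundaries and their neighbours rather than arbitrary interior values.
@pytest.mark.parametrize("amount,expected", [
    (0, False), (1, True), (2, True),
    (9_999, True), (10_000, True), (10_001, False),
])
def test_transfer_limits(amount, expected):
    assert accept_transfer(amount) is expected
```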
2

Kamal, Ahmad Waqas. "A Hierarchical Approach to Software Testing." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4889.

Abstract:
To produce high quality software, both software developers and testers need continuous improvement in their work methodologies and processes. So far, much work has been done on effective ways of eliciting and documenting requirements. However, an important aspect is to make sure that whatever is documented in specifications actually works correctly in the developed software. Software testing is done to ensure this. The aim of this thesis is to develop a software test case workflow strategy that helps in identification and selection of suitable test paths that can be used as an input to acceptance testing and as a pre-requisite to start actual testing of the system. This thesis focuses on organizing system test artifacts by closely specifying them with system requirements and use cases. In this perspective, the focus of this thesis is on requirement writing with use cases, requirements traceability, test case prioritization and application acceptance criteria. A structured way to design test cases is proposed with the help of use cases. Some work is done to trace user needs to system requirements and use cases, and the benefits of using a use case modeling approach in structuring the relationships among test cases are analyzed. As test cases are subject to change in the future, the challenges imposed by traceability among requirements, use cases and test cases are the main subjects of this work, along with the challenges faced by software testers in performing application acceptance testing. A green path scheme is proposed to help testers define application acceptance criteria, and a weight assignment approach is used to prioritize the test cases and to determine the percentage of the application running successfully.
3

Bensadon, Jérémy. "Applications de la théorie de l'information à l'apprentissage statistique." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS025/document.

Abstract:
We study two different topics, using insight from information theory in both cases: 1) Context Tree Weighting is a text compression algorithm that efficiently computes the Bayesian combination of all visible Markov models: we build a "context tree", with deeper nodes corresponding to more complex models, and the mixture is computed recursively, starting with the leaves. We extend this idea to a more general context, also encompassing density estimation and regression; and we investigate the benefits of replacing regular Bayesian inference with switch distributions, which put a prior on sequences of models instead of models. 2) Information Geometric Optimization (IGO) is a general framework for black box optimization that recovers several state-of-the-art algorithms, such as CMA-ES and xNES. The initial problem is transferred to a Riemannian manifold, yielding a parametrization-invariant first-order differential equation. However, since time is discretized in practice, this invariance only holds up to first order. We introduce the Geodesic IGO (GIGO) update, which uses this Riemannian manifold structure to define a fully parametrization-invariant algorithm. Thanks to Noether's theorem, we obtain a first-order differential equation satisfied by the geodesics of the statistical manifold of Gaussians, thus allowing us to compute the corresponding GIGO update. Finally, we show that while GIGO and xNES are different in general, it is possible to define a new "almost parametrization-invariant" algorithm, Blockwise GIGO, that recovers xNES from abstract principles.
4

Dubois, Amaury. "Optimisation et apprentissage de modèles biologiques : application à lirrigation [sic l'irrigation] de pomme de terre." Thesis, Littoral, 2020. http://www.theses.fr/2020DUNK0560.

Abstract:
The subject of this PhD concerns one of the LISIC themes: modelling and simulation of complex systems, as well as optimization and automatic learning for agronomy. The objectives of the thesis are to answer the questions of irrigation management of the potato crop and the development of decision support tools for farmers. The choice of this crop is motivated by its important share in the Hauts-de-France region. The manuscript is divided into 3 parts. The first part deals with continuous multimodal optimization in a black box context. This is followed by a presentation of a methodology for the automatic calibration of biological model parameters through reformulation into a black box multimodal optimization problem. The relevance of the use of inverse analysis as a methodology for automatic parameterisation of large models is then demonstrated. The second part presents 2 new algorithms, UCB Random with Decreasing Step-size and UCT Random with Decreasing Step-size. These algorithms are designed for continuous multimodal black-box optimization in which the choice of the position of the initial local search is assisted by a reinforcement learning algorithm. The results show that these algorithms have better performance than (Quasi) Random with Decreasing Step-size algorithms. Finally, the last part focuses on machine learning principles and methods. A reformulation of the problem of predicting soil water content at one-week intervals into a supervised learning problem has enabled the development of a new decision support tool to respond to the problem of crop management.
5

Harland, A. N. "Tracing local hydrology and water source use of Eucalyptus largiflorens on the Calperum Floodplain using strontium, oxygen and deuterium isotopes." Thesis, 2018. http://hdl.handle.net/2440/130626.

Abstract:
This item is only available electronically.
Black Box trees (Eucalyptus largiflorens) across the Murray-Darling Basin are in critical condition due to high groundwater salinity and infrequent natural flooding. Geochemical tracers such as radiogenic strontium (87Sr/86Sr), oxygen-18 (δ18O) and deuterium (δD) are considered useful in the understanding of catchment hydrology and plant water use, and in this study, 87Sr/86Sr, δ18O and δD isotopes were used accordingly to better comprehend local hydrology and water use behaviour patterns of Black Box trees on the Calperum Floodplain, South Australia. Investigations were achieved by sampling and analysing local surface waters (Lake Merreti, Lake Clover, and River Murray), groundwater, soils (1.5 m depth) and plant material (stem water, and leaves) from two separate sites, north (Site 1) and south (Site 4). Considering the local hydrology, Lake Clover was composed of evaporated rainwater, while Lake Merreti was a relative mix of both evaporated rainwater and river water. Additionally, local rainfall sources appeared to vary over time. Furthermore, groundwater showed no close relationship with rain water, suggesting an alternative recharge source such as river water or remnant paleo-water. In terms of water use, linear mixing models using soil 87Sr/86Sr, leaf 87Sr/86Sr and stem water δ18O inputs showed that Site 1 trees, on average, were predominantly using rainwater (77%, 77% & 67%), while Site 4 trees used both rainwater (16%, 32% & 42%) and saline groundwater (70%, 62% & 58%), regardless of nearby lakes and streams. These findings have implications for future monitoring, and the management of outer floodplain Black Box populations that are unable to receive natural flooding inundation.
Thesis (B.Sc.(Hons)) -- University of Adelaide, School of Physical Sciences, 2018
6

Salvaire, Pierre Antony Jean Marie. "Explaining the predictions of a boosted tree algorithm : application to credit scoring." Master's thesis, 2019. http://hdl.handle.net/10362/85991.

Abstract:
Dissertation report presented as partial requirement for obtaining the Master’s degree in Information Management, with a specialization in Business Intelligence and Knowledge Management
The main goal of this report is to contribute to the adoption of complex « Black Box » machine learning models in the field of credit scoring for retail credit. Although numerous investigations have been showing the potential benefits of using complex models, we identified the lack of interpretability as one of the main factors preventing a full and trustworthy adoption of these new modeling techniques. Intrinsically linked with recent data concerns such as individual rights for explanation, fairness (introduced in the GDPR) or model reliability, we believe that this kind of research is crucial for easing its adoption among credit risk practitioners. We build a standard Linear Scorecard model along with a more advanced algorithm called Extreme Gradient Boosting (XGBoost) on a retail credit open source dataset. The modeling scenario is a binary classification task consisting in identifying clients that will experience a 90-days-past-due delinquency state or worse. The interpretation of the Scorecard model is performed using the raw output of the algorithm, while more complex data perturbation techniques, namely Partial Dependence Plots and Shapley Additive Explanations, are computed for the XGBoost algorithm. As a result, we observe that the XGBoost algorithm is statistically more performant at distinguishing "bad" from "good" clients. Additionally, we show that the global interpretation of the XGBoost is not as accurate as the Scorecard algorithm. At an individual level, however (for each instance of the dataset), we show that the level of interpretability is very similar, as both are able to quantify the contribution of each variable to the predicted risk of a specific application.
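A stripped-down sketch of the XGBoost-plus-SHAP workflow the thesis describes is shown below, assuming the xgboost and shap packages are installed. The data are synthetic stand-ins for a credit portfolio, so the printed contributions are only illustrative and do not come from the thesis.

```python
# Illustrative sketch (not the thesis code): local SHAP explanations for an
# XGBoost credit-scoring model. Assumes xgboost and shap; data are placeholders.
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = xgboost.XGBClassifier(n_estimators=300, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)          # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X[:1])     # per-feature contributions for one client
print("base value:", explainer.expected_value)
print("per-feature contributions:", np.round(shap_values[0], 3))
```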

Book chapters on the topic "Black Box trees"

1

Greenwell, Brandon M. "Peeking inside the “black box”: post-hoc interpretability." In Tree-Based Methods for Statistical Learning in R, 203–28. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003089032-6.

2

Lampridis, Orestis, Riccardo Guidotti, and Salvatore Ruggieri. "Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars." In Discovery Science, 357–73. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61527-7_24.

Abstract:
We present xspells, a model-agnostic local approach for explaining the decisions of a black box model for sentiment classification of short texts. The explanations provided consist of a set of exemplar sentences and a set of counter-exemplar sentences. The former are examples classified by the black box with the same label as the text to explain. The latter are examples classified with a different label (a form of counter-factuals). Both are close in meaning to the text to explain, and both are meaningful sentences – albeit they are synthetically generated. xspells generates neighbors of the text to explain in a latent space using Variational Autoencoders for encoding text and decoding latent instances. A decision tree is learned from randomly generated neighbors, and used to drive the selection of the exemplars and counter-exemplars. We report experiments on two datasets showing that xspells outperforms the well-known lime method in terms of quality of explanations, fidelity, and usefulness, and that it is comparable to it in terms of stability.
3

"Operator Control Parameters and Fine Tuning of Genetic Algorithms (GAs)." In Advances in Computational Intelligence and Robotics, 115–27. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4105-0.ch007.

Abstract:
Genetic algorithms (GAs) are heuristic, blind (i.e., black box-based) search techniques. The internal working of GAs is complex and opaque to the general practitioner. GAs are a set of interconnected procedures that consist of complex interconnected activity among parameters. When a naive GA practitioner tries to implement GA code, the first question that comes to mind is what values the GA control parameters (i.e., crossover probability, mutation probability, population size, number of generations, etc.) should be set to in order to run the GA code. This chapter clears all the complexities about the internal interconnected working of GA control parameters. GA can have many variations in its implementation (i.e., mutation alone-based GA, crossover alone-based GA, GA with a combination of mutation and crossover, etc.). In this chapter, the authors discuss how variation in GA control parameter settings affects the solution quality.
4

Loute, Alain. "The “Pragmatist Turn” in Theory of Governance." In Ethical Governance of Emerging Technologies Development, 213–20. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-3670-5.ch014.

Abstract:
In this essay, the author focuses on what Jacques Lenoble and Marc Maesschalck call the “pragmatist turn” in the theory of governance. Speaking of pragmatist turn, they refer to recent work by a range of authors such as Charles Sabel, Joshua Cohen and Michael Dorf, who develop an experimental and pragmatist approach of democracy. The concept of “turn” may raise some perplexity. The author believes that we can speak of “turn” about these experimentalist theories because these theories introduce a key issue, what we may call the question of “self-capacitation of the actors.” The author tries to show that this issue constitutes a novelty compared to the deliberative paradigm in the theory of governance. While the issue of collective learning is a black box in the deliberative paradigm, democratic experimentalism seeks to reflect on how the actors can organize themselves to acquire new capacities and to learn new roles. The author concludes in revealing the limits of this approach.
5

Zou, Jinying, and Ovanes Petrosian. "Explainable AI: Using Shapley Value to Explain Complex Anomaly Detection ML-Based Systems." In Machine Learning and Artificial Intelligence. IOS Press, 2020. http://dx.doi.org/10.3233/faia200777.

Abstract:
Generally, Artificial Intelligence (AI) algorithms are unable to account for the logic of each decision they take during the course of arriving at a solution. This “black box” problem limits the usefulness of AI in military, medical, and financial security applications, among others, where the price for a mistake is great and the decision-maker must be able to monitor and understand each step along the process. In our research, we focus on the application of Explainable AI for log anomaly detection systems of a different kind. In particular, we use the Shapley value approach from cooperative game theory to explain the outcome or solution of two anomaly-detection algorithms: Decision tree and DeepLog. Both algorithms come from the machine learning-based log analysis toolkit for the automated anomaly detection “Loglizer”. The novelty of our research is that by using the Shapley value and special coding techniques we managed to evaluate or explain the contribution of both a single event and a grouped sequence of events of the Log for the purposes of anomaly detection. We explain how each event and sequence of events influences the solution, or the result, of an anomaly detection system.
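For intuition, the Shapley value used in the chapter can be computed exactly for very small "games" by averaging marginal contributions over all player orderings, as in this toy sketch. The characteristic function below is a made-up stand-in for an anomaly score over sets of log events, not the chapter's Loglizer pipeline.

```python
# Illustrative sketch: exact Shapley values for a tiny cooperative game, the quantity
# used to attribute an outcome to individual players (here, log events). Toy data only.
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all orderings."""
    shap = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = []
        for p in order:
            before = value(frozenset(coalition))
            coalition.append(p)
            shap[p] += value(frozenset(coalition)) - before
    return {p: s / len(orderings) for p, s in shap.items()}

# Toy characteristic function: the "anomaly score" of a set of log events.
score = {frozenset(): 0, frozenset("a"): 1, frozenset("b"): 2, frozenset("ab"): 5}
print(shapley_values(["a", "b"], lambda s: score[frozenset(s)]))
```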
6

Lorbiecki, Marybeth. "The Endangered Species and Youth: Keeping All the Cogs and Wheels." In A Fierce Green Fire. Oxford University Press, 2016. http://dx.doi.org/10.1093/oso/9780199965038.003.0025.

Abstract:
I have something to show you. I just got it by overnight delivery.” My student’s face was a blaze of eagerness. From his backpack, he pulled out a small box. Seconds later, magically cupped in his hands was a tiny, neon lime green frog with black eyes. This was hardly the fare for a usual student–teacher appointment. But Blake Klocke is no ordinary university student, though he appears so—the same blue jeans and backpack uniform, laptop at the ready. The difference is not his red hair and freckles, but his amphibian excitement. On his laptop, he displayed for me dozens of frog-related book-marked websites, which he explained aglow with enthusiasm. He had been raising frogs since he was nine, being part of a rescue train across the world of hobbyists who have been keeping the genetic strains of frogs alive in their homes as they are being extinguished in the wilds. Zoos don’t have the space or the avid visiting publics to care about these small, diverse members of the living community, so without the care of personal frog lovers like Blake in raising captive-bred endangered amphibians, our world would have lost these strands of life’s web. The black-eyed tree frog is a critically endangered Central American species that is decreasing so rapidly that scientists predict it will be reduced by 80% in the wild in ten years by the life-sucking, zombie-like Chytrid fungus that is wiping out full populations. “Once my frogs have young, I can get you some so you can raise your own,” Blake offered, ready to convert me to the simple joys of amphibian care. Blake has experienced this excitement from his youth on, and his outdoors adventures have created a love in him that will carry him far— far beyond the lakes and wetlands near Eagan, Minnesota, where he first started catching tadpoles. This finding the “drama in the bush” is just what Leopold had been advocating in his classes, radio talks to young farmers, and writings about the sport of amateur naturalist studies.

Conference papers on the topic "Black Box trees"

1

Martignon, Laura, Joachim Engel, and Tim Erickson. "A Transparent, Simple AI Tool for Constructing Efficient and Robust Fast and Frugal Trees for Classification Under Risk." In Bridging the Gap: Empowering and Educating Today’s Learners in Statistics. International Association for Statistical Education, 2022. http://dx.doi.org/10.52041/iase.icots11.t6g3.

Abstract:
Artificial Intelligence (AI) has produced extremely efficient and effective classification and decision “machines” that learn from given data sets and generalize well to unknown data. These are mostly celebrated tools produced by methodologies of machine learning. There is a drawback, though, namely their lack of transparency in construction. Agents often ignore the construction steps and use them as black-box algorithms. We exhibit simple and transparent steps for creating robust and yet simple heuristics for classification based on the AI tool ARBOR. We also claim that these transparent classifiers compete well against powerful machines, especially when training sets are small.
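To show what a fast and frugal tree boils down to structurally, here is a hand-written toy in Python: an ordered sequence of binary cues, each with an immediate exit decision on one side. It is not the ARBOR tool described in the paper, and the cue names are invented.

```python
# Illustrative toy fast-and-frugal tree: ordered binary cues with immediate exits.
# Not the ARBOR tool; cue names and decisions are made up for the example.
def fast_frugal_tree(patient):
    # cue 1: exit to "high risk" if the first cue fires
    if patient["st_segment_change"]:
        return "high risk"
    # cue 2: exit to "low risk" if chest pain is not the chief complaint
    if not patient["chest_pain_primary"]:
        return "low risk"
    # cue 3: the final cue decides both ways
    return "high risk" if patient["other_symptom"] else "low risk"

print(fast_frugal_tree({"st_segment_change": False,
                        "chest_pain_primary": True,
                        "other_symptom": False}))   # -> "low risk"
```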
2

Dann, Michael, Yuan Yao, Brian Logan, and John Thangarajah. "Multi-Agent Intention Progression with Black-Box Agents." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/19.

Abstract:
We propose a new approach to intention progression in multi-agent settings where other agents are effectively black boxes. That is, while their goals are known, the precise programs used to achieve these goals are not known. In our approach, agents use an abstraction of their own program called a partially-ordered goal-plan tree (pGPT) to schedule their intentions and predict the actions of other agents. We show how a pGPT can be derived from the program of a BDI agent, and present an approach based on Monte Carlo Tree Search (MCTS) for scheduling an agent's intentions using pGPTs. We evaluate our pGPT-based approach in cooperative, selfish and adversarial multi-agent settings, and show that it out-performs MCTS-based scheduling where agents assume that other agents have the same program as themselves.
3

Santos, Samara Silva, Marcos Antonio Alves, Leonardo Augusto Ferreira, and Frederico Gadelha Guimarães. "PDTX: A novel local explainer based on the Perceptron Decision Tree." In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-50.

Abstract:
Artificial Intelligence (AI) approaches that achieve good results and generalization are often opaque models and the decision-maker has no clear explanation about the final classification. As a result, there is an increasing demand for Explainable AI (XAI) models, whose main goal is to provide understandable solutions for human beings and to elucidate the relationship between the features and the black-box model. In this paper, we introduce a novel explainer method, named PDTX, based on the Perceptron Decision Tree (PDT). The evolutionary algorithm jSO is employed to fit the weights of the PDT to approximate the predictions of the black-box model. Then, it is possible to extract valuable information that explains the behavior of the machine learning method. The PDTX was tested in 10 different datasets from a public repository as an explainer for three classifiers: Multi-Layer Perceptron, Random Forest and Support Vector Machine. Decision-Tree and LIME were used as baselines for comparison. The results showed promising performance in the majority of the experiments, achieving 87.34% of average accuracy, against 64.23% from DT and 37.44% from LIME. The PDTX can be used for black-box classifier explanations, for local instances and it is model-agnostic.
4

Allen, Cameron, Michael Katz, Tim Klinger, George Konidaris, Matthew Riemer, and Gerald Tesauro. "Efficient Black-Box Planning Using Macro-Actions with Focused Effects." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/554.

Abstract:
The difficulty of deterministic planning increases exponentially with search-tree depth. Black-box planning presents an even greater challenge, since planners must operate without an explicit model of the domain. Heuristics can make search more efficient, but goal-aware heuristics for black-box planning usually rely on goal counting, which is often quite uninformative. In this work, we show how to overcome this limitation by discovering macro-actions that make the goal-count heuristic more accurate. Our approach searches for macro-actions with focused effects (i.e. macros that modify only a small number of state variables), which align well with the assumptions made by the goal-count heuristic. Focused macros dramatically improve black-box planning efficiency across a wide range of planning domains, sometimes beating even state-of-the-art planners with access to a full domain model.
5

Laget, Hannes, Michae¨l Deneve, Evert Vanderhaegen, and Thomas Museur. "Combustion Dynamics Data Mining Techniques: A Way to Gain Enhanced Insight in the Combustion Processes of Fielded Gas Turbines." In ASME Turbo Expo 2009: Power for Land, Sea, and Air. ASMEDC, 2009. http://dx.doi.org/10.1115/gt2009-59553.

Abstract:
Combustion dynamics are still an important challenge for gas turbine operators. Modern dry low NOx combustors operate within very small tolerances of equivalence ratio, air-fuel mixing and heat release rate in order to attain low NOx emissions and combustion stability. Small changes in fuel composition or extremes in ambient temperature can trigger combustion instabilities. Large amounts of data of real engines are available to the end user. Moreover, instead of adaptations to the hardware, the end-user is primarily interested in the actual condition of its gas turbine. Although physical insight is without any doubt an important step to enhance knowledge of the processes within the combustion chamber, these large datasets can also be exploited with data-mining techniques based on black box models, such as artificial neural networks or decision trees. In this paper, the latter approach is discussed in detail and implemented on an F-class gas turbine. The operational and combustion data, acquired over a long period on the gas turbine, have been used as the input to a commercial data-mining program in order to study the correlations between the different operational parameters and the characteristic amplitude and frequency of the combustion dynamics. Moreover, the data-mining program allows the nonlinear modelling of the combustion dynamics, which in a second step has been used to carry out a parametric study. The parameters with a high influence on the presence of combustion dynamics, amongst others the gas quality, the compressor inlet temperature and the firing temperature, have been retained for modelling the behaviour of the combustion dynamics. The obtained models show good correspondence with operational experience and data gathered during gas turbine tuning operation. These models can thus be used to enhance the insight into the complex behaviour of combustion dynamics. They can be helpful for predictive maintenance and finally can be applied for the determination of tuning margins and the prevention of high combustion dynamics.
6

Oliveira-Junior, Robinson A. A. de. "Credit scoring development in the light of the new Brazilian General Data Protection Law." In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/kdmile.2021.17462.

Abstract:
With the advent of the new Brazilian General Data Protection Law (LGPD) which determines the right to the explanation of automated decisions, the use of non-interpretable models for human beings, known as black boxes, for the purposes of credit risk assessment may remain unfeasible. Thus, three different methods commonly applied to credit scoring – logistic regression, decision tree, and support vector machine (SVM) – were adjusted to an anonymized sample of a consumer credit portfolio from a credit union. Their results were compared and the adequacy of the explanation achieved for each classifier was assessed. Particularly for the SVM, which generated a black box model, a local interpretation method – the SHapley Additive exPlanation (SHAP) – was incorporated, enabling this machine learning classifier to fulfill the requirements imposed by the new LGPD, in equivalence to the inherent comprehensibility of the white box models.
7

Garifullin, Albert, Alexandr Shcherbakov, and Vladimir Frolov. "Fitting Parameters for Procedural Plant Generation." In WSCG'2022 - 30. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2022. Západočeská univerzita, 2022. http://dx.doi.org/10.24132/csrn.3201.35.

Abstract:
We propose a novel method to obtain a 3D model of a tree based on a single input image by fitting parameters for some procedural plant generator. Unlike other methods, our approach can work with any plant generator, treating it as a black-box function. It is also possible to specify the desired characteristics of the plant, such as the geometric complexity of the model or its size. We propose a similarity function between the given image and generated model, that better catches the significant differences between tree shapes. To find the appropriate parameter set, we use a specific variant of a genetic algorithm designed for this purpose to maximize similarity function. This approach can greatly simplify the artist's work. We demonstrate the results of our algorithm with several procedural generators, from a very simple to a fairly advanced one.
8

Dudek, Jeffrey M., Aditya A. Shrotri, and Moshe Y. Vardi. "DPSampler: Exact Weighted Sampling Using Dynamic Programming." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/250.

Abstract:
The problem of exact weighted sampling of solutions of Boolean formulas has applications in Bayesian inference, testing, and verification. The state-of-the-art approach to sampling involves carefully decomposing the input formula and compiling a data structure called d-DNNF in the process. Recent work in the closely connected field of model counting, however, has shown that smartly composing different subformulas using dynamic programming and Algebraic Decision Diagrams (ADDs) can outperform d-DNNF-style approaches on many benchmarks. In this work, we present a modular algorithm called DPSampler that extends such dynamic-programming techniques to the problem of exact weighted sampling. DPSampler operates in three phases. First, an execution plan in the form of a project-join tree is computed using tree decompositions. Second, the plan is used to compile the input formula into a succinct tree-of-ADDs representation. Third, this tree is traversed to generate a random sample. This decoupling of planning, compilation and sampling phases enables usage of specialized libraries for each purpose in a black-box fashion. Further, our novel ADD-sampling algorithm avoids the need for expensive dynamic memory allocation required in previous work. Extensive experiments over diverse sets of benchmarks show DPSampler is more scalable and versatile than existing approaches.
9

Rahutomo, S. "Shallow Water Subsea Well Drilling and Completion Utilizing Jack Up Rig at Natuna Sea Block." In Indonesian Petroleum Association 44th Annual Convention and Exhibition. Indonesian Petroleum Association, 2021. http://dx.doi.org/10.29118/ipa21-e-38.

Abstract:
In 2019, Premier Oil Indonesia commenced a drilling campaign to complete 4 development wells, consisting of 3 subsea development wells and 1 platform well, at Natuna Sea Block, with water depths of 260–280 ft. To optimize mobilization and demobilization cost, a single Jack Up rig was selected to complete those 4 wells. The Jack Up rig required several modifications to enable drilling and completing subsea wells. The Jack Up rig is equipped only with a surface BOP stack; therefore, a 16” HP riser is utilized to drill the reservoir section, and a Suspended Texas Deck (STD) was required as a means to suspend the weight of the 16” HP riser stack; it is also designed to take the full BOP weight in the event the secondary BOP tensioner fails. A modification was also performed on the Conductor Tensioner Platform (CTP) to extend it to accommodate temporary storing of the Subsea Tree complete with the Tree Running Tool (TRT) for inspection and preparation prior to subsea deployment. On the other hand, the Subsea Tree's Protection Structure Assembly (PSA) has the biggest dimensions among the subsea equipment, leaving deployment by TDS directly from the supply vessel to the wellhead, using an ROV-friendly lifting bridle arrangement, as the only option. The drilling campaign was successfully completed and met drilling objectives without a major safety case. It is proven that collaborative engineering work involving the company, the rig contractor and third-party contractors enabled the success of the subsea drilling and completion campaign in shallow water at Natuna Sea Block using a Jack-up rig. The lessons learned established upon completing drilling of the 1st well were then applied to the next well, resulting in a significant time improvement to complete the well from 52 days to only 35 days, noting that these 2 wells have similar drilling and completion profiles.
10

Illich, Moritz, and Birte Glimm. "Computing Concept Referring Expressions for Queries on Horn ALC Ontologies." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/370.

Abstract:
Classical instance queries over an ontology only consider explicitly named individuals. Concept referring expressions (CREs) also allow for returning answers in the form of concepts that describe implicitly given individuals in terms of their relation to an explicitly named one. Existing approaches, e.g., based on tree automata, can neither be integrated into state-of-the-art OWL reasoners nor are they directly amenable for an efficient implementation. To address this, we devise a novel algorithm that uses highly optimized OWL reasoners as a black box. In addition to the standard criteria of singularity and certainty for CREs, we devise and consider the criterion of uniqueness of CREs for Horn ALC ontologies. The evaluation of our prototypical implementation shows that computing CREs for the most general concept (Top) can be done in less than one minute for ontologies with thousands of individuals and concepts.

Reports on the topic "Black Box trees"

1

Hauzenberger, Niko, Florian Huber, Gary Koop, and James Mitchell. Bayesian modeling of time-varying parameters using regression trees. Federal Reserve Bank of Cleveland, January 2023. http://dx.doi.org/10.26509/frbc-wp-202305.

Abstract:
In light of widespread evidence of parameter instability in macroeconomic models, many time-varying parameter (TVP) models have been proposed. This paper proposes a nonparametric TVP-VAR model using Bayesian additive regression trees (BART). The novelty of this model stems from the fact that the law of motion driving the parameters is treated nonparametrically. This leads to great flexibility in the nature and extent of parameter change, both in the conditional mean and in the conditional variance. In contrast to other nonparametric and machine learning methods that are black box, inference using our model is straightforward because, in treating the parameters rather than the variables nonparametrically, the model remains conditionally linear in the mean. Parsimony is achieved through adopting nonparametric factor structures and use of shrinkage priors. In an application to US macroeconomic data, we illustrate the use of our model in tracking both the evolving nature of the Phillips curve and how the effects of business cycle shocks on inflationary measures vary nonlinearly with movements in uncertainty.