Follow this link to see other types of publications on this topic: Decision tree.

Journal articles on the topic "Decision tree"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

Consult the top 50 journal articles for research on the topic "Decision tree".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract online, if it is included in the metadata.

Browse journal articles from a wide range of disciplines and compile an accurate bibliography.

1

TOFAN, Cezarina Adina. "Optimization Techniques of Decision Making - Decision Tree". Advances in Social Sciences Research Journal 1, no. 5 (September 30, 2014): 142–48. http://dx.doi.org/10.14738/assrj.15.437.

Full text
2

Naylor, Mike. "Decision Tree". Mathematics Teacher: Learning and Teaching PK-12 113, no. 7 (July 2020): 612. http://dx.doi.org/10.5951/mtlt.2020.0081.

Full text
3

Oo, Aung Nway, and Thin Naing. "Decision Tree Models for Medical Diagnosis". International Journal of Trend in Scientific Research and Development Volume-3, Issue-3 (April 30, 2019): 1697–99. http://dx.doi.org/10.31142/ijtsrd23510.

Full text
4

BRESLOW, LEONARD A., and DAVID W. AHA. "Simplifying decision trees: A survey". Knowledge Engineering Review 12, no. 01 (January 1997): 1–40. http://dx.doi.org/10.1017/s0269888997000015.

Full text
Abstract:
Induced decision trees are an extensively-researched solution to classification tasks. For many practical tasks, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity. Although many tree induction algorithms have been shown to produce simpler, more comprehensible trees (or data structures derived from trees) with good classification accuracy, tree simplification has usually been of secondary concern relative to accuracy, and no attempt has been made to survey the literature from the perspective of simplification. We present a framework that organizes the approaches to tree simplification and summarize and critique the approaches within this framework. The purpose of this survey is to provide researchers and practitioners with a concise overview of tree-simplification approaches and insight into their relative capabilities. In our final discussion, we briefly describe some empirical findings and discuss the application of tree induction algorithms to case retrieval in case-based reasoning systems.
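The survey above organizes approaches to tree simplification; as a concrete illustration, here is a minimal, self-contained sketch of one classical simplification technique, reduced-error pruning, on a hypothetical tree over boolean features (the tree representation and data are illustrative assumptions, not the authors' code):

```python
class Node:
    def __init__(self, feature=None, left=None, right=None, label=None):
        self.feature = feature  # index of the boolean feature tested at this node
        self.left = left        # subtree followed when the feature is 0
        self.right = right      # subtree followed when the feature is 1
        self.label = label      # predicted class if this node is a leaf

def predict(node, x):
    """Route example x down the tree to a leaf and return its label."""
    while node.label is None:
        node = node.right if x[node.feature] else node.left
    return node.label

def errors(node, examples):
    """Count misclassified (x, y) pairs."""
    return sum(predict(node, x) != y for x, y in examples)

def majority_label(examples):
    labels = [y for _, y in examples]
    return max(set(labels), key=labels.count)

def prune(node, examples):
    """Bottom-up reduced-error pruning against a held-out pruning set."""
    if node.label is not None or not examples:
        return node
    node.left = prune(node.left, [(x, y) for x, y in examples if not x[node.feature]])
    node.right = prune(node.right, [(x, y) for x, y in examples if x[node.feature]])
    leaf = Node(label=majority_label(examples))
    # Replace the subtree with a leaf if the leaf is at least as accurate here.
    return leaf if errors(leaf, examples) <= errors(node, examples) else node
```

Run against a small pruning set, this collapses subtrees whose splits do not improve held-out accuracy, shrinking the tree without hurting its error rate, which is exactly the accuracy-versus-comprehensibility trade-off the survey examines.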
5

ZANTEMA, HANS, and HANS L. BODLAENDER. "SIZES OF ORDERED DECISION TREES". International Journal of Foundations of Computer Science 13, no. 03 (June 2002): 445–58. http://dx.doi.org/10.1142/s0129054102001205.

Full text
Abstract:
Decision tables provide a natural framework for knowledge acquisition and representation in the area of knowledge based information systems. Decision trees provide a standard method for inductive inference in the area of machine learning. In this paper we show how decision tables can be considered as ordered decision trees: decision trees satisfying an ordering restriction on the nodes. Every decision tree can be represented by an equivalent ordered decision tree, but we show that doing so may exponentially blow up sizes, even if the choice of the order is left free. Our main result states that finding an ordered decision tree of minimal size that represents the same function as a given ordered decision tree is an NP-hard problem; in earlier work we obtained a similar result for unordered decision trees.
6

TOFAN, Cezarina Adina. "Method of decision tree applied in adopting the decision for promoting a company". Annals of "Spiru Haret". Economic Series 15, no. 3 (September 30, 2015): 47. http://dx.doi.org/10.26458/1535.

Full text
Abstract:
The decision can be defined as the way chosen from several possible to achieve an objective. An important role in the functioning of the decisional-informational system is held by the decision-making methods. Decision trees are proving to be very useful tools for taking financial decisions or regarding the numbers, where a large amount of complex information must be considered. They provide an effective structure in which alternative decisions and the implications of their choice can be assessed, and help to form a correct and balanced vision of the risks and rewards that may result from a certain choice. For these reasons, the content of this communication will review a series of decision-making criteria. Also, it will analyse the benefits of using the decision tree method in the decision-making process by providing a numerical example. On this basis, it can be concluded that the procedure may prove useful in making decisions for companies operating on markets where competition intensity is differentiated.
7

Cockett, J. R. B. "Decision Expression Optimization". Fundamenta Informaticae 10, no. 1 (January 1, 1987): 93–114. http://dx.doi.org/10.3233/fi-1987-10107.

Full text
Abstract:
A basic concern when using decision trees for the solution of taxonomic or similar problems is their efficiency. Often the information that is required to completely optimize a tree is simply not available. This is especially the case when a criterion based on probabilities is used. It is shown how it is often possible, despite the absence of this information, to improve the design of the tree. The approach is based on algebraic methods for manipulating decision trees and the identification of some particularly desirable forms.
8

Yun, Jooyeol, Jun won Seo, and Taeseon Yoon. "Fuzzy Decision Tree". International Journal of Fuzzy Logic Systems 4, no. 3 (July 31, 2014): 7–11. http://dx.doi.org/10.5121/ijfls.2014.4302.

Full text
9

Manwani, N., and P. S. Sastry. "Geometric Decision Tree". IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42, no. 1 (February 2012): 181–92. http://dx.doi.org/10.1109/tsmcb.2011.2163392.

Full text
10

Zhou, Zhi-Hua, and Zhao-Qian Chen. "Hybrid decision tree". Knowledge-Based Systems 15, no. 8 (November 2002): 515–28. http://dx.doi.org/10.1016/s0950-7051(02)00038-2.

Full text
11

Koodiaroff, Sally. "Oncology Decision Tree". Collegian 7, no. 3 (January 2000): 34–36. http://dx.doi.org/10.1016/s1322-7696(08)60375-3.

Full text
12

Hayes, Karen W., and Becky Wojcik. "Decision Tree Structure". Physical Therapy 69, no. 12 (December 1, 1989): 1120–22. http://dx.doi.org/10.1093/ptj/69.12.1120.

Full text
13

Cockett, J. R. B., and J. A. Herrera. "Decision tree reduction". Journal of the ACM 37, no. 4 (October 1990): 815–42. http://dx.doi.org/10.1145/96559.96576.

Full text
14

López-Chau, Asdrúbal, Jair Cervantes, Lourdes López-García, and Farid García Lamont. "Fisher’s decision tree". Expert Systems with Applications 40, no. 16 (November 2013): 6283–91. http://dx.doi.org/10.1016/j.eswa.2013.05.044.

Full text
15

Maazouzi, Faiz, and Halima Bahi. "Using multi decision tree technique to improving decision tree classifier". International Journal of Business Intelligence and Data Mining 7, no. 4 (2012): 274. http://dx.doi.org/10.1504/ijbidm.2012.051712.

Full text
16

Parlindungan and HariSupriadi. "Implementation Decision Tree Algorithm for Ecommerce Website". International Journal of Psychosocial Rehabilitation 24, no. 02 (February 13, 2020): 3611–14. http://dx.doi.org/10.37200/ijpr/v24i2/pr200682.

Full text
17

Li, Jiawei, Yiming Li, Xingchun Xiang, Shu-Tao Xia, Siyi Dong, and Yun Cai. "TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation". Entropy 22, no. 11 (October 24, 2020): 1203. http://dx.doi.org/10.3390/e22111203.

Full text
Abstract:
Deep Neural Networks (DNNs) usually work in an end-to-end manner. This makes the trained DNNs easy to use, but they remain an ambiguous decision process for every test case. Unfortunately, the interpretability of decisions is crucial in some scenarios, such as medical or financial data mining and decision-making. In this paper, we propose a Tree-Network-Tree (TNT) learning framework for explainable decision-making, where the knowledge is alternately transferred between the tree model and DNNs. Specifically, the proposed TNT learning framework exerts the advantages of different models at different stages: (1) a novel James–Stein Decision Tree (JSDT) is proposed to generate better knowledge representations for DNNs, especially when the input data are in low-frequency or low-quality; (2) the DNNs output high-performing prediction result from the knowledge embedding inputs and behave as a teacher model for the following tree model; and (3) a novel distillable Gradient Boosted Decision Tree (dGBDT) is proposed to learn interpretable trees from the soft labels and make a comparable prediction as DNNs do. Extensive experiments on various machine learning tasks demonstrated the effectiveness of the proposed method.
18

Rautenberg, Tamlyn, Annette Gerritsen, and Martin Downes. "Health Economic Decision Tree Models of Diagnostics for Dummies: A Pictorial Primer". Diagnostics 10, no. 3 (March 14, 2020): 158. http://dx.doi.org/10.3390/diagnostics10030158.

Full text
Abstract:
Health economics is a discipline of economics applied to health care. One method used in health economics is decision tree modelling, which extrapolates the cost and effectiveness of competing interventions over time. Such decision tree models are the basis of reimbursement decisions in countries using health technology assessment for decision making. In many instances, these competing interventions are diagnostic technologies. Despite a wealth of excellent resources describing the decision analysis of diagnostics, two critical errors persist: not including diagnostic test accuracy in the structure of decision trees and treating sequential diagnostics as independent. These errors have consequences for the accuracy of model results, and thereby impact on decision making. This paper sets out to overcome these errors using color to link fundamental epidemiological calculations to decision tree models in a visually and intuitively appealing pictorial format. The paper is a must-read for modelers developing decision trees in the area of diagnostics for the first time and decision makers reviewing diagnostic reimbursement models.
19

ZANTEMA, HANS, and HANS L. BODLAENDER. "FINDING SMALL EQUIVALENT DECISION TREES IS HARD". International Journal of Foundations of Computer Science 11, no. 02 (June 2000): 343–54. http://dx.doi.org/10.1142/s0129054100000193.

Full text
Abstract:
Two decision trees are called decision equivalent if they represent the same function, i.e., they yield the same result for every possible input. We prove that given a decision tree and a number, to decide if there is a decision equivalent decision tree of size at most that number is NP-complete. As a consequence, finding a decision tree of minimal size that is decision equivalent to a given decision tree is an NP-hard problem. This result differs from the well-known result of NP-hardness of finding a decision tree of minimal size that is consistent with a given training set. Instead our result is a basic result for decision trees, apart from the setting of inductive inference. On the other hand, this result differs from similar results for BDDs and OBDDs: since in decision trees no sharing is allowed, the notion of decision tree size is essentially different from BDD size.
20

Hernández, Víctor Adrián Sosa, Raúl Monroy, Miguel Angel Medina-Pérez, Octavio Loyola-González, and Francisco Herrera. "A Practical Tutorial for Decision Tree Induction". ACM Computing Surveys 54, no. 1 (April 2021): 1–38. http://dx.doi.org/10.1145/3429739.

Full text
Abstract:
Experts from different domains have resorted to machine learning techniques to produce explainable models that support decision-making. Among existing techniques, decision trees have been useful in many application domains for classification. Decision trees can make decisions in a language that is closer to that of the experts. Many researchers have attempted to create better decision tree models by improving the components of the induction algorithm. One of the main components that have been studied and improved is the evaluation measure for candidate splits. In this article, we introduce a tutorial that explains decision tree induction. Then, we present an experimental framework to assess the performance of 21 evaluation measures that produce different C4.5 variants considering 110 databases, two performance measures, and 10× 10-fold cross-validation. Furthermore, we compare and rank the evaluation measures by using a Bayesian statistical analysis. From our experimental results, we present the first two performance rankings in the literature of C4.5 variants. Moreover, we organize the evaluation measures into two groups according to their performance. Finally, we introduce meta-models that automatically determine the group of evaluation measures to produce a C4.5 variant for a new database and some further opportunities for decision tree models.
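As a minimal illustration of the kind of split-evaluation measure this tutorial studies, the following sketch computes C4.5's standard gain ratio for a candidate categorical split (the toy data and function names are illustrative assumptions, not code from the article):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of the empirical class distribution."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, feature):
    """C4.5's split criterion: information gain normalized by split information."""
    n = len(labels)
    branches = {}
    for x, y in zip(rows, labels):
        branches.setdefault(x[feature], []).append(y)  # group labels by feature value
    remainder = sum(len(part) / n * entropy(part) for part in branches.values())
    split_info = -sum(len(part) / n * math.log2(len(part) / n)
                      for part in branches.values())
    gain = entropy(labels) - remainder
    return gain / split_info if split_info > 0 else 0.0
```

A feature that separates the classes perfectly into equal halves scores 1.0, while a feature independent of the class scores 0.0; the tutorial's 21 evaluation measures are alternative ways of scoring the same candidate splits.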
21

Yang, Bin-Bin, Song-Qing Shen, and Wei Gao. "Weighted Oblique Decision Trees". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5621–27. http://dx.doi.org/10.1609/aaai.v33i01.33015621.

Full text
Abstract:
Decision trees have attracted much attention during the past decades. Previous decision trees include axis-parallel and oblique decision trees; both of them try to find the best splits via exhaustive search or heuristic algorithms in each iteration. Oblique decision trees generally simplify tree structure and take better performance, but are always accompanied with higher computation, as well as the initialization with the best axis-parallel splits. This work presents the Weighted Oblique Decision Tree (WODT) based on continuous optimization with random initialization. We consider different weights of each instance for child nodes at all internal nodes, and then obtain a split by optimizing the continuous and differentiable objective function of weighted information entropy. Extensive experiments show the effectiveness of the proposed algorithm.
22

Chang, Namsik, and Olivia R. Liu Sheng. "Decision-Tree-Based Knowledge Discovery: Single- vs. Multi-Decision-Tree Induction". INFORMS Journal on Computing 20, no. 1 (February 2008): 46–54. http://dx.doi.org/10.1287/ijoc.1060.0215.

Full text
23

Hajjej, Fahima, Manal Abdullah Alohali, Malek Badr, and Md Adnan Rahman. "A Comparison of Decision Tree Algorithms in the Assessment of Biomedical Data". BioMed Research International 2022 (July 7, 2022): 1–9. http://dx.doi.org/10.1155/2022/9449497.

Full text
Abstract:
By comparing the performance of various tree algorithms, we can determine which one is most useful for analyzing biomedical data. In artificial intelligence, decision trees are a classification model known for their visual aid in making decisions. WEKA software will evaluate biological data from real patients to see how well the decision tree classification algorithm performs. Another goal of this comparison is to assess whether or not decision trees can serve as an effective tool for medical diagnosis in general. In doing so, we will be able to see which algorithms are the most efficient and appropriate to use when delving into this data and arrive at an informed decision.
24

Cai, Yuliang, Huaguang Zhang, Qiang He, and Shaoxin Sun. "New classification technique: fuzzy oblique decision tree". Transactions of the Institute of Measurement and Control 41, no. 8 (June 11, 2018): 2185–95. http://dx.doi.org/10.1177/0142331218774614.

Full text
Abstract:
Based on axiomatic fuzzy set (AFS) theory and fuzzy information entropy, a novel fuzzy oblique decision tree (FODT) algorithm is proposed in this paper. Traditional axis-parallel decision trees only consider a single feature at each non-leaf node, while oblique decision trees partition the feature space with an oblique hyperplane. By contrast, the FODT takes dynamic mining fuzzy rules as a decision function. The main idea of the FODT is to use these fuzzy rules to construct leaf nodes for each class in each layer of the tree; the samples that cannot be covered by the fuzzy rules are then put into an additional node – the only non-leaf node in this layer. Construction of the FODT consists of four major steps: (a) generation of fuzzy membership functions automatically by AFS theory according to the raw data distribution; (b) extraction of dynamically fuzzy rules in each non-leaf node by the fuzzy rule extraction algorithm (FREA); (c) construction of the FODT by the fuzzy rules obtained from step (b); and (d) determination of the optimal threshold [Formula: see text] to generate a final tree. Compared with five traditional decision trees (C4.5, LADtree (LAD), Best-first tree (BFT), SimpleCart (SC) and NBTree (NBT)) and a recently obtained fuzzy rules decision tree (FRDT) on eight UCI machine learning data sets and one biomedical data set (ALLAML), the experimental results demonstrate that the proposed algorithm outperforms the other decision trees in both classification accuracy and tree size.
25

Luna, José Marcio, Efstathios D. Gennatas, Lyle H. Ungar, Eric Eaton, Eric S. Diffenderfer, Shane T. Jensen, Charles B. Simone, Jerome H. Friedman, Timothy D. Solberg, and Gilmer Valdes. "Building more accurate decision trees with the additive tree". Proceedings of the National Academy of Sciences 116, no. 40 (September 16, 2019): 19887–93. http://dx.doi.org/10.1073/pnas.1816748116.

Full text
Abstract:
The expansion of machine learning to high-stakes application domains such as medicine, finance, and criminal justice, where making informed decisions requires clear understanding of the model, has increased the interest in interpretable machine learning. The widely used Classification and Regression Trees (CART) have played a major role in health sciences, due to their simple and intuitive explanation of predictions. Ensemble methods like gradient boosting can improve the accuracy of decision trees, but at the expense of the interpretability of the generated model. Additive models, such as those produced by gradient boosting, and full interaction models, such as CART, have been investigated largely in isolation. We show that these models exist along a spectrum, revealing previously unseen connections between these approaches. This paper introduces a rigorous formalization for the additive tree, an empirically validated learning technique for creating a single decision tree, and shows that this method can produce models equivalent to CART or gradient boosted stumps at the extremes by varying a single parameter. Although the additive tree is designed primarily to provide both the model interpretability and predictive performance needed for high-stakes applications like medicine, it also can produce decision trees represented by hybrid models between CART and boosted stumps that can outperform either of these approaches.
26

Jiang, Daniel R., Lina Al-Kanj, and Warren B. Powell. "Optimistic Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds". Operations Research 68, no. 6 (November 2020): 1678–97. http://dx.doi.org/10.1287/opre.2019.1939.

Full text
Abstract:
In the paper, “Optimistic Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds,” the authors propose an extension to Monte Carlo tree search that uses the idea of “sampling the future” to produce noisy upper bounds on nodes in the decision tree. These upper bounds can help guide the tree expansion process and produce decision trees that are deeper rather than wider, in effect concentrating computation toward more useful parts of the state space. The algorithm’s effectiveness is illustrated in a ride-sharing setting, where a driver/vehicle needs to make dynamic decisions regarding trip acceptance and relocations.
27

Wei, Wei, Mingwei Hui, Beibei Zhang, Rafal Scherer, and Robertas Damaševičius. "Research on Decision Tree Based on Rough Set". 網際網路技術學刊 22, no. 6 (November 2021): 1385–94. http://dx.doi.org/10.53106/160792642021112206015.

Full text
28

Brunello, Andrea, Enrico Marzano, Angelo Montanari, and Guido Sciavicco. "Decision Tree Pruning via Multi-Objective Evolutionary Computation". International Journal of Machine Learning and Computing 7, no. 6 (December 2017): 167–75. http://dx.doi.org/10.18178/ijmlc.2017.7.6.641.

Full text
29

Murphy, P. M., and M. J. Pazzani. "Exploring the Decision Forest: An Empirical Investigation of Occam's Razor in Decision Tree Induction". Journal of Artificial Intelligence Research 1 (March 1, 1994): 257–75. http://dx.doi.org/10.1613/jair.41.

Full text
Abstract:
We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. In particular, we investigated the relationship between the size of a decision tree consistent with some training data and the accuracy of the tree on test data. The experiments were performed on a massively parallel Maspar computer. The results of the experiments on several artificial and two real world problems indicate that, for many of the problems investigated, smaller consistent decision trees are on average less accurate than the average accuracy of slightly larger trees.
30

SudarsanaReddy, C., V. Vasu, and B. Kumara Swamy Achari. "Effective Decision Tree Learning". International Journal of Computer Applications 82, no. 9 (November 15, 2013): 1–6. http://dx.doi.org/10.5120/14141-7690.

Full text
31

Shazadi, Komal. "Decision Tree in Biology". European Journal of Biology 6, no. 1 (January 7, 2021): 1–15. http://dx.doi.org/10.47672/ejb.642.

Full text
Abstract:
Purpose: Human biology is an essential field in scientific research as it helps in understanding the human body for adequate care. Technology has improved the way scientists do their biological research. One of the critical technologies is artificial intelligence (AI), which is revolutionizing the world. Scientists have applied AI in biological studies, using several methods to gain different types of data. Machine learning is a branch of artificial intelligence that helps computers learn from data and create predictions without being explicitly programmed. Methodology: One critical methodology in the machine is using the tree-based decision. It is extensively used in biological research, as it helps in classifying complex data into simple and easy to interpret graphs. This paper aims to give a beginner-friendly view of the tree-based model, analyzing its use and advantages over other methods. Finding: Artificial intelligence has greatly improved the collection, analysis, and prediction of biological and medical information. Machine learning, a subgroup of artificial intelligence, is useful in creating prediction models, which help a wide range of fields, including computational and systems biology. Contribution and future recommendation also discussed in this study.
32

PURDILA, V., and S. G. PENTIUC. "Fast Decision Tree Algorithm". Advances in Electrical and Computer Engineering 14, no. 1 (2014): 65–68. http://dx.doi.org/10.4316/aece.2014.01010.

Full text
33

Quinlan, J. R. "Learning decision tree classifiers". ACM Computing Surveys 28, no. 1 (March 1996): 71–72. http://dx.doi.org/10.1145/234313.234346.

Full text
34

Sok, Hong Kuan, Melanie Po-Leen Ooi, and Ye Chow Kuang. "Sparse alternating decision tree". Pattern Recognition Letters 60-61 (August 2015): 57–64. http://dx.doi.org/10.1016/j.patrec.2015.03.002.

Full text
35

Meadows, S., K. Baker, and J. Butler. "The Incident Decision Tree". Clinical Risk 11, no. 2 (March 1, 2005): 66–68. http://dx.doi.org/10.1258/1356262053429732.

Full text
36

Haimes, Yacov Y., Duan Li, and Vijay Tulsiani. "Multiobjective Decision-Tree Analysis". Risk Analysis 10, no. 1 (March 1990): 111–27. http://dx.doi.org/10.1111/j.1539-6924.1990.tb01026.x.

Full text
37

Lu, Songfeng, and Samuel L. Braunstein. "Quantum decision tree classifier". Quantum Information Processing 13, no. 3 (November 19, 2013): 757–70. http://dx.doi.org/10.1007/s11128-013-0687-5.

Full text
38

Xia, Fen, Wensheng Zhang, Fuxin Li, and Yanwu Yang. "Ranking with decision tree". Knowledge and Information Systems 17, no. 3 (January 8, 2008): 381–95. http://dx.doi.org/10.1007/s10115-007-0118-y.

Full text
39

McTavish, Hayden, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques Chen, Cynthia Rudin, and Margo Seltzer. "Fast Sparse Decision Tree Optimization via Reference Ensembles". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9604–13. http://dx.doi.org/10.1609/aaai.v36i9.21194.

Full text
Abstract:
Sparse decision tree optimization has been one of the most fundamental problems in AI since its inception and is a challenge at the core of interpretable machine learning. Sparse decision tree optimization is computationally hard, and despite steady effort since the 1960's, breakthroughs have been made on the problem only within the past few years, primarily on the problem of finding optimal sparse decision trees. However, current state-of-the-art algorithms often require impractical amounts of computation time and memory to find optimal or near-optimal trees for some real-world datasets, particularly those having several continuous-valued features. Given that the search spaces of these decision tree optimization problems are massive, can we practically hope to find a sparse decision tree that competes in accuracy with a black box machine learning model? We address this problem via smart guessing strategies that can be applied to any optimal branch-and-bound-based decision tree algorithm. The guesses come from knowledge gleaned from black box models. We show that by using these guesses, we can reduce the run time by multiple orders of magnitude while providing bounds on how far the resulting trees can deviate from the black box's accuracy and expressive power. Our approach enables guesses about how to bin continuous features, the size of the tree, and lower bounds on the error for the optimal decision tree. Our experiments show that in many cases we can rapidly construct sparse decision trees that match the accuracy of black box models. To summarize: when you are having trouble optimizing, just guess.
40

LAST, MARK, ODED MAIMON, and EINAT MINKOV. "IMPROVING STABILITY OF DECISION TREES". International Journal of Pattern Recognition and Artificial Intelligence 16, no. 02 (March 2002): 145–59. http://dx.doi.org/10.1142/s0218001402001599.

Full text
Abstract:
Decision-tree algorithms are known to be unstable: small variations in the training set can result in different trees and different predictions for the same validation examples. Both accuracy and stability can be improved by learning multiple models from bootstrap samples of training data, but the "meta-learner" approach makes the extracted knowledge hardly interpretable. In the following paper, we present the Info-Fuzzy Network (IFN), a novel information-theoretic method for building stable and comprehensible decision-tree models. The stability of the IFN algorithm is ensured by restricting the tree structure to using the same feature for all nodes of the same tree level and by the built-in statistical significance tests. The IFN method is shown empirically to produce more compact and stable models than the "meta-learner" techniques, while preserving a reasonable level of predictive accuracy.
41

C, Kishor Kumar Reddy, and Vijaya Babu. "A Survey on Issues of Decision Tree and Non-Decision Tree Algorithms". International Journal of Artificial Intelligence and Applications for Smart Devices 4, no. 1 (May 31, 2016): 9–32. http://dx.doi.org/10.14257/ijaiasd.2016.4.1.02.

Full text
42

Kim, Soo Y., and Arun Upneja. "Predicting restaurant financial distress using decision tree and AdaBoosted decision tree models". Economic Modelling 36 (January 2014): 354–62. http://dx.doi.org/10.1016/j.econmod.2013.10.005.

Full text
43

De la Cruz-García, Jazmín S., Juan Bory-Reyes, and Aldo Ramirez-Arellano. "A Two-Parameter Fractional Tsallis Decision Tree". Entropy 24, no. 5 (April 19, 2022): 572. http://dx.doi.org/10.3390/e24050572.

Full text
Abstract:
Decision trees are decision support data mining tools that create, as the name suggests, a tree-like model. The classical C4.5 decision tree, based on the Shannon entropy, is a simple algorithm to calculate the gain ratio and then split the attributes based on this entropy measure. Tsallis and Renyi entropies (instead of Shannon) can be employed to generate a decision tree with better results. In practice, the entropic index parameter of these entropies is tuned to outperform the classical decision trees. However, this process is carried out by testing a range of values for a given database, which is time-consuming and unfeasible for massive data. This paper introduces a decision tree based on a two-parameter fractional Tsallis entropy. We propose a constructionist approach to the representation of databases as complex networks that enable us an efficient computation of the parameters of this entropy using the box-covering algorithm and renormalization of the complex network. The experimental results support the conclusion that the two-parameter fractional Tsallis entropy is a more sensitive measure than parametric Renyi, Tsallis, and Gini index precedents for a decision tree classifier.
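For readers new to the entropy variants named in this abstract, here is a sketch of the standard one-parameter Tsallis entropy (the paper's two-parameter fractional form generalizes it) next to Shannon entropy, on made-up label data:

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (in bits) of the empirical class distribution."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def tsallis_entropy(labels, q=2.0):
    """One-parameter Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1).

    As q -> 1 it converges to the Shannon entropy (in nats); at q = 2 it
    equals the Gini impurity 1 - sum_i p_i**2, one of the precedents the
    abstract compares against.
    """
    n = len(labels)
    return (1.0 - sum((c / n) ** q for c in Counter(labels).values())) / (q - 1.0)
```

A decision tree variant along these lines would simply substitute `tsallis_entropy` for `shannon_entropy` in the split criterion, with the entropic index `q` tuned per dataset, which is the tuning cost the paper's two-parameter approach aims to reduce.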
APA, Harvard, Vancouver, ISO, and other styles
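As the abstract notes, C4.5 splits on a Shannon-entropy gain ratio, while Tsallis entropy generalizes Shannon through an entropic index q. A minimal sketch of the two base measures in plain Python (function names are our own; the paper's two-parameter fractional variant adds a second exponent not shown here):

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy H = -sum p_i * log2(p_i), the measure behind classical C4.5."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def tsallis_entropy(labels, q=1.5):
    """Tsallis entropy S_q = (1 - sum p_i**q) / (q - 1).

    Recovers Shannon entropy (in nats) in the limit q -> 1; the index q is
    the parameter that such papers tune per dataset.
    """
    n = len(labels)
    return (1.0 - sum((c / n) ** q for c in Counter(labels).values())) / (q - 1.0)
```

For a balanced two-class sample, Shannon entropy is 1 bit, while Tsallis with q = 2 gives 0.5, illustrating how the index reshapes the impurity scale.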
44

Scott, Jessie, and David Betters. "Economic Analysis of Urban Tree Replacement Decisions". Arboriculture & Urban Forestry 26, no. 2 (March 1, 2000): 69–77. http://dx.doi.org/10.48044/jauf.2000.008.

Full text
Abstract:
Urban forest managers are often required to decide whether to retain or replace an existing tree. In part, this decision relies on an economic analysis of the benefits and costs of the alternatives. This paper presents an economic methodology that helps address the tree replacement problem. The procedures apply to analyzing the benefits and costs of existing trees as well as future replacement trees. A case study involving a diseased American elm (Ulmus americana) is used to illustrate an application of the methodology. The procedures should prove useful in developing economic guides for tree replacement/retention decisions.
APA, Harvard, Vancouver, ISO, and other styles
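The retain-versus-replace comparison described here is, at bottom, a discounted benefit-cost calculation. A hedged illustration in plain Python (the cash-flow figures are invented for demonstration, not taken from the paper):

```python
def npv(cash_flows, rate):
    """Net present value of (year, amount) cash flows at a given annual discount rate."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

# Hypothetical scenario: retain a declining elm (net annual benefit fading as the
# tree deteriorates) versus replace it now (upfront removal and planting cost,
# then growing benefits as the new tree matures).
retain = [(t, 300 - 40 * t) for t in range(10)]               # declining net benefit
replace = [(0, -800)] + [(t, 60 * t) for t in range(1, 10)]   # upfront cost, rising benefit

better = "retain" if npv(retain, 0.05) > npv(replace, 0.05) else "replace"
```

The decision rule is simply to pick the alternative with the higher net present value over the planning horizon, which mirrors the benefit-cost framing of the paper.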
45

Nock, Richard, and Pascal Jappy. "Decision tree based induction of decision lists". Intelligent Data Analysis 3, no. 3 (May 1, 1999): 227–40. http://dx.doi.org/10.3233/ida-1999-3306.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Nock, R. "Decision tree based induction of decision lists". Intelligent Data Analysis 3, no. 3 (September 1999): 227–40. http://dx.doi.org/10.1016/s1088-467x(99)00020-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Lootsma, Freerk A. "Multicriteria decision analysis in a decision tree". European Journal of Operational Research 101, no. 3 (September 1997): 442–51. http://dx.doi.org/10.1016/s0377-2217(96)00208-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

许, 美玲. "Decision Tree Analysis for Inconsistent Decision Tables". Computer Science and Application 06, no. 10 (2016): 597–606. http://dx.doi.org/10.12677/csa.2016.610074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Okada, Hugo Kenji Rodrigues, Andre Ricardo Nascimento das Neves, and Ricardo Shitsuka. "Analysis of Decision Tree Induction Algorithms". Research, Society and Development 8, no. 11 (August 24, 2019): e298111473. http://dx.doi.org/10.33448/rsd-v8i11.1473.

Full text
Abstract:
Decision trees are data structures or computational methods that enable nonparametric supervised machine learning and are used in classification and regression tasks. The aim of this paper is to present a comparison between the decision tree induction algorithms C4.5 and CART. A quantitative study is performed in which the two methods are compared by analyzing their operation and complexity. The experiments showed practically equal hit percentages; in tree-induction execution time, however, the CART algorithm was approximately 46.24% slower than C4.5, which was therefore considered the more effective algorithm.
APA, Harvard, Vancouver, ISO, and other styles
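C4.5 and CART differ chiefly in their split criteria: C4.5 scores splits with entropy-based information gain (normalized into a gain ratio), CART with the Gini impurity. A minimal sketch of the two criteria evaluated on one candidate split, in plain Python (function names are our own, not from the paper):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits (C4.5's impurity measure)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity 1 - sum p_i**2 (CART's impurity measure)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_scores(parent, children):
    """Return (information gain, Gini decrease) for splitting `parent` into `children`."""
    n = len(parent)
    info_gain = entropy(parent) - sum(len(ch) / n * entropy(ch) for ch in children)
    gini_drop = gini(parent) - sum(len(ch) / n * gini(ch) for ch in children)
    return info_gain, gini_drop
```

A split that separates the classes perfectly maximizes both scores; the two algorithms usually choose similar splits, so the practical differences surface mainly in runtime and tree shape, as the comparison above reports.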
50

RAHMANI, MOHSEN, SATTAR HASHEMI, ALI HAMZEH, and ASHKAN SAMI. "AGENT BASED DECISION TREE LEARNING: A NOVEL APPROACH". International Journal of Software Engineering and Knowledge Engineering 19, no. 07 (November 2009): 1015–22. http://dx.doi.org/10.1142/s0218194009004477.

Full text
Abstract:
Decision trees are one of the most effective and widely used induction methods and have received a great deal of attention over the past twenty years. When decision tree induction algorithms are used with uncertain rather than deterministic data, the result is a complete tree that can classify most of the unseen samples correctly. This tree is then pruned in order to reduce its classification error and over-fitting. Recently, multi-agent researchers have concentrated on learning from large databases. In this paper we present a novel multi-agent learning method that is able to induce a decision tree from distributed training sets. Our method is based on the combination of separate decision trees, each provided by one agent; a dedicated agent aggregates the results of the other agents and induces the final tree. Our empirical results suggest that the proposed method can provide significant benefits to distributed data classification.
APA, Harvard, Vancouver, ISO, and other styles
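The core idea — each agent induces a tree on its local data partition and one agent combines the results — can be sketched with depth-1 trees and a majority vote. This plain-Python sketch illustrates only the combination step, not the paper's actual aggregation algorithm:

```python
from collections import Counter

def train_stump(rows):
    """Induce a depth-1 decision tree (one threshold split) on an agent's
    local data, given as a list of (x, label) pairs."""
    best = None  # (errors, threshold, left_label, right_label)
    for t in sorted({x for x, _ in rows}):
        left = [label for x, label in rows if x <= t]
        right = [label for x, label in rows if x > t]
        if not right:  # degenerate split: everything on one side
            continue
        lmaj = Counter(left).most_common(1)[0][0]
        rmaj = Counter(right).most_common(1)[0][0]
        errors = sum(l != lmaj for l in left) + sum(r != rmaj for r in right)
        if best is None or errors < best[0]:
            best = (errors, t, lmaj, rmaj)
    _, t, lmaj, rmaj = best
    return lambda x: lmaj if x <= t else rmaj

def aggregate_predict(stumps, x):
    """The aggregating agent combines the local trees by majority vote."""
    return Counter(stump(x) for stump in stumps).most_common(1)[0][0]
```

Each agent trains on its own partition without sharing raw data; only the induced models reach the aggregating agent, which is the appeal of such methods for distributed classification.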
