Journal articles on the topic "Decision tree"

Follow this link to see other types of publications on the topic: Decision tree.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 journal articles for your research on the topic "Decision tree".

Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

TOFAN, Cezarina Adina. "Optimization Techniques of Decision Making - Decision Tree". Advances in Social Sciences Research Journal 1, no. 5 (September 30, 2014): 142–48. http://dx.doi.org/10.14738/assrj.15.437.
2

Naylor, Mike. "Decision Tree". Mathematics Teacher: Learning and Teaching PK-12 113, no. 7 (July 2020): 612. http://dx.doi.org/10.5951/mtlt.2020.0081.
3

Oo, Aung Nway and Thin Naing. "Decision Tree Models for Medical Diagnosis". International Journal of Trend in Scientific Research and Development Volume-3, Issue-3 (April 30, 2019): 1697–99. http://dx.doi.org/10.31142/ijtsrd23510.
4

BRESLOW, LEONARD A. and DAVID W. AHA. "Simplifying decision trees: A survey". Knowledge Engineering Review 12, no. 1 (January 1997): 1–40. http://dx.doi.org/10.1017/s0269888997000015.

Abstract
Induced decision trees are an extensively-researched solution to classification tasks. For many practical tasks, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity. Although many tree induction algorithms have been shown to produce simpler, more comprehensible trees (or data structures derived from trees) with good classification accuracy, tree simplification has usually been of secondary concern relative to accuracy, and no attempt has been made to survey the literature from the perspective of simplification. We present a framework that organizes the approaches to tree simplification and summarize and critique the approaches within this framework. The purpose of this survey is to provide researchers and practitioners with a concise overview of tree-simplification approaches and insight into their relative capabilities. In our final discussion, we briefly describe some empirical findings and discuss the application of tree induction algorithms to case retrieval in case-based reasoning systems.
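To make one family of the surveyed simplification approaches concrete, here is a minimal sketch of reduced-error pruning, a classic post-pruning method: a subtree is collapsed to a majority-class leaf whenever doing so does not reduce accuracy on a held-out validation set. The dict-based tree representation and the toy data are illustrative assumptions, not taken from the paper.

    def classify(node, x):
        # Route an example down the tree; internal nodes are dicts, leaves are labels.
        while isinstance(node, dict):
            node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
        return node

    def accuracy(node, data):
        return sum(classify(node, x) == y for x, y in data) / len(data)

    def prune(node, val):
        # Bottom-up reduced-error pruning on the validation examples reaching this node.
        if not isinstance(node, dict) or not val:
            return node
        f, t = node["feature"], node["threshold"]
        node["left"] = prune(node["left"], [(x, y) for x, y in val if x[f] <= t])
        node["right"] = prune(node["right"], [(x, y) for x, y in val if x[f] > t])
        labels = [y for _, y in val]
        majority = max(set(labels), key=labels.count)
        # Collapse the subtree to a majority leaf when that does not hurt accuracy.
        return majority if accuracy(majority, val) >= accuracy(node, val) else node

    tree = {"feature": 0, "threshold": 2.0,
            "left": "A",
            "right": {"feature": 1, "threshold": 5.0, "left": "B", "right": "A"}}
    val = [((1.0, 0.0), "A"), ((0.5, 7.0), "A"),
           ((3.0, 2.0), "B"), ((3.0, 3.0), "B"), ((3.0, 9.0), "B")]
    print(prune(tree, val))  # the noisy right subtree collapses to the single leaf "B"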
5

ZANTEMA, HANS and HANS L. BODLAENDER. "SIZES OF ORDERED DECISION TREES". International Journal of Foundations of Computer Science 13, no. 3 (June 2002): 445–58. http://dx.doi.org/10.1142/s0129054102001205.

Abstract
Decision tables provide a natural framework for knowledge acquisition and representation in the area of knowledge based information systems. Decision trees provide a standard method for inductive inference in the area of machine learning. In this paper we show how decision tables can be considered as ordered decision trees: decision trees satisfying an ordering restriction on the nodes. Every decision tree can be represented by an equivalent ordered decision tree, but we show that doing so may exponentially blow up sizes, even if the choice of the order is left free. Our main result states that finding an ordered decision tree of minimal size that represents the same function as a given ordered decision tree is an NP-hard problem; in earlier work we obtained a similar result for unordered decision trees.
6

TOFAN, Cezarina Adina. "Method of decision tree applied in adopting the decision for promoting a company". Annals of "Spiru Haret". Economic Series 15, no. 3 (September 30, 2015): 47. http://dx.doi.org/10.26458/1535.

Abstract
The decision can be defined as the way chosen from several possible ways to achieve an objective. An important role in the functioning of the decisional-informational system is held by the decision-making methods. Decision trees prove to be very useful tools for making financial or numerical decisions, where a large amount of complex information must be considered. They provide an effective structure in which alternative decisions and the implications of their choice can be assessed, and they help to form a correct and balanced view of the risks and rewards that may result from a certain choice. For these reasons, this communication reviews a series of decision-making criteria. It also analyses the benefits of using the decision tree method in the decision-making process by providing a numerical example. On this basis, it can be concluded that the procedure may prove useful in making decisions for companies operating on markets where the intensity of competition is differentiated.
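The paper's own numerical example is not reproduced in the abstract, but the expected-value criterion it reviews can be sketched as follows: each decision alternative is scored by the probability-weighted payoff over its chance branches, and the alternative with the highest expected monetary value is chosen. All alternatives, probabilities, and payoffs below are invented for illustration.

    # Expected monetary value (EMV) of each decision alternative; all
    # figures are hypothetical, not the paper's example.
    alternatives = {
        "promote aggressively": [(0.6, 120_000), (0.4, -40_000)],
        "promote moderately":   [(0.7, 60_000), (0.3, -10_000)],
        "do nothing":           [(1.0, 0)],
    }

    def emv(branches):
        # Probability-weighted payoff over the chance branches of one alternative.
        return sum(p * payoff for p, payoff in branches)

    for name, branches in alternatives.items():
        print(f"{name}: EMV = {emv(branches):,.0f}")
    print("choose:", max(alternatives, key=lambda a: emv(alternatives[a])))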
7

Cockett, J. R. B. "Decision Expression Optimization". Fundamenta Informaticae 10, no. 1 (January 1, 1987): 93–114. http://dx.doi.org/10.3233/fi-1987-10107.

Abstract
A basic concern when using decision trees for the solution of taxonomic or similar problems is their efficiency. Often the information that is required to completely optimize a tree is simply not available. This is especially the case when a criterion based on probabilities is used. It is shown how it is often possible, despite the absence of this information, to improve the design of the tree. The approach is based on algebraic methods for manipulating decision trees and the identification of some particularly desirable forms.
8

Yun, Jooyeol, Jun won Seo and Taeseon Yoon. "Fuzzy Decision Tree". International Journal of Fuzzy Logic Systems 4, no. 3 (July 31, 2014): 7–11. http://dx.doi.org/10.5121/ijfls.2014.4302.
9

Manwani, N. and P. S. Sastry. "Geometric Decision Tree". IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42, no. 1 (February 2012): 181–92. http://dx.doi.org/10.1109/tsmcb.2011.2163392.
10

Zhou, Zhi-Hua and Zhao-Qian Chen. "Hybrid decision tree". Knowledge-Based Systems 15, no. 8 (November 2002): 515–28. http://dx.doi.org/10.1016/s0950-7051(02)00038-2.
11

Koodiaroff, Sally. "Oncology Decision Tree". Collegian 7, no. 3 (January 2000): 34–36. http://dx.doi.org/10.1016/s1322-7696(08)60375-3.
12

Hayes, Karen W. and Becky Wojcik. "Decision Tree Structure". Physical Therapy 69, no. 12 (December 1, 1989): 1120–22. http://dx.doi.org/10.1093/ptj/69.12.1120.
13

Cockett, J. R. B. and J. A. Herrera. "Decision tree reduction". Journal of the ACM 37, no. 4 (October 1990): 815–42. http://dx.doi.org/10.1145/96559.96576.
14

López-Chau, Asdrúbal, Jair Cervantes, Lourdes López-García and Farid García Lamont. "Fisher’s decision tree". Expert Systems with Applications 40, no. 16 (November 2013): 6283–91. http://dx.doi.org/10.1016/j.eswa.2013.05.044.
15

Maazouzi, Faiz and Halima Bahi. "Using multi decision tree technique to improving decision tree classifier". International Journal of Business Intelligence and Data Mining 7, no. 4 (2012): 274. http://dx.doi.org/10.1504/ijbidm.2012.051712.
16

Parlindungan and Hari Supriadi. "Implementation Decision Tree Algorithm for Ecommerce Website". International Journal of Psychosocial Rehabilitation 24, no. 2 (February 13, 2020): 3611–14. http://dx.doi.org/10.37200/ijpr/v24i2/pr200682.
17

Li, Jiawei, Yiming Li, Xingchun Xiang, Shu-Tao Xia, Siyi Dong and Yun Cai. "TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation". Entropy 22, no. 11 (October 24, 2020): 1203. http://dx.doi.org/10.3390/e22111203.

Abstract
Deep Neural Networks (DNNs) usually work in an end-to-end manner. This makes the trained DNNs easy to use, but their decision process remains opaque for every test case. Unfortunately, the interpretability of decisions is crucial in some scenarios, such as medical or financial data mining and decision-making. In this paper, we propose a Tree-Network-Tree (TNT) learning framework for explainable decision-making, where knowledge is alternately transferred between the tree model and DNNs. Specifically, the proposed TNT learning framework exerts the advantages of different models at different stages: (1) a novel James–Stein Decision Tree (JSDT) is proposed to generate better knowledge representations for DNNs, especially when the input data are low-frequency or low-quality; (2) the DNNs output high-performing prediction results from the knowledge-embedding inputs and behave as a teacher model for the following tree model; and (3) a novel distillable Gradient Boosted Decision Tree (dGBDT) is proposed to learn interpretable trees from the soft labels and make predictions comparable to those of DNNs. Extensive experiments on various machine learning tasks demonstrated the effectiveness of the proposed method.
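A heavily reduced sketch of the step in which a tree learns from a teacher's soft labels (step (3) of the framework): the teacher's predicted class probabilities become multi-output regression targets for a tree. LogisticRegression and DecisionTreeRegressor here are assumed stand-ins for the paper's DNN teacher and distillable GBDT student, not the authors' models.

    # A tree distilled from a teacher's soft labels; LogisticRegression and
    # DecisionTreeRegressor are stand-ins for the paper's DNN and dGBDT.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    teacher = LogisticRegression(max_iter=1000).fit(X, y)
    soft = teacher.predict_proba(X)                 # soft labels from the teacher

    student = DecisionTreeRegressor(max_depth=4).fit(X, soft)
    hard = student.predict(X).argmax(axis=1)        # back to hard class decisions
    print("agreement with teacher:", (hard == teacher.predict(X)).mean())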
18

Rautenberg, Tamlyn, Annette Gerritsen and Martin Downes. "Health Economic Decision Tree Models of Diagnostics for Dummies: A Pictorial Primer". Diagnostics 10, no. 3 (March 14, 2020): 158. http://dx.doi.org/10.3390/diagnostics10030158.

Abstract
Health economics is a discipline of economics applied to health care. One method used in health economics is decision tree modelling, which extrapolates the cost and effectiveness of competing interventions over time. Such decision tree models are the basis of reimbursement decisions in countries using health technology assessment for decision making. In many instances, these competing interventions are diagnostic technologies. Despite a wealth of excellent resources describing the decision analysis of diagnostics, two critical errors persist: not including diagnostic test accuracy in the structure of decision trees and treating sequential diagnostics as independent. These errors have consequences for the accuracy of model results, and thereby impact on decision making. This paper sets out to overcome these errors using color to link fundamental epidemiological calculations to decision tree models in a visually and intuitively appealing pictorial format. The paper is a must-read for modelers developing decision trees in the area of diagnostics for the first time and decision makers reviewing diagnostic reimbursement models.
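The fundamental epidemiological calculation that the primer links to tree structure can be sketched like this: once diagnostic test accuracy is in the model, the four terminal-branch probabilities follow from prevalence, sensitivity, and specificity. The numbers and downstream costs below are hypothetical, chosen only to show the arithmetic.

    # Terminal-branch probabilities of a diagnostic decision tree, derived
    # from prevalence, sensitivity, and specificity; costs are hypothetical.
    prevalence, sensitivity, specificity = 0.10, 0.90, 0.80

    p_tp = prevalence * sensitivity              # diseased and detected
    p_fn = prevalence * (1 - sensitivity)        # diseased but missed
    p_fp = (1 - prevalence) * (1 - specificity)  # healthy, false alarm
    p_tn = (1 - prevalence) * specificity        # healthy, correctly cleared
    assert abs(p_tp + p_fn + p_fp + p_tn - 1.0) < 1e-9

    cost = {"tp": 1_000, "fn": 5_000, "fp": 800, "tn": 100}  # invented costs
    expected = (p_tp * cost["tp"] + p_fn * cost["fn"]
                + p_fp * cost["fp"] + p_tn * cost["tn"])
    print(f"expected cost per patient: {expected:.2f}")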
19

ZANTEMA, HANS and HANS L. BODLAENDER. "FINDING SMALL EQUIVALENT DECISION TREES IS HARD". International Journal of Foundations of Computer Science 11, no. 2 (June 2000): 343–54. http://dx.doi.org/10.1142/s0129054100000193.

Abstract
Two decision trees are called decision equivalent if they represent the same function, i.e., they yield the same result for every possible input. We prove that given a decision tree and a number, to decide if there is a decision equivalent decision tree of size at most that number is NP-complete. As a consequence, finding a decision tree of minimal size that is decision equivalent to a given decision tree is an NP-hard problem. This result differs from the well-known result of NP-hardness of finding a decision tree of minimal size that is consistent with a given training set. Instead our result is a basic result for decision trees, apart from the setting of inductive inference. On the other hand, this result differs from similar results for BDDs and OBDDs: since in decision trees no sharing is allowed, the notion of decision tree size is essentially different from BDD size.
20

Hernández, Víctor Adrián Sosa, Raúl Monroy, Miguel Angel Medina-Pérez, Octavio Loyola-González and Francisco Herrera. "A Practical Tutorial for Decision Tree Induction". ACM Computing Surveys 54, no. 1 (April 2021): 1–38. http://dx.doi.org/10.1145/3429739.

Abstract
Experts from different domains have resorted to machine learning techniques to produce explainable models that support decision-making. Among existing techniques, decision trees have been useful in many application domains for classification. Decision trees can make decisions in a language that is closer to that of the experts. Many researchers have attempted to create better decision tree models by improving the components of the induction algorithm. One of the main components that have been studied and improved is the evaluation measure for candidate splits. In this article, we introduce a tutorial that explains decision tree induction. Then, we present an experimental framework to assess the performance of 21 evaluation measures that produce different C4.5 variants considering 110 databases, two performance measures, and 10×10-fold cross-validation. Furthermore, we compare and rank the evaluation measures by using a Bayesian statistical analysis. From our experimental results, we present the first two performance rankings in the literature of C4.5 variants. Moreover, we organize the evaluation measures into two groups according to their performance. Finally, we introduce meta-models that automatically determine the group of evaluation measures to produce a C4.5 variant for a new database and some further opportunities for decision tree models.
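As a concrete reference point for the kind of evaluation measure being varied, here is a minimal sketch of C4.5's default measure, the gain ratio, for one categorical attribute. The toy records are invented; this is not the authors' experimental framework.

    from collections import Counter
    from math import log2

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * log2(c / n) for c in Counter(labels).values())

    def gain_ratio(rows, attr, label):
        # Information gain of splitting on attr, normalized by the split information.
        base = entropy([r[label] for r in rows])
        parts = {}
        for r in rows:
            parts.setdefault(r[attr], []).append(r[label])
        n = len(rows)
        info = sum(len(p) / n * entropy(p) for p in parts.values())
        split_info = -sum(len(p) / n * log2(len(p) / n) for p in parts.values())
        return (base - info) / split_info if split_info else 0.0

    rows = [{"outlook": "sunny", "play": "no"}, {"outlook": "sunny", "play": "no"},
            {"outlook": "rain", "play": "yes"}, {"outlook": "overcast", "play": "yes"}]
    print(gain_ratio(rows, "outlook", "play"))  # 0.666..., a pure but three-way split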
21

Yang, Bin-Bin, Song-Qing Shen and Wei Gao. "Weighted Oblique Decision Trees". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5621–27. http://dx.doi.org/10.1609/aaai.v33i01.33015621.

Abstract
Decision trees have attracted much attention during the past decades. Previous decision trees include axis-parallel and oblique decision trees; both of them try to find the best splits via exhaustive search or heuristic algorithms in each iteration. Oblique decision trees generally simplify tree structure and take better performance, but are always accompanied with higher computation, as well as the initialization with the best axis-parallel splits. This work presents the Weighted Oblique Decision Tree (WODT) based on continuous optimization with random initialization. We consider different weights of each instance for child nodes at all internal nodes, and then obtain a split by optimizing the continuous and differentiable objective function of weighted information entropy. Extensive experiments show the effectiveness of the proposed algorithm.
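A toy rendering of the general shape the abstract describes: every instance is soft-assigned to both children through a logistic function of an oblique hyperplane, and the split is scored by a weighted entropy. A crude random search stands in for the paper's continuous optimization, so this is only an illustration of the objective, not the WODT algorithm itself.

    import numpy as np

    def weighted_entropy(w, y):
        # Entropy of the class distribution under instance weights w.
        total = w.sum()
        if total == 0:
            return 0.0
        ps = [w[y == c].sum() / total for c in np.unique(y)]
        return -sum(p * np.log2(p) for p in ps if p > 0)

    def objective(v, X, y):
        # Weighted information entropy of the oblique split v[:-1]·x + v[-1].
        left = 1.0 / (1.0 + np.exp(X @ v[:-1] + v[-1]))  # soft membership in left child
        right = 1.0 - left
        n = len(y)
        return (left.sum() / n * weighted_entropy(left, y)
                + right.sum() / n * weighted_entropy(right, y))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)              # separable by an oblique line
    start = rng.normal(size=3)
    best = min((rng.normal(size=3) for _ in range(300)), key=lambda v: objective(v, X, y))
    print(objective(start, X, y), "->", objective(best, X, y))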
22

Chang, Namsik and Olivia R. Liu Sheng. "Decision-Tree-Based Knowledge Discovery: Single- vs. Multi-Decision-Tree Induction". INFORMS Journal on Computing 20, no. 1 (February 2008): 46–54. http://dx.doi.org/10.1287/ijoc.1060.0215.
23

Hajjej, Fahima, Manal Abdullah Alohali, Malek Badr and Md Adnan Rahman. "A Comparison of Decision Tree Algorithms in the Assessment of Biomedical Data". BioMed Research International 2022 (July 7, 2022): 1–9. http://dx.doi.org/10.1155/2022/9449497.

Abstract
By comparing the performance of various tree algorithms, we can determine which one is most useful for analyzing biomedical data. In artificial intelligence, decision trees are a classification model known for their visual aid in making decisions. WEKA software is used to evaluate biological data from real patients and to see how well the decision tree classification algorithm performs. Another goal of this comparison is to assess whether or not decision trees can serve as an effective tool for medical diagnosis in general. In doing so, we will be able to see which algorithms are the most efficient and appropriate to use when delving into this data, and arrive at an informed decision.
24

Cai, Yuliang, Huaguang Zhang, Qiang He and Shaoxin Sun. "New classification technique: fuzzy oblique decision tree". Transactions of the Institute of Measurement and Control 41, no. 8 (June 11, 2018): 2185–95. http://dx.doi.org/10.1177/0142331218774614.

Abstract
Based on axiomatic fuzzy set (AFS) theory and fuzzy information entropy, a novel fuzzy oblique decision tree (FODT) algorithm is proposed in this paper. Traditional axis-parallel decision trees only consider a single feature at each non-leaf node, while oblique decision trees partition the feature space with an oblique hyperplane. By contrast, the FODT uses dynamically mined fuzzy rules as its decision function. The main idea of the FODT is to use these fuzzy rules to construct leaf nodes for each class in each layer of the tree; the samples that cannot be covered by the fuzzy rules are then put into an additional node, the only non-leaf node in this layer. Construction of the FODT consists of four major steps: (a) automatic generation of fuzzy membership functions by AFS theory according to the raw data distribution; (b) dynamic extraction of fuzzy rules at each non-leaf node by the fuzzy rule extraction algorithm (FREA); (c) construction of the FODT from the fuzzy rules obtained in step (b); and (d) determination of the optimal threshold [Formula: see text] to generate the final tree. Compared with five traditional decision trees (C4.5, LADtree (LAD), Best-first tree (BFT), SimpleCart (SC) and NBTree (NBT)) and a recently proposed fuzzy rules decision tree (FRDT) on eight UCI machine learning data sets and one biomedical data set (ALLAML), the experimental results demonstrate that the proposed algorithm outperforms the other decision trees in both classification accuracy and tree size.
25

Luna, José Marcio, Efstathios D. Gennatas, Lyle H. Ungar, Eric Eaton, Eric S. Diffenderfer, Shane T. Jensen, Charles B. Simone, Jerome H. Friedman, Timothy D. Solberg and Gilmer Valdes. "Building more accurate decision trees with the additive tree". Proceedings of the National Academy of Sciences 116, no. 40 (September 16, 2019): 19887–93. http://dx.doi.org/10.1073/pnas.1816748116.

Abstract
The expansion of machine learning to high-stakes application domains such as medicine, finance, and criminal justice, where making informed decisions requires clear understanding of the model, has increased the interest in interpretable machine learning. The widely used Classification and Regression Trees (CART) have played a major role in health sciences, due to their simple and intuitive explanation of predictions. Ensemble methods like gradient boosting can improve the accuracy of decision trees, but at the expense of the interpretability of the generated model. Additive models, such as those produced by gradient boosting, and full interaction models, such as CART, have been investigated largely in isolation. We show that these models exist along a spectrum, revealing previously unseen connections between these approaches. This paper introduces a rigorous formalization for the additive tree, an empirically validated learning technique for creating a single decision tree, and shows that this method can produce models equivalent to CART or gradient boosted stumps at the extremes by varying a single parameter. Although the additive tree is designed primarily to provide both the model interpretability and predictive performance needed for high-stakes applications like medicine, it also can produce decision trees represented by hybrid models between CART and boosted stumps that can outperform either of these approaches.
26

Jiang, Daniel R., Lina Al-Kanj and Warren B. Powell. "Optimistic Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds". Operations Research 68, no. 6 (November 2020): 1678–97. http://dx.doi.org/10.1287/opre.2019.1939.

Abstract
In the paper, “Optimistic Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds,” the authors propose an extension to Monte Carlo tree search that uses the idea of “sampling the future” to produce noisy upper bounds on nodes in the decision tree. These upper bounds can help guide the tree expansion process and produce decision trees that are deeper rather than wider, in effect concentrating computation toward more useful parts of the state space. The algorithm’s effectiveness is illustrated in a ride-sharing setting, where a driver/vehicle needs to make dynamic decisions regarding trip acceptance and relocations.
27

Wei, Wei, Mingwei Hui, Beibei Zhang, Rafal Scherer and Robertas Damaševičius. "Research on Decision Tree Based on Rough Set". 網際網路技術學刊 22, no. 6 (November 2021): 1385–94. http://dx.doi.org/10.53106/160792642021112206015.
28

Brunello, Andrea, Enrico Marzano, Angelo Montanari and Guido Sciavicco. "Decision Tree Pruning via Multi-Objective Evolutionary Computation". International Journal of Machine Learning and Computing 7, no. 6 (December 2017): 167–75. http://dx.doi.org/10.18178/ijmlc.2017.7.6.641.
29

Murphy, P. M. and M. J. Pazzani. "Exploring the Decision Forest: An Empirical Investigation of Occam's Razor in Decision Tree Induction". Journal of Artificial Intelligence Research 1 (March 1, 1994): 257–75. http://dx.doi.org/10.1613/jair.41.

Abstract
We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. In particular, we investigated the relationship between the size of a decision tree consistent with some training data and the accuracy of the tree on test data. The experiments were performed on a massively parallel Maspar computer. The results of the experiments on several artificial and two real world problems indicate that, for many of the problems investigated, smaller consistent decision trees are on average less accurate than the average accuracy of slightly larger trees.
30

SudarsanaReddy, C., V. Vasu and B. Kumara Swamy Achari. "Effective Decision Tree Learning". International Journal of Computer Applications 82, no. 9 (November 15, 2013): 1–6. http://dx.doi.org/10.5120/14141-7690.
31

Shazadi, Komal. "Decision Tree in Biology". European Journal of Biology 6, no. 1 (January 7, 2021): 1–15. http://dx.doi.org/10.47672/ejb.642.

Abstract
Purpose: Human biology is an essential field of scientific research, as it helps in understanding the human body for adequate care. Technology has improved the way scientists do their biological research. One of the critical technologies is artificial intelligence (AI), which is revolutionizing the world. Scientists have applied AI in biological studies, using several methods to gain different types of data. Machine learning is a branch of artificial intelligence that helps computers learn from data and create predictions without being explicitly programmed. Methodology: One critical methodology in machine learning is the tree-based decision model. It is extensively used in biological research, as it helps in classifying complex data into simple and easy-to-interpret graphs. This paper aims to give a beginner-friendly view of the tree-based model, analyzing its use and advantages over other methods. Findings: Artificial intelligence has greatly improved the collection, analysis, and prediction of biological and medical information. Machine learning, a subgroup of artificial intelligence, is useful in creating prediction models, which help a wide range of fields, including computational and systems biology. Contributions and future recommendations are also discussed in this study.
32

PURDILA, V. and S. G. PENTIUC. "Fast Decision Tree Algorithm". Advances in Electrical and Computer Engineering 14, no. 1 (2014): 65–68. http://dx.doi.org/10.4316/aece.2014.01010.
33

Quinlan, J. R. "Learning decision tree classifiers". ACM Computing Surveys 28, no. 1 (March 1996): 71–72. http://dx.doi.org/10.1145/234313.234346.
34

Sok, Hong Kuan, Melanie Po-Leen Ooi and Ye Chow Kuang. "Sparse alternating decision tree". Pattern Recognition Letters 60-61 (August 2015): 57–64. http://dx.doi.org/10.1016/j.patrec.2015.03.002.
35

Meadows, S., K. Baker and J. Butler. "The Incident Decision Tree". Clinical Risk 11, no. 2 (March 1, 2005): 66–68. http://dx.doi.org/10.1258/1356262053429732.
36

Haimes, Yacov Y., Duan Li and Vijay Tulsiani. "Multiobjective Decision-Tree Analysis". Risk Analysis 10, no. 1 (March 1990): 111–27. http://dx.doi.org/10.1111/j.1539-6924.1990.tb01026.x.
37

Lu, Songfeng and Samuel L. Braunstein. "Quantum decision tree classifier". Quantum Information Processing 13, no. 3 (November 19, 2013): 757–70. http://dx.doi.org/10.1007/s11128-013-0687-5.
38

Xia, Fen, Wensheng Zhang, Fuxin Li and Yanwu Yang. "Ranking with decision tree". Knowledge and Information Systems 17, no. 3 (January 8, 2008): 381–95. http://dx.doi.org/10.1007/s10115-007-0118-y.
39

McTavish, Hayden, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques Chen, Cynthia Rudin and Margo Seltzer. "Fast Sparse Decision Tree Optimization via Reference Ensembles". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9604–13. http://dx.doi.org/10.1609/aaai.v36i9.21194.

Abstract
Sparse decision tree optimization has been one of the most fundamental problems in AI since its inception and is a challenge at the core of interpretable machine learning. Sparse decision tree optimization is computationally hard, and despite steady effort since the 1960s, breakthroughs have been made on the problem only within the past few years, primarily on the problem of finding optimal sparse decision trees. However, current state-of-the-art algorithms often require impractical amounts of computation time and memory to find optimal or near-optimal trees for some real-world datasets, particularly those having several continuous-valued features. Given that the search spaces of these decision tree optimization problems are massive, can we practically hope to find a sparse decision tree that competes in accuracy with a black box machine learning model? We address this problem via smart guessing strategies that can be applied to any optimal branch-and-bound-based decision tree algorithm. The guesses come from knowledge gleaned from black box models. We show that by using these guesses, we can reduce the run time by multiple orders of magnitude while providing bounds on how far the resulting trees can deviate from the black box's accuracy and expressive power. Our approach enables guesses about how to bin continuous features, the size of the tree, and lower bounds on the error for the optimal decision tree. Our experiments show that in many cases we can rapidly construct sparse decision trees that match the accuracy of black box models. To summarize: when you are having trouble optimizing, just guess.
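One of the guessing strategies, restricting candidate split thresholds for continuous features to those a black-box reference ensemble actually uses, can be sketched as follows. GradientBoostingClassifier is an assumed stand-in for the reference model; this is not the authors' released implementation.

    from collections import defaultdict
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = load_breast_cancer(return_X_y=True)
    ref = GradientBoostingClassifier(n_estimators=20, max_depth=2).fit(X, y)

    # Keep only thresholds the reference ensemble actually split on, per feature.
    candidates = defaultdict(set)
    for stage in ref.estimators_:       # each boosting stage holds one regression tree
        t = stage[0].tree_
        for feat, thr in zip(t.feature, t.threshold):
            if feat >= 0:               # negative feature ids mark leaf nodes
                candidates[feat].add(float(thr))

    total = sum(len(v) for v in candidates.values())
    print(f"{total} candidate thresholds over {len(candidates)} of {X.shape[1]} features")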
40

LAST, MARK, ODED MAIMON and EINAT MINKOV. "IMPROVING STABILITY OF DECISION TREES". International Journal of Pattern Recognition and Artificial Intelligence 16, no. 2 (March 2002): 145–59. http://dx.doi.org/10.1142/s0218001402001599.

Abstract
Decision-tree algorithms are known to be unstable: small variations in the training set can result in different trees and different predictions for the same validation examples. Both accuracy and stability can be improved by learning multiple models from bootstrap samples of training data, but the "meta-learner" approach makes the extracted knowledge hardly interpretable. In the following paper, we present the Info-Fuzzy Network (IFN), a novel information-theoretic method for building stable and comprehensible decision-tree models. The stability of the IFN algorithm is ensured by restricting the tree structure to using the same feature for all nodes of the same tree level and by the built-in statistical significance tests. The IFN method is shown empirically to produce more compact and stable models than the "meta-learner" techniques, while preserving a reasonable level of predictive accuracy.
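The structural restriction that keeps IFN models compact, every node on a given tree level testing the same feature, can be illustrated with an oblivious-tree-style toy: features are ordered once by information gain and each level of the tree uses the next feature in that order. The IFN's information-theoretic network construction and built-in significance tests are omitted from this sketch, and the records are invented.

    from collections import Counter
    from itertools import product
    from math import log2

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * log2(c / n) for c in Counter(labels).values())

    def gain(rows, f, labels):
        groups = {}
        for r, y in zip(rows, labels):
            groups.setdefault(r[f], []).append(y)
        n = len(labels)
        return entropy(labels) - sum(len(g) / n * entropy(g) for g in groups.values())

    rows = [{"a": 0, "b": 1}, {"a": 1, "b": 1}, {"a": 0, "b": 0}, {"a": 1, "b": 0}]
    labels = ["n", "y", "n", "y"]

    # One feature per level, chosen once for the whole level (IFN-style restriction).
    order = sorted(rows[0], key=lambda f: gain(rows, f, labels), reverse=True)
    for path in product([0, 1], repeat=len(order)):     # every root-to-leaf path
        subset = [y for r, y in zip(rows, labels)
                  if all(r[f] == v for f, v in zip(order, path))]
        if subset:
            print(dict(zip(order, path)), "->", Counter(subset).most_common(1)[0][0])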
41

C, Kishor Kumar Reddy and Vijaya Babu. "A Survey on Issues of Decision Tree and Non-Decision Tree Algorithms". International Journal of Artificial Intelligence and Applications for Smart Devices 4, no. 1 (May 31, 2016): 9–32. http://dx.doi.org/10.14257/ijaiasd.2016.4.1.02.
42

Kim, Soo Y. and Arun Upneja. "Predicting restaurant financial distress using decision tree and AdaBoosted decision tree models". Economic Modelling 36 (January 2014): 354–62. http://dx.doi.org/10.1016/j.econmod.2013.10.005.
43

De la Cruz-García, Jazmín S., Juan Bory-Reyes and Aldo Ramirez-Arellano. "A Two-Parameter Fractional Tsallis Decision Tree". Entropy 24, no. 5 (April 19, 2022): 572. http://dx.doi.org/10.3390/e24050572.

Abstract
Decision trees are decision support data mining tools that create, as the name suggests, a tree-like model. The classical C4.5 decision tree, based on the Shannon entropy, is a simple algorithm to calculate the gain ratio and then split the attributes based on this entropy measure. Tsallis and Renyi entropies (instead of Shannon) can be employed to generate a decision tree with better results. In practice, the entropic index parameter of these entropies is tuned to outperform the classical decision trees. However, this process is carried out by testing a range of values for a given database, which is time-consuming and unfeasible for massive data. This paper introduces a decision tree based on a two-parameter fractional Tsallis entropy. We propose a constructionist approach to the representation of databases as complex networks that enables efficient computation of the parameters of this entropy using the box-covering algorithm and renormalization of the complex network. The experimental results support the conclusion that the two-parameter fractional Tsallis entropy is a more sensitive measure than the parametric Renyi, Tsallis, and Gini index precedents for a decision tree classifier.
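The abstract's starting point, replacing Shannon entropy with Tsallis entropy in the split criterion, looks like this in miniature. The paper's two-parameter fractional generalization and the box-covering computation on complex networks are beyond this sketch; q = 1.5 is an arbitrary illustrative choice.

    from collections import Counter

    def tsallis(labels, q=1.5):
        # S_q = (1 - sum_i p_i**q) / (q - 1); recovers Shannon entropy as q -> 1.
        n = len(labels)
        return (1 - sum((c / n) ** q for c in Counter(labels).values())) / (q - 1)

    def tsallis_gain(parent, children, q=1.5):
        # Drop-in replacement for Shannon information gain in a tree inducer.
        n = len(parent)
        return tsallis(parent, q) - sum(len(c) / n * tsallis(c, q) for c in children)

    parent = ["a"] * 5 + ["b"] * 5
    print(tsallis_gain(parent, [["a"] * 5, ["b"] * 5]))  # perfect split: gain = S_q(parent)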
44

Scott, Jessie and David Betters. "Economic Analysis of Urban Tree Replacement Decisions". Arboriculture & Urban Forestry 26, no. 2 (March 1, 2000): 69–77. http://dx.doi.org/10.48044/jauf.2000.008.

Abstract
Urban forest managers often are required to make decisions about whether to retain or replace an existing tree. In part, this decision relies on an economic analysis of the benefits and costs of the alternatives. This paper presents an economic methodology that helps address the tree replacement problem. The procedures apply to analyzing the benefits and costs of existing trees as well as future replacement trees. A case study involving a diseased American elm (Ulmus americana) is used to illustrate an application of the methodology. The procedures should prove useful in developing economic guides for tree replacement/retention decisions.
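The retain-versus-replace comparison reduces to discounted benefits and costs of the two alternatives. A minimal sketch, with every cash flow, the 4% discount rate, and the horizons invented for illustration:

    def npv(flows, rate=0.04):
        # Net present value of (year, amount) cash flows at the given discount rate.
        return sum(amount / (1 + rate) ** year for year, amount in flows)

    # Retain: declining benefits and steady care costs for an ailing tree.
    retain = [(t, 300 - 15 * t) for t in range(10)] + [(t, -80) for t in range(10)]
    # Replace: removal and planting cost now, small but growing benefits later.
    replace = [(0, -900)] + [(t, 25 * t) for t in range(1, 30)]

    print(f"retain : {npv(retain):8.0f}")
    print(f"replace: {npv(replace):8.0f}")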
45

Nock, Richard and Pascal Jappy. "Decision tree based induction of decision lists". Intelligent Data Analysis 3, no. 3 (May 1, 1999): 227–40. http://dx.doi.org/10.3233/ida-1999-3306.
46

Nock, R. "Decision tree based induction of decision lists". Intelligent Data Analysis 3, no. 3 (September 1999): 227–40. http://dx.doi.org/10.1016/s1088-467x(99)00020-7.
47

Lootsma, Freerk A. "Multicriteria decision analysis in a decision tree". European Journal of Operational Research 101, no. 3 (September 1997): 442–51. http://dx.doi.org/10.1016/s0377-2217(96)00208-1.
48

许, 美玲. "Decision Tree Analysis for Inconsistent Decision Tables". Computer Science and Application 6, no. 10 (2016): 597–606. http://dx.doi.org/10.12677/csa.2016.610074.
49

Okada, Hugo Kenji Rodrigues, Andre Ricardo Nascimento das Neves and Ricardo Shitsuka. "Analysis of Decision Tree Induction Algorithms". Research, Society and Development 8, no. 11 (August 24, 2019): e298111473. http://dx.doi.org/10.33448/rsd-v8i11.1473.

Abstract
Decision trees are data structures or computational methods that enable nonparametric supervised machine learning and are used in classification and regression tasks. The aim of this paper is to present a comparison between the decision tree induction algorithms C4.5 and CART. A quantitative study is performed in which the two methods are compared by analyzing the following aspects: operation and complexity. The experiments showed practically equal hit percentages; in execution time for tree induction, however, the CART algorithm was approximately 46.24% slower than C4.5, which was therefore considered to be more effective.
50

RAHMANI, MOHSEN, SATTAR HASHEMI, ALI HAMZEH and ASHKAN SAMI. "AGENT BASED DECISION TREE LEARNING: A NOVEL APPROACH". International Journal of Software Engineering and Knowledge Engineering 19, no. 7 (November 2009): 1015–22. http://dx.doi.org/10.1142/s0218194009004477.

Abstract
Decision trees are one of the most effective and widely used induction methods and have received a great deal of attention over the past twenty years. When decision tree induction algorithms are used with uncertain rather than deterministic data, the result is a complete tree, which can classify most of the unseen samples correctly. This tree is then pruned in order to reduce its classification error and over-fitting. Recently, multi-agent researchers have concentrated on learning from large databases. In this paper we present a novel multi-agent learning method that is able to induce a decision tree from distributed training sets. Our method is based on the combination of separate decision trees, each provided by one agent. An additional agent aggregates the results of the other agents and induces the final tree. Our empirical results suggest that the proposed method can provide significant benefits to distributed data classification.