Journal articles on the topic "Decision tree"

To see the other types of publications on this topic, follow the link: Decision tree.

Create a source citation in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 journal articles for your research on the topic "Decision tree".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the source metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1. TOFAN, Cezarina Adina. "Optimization Techniques of Decision Making - Decision Tree." Advances in Social Sciences Research Journal 1, no. 5 (September 30, 2014): 142–48. http://dx.doi.org/10.14738/assrj.15.437.

2. Naylor, Mike. "Decision Tree." Mathematics Teacher: Learning and Teaching PK-12 113, no. 7 (July 2020): 612. http://dx.doi.org/10.5951/mtlt.2020.0081.

3. Oo, Aung Nway, and Thin Naing. "Decision Tree Models for Medical Diagnosis." International Journal of Trend in Scientific Research and Development 3, no. 3 (April 30, 2019): 1697–99. http://dx.doi.org/10.31142/ijtsrd23510.

4. BRESLOW, LEONARD A., and DAVID W. AHA. "Simplifying decision trees: A survey." Knowledge Engineering Review 12, no. 1 (January 1997): 1–40. http://dx.doi.org/10.1017/s0269888997000015.

Abstract:
Induced decision trees are an extensively-researched solution to classification tasks. For many practical tasks, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity. Although many tree induction algorithms have been shown to produce simpler, more comprehensible trees (or data structures derived from trees) with good classification accuracy, tree simplification has usually been of secondary concern relative to accuracy, and no attempt has been made to survey the literature from the perspective of simplification. We present a framework that organizes the approaches to tree simplification and summarize and critique the approaches within this framework. The purpose of this survey is to provide researchers and practitioners with a concise overview of tree-simplification approaches and insight into their relative capabilities. In our final discussion, we briefly describe some empirical findings and discuss the application of tree induction algorithms to case retrieval in case-based reasoning systems.
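The survey predates today's libraries, but one family of simplification it covers, post-pruning via a cost-complexity trade-off, is easy to demonstrate with scikit-learn's stock API. A minimal sketch (dataset and parameters chosen only for illustration):

```python
# Illustrative only: cost-complexity pruning, one representative
# tree-simplification technique, using scikit-learn's built-in API.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Candidate pruning strengths computed from the fully grown tree.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)

for alpha in path.ccp_alphas[::10]:  # sample a few pruning levels
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_tr, y_tr)
    print(f"alpha={alpha:.5f}  leaves={tree.get_n_leaves()}  "
          f"test acc={tree.score(X_te, y_te):.3f}")
```

Larger alphas trade a few points of accuracy for a much smaller, more comprehensible tree, which is exactly the tension the survey organizes.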
5. ZANTEMA, HANS, and HANS L. BODLAENDER. "SIZES OF ORDERED DECISION TREES." International Journal of Foundations of Computer Science 13, no. 3 (June 2002): 445–58. http://dx.doi.org/10.1142/s0129054102001205.

Abstract:
Decision tables provide a natural framework for knowledge acquisition and representation in the area of knowledge based information systems. Decision trees provide a standard method for inductive inference in the area of machine learning. In this paper we show how decision tables can be considered as ordered decision trees: decision trees satisfying an ordering restriction on the nodes. Every decision tree can be represented by an equivalent ordered decision tree, but we show that doing so may exponentially blow up sizes, even if the choice of the order is left free. Our main result states that finding an ordered decision tree of minimal size that represents the same function as a given ordered decision tree is an NP-hard problem; in earlier work we obtained a similar result for unordered decision trees.
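A minimal sketch of the ordering restriction the paper studies, under an encoding of my own choosing (not the paper's notation): a tree is either a leaf value or a (variable, low, high) tuple, and "ordered" means variable indices strictly increase along every root-to-leaf path.

```python
# A tree is a leaf value or (var_index, low_subtree, high_subtree).
def is_ordered(tree, last_var=-1):
    if not isinstance(tree, tuple):      # leaf
        return True
    var, lo, hi = tree
    return (var > last_var
            and is_ordered(lo, var)
            and is_ordered(hi, var))

# x0 tested above x1 everywhere: respects the fixed order 0 < 1.
t1 = (0, (1, 0, 1), 1)
# x1 tested above x0 on one branch: violates the order.
t2 = (1, (0, 0, 1), 0)

print(is_ordered(t1))  # True
print(is_ordered(t2))  # False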
6. TOFAN, Cezarina Adina. "Method of decision tree applied in adopting the decision for promoting a company." Annals of "Spiru Haret". Economic Series 15, no. 3 (September 30, 2015): 47. http://dx.doi.org/10.26458/1535.

Abstract:
A decision can be defined as the course of action chosen from several possible alternatives to achieve an objective. Decision-making methods play an important role in the functioning of the decisional-informational system. Decision trees prove to be very useful tools for making financial or quantitative decisions, where a large amount of complex information must be considered. They provide an effective structure in which alternative decisions and the implications of their choice can be assessed, and help to form a correct and balanced view of the risks and rewards that may result from a certain choice. For these reasons, this communication reviews a series of decision-making criteria and analyses the benefits of using the decision tree method in the decision-making process by means of a numerical example. On this basis, it can be concluded that the procedure may prove useful for companies operating on markets where the intensity of competition is differentiated.
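A worked example in the spirit of the article's numerical illustration, with hypothetical figures of my own (not the article's data): the expected-monetary-value criterion applied to two promotion strategies.

```python
# Hypothetical numbers (not from the article): expected monetary value (EMV)
# of two promotion strategies, each with probabilistic market responses.
strategies = {
    "aggressive campaign": [(0.6, 120_000), (0.4, -30_000)],  # (prob, payoff)
    "modest campaign":     [(0.7,  50_000), (0.3,  10_000)],
}

for name, outcomes in strategies.items():
    emv = sum(p * payoff for p, payoff in outcomes)
    print(f"{name}: EMV = {emv:,.0f}")

# The EMV criterion picks the branch with the highest expected payoff
# (60,000 vs 38,000 here); a risk-averse criterion such as maximin
# could select differently, which is why several criteria are reviewed.
```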
7. Cockett, J. R. B. "Decision Expression Optimization." Fundamenta Informaticae 10, no. 1 (January 1, 1987): 93–114. http://dx.doi.org/10.3233/fi-1987-10107.

Abstract:
A basic concern when using decision trees for the solution of taxonomic or similar problems is their efficiency. Often the information that is required to completely optimize a tree is simply not available. This is especially the case when a criterion based on probabilities is used. It is shown how it is often possible, despite the absence of this information, to improve the design of the tree. The approach is based on algebraic methods for manipulating decision trees and the identification of some particularly desirable forms.
8. Yun, Jooyeol, Jun won Seo, and Taeseon Yoon. "Fuzzy Decision Tree." International Journal of Fuzzy Logic Systems 4, no. 3 (July 31, 2014): 7–11. http://dx.doi.org/10.5121/ijfls.2014.4302.

9. Manwani, N., and P. S. Sastry. "Geometric Decision Tree." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42, no. 1 (February 2012): 181–92. http://dx.doi.org/10.1109/tsmcb.2011.2163392.

10. Zhou, Zhi-Hua, and Zhao-Qian Chen. "Hybrid decision tree." Knowledge-Based Systems 15, no. 8 (November 2002): 515–28. http://dx.doi.org/10.1016/s0950-7051(02)00038-2.

11. Koodiaroff, Sally. "Oncology Decision Tree." Collegian 7, no. 3 (January 2000): 34–36. http://dx.doi.org/10.1016/s1322-7696(08)60375-3.

12. Hayes, Karen W., and Becky Wojcik. "Decision Tree Structure." Physical Therapy 69, no. 12 (December 1, 1989): 1120–22. http://dx.doi.org/10.1093/ptj/69.12.1120.

13. Cockett, J. R. B., and J. A. Herrera. "Decision tree reduction." Journal of the ACM 37, no. 4 (October 1990): 815–42. http://dx.doi.org/10.1145/96559.96576.

14. López-Chau, Asdrúbal, Jair Cervantes, Lourdes López-García, and Farid García Lamont. "Fisher's decision tree." Expert Systems with Applications 40, no. 16 (November 2013): 6283–91. http://dx.doi.org/10.1016/j.eswa.2013.05.044.

15. Maazouzi, Faiz, and Halima Bahi. "Using multi decision tree technique to improving decision tree classifier." International Journal of Business Intelligence and Data Mining 7, no. 4 (2012): 274. http://dx.doi.org/10.1504/ijbidm.2012.051712.

16. Parlindungan and Hari Supriadi. "Implementation Decision Tree Algorithm for Ecommerce Website." International Journal of Psychosocial Rehabilitation 24, no. 2 (February 13, 2020): 3611–14. http://dx.doi.org/10.37200/ijpr/v24i2/pr200682.

17. Li, Jiawei, Yiming Li, Xingchun Xiang, Shu-Tao Xia, Siyi Dong, and Yun Cai. "TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation." Entropy 22, no. 11 (October 24, 2020): 1203. http://dx.doi.org/10.3390/e22111203.

Abstract:
Deep Neural Networks (DNNs) usually work in an end-to-end manner. This makes the trained DNNs easy to use, but their decision process remains opaque for every test case. Unfortunately, the interpretability of decisions is crucial in some scenarios, such as medical or financial data mining and decision-making. In this paper, we propose a Tree-Network-Tree (TNT) learning framework for explainable decision-making, where the knowledge is alternately transferred between the tree model and DNNs. Specifically, the proposed TNT learning framework exploits the advantages of different models at different stages: (1) a novel James–Stein Decision Tree (JSDT) is proposed to generate better knowledge representations for DNNs, especially when the input data are low-frequency or low-quality; (2) the DNNs output high-performing prediction results from the knowledge-embedding inputs and behave as a teacher model for the following tree model; and (3) a novel distillable Gradient Boosted Decision Tree (dGBDT) is proposed to learn interpretable trees from the soft labels and make predictions comparable to those of DNNs. Extensive experiments on various machine learning tasks demonstrate the effectiveness of the proposed method.
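A generic sketch of the network-to-tree distillation step, not the paper's JSDT or dGBDT components: a small tree is fitted to a neural teacher's soft labels (model sizes and data below are placeholders).

```python
# Generic "network -> tree" distillation: fit a shallow tree to the
# teacher network's predicted probabilities (soft labels).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeRegressor

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

teacher = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                        random_state=0).fit(X, y)
soft = teacher.predict_proba(X)[:, 1]           # soft labels in [0, 1]

student = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, soft)
hard = (student.predict(X) >= 0.5).astype(int)  # tree's hard decisions
print("agreement with teacher:", (hard == teacher.predict(X)).mean())
```

The student tree is readable node by node while tracking the teacher's behavior, which is the trade the framework is built around.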
18. Rautenberg, Tamlyn, Annette Gerritsen, and Martin Downes. "Health Economic Decision Tree Models of Diagnostics for Dummies: A Pictorial Primer." Diagnostics 10, no. 3 (March 14, 2020): 158. http://dx.doi.org/10.3390/diagnostics10030158.

Abstract:
Health economics is a discipline of economics applied to health care. One method used in health economics is decision tree modelling, which extrapolates the cost and effectiveness of competing interventions over time. Such decision tree models are the basis of reimbursement decisions in countries using health technology assessment for decision making. In many instances, these competing interventions are diagnostic technologies. Despite a wealth of excellent resources describing the decision analysis of diagnostics, two critical errors persist: not including diagnostic test accuracy in the structure of decision trees and treating sequential diagnostics as independent. These errors have consequences for the accuracy of model results, and thereby impact on decision making. This paper sets out to overcome these errors using color to link fundamental epidemiological calculations to decision tree models in a visually and intuitively appealing pictorial format. The paper is a must-read for modelers developing decision trees in the area of diagnostics for the first time and decision makers reviewing diagnostic reimbursement models.
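A minimal sketch of the first point the paper makes, folding diagnostic test accuracy into the tree's branch probabilities; the prevalence, accuracy, and cost figures below are invented for illustration.

```python
# Illustrative numbers (not the paper's): test accuracy expressed as
# branch probabilities of a test-then-treat decision tree.
prevalence, sensitivity, specificity = 0.10, 0.90, 0.95

p_tp = prevalence * sensitivity              # diseased, test positive
p_fn = prevalence * (1 - sensitivity)        # diseased, missed
p_fp = (1 - prevalence) * (1 - specificity)  # healthy, treated needlessly
p_tn = (1 - prevalence) * specificity

cost = {"tp": 1_000, "fn": 5_000, "fp": 800, "tn": 0}  # assumed costs
expected_cost = (p_tp * cost["tp"] + p_fn * cost["fn"]
                 + p_fp * cost["fp"] + p_tn * cost["tn"])
print(f"expected cost per patient: {expected_cost:.2f}")

# Treating sequential tests as independent (just multiplying their
# sensitivities) is exactly the second error the paper warns against.
```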
19. ZANTEMA, HANS, and HANS L. BODLAENDER. "FINDING SMALL EQUIVALENT DECISION TREES IS HARD." International Journal of Foundations of Computer Science 11, no. 2 (June 2000): 343–54. http://dx.doi.org/10.1142/s0129054100000193.

Abstract:
Two decision trees are called decision equivalent if they represent the same function, i.e., they yield the same result for every possible input. We prove that given a decision tree and a number, to decide if there is a decision equivalent decision tree of size at most that number is NP-complete. As a consequence, finding a decision tree of minimal size that is decision equivalent to a given decision tree is an NP-hard problem. This result differs from the well-known result of NP-hardness of finding a decision tree of minimal size that is consistent with a given training set. Instead our result is a basic result for decision trees, apart from the setting of inductive inference. On the other hand, this result differs from similar results for BDDs and OBDDs: since in decision trees no sharing is allowed, the notion of decision tree size is essentially different from BDD size.
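Minimization is what is hard; merely checking decision equivalence of two small trees is easy by brute force over all inputs. A sketch using the same (variable, low, high) tuple encoding as above (mine, not the paper's):

```python
# Brute-force check of "decision equivalence" for trees over few
# boolean variables (exponential in n_vars, fine for small examples).
from itertools import product

def evaluate(tree, x):
    while isinstance(tree, tuple):           # internal (var, low, high) node
        var, lo, hi = tree
        tree = hi if x[var] else lo
    return tree

def equivalent(t1, t2, n_vars):
    return all(evaluate(t1, x) == evaluate(t2, x)
               for x in product([0, 1], repeat=n_vars))

big   = (0, (1, 0, 1), (1, 0, 1))   # tests x0, then x1 on both branches
small = (1, 0, 1)                   # x0 is irrelevant: same function
print(equivalent(big, small, n_vars=2))  # True
```

The paper's result says that finding the *smallest* such equivalent tree, unlike checking a given pair, is NP-hard.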
20. Hernández, Víctor Adrián Sosa, Raúl Monroy, Miguel Angel Medina-Pérez, Octavio Loyola-González, and Francisco Herrera. "A Practical Tutorial for Decision Tree Induction." ACM Computing Surveys 54, no. 1 (April 2021): 1–38. http://dx.doi.org/10.1145/3429739.

Abstract:
Experts from different domains have resorted to machine learning techniques to produce explainable models that support decision-making. Among existing techniques, decision trees have been useful in many application domains for classification. Decision trees can make decisions in a language that is closer to that of the experts. Many researchers have attempted to create better decision tree models by improving the components of the induction algorithm. One of the main components that have been studied and improved is the evaluation measure for candidate splits. In this article, we introduce a tutorial that explains decision tree induction. Then, we present an experimental framework to assess the performance of 21 evaluation measures that produce different C4.5 variants considering 110 databases, two performance measures, and 10× 10-fold cross-validation. Furthermore, we compare and rank the evaluation measures by using a Bayesian statistical analysis. From our experimental results, we present the first two performance rankings in the literature of C4.5 variants. Moreover, we organize the evaluation measures into two groups according to their performance. Finally, we introduce meta-models that automatically determine the group of evaluation measures to produce a C4.5 variant for a new database and some further opportunities for decision tree models.
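For reference, the split evaluation measure C4.5 itself uses is the gain ratio; the paper benchmarks 21 alternatives that plug into this same slot. A small sketch of the baseline measure:

```python
# Gain ratio for a candidate split: information gain normalized by the
# split's own entropy, which penalizes many-valued attributes.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def gain_ratio(y, split):            # split: one branch index per sample
    n = len(y)
    branches = [y[split == b] for b in np.unique(split)]
    cond = sum(len(b) / n * entropy(b) for b in branches)
    gain = entropy(y) - cond         # information gain of the split
    split_info = entropy(split)
    return gain / split_info if split_info > 0 else 0.0

y     = np.array([1, 1, 1, 0, 0, 0, 1, 0])
split = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # candidate binary split
print(f"gain ratio: {gain_ratio(y, split):.3f}")
```

Swapping `gain_ratio` for another measure at the node-selection step is precisely what produces the C4.5 variants the survey ranks.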
21. Yang, Bin-Bin, Song-Qing Shen, and Wei Gao. "Weighted Oblique Decision Trees." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5621–27. http://dx.doi.org/10.1609/aaai.v33i01.33015621.

Abstract:
Decision trees have attracted much attention during the past decades. Previous decision trees include axis-parallel and oblique decision trees; both try to find the best splits via exhaustive search or heuristic algorithms in each iteration. Oblique decision trees generally have simpler structure and better performance, but are accompanied by higher computational cost, as well as initialization with the best axis-parallel splits. This work presents the Weighted Oblique Decision Tree (WODT) based on continuous optimization with random initialization. We consider different weights of each instance for the child nodes at all internal nodes, and then obtain a split by optimizing the continuous and differentiable objective function of weighted information entropy. Extensive experiments show the effectiveness of the proposed algorithm.
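A sketch of my reading of that objective (simplified, not the authors' code): a sigmoid of an oblique hyperplane gives each sample soft left/right weights, and the loss is the weight-normalized entropy of the two children, which is continuous and differentiable in the hyperplane parameters.

```python
# Simplified weighted-entropy objective for one soft oblique split.
import numpy as np

def weighted_entropy(y, w):
    # Entropy of the class distribution induced by sample weights w.
    tot = w.sum()
    if tot == 0:
        return 0.0
    p = np.array([w[y == c].sum() for c in np.unique(y)]) / tot
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def split_objective(theta, X, y):
    z = X @ theta[:-1] + theta[-1]           # oblique hyperplane
    w_right = 1.0 / (1.0 + np.exp(-z))       # soft child memberships
    w_left = 1.0 - w_right
    n = len(y)
    return (w_left.sum() / n * weighted_entropy(y, w_left)
            + w_right.sum() / n * weighted_entropy(y, w_right))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(split_objective(np.array([1.0, 1.0, 0.0]), X, y))   # near-separating: low
print(split_objective(np.array([1.0, -1.0, 0.0]), X, y))  # uninformative: high
```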
22. Chang, Namsik, and Olivia R. Liu Sheng. "Decision-Tree-Based Knowledge Discovery: Single- vs. Multi-Decision-Tree Induction." INFORMS Journal on Computing 20, no. 1 (February 2008): 46–54. http://dx.doi.org/10.1287/ijoc.1060.0215.

23. Hajjej, Fahima, Manal Abdullah Alohali, Malek Badr, and Md Adnan Rahman. "A Comparison of Decision Tree Algorithms in the Assessment of Biomedical Data." BioMed Research International 2022 (July 7, 2022): 1–9. http://dx.doi.org/10.1155/2022/9449497.

Abstract:
By comparing the performance of various tree algorithms, we can determine which one is most useful for analyzing biomedical data. In artificial intelligence, decision trees are a classification model known for their visual aid in making decisions. WEKA software is used to evaluate biomedical data from real patients to see how well decision tree classification algorithms perform. Another goal of this comparison is to assess whether or not decision trees can serve as an effective tool for medical diagnosis in general. In doing so, we can see which algorithms are the most efficient and appropriate for exploring these data and arriving at an informed decision.
24. Cai, Yuliang, Huaguang Zhang, Qiang He, and Shaoxin Sun. "New classification technique: fuzzy oblique decision tree." Transactions of the Institute of Measurement and Control 41, no. 8 (June 11, 2018): 2185–95. http://dx.doi.org/10.1177/0142331218774614.

Abstract:
Based on axiomatic fuzzy set (AFS) theory and fuzzy information entropy, a novel fuzzy oblique decision tree (FODT) algorithm is proposed in this paper. Traditional axis-parallel decision trees only consider a single feature at each non-leaf node, while oblique decision trees partition the feature space with an oblique hyperplane. By contrast, the FODT takes dynamic mining fuzzy rules as a decision function. The main idea of the FODT is to use these fuzzy rules to construct leaf nodes for each class in each layer of the tree; the samples that cannot be covered by the fuzzy rules are then put into an additional node – the only non-leaf node in this layer. Construction of the FODT consists of four major steps: (a) generation of fuzzy membership functions automatically by AFS theory according to the raw data distribution; (b) extraction of dynamically fuzzy rules in each non-leaf node by the fuzzy rule extraction algorithm (FREA); (c) construction of the FODT by the fuzzy rules obtained from step (b); and (d) determination of the optimal threshold [Formula: see text] to generate a final tree. Compared with five traditional decision trees (C4.5, LADtree (LAD), Best-first tree (BFT), SimpleCart (SC) and NBTree (NBT)) and a recently obtained fuzzy rules decision tree (FRDT) on eight UCI machine learning data sets and one biomedical data set (ALLAML), the experimental results demonstrate that the proposed algorithm outperforms the other decision trees in both classification accuracy and tree size.
25. Luna, José Marcio, Efstathios D. Gennatas, Lyle H. Ungar, Eric Eaton, Eric S. Diffenderfer, Shane T. Jensen, Charles B. Simone, Jerome H. Friedman, Timothy D. Solberg, and Gilmer Valdes. "Building more accurate decision trees with the additive tree." Proceedings of the National Academy of Sciences 116, no. 40 (September 16, 2019): 19887–93. http://dx.doi.org/10.1073/pnas.1816748116.

Abstract:
The expansion of machine learning to high-stakes application domains such as medicine, finance, and criminal justice, where making informed decisions requires clear understanding of the model, has increased the interest in interpretable machine learning. The widely used Classification and Regression Trees (CART) have played a major role in health sciences, due to their simple and intuitive explanation of predictions. Ensemble methods like gradient boosting can improve the accuracy of decision trees, but at the expense of the interpretability of the generated model. Additive models, such as those produced by gradient boosting, and full interaction models, such as CART, have been investigated largely in isolation. We show that these models exist along a spectrum, revealing previously unseen connections between these approaches. This paper introduces a rigorous formalization for the additive tree, an empirically validated learning technique for creating a single decision tree, and shows that this method can produce models equivalent to CART or gradient boosted stumps at the extremes by varying a single parameter. Although the additive tree is designed primarily to provide both the model interpretability and predictive performance needed for high-stakes applications like medicine, it also can produce decision trees represented by hybrid models between CART and boosted stumps that can outperform either of these approaches.
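The two extremes of the spectrum the paper describes can be stood up directly in scikit-learn terms: a single CART-style tree versus gradient-boosted depth-1 stumps (the additive tree itself is not in scikit-learn; hyperparameters below are arbitrary placeholders).

```python
# The endpoints of the CART <-> boosted-stumps spectrum, for context.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

cart   = DecisionTreeClassifier(max_depth=4, random_state=0)
stumps = GradientBoostingClassifier(max_depth=1, n_estimators=100,
                                    random_state=0)

for name, model in [("CART", cart), ("boosted stumps", stumps)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")

# The paper's additive tree interpolates between these two models by
# varying a single parameter, trading interpretability for accuracy.
```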
26. Jiang, Daniel R., Lina Al-Kanj, and Warren B. Powell. "Optimistic Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds." Operations Research 68, no. 6 (November 2020): 1678–97. http://dx.doi.org/10.1287/opre.2019.1939.

Abstract:
In the paper, “Optimistic Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds,” the authors propose an extension to Monte Carlo tree search that uses the idea of “sampling the future” to produce noisy upper bounds on nodes in the decision tree. These upper bounds can help guide the tree expansion process and produce decision trees that are deeper rather than wider, in effect concentrating computation toward more useful parts of the state space. The algorithm’s effectiveness is illustrated in a ride-sharing setting, where a driver/vehicle needs to make dynamic decisions regarding trip acceptance and relocations.
27

Wei Wei, Wei Wei, Mingwei Hui Wei Wei, Beibei Zhang Mingwei Hui, Rafal Scherer Beibei Zhang, and Robertas Damaševičius Rafal Scherer. "Research on Decision Tree Based on Rough Set." 網際網路技術學刊 22, no. 6 (November 2021): 1385–94. http://dx.doi.org/10.53106/160792642021112206015.

28. Brunello, Andrea, Enrico Marzano, Angelo Montanari, and Guido Sciavicco. "Decision Tree Pruning via Multi-Objective Evolutionary Computation." International Journal of Machine Learning and Computing 7, no. 6 (December 2017): 167–75. http://dx.doi.org/10.18178/ijmlc.2017.7.6.641.

29. Murphy, P. M., and M. J. Pazzani. "Exploring the Decision Forest: An Empirical Investigation of Occam's Razor in Decision Tree Induction." Journal of Artificial Intelligence Research 1 (March 1, 1994): 257–75. http://dx.doi.org/10.1613/jair.41.

Abstract:
We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. In particular, we investigated the relationship between the size of a decision tree consistent with some training data and the accuracy of the tree on test data. The experiments were performed on a massively parallel Maspar computer. The results of the experiments on several artificial and two real world problems indicate that, for many of the problems investigated, smaller consistent decision trees are on average less accurate than the average accuracy of slightly larger trees.
30. Sudarsana Reddy, C., V. Vasu, and B. Kumara Swamy Achari. "Effective Decision Tree Learning." International Journal of Computer Applications 82, no. 9 (November 15, 2013): 1–6. http://dx.doi.org/10.5120/14141-7690.

31. Shazadi, Komal. "Decision Tree in Biology." European Journal of Biology 6, no. 1 (January 7, 2021): 1–15. http://dx.doi.org/10.47672/ejb.642.

Abstract:
Purpose: Human biology is an essential field in scientific research as it helps in understanding the human body for adequate care. Technology has improved the way scientists do their biological research. One of the critical technologies is artificial intelligence (AI), which is revolutionizing the world. Scientists have applied AI in biological studies, using several methods to gain different types of data. Machine learning is a branch of artificial intelligence that helps computers learn from data and create predictions without being explicitly programmed. Methodology: One critical methodology in machine learning is the tree-based decision model. It is extensively used in biological research, as it helps classify complex data into simple, easy-to-interpret graphs. This paper aims to give a beginner-friendly view of the tree-based model, analyzing its use and advantages over other methods. Finding: Artificial intelligence has greatly improved the collection, analysis, and prediction of biological and medical information. Machine learning, a subgroup of artificial intelligence, is useful in creating prediction models, which help a wide range of fields, including computational and systems biology. Contributions and future recommendations are also discussed in this study.
32. PURDILA, V., and S. G. PENTIUC. "Fast Decision Tree Algorithm." Advances in Electrical and Computer Engineering 14, no. 1 (2014): 65–68. http://dx.doi.org/10.4316/aece.2014.01010.

33. Quinlan, J. R. "Learning decision tree classifiers." ACM Computing Surveys 28, no. 1 (March 1996): 71–72. http://dx.doi.org/10.1145/234313.234346.

34. Sok, Hong Kuan, Melanie Po-Leen Ooi, and Ye Chow Kuang. "Sparse alternating decision tree." Pattern Recognition Letters 60-61 (August 2015): 57–64. http://dx.doi.org/10.1016/j.patrec.2015.03.002.

35. Meadows, S., K. Baker, and J. Butler. "The Incident Decision Tree." Clinical Risk 11, no. 2 (March 1, 2005): 66–68. http://dx.doi.org/10.1258/1356262053429732.

36. Haimes, Yacov Y., Duan Li, and Vijay Tulsiani. "Multiobjective Decision-Tree Analysis." Risk Analysis 10, no. 1 (March 1990): 111–27. http://dx.doi.org/10.1111/j.1539-6924.1990.tb01026.x.

37. Lu, Songfeng, and Samuel L. Braunstein. "Quantum decision tree classifier." Quantum Information Processing 13, no. 3 (November 19, 2013): 757–70. http://dx.doi.org/10.1007/s11128-013-0687-5.

38. Xia, Fen, Wensheng Zhang, Fuxin Li, and Yanwu Yang. "Ranking with decision tree." Knowledge and Information Systems 17, no. 3 (January 8, 2008): 381–95. http://dx.doi.org/10.1007/s10115-007-0118-y.

39. McTavish, Hayden, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques Chen, Cynthia Rudin, and Margo Seltzer. "Fast Sparse Decision Tree Optimization via Reference Ensembles." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9604–13. http://dx.doi.org/10.1609/aaai.v36i9.21194.

Abstract:
Sparse decision tree optimization has been one of the most fundamental problems in AI since its inception and is a challenge at the core of interpretable machine learning. Sparse decision tree optimization is computationally hard, and despite steady effort since the 1960's, breakthroughs have been made on the problem only within the past few years, primarily on the problem of finding optimal sparse decision trees. However, current state-of-the-art algorithms often require impractical amounts of computation time and memory to find optimal or near-optimal trees for some real-world datasets, particularly those having several continuous-valued features. Given that the search spaces of these decision tree optimization problems are massive, can we practically hope to find a sparse decision tree that competes in accuracy with a black box machine learning model? We address this problem via smart guessing strategies that can be applied to any optimal branch-and-bound-based decision tree algorithm. The guesses come from knowledge gleaned from black box models. We show that by using these guesses, we can reduce the run time by multiple orders of magnitude while providing bounds on how far the resulting trees can deviate from the black box's accuracy and expressive power. Our approach enables guesses about how to bin continuous features, the size of the tree, and lower bounds on the error for the optimal decision tree. Our experiments show that in many cases we can rapidly construct sparse decision trees that match the accuracy of black box models. To summarize: when you are having trouble optimizing, just guess.
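A sketch of one guessing strategy as I read it (not the authors' implementation): harvest the split thresholds a boosted reference ensemble actually used and keep only those as candidate bins for the optimal-tree search, shrinking the search space dramatically.

```python
# Harvest candidate split thresholds from a black-box reference ensemble.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
gbm = GradientBoostingClassifier(n_estimators=40, max_depth=2,
                                 random_state=0).fit(X, y)

# Collect, per feature, every threshold any boosted tree actually used.
thresholds = {}
for stage in gbm.estimators_:           # per-stage regression trees
    t = stage[0].tree_
    for f, thr in zip(t.feature, t.threshold):
        if f >= 0:                      # negative feature ids mark leaves
            thresholds.setdefault(f, set()).add(thr)

kept = sum(len(v) for v in thresholds.values())
raw = sum(len(np.unique(X[:, j])) for j in range(X.shape[1]))
print(f"candidate splits kept: {kept} (vs. ~{raw} raw value cut-points)")
```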
40. LAST, MARK, ODED MAIMON, and EINAT MINKOV. "IMPROVING STABILITY OF DECISION TREES." International Journal of Pattern Recognition and Artificial Intelligence 16, no. 2 (March 2002): 145–59. http://dx.doi.org/10.1142/s0218001402001599.

Abstract:
Decision-tree algorithms are known to be unstable: small variations in the training set can result in different trees and different predictions for the same validation examples. Both accuracy and stability can be improved by learning multiple models from bootstrap samples of training data, but the "meta-learner" approach makes the extracted knowledge hardly interpretable. In the following paper, we present the Info-Fuzzy Network (IFN), a novel information-theoretic method for building stable and comprehensible decision-tree models. The stability of the IFN algorithm is ensured by restricting the tree structure to using the same feature for all nodes of the same tree level and by the built-in statistical significance tests. The IFN method is shown empirically to produce more compact and stable models than the "meta-learner" techniques, while preserving a reasonable level of predictive accuracy.
41

C, Kishor Kumar Reddy, and Vijaya Babu. "A Survey on Issues of Decision Tree and Non-Decision Tree Algorithms." International Journal of Artificial Intelligence and Applications for Smart Devices 4, no. 1 (May 31, 2016): 9–32. http://dx.doi.org/10.14257/ijaiasd.2016.4.1.02.

42. Kim, Soo Y., and Arun Upneja. "Predicting restaurant financial distress using decision tree and AdaBoosted decision tree models." Economic Modelling 36 (January 2014): 354–62. http://dx.doi.org/10.1016/j.econmod.2013.10.005.

43. De la Cruz-García, Jazmín S., Juan Bory-Reyes, and Aldo Ramirez-Arellano. "A Two-Parameter Fractional Tsallis Decision Tree." Entropy 24, no. 5 (April 19, 2022): 572. http://dx.doi.org/10.3390/e24050572.

Abstract:
Decision trees are decision support data mining tools that create, as the name suggests, a tree-like model. The classical C4.5 decision tree, based on the Shannon entropy, is a simple algorithm that calculates the gain ratio and then splits the attributes based on this entropy measure. Tsallis and Renyi entropies (instead of Shannon) can be employed to generate a decision tree with better results. In practice, the entropic index parameter of these entropies is tuned to outperform the classical decision trees. However, this process is carried out by testing a range of values for a given database, which is time-consuming and unfeasible for massive data. This paper introduces a decision tree based on a two-parameter fractional Tsallis entropy. We propose a constructionist approach to the representation of databases as complex networks that enables an efficient computation of the parameters of this entropy using the box-covering algorithm and renormalization of the complex network. The experimental results support the conclusion that the two-parameter fractional Tsallis entropy is a more sensitive measure than its parametric Renyi, Tsallis, and Gini index precedents for a decision tree classifier.
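For orientation, the classical one-parameter Tsallis entropy that the paper generalizes (its two-parameter fractional form is not reproduced here) recovers Shannon entropy in the limit q → 1:

```python
# Classical Tsallis entropy S_q = (1 - sum(p^q)) / (q - 1),
# with the Shannon entropy (in nats) as the q -> 1 limit.
import numpy as np

def tsallis(p, q):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -(p * np.log(p)).sum()        # Shannon limit
    return (1.0 - (p ** q).sum()) / (q - 1.0)

dist = [0.5, 0.25, 0.25]                     # a class distribution
for q in (0.5, 1.0, 2.0):
    print(f"q={q}: {tsallis(dist, q):.4f}")
```

Tuning q changes how strongly the split criterion rewards purity, which is the lever the paper's two-parameter variant refines.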
44. Scott, Jessie, and David Betters. "Economic Analysis of Urban Tree Replacement Decisions." Arboriculture & Urban Forestry 26, no. 2 (March 1, 2000): 69–77. http://dx.doi.org/10.48044/jauf.2000.008.

Abstract:
Urban forest managers often are required to make decisions about whether to retain or replace an existing tree. In part, this decision relies on an economic analysis of the benefits and costs of the alternatives. This paper presents an economic methodology that helps address the tree replacement problem. The procedures apply to analyzing the benefits and costs of existing trees as well as future replacement trees. A case study involving a diseased American elm (Ulmus americana) is used to illustrate an application of the methodology. The procedures should prove useful in developing economic guides for tree replacement/retention decisions.
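A minimal sketch of the retain-versus-replace comparison with invented cash flows (not the case study's figures): discount each alternative's net benefit stream and compare present values.

```python
# Net present value of two alternatives over a 10-year horizon;
# all figures and the discount rate are assumptions for illustration.
def npv(cashflows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.04                                        # assumed discount rate
keep    = [-250] + [300] * 10                      # treat now, mature-tree benefits
replace = [-800] + [60 * t for t in range(1, 11)]  # removal + planting, benefits grow

print(f"NPV keep:    {npv(keep, rate):8.0f}")
print(f"NPV replace: {npv(replace, rate):8.0f}")

# Retain the existing tree while its discounted net benefits exceed those
# of the best replacement candidate; revisit as the tree's condition changes.
```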
45. Nock, Richard, and Pascal Jappy. "Decision tree based induction of decision lists." Intelligent Data Analysis 3, no. 3 (May 1, 1999): 227–40. http://dx.doi.org/10.3233/ida-1999-3306.

46. Nock, R. "Decision tree based induction of decision lists." Intelligent Data Analysis 3, no. 3 (September 1999): 227–40. http://dx.doi.org/10.1016/s1088-467x(99)00020-7.

47. Lootsma, Freerk A. "Multicriteria decision analysis in a decision tree." European Journal of Operational Research 101, no. 3 (September 1997): 442–51. http://dx.doi.org/10.1016/s0377-2217(96)00208-1.

48. 许, 美玲. "Decision Tree Analysis for Inconsistent Decision Tables." Computer Science and Application 6, no. 10 (2016): 597–606. http://dx.doi.org/10.12677/csa.2016.610074.

49. Okada, Hugo Kenji Rodrigues, Andre Ricardo Nascimento das Neves, and Ricardo Shitsuka. "Analysis of Decision Tree Induction Algorithms." Research, Society and Development 8, no. 11 (August 24, 2019): e298111473. http://dx.doi.org/10.33448/rsd-v8i11.1473.

Abstract:
Decision trees are data structures or computational methods that enable nonparametric supervised machine learning and are used in classification and regression tasks. The aim of this paper is to present a comparison between the decision tree induction algorithms C4.5 and CART. A quantitative study is performed in which the two methods are compared by analyzing the following aspects: operation and complexity. The experiments showed practically equal hit percentages, but in execution time for tree induction the CART algorithm was approximately 46.24% slower than C4.5, which was therefore considered the more effective of the two.
50. RAHMANI, MOHSEN, SATTAR HASHEMI, ALI HAMZEH, and ASHKAN SAMI. "AGENT BASED DECISION TREE LEARNING: A NOVEL APPROACH." International Journal of Software Engineering and Knowledge Engineering 19, no. 7 (November 2009): 1015–22. http://dx.doi.org/10.1142/s0218194009004477.

Abstract:
Decision trees are one of the most effective and widely used induction methods and have received a great deal of attention over the past twenty years. When decision tree induction algorithms are used with uncertain rather than deterministic data, the result is a complete tree that can classify most of the unseen samples correctly. Such a tree is then pruned in order to reduce its classification error and over-fitting. Recently, multi-agent researchers have concentrated on learning from large databases. In this paper we present a novel multi-agent learning method that is able to induce a decision tree from distributed training sets. Our method is based on the combination of separate decision trees, each provided by one agent. An additional agent aggregates the results of the other agents and induces the final tree. Our empirical results suggest that the proposed method can provide significant benefits to distributed data classification.
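A sketch of the distributed setting with a deliberately simplified aggregation step (majority vote in place of the paper's tree-inducing aggregator agent); the data and partitioning below are placeholders.

```python
# Each "agent" trains a tree on its own shard; a simple vote combines them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Disjoint shards stand in for the distributed training sets.
agents = [DecisionTreeClassifier(random_state=i).fit(Xs, ys)
          for i, (Xs, ys) in enumerate(zip(np.array_split(X_tr, 5),
                                           np.array_split(y_tr, 5)))]

votes = np.stack([a.predict(X_te) for a in agents])
combined = (votes.mean(axis=0) >= 0.5).astype(int)   # majority vote
print("combined accuracy:", (combined == y_te).mean())
```

The paper goes a step further than voting: its aggregator induces one final tree from the members' outputs, preserving a single interpretable model.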