Academic literature on the topic 'Decision tree'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Decision tree.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Decision tree"

1

Tofan, Cezarina Adina. "Optimization Techniques of Decision Making - Decision Tree." Advances in Social Sciences Research Journal 1, no. 5 (September 30, 2014): 142–48. http://dx.doi.org/10.14738/assrj.15.437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Babar, Kiran Nitin. "Performance Evaluation of Decision Trees with Machine Learning Algorithm." International Journal of Scientific Research in Engineering and Management 8, no. 5 (May 17, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem34179.

Full text
Abstract:
Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. Decision trees are considered to be one of the most popular approaches for representing classifiers. Researchers from various disciplines such as statistics, machine learning, pattern recognition and data mining have dealt with the issue of growing a decision tree from available data. In machine learning, decision trees are used for classification problems, categorizing objects according to shared features. Decision trees help in decision-making by representing complex choices in a hierarchical structure. Every node in a decision tree tests specific attributes, guiding decisions based on the data values in the dataset. Leaf nodes provide the final outcomes, giving a clear and interpretable path for decision analysis in machine learning. An implementation of the decision tree algorithm in Python is therefore presented in this paper. Keywords: decision trees, classification, machine learning, statistics, regression
APA, Harvard, Vancouver, ISO, and other styles
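
As a minimal illustration of the structure this abstract describes (an editorial sketch, not the paper's own code), assuming the scikit-learn library:

```python
# Minimal decision tree classification sketch (illustrative; not the paper's code).
# Assumes scikit-learn is installed and uses its bundled Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

# Each internal node tests one attribute against a threshold; leaf nodes hold outcomes.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(clf, feature_names=iris.feature_names))  # the hierarchy, node by node
print("test accuracy:", clf.score(X_test, y_test))
```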
3

Sullivan, Colin, Mo Tiwari, and Sebastian Thrun. "MAPTree: Beating “Optimal” Decision Trees with Bayesian Decision Trees." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 9019–26. http://dx.doi.org/10.1609/aaai.v38i8.28751.

Full text
Abstract:
Decision trees remain one of the most popular machine learning models today, largely due to their out-of-the-box performance and interpretability. In this work, we present a Bayesian approach to decision tree induction via maximum a posteriori inference of a posterior distribution over trees. We first demonstrate a connection between maximum a posteriori inference of decision trees and AND/OR search. Using this connection, we propose an AND/OR search algorithm, dubbed MAPTree, which is able to recover the maximum a posteriori tree. Lastly, we demonstrate the empirical performance of the maximum a posteriori tree both on synthetic data and in real world settings. On 16 real world datasets, MAPTree either outperforms baselines or demonstrates comparable performance but with much smaller trees. On a synthetic dataset, MAPTree also demonstrates greater robustness to noise and better generalization than existing approaches. Finally, MAPTree recovers the maximum a posteriori tree faster than existing sampling approaches and, in contrast with those algorithms, is able to provide a certificate of optimality. The code for our experiments is available at https://github.com/ThrunGroup/maptree.
APA, Harvard, Vancouver, ISO, and other styles
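
MAPTree's AND/OR search is not reproduced here; the sketch below only shows the kind of Bayesian node score that maximum a posteriori tree methods build on, under an assumed Beta-Bernoulli model (an illustrative choice, not the paper's exact posterior):

```python
# Illustrative Beta-Bernoulli log marginal likelihood of the binary labels at a
# node -- the flavor of leaf score a posterior over trees composes. This is NOT
# the MAPTree algorithm, only a building block sketched under assumed priors.
import numpy as np
from scipy.special import betaln

def leaf_log_marginal(y, a=1.0, b=1.0):
    """log P(labels | Beta(a, b) prior on the leaf's Bernoulli rate)."""
    n1 = int(np.sum(y))
    n0 = len(y) - n1
    return betaln(a + n1, b + n0) - betaln(a, b)

y = np.array([1, 1, 1, 0, 0, 0])
left, right = y[:3], y[3:]
# A split is favored when the children explain the labels better than the parent.
gain = leaf_log_marginal(left) + leaf_log_marginal(right) - leaf_log_marginal(y)
print(f"log marginal gain of the split: {gain:.3f}")
```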
4

Guidotti, Riccardo, Anna Monreale, Mattia Setzu, and Giulia Volpi. "Generative Model for Decision Trees." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21116–24. http://dx.doi.org/10.1609/aaai.v38i19.30104.

Full text
Abstract:
Decision trees are among the most popular supervised models due to their interpretability and knowledge representation resembling human reasoning. Commonly-used decision tree induction algorithms are based on greedy top-down strategies. Although these approaches are known to be an efficient heuristic, the resulting trees are only locally optimal and tend to have overly complex structures. On the other hand, optimal decision tree algorithms attempt to create an entire decision tree at once to achieve global optimality. We place our proposal between these approaches by designing a generative model for decision trees. Our method first learns a latent decision tree space through a variational architecture using pre-trained decision tree models. Then, it adopts a genetic procedure to explore such latent space to find a compact decision tree with good predictive performance. We compare our proposal against classical tree induction methods, optimal approaches, and ensemble models. The results show that our proposal can generate accurate and shallow, i.e., interpretable, decision trees.
APA, Harvard, Vancouver, ISO, and other styles
5

Naylor, Mike. "Decision Tree." Mathematics Teacher: Learning and Teaching PK-12 113, no. 7 (July 2020): 612. http://dx.doi.org/10.5951/mtlt.2020.0081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Breslow, Leonard A., and David W. Aha. "Simplifying decision trees: A survey." Knowledge Engineering Review 12, no. 1 (January 1997): 1–40. http://dx.doi.org/10.1017/s0269888997000015.

Full text
Abstract:
Induced decision trees are an extensively-researched solution to classification tasks. For many practical tasks, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity. Although many tree induction algorithms have been shown to produce simpler, more comprehensible trees (or data structures derived from trees) with good classification accuracy, tree simplification has usually been of secondary concern relative to accuracy, and no attempt has been made to survey the literature from the perspective of simplification. We present a framework that organizes the approaches to tree simplification and summarize and critique the approaches within this framework. The purpose of this survey is to provide researchers and practitioners with a concise overview of tree-simplification approaches and insight into their relative capabilities. In our final discussion, we briefly describe some empirical findings and discuss the application of tree induction algorithms to case retrieval in case-based reasoning systems.
APA, Harvard, Vancouver, ISO, and other styles
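
One member of the post-pruning family this survey organizes, cost-complexity pruning, is available off the shelf; a sketch assuming scikit-learn:

```python
# Cost-complexity post-pruning sketch: grow a full tree, then keep the pruned
# subtree whose complexity parameter (alpha) cross-validates best.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
full = DecisionTreeClassifier(random_state=0).fit(X, y)
alphas = full.cost_complexity_pruning_path(X, y).ccp_alphas  # candidate prune levels

scores = [cross_val_score(DecisionTreeClassifier(ccp_alpha=a, random_state=0),
                          X, y, cv=5).mean() for a in alphas]
best = alphas[int(np.argmax(scores))]
pruned = DecisionTreeClassifier(ccp_alpha=best, random_state=0).fit(X, y)
print("leaves: full =", full.get_n_leaves(), "| pruned =", pruned.get_n_leaves())
```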
7

Zantema, Hans, and Hans L. Bodlaender. "Sizes of Ordered Decision Trees." International Journal of Foundations of Computer Science 13, no. 3 (June 2002): 445–58. http://dx.doi.org/10.1142/s0129054102001205.

Full text
Abstract:
Decision tables provide a natural framework for knowledge acquisition and representation in the area of knowledge based information systems. Decision trees provide a standard method for inductive inference in the area of machine learning. In this paper we show how decision tables can be considered as ordered decision trees: decision trees satisfying an ordering restriction on the nodes. Every decision tree can be represented by an equivalent ordered decision tree, but we show that doing so may exponentially blow up sizes, even if the choice of the order is left free. Our main result states that finding an ordered decision tree of minimal size that represents the same function as a given ordered decision tree is an NP-hard problem; in earlier work we obtained a similar result for unordered decision trees.
APA, Harvard, Vancouver, ISO, and other styles
8

Oo, Aung Nway, and Thin Naing. "Decision Tree Models for Medical Diagnosis." International Journal of Trend in Scientific Research and Development 3, no. 3 (April 30, 2019): 1697–99. http://dx.doi.org/10.31142/ijtsrd23510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ostonov, Azimkhon, and Mikhail Moshkov. "Comparative Analysis of Deterministic and Nondeterministic Decision Trees for Decision Tables from Closed Classes." Entropy 26, no. 6 (June 17, 2024): 519. http://dx.doi.org/10.3390/e26060519.

Full text
Abstract:
In this paper, we consider classes of decision tables with many-valued decisions closed under operations of the removal of columns, the changing of decisions, the permutation of columns, and the duplication of columns. We study relationships among three parameters of these tables: the complexity of a decision table (if we consider the depth of the decision trees, then the complexity of a decision table is the number of columns in it), the minimum complexity of a deterministic decision tree, and the minimum complexity of a nondeterministic decision tree. We consider the rough classification of functions characterizing relationships and enumerate all possible seven types of relationships.
APA, Harvard, Vancouver, ISO, and other styles
10

Cockett, J. R. B. "Decision Expression Optimization." Fundamenta Informaticae 10, no. 1 (January 1, 1987): 93–114. http://dx.doi.org/10.3233/fi-1987-10107.

Full text
Abstract:
A basic concern when using decision trees for the solution of taxonomic or similar problems is their efficiency. Often the information that is required to completely optimize a tree is simply not available. This is especially the case when a criterion based on probabilities is used. It is shown how it is often possible, despite the absence of this information, to improve the design of the tree. The approach is based on algebraic methods for manipulating decision trees and the identification of some particularly desirable forms.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Decision tree"

1

Yu, Peng. "Improving Decision Tree Learning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT037.

Full text
Abstract:
Decision tree models are widely recognized for their efficiency and interpretability, particularly when working with structured data. This thesis addresses two main challenges: improving the interpretability of deep tree-based models and handling categorical variables. We introduce the Linear TreeShap algorithm, which illuminates the model's decision process by assigning importance scores to each node and feature. In parallel, we propose a methodological framework enabling decision trees to split directly on categorical variables, enhancing both accuracy and robustness. Our approach includes the stochastic BSplitZ method, designed to efficiently handle large sets of categories, and provides a thorough investigation of the Mean Absolute Error (MAE) criterion. In particular, we prove that no optimal numerical encoding exists under MAE and solve a related optimization problem (the unimodal cost 2-median) central to tree splitting. Our contributions advance the theoretical foundations and real-world applicability of decision tree models, paving the way for more robust and interpretable solutions in machine learning.
APA, Harvard, Vancouver, ISO, and other styles
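
The node and feature importance scores described above are SHAP attributions for tree models; the open-source shap package provides Tree SHAP-style explanations for fitted trees (whether a given release uses the thesis's Linear TreeShap variant is a version detail not claimed here). A sketch:

```python
# SHAP attribution sketch for a fitted decision tree (illustrative; assumes the
# `shap` package is installed, e.g. via `pip install shap`).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # polynomial-time exact SHAP for tree models
shap_values = explainer.shap_values(X)  # per-sample, per-feature attributions
# Older shap versions return a list (one array per class); newer ones an array.
print(shap_values[0].shape if isinstance(shap_values, list) else shap_values.shape)
```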
2

Shi, Haijian. "Best-first Decision Tree Learning." The University of Waikato, 2007. http://hdl.handle.net/10289/2317.

Full text
Abstract:
In best-first top-down induction of decision trees, the best split is added in each step (e.g. the split that maximally reduces the Gini index). This is in contrast to the standard depth-first traversal of a tree. The resulting tree is the same; only the order in which it is built differs. The objective of this project is to investigate whether it is possible to determine an appropriate tree size on practical datasets by combining best-first decision tree growth with cross-validation-based selection of the number of expansions that are performed. Pre-pruning, post-pruning and CART pruning can be performed in this way and compared.
APA, Harvard, Vancouver, ISO, and other styles
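
In scikit-learn, setting max_leaf_nodes switches tree growth to best-first (each expansion adds the split with the largest impurity reduction), so the thesis's cross-validation-based choice of the number of expansions can be sketched as a grid search over tree sizes:

```python
# Best-first growth with cross-validated selection of tree size. With
# max_leaf_nodes set, scikit-learn grows the tree best-first, so the number of
# leaves stands in for the number of expansions performed.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    {"max_leaf_nodes": list(range(2, 40))},  # candidate expansion budgets
    cv=10,
)
search.fit(X, y)
print("selected size:", search.best_params_,
      "| cv accuracy:", round(search.best_score_, 3))
```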
3

Vella, Alan. "Hyper-heuristic decision tree induction." Thesis, Heriot-Watt University, 2012. http://hdl.handle.net/10399/2540.

Full text
Abstract:
A hyper-heuristic is any algorithm that searches or operates in the space of heuristics, as opposed to the space of solutions. Hyper-heuristics are increasingly used in function and combinatorial optimization. Rather than attempt to solve a problem using a fixed heuristic, a hyper-heuristic approach attempts to find a combination of heuristics that solves a problem (and in turn may be directly suitable for a class of problem instances). Hyper-heuristics have been little explored in data mining. This work presents novel hyper-heuristic approaches to data mining, by searching a space of attribute selection criteria for a decision tree building algorithm. The search is conducted by a genetic algorithm. The result of the hyper-heuristic search in this case is a strategy for selecting attributes while building decision trees. Most hyper-heuristics work by trying to adapt the heuristic to the state of the problem being solved. Our hyper-heuristic is no different: it employs a strategy for adapting the heuristic used to build decision tree nodes according to some set of features of the training set it is working on. We introduce, explore and evaluate five different ways in which this problem state can be represented for a hyper-heuristic that operates within a decision tree building algorithm. In each case, the hyper-heuristic is guided by a rule set that tries to map features of the data set to be split by the decision tree building algorithm to a heuristic to be used for splitting the same data set. We also explore and evaluate three different sets of low-level heuristics that could be employed by such a hyper-heuristic. This work also makes a distinction between specialist hyper-heuristics and generalist hyper-heuristics. The main difference between the two is the number of training sets used by the hyper-heuristic genetic algorithm. Specialist hyper-heuristics are created using a single data set from a particular domain for evolving the hyper-heuristic rule set. Such algorithms are expected to outperform standard algorithms on the kind of data set used by the hyper-heuristic genetic algorithm. Generalist hyper-heuristics are trained on multiple data sets from different domains and are expected to deliver a robust and competitive performance over these data sets when compared to standard algorithms. We evaluate both approaches for each kind of hyper-heuristic presented in this thesis, using both real and synthetic data sets. Our results suggest that none of the hyper-heuristics presented in this work are suited for specialization: in most cases, the hyper-heuristic's performance on the data set it was specialized for was not significantly better than that of the best performing standard algorithm. On the other hand, the generalist hyper-heuristics delivered results that were very competitive with the best standard methods. In some cases we even achieved a significantly better overall performance than all of the standard methods.
APA, Harvard, Vancouver, ISO, and other styles
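
A toy rendering of the problem-state idea (a drastic simplification; the thesis evolves such rule sets with a genetic algorithm): a hand-written rule set maps features of the training set to one of scikit-learn's low-level split criteria:

```python
# Toy hyper-heuristic sketch: a fixed rule set maps problem-state features of
# the training set to a split criterion. The thesis evolves these rules with a
# genetic algorithm; here the mapping is hand-written purely for illustration.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def choose_heuristic(X, y):
    """Map data set features to a low-level heuristic (illustrative rules)."""
    counts = np.bincount(y)
    if counts.min() / counts.max() < 0.5:   # strongly skewed classes
        return "entropy"
    return "gini" if X.shape[1] <= 15 else "log_loss"  # log_loss: sklearn >= 1.1

X, y = load_wine(return_X_y=True)
crit = choose_heuristic(X, y)
scores = cross_val_score(DecisionTreeClassifier(criterion=crit, random_state=0),
                         X, y, cv=5)
print(f"chosen criterion: {crit} | cv accuracy: {scores.mean():.3f}")
```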
4

Vukobratović, Bogdan. "Hardware Acceleration of Nonincremental Algorithms for the Induction of Decision Trees and Decision Tree Ensembles." PhD thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=102520&source=NDLTD&language=en.

Full text
Abstract:
The thesis proposes novel full decision tree and decision tree ensemble induction algorithms, EFTI and EEFTI, and explores various possibilities for their implementation. The experiments show that the proposed EFTI algorithm is able to infer much smaller DTs on average, without significant loss in accuracy, when compared to top-down incremental DT inducers. On the other hand, when compared to other full tree induction algorithms, it was able to produce more accurate DTs, with similar sizes, in shorter times. Hardware architectures for the acceleration of these algorithms (EFTIP and EEFTIP) are also proposed, and the experiments show that they can offer substantial speedups.
APA, Harvard, Vancouver, ISO, and other styles
5

Qureshi, Taimur. "Contributions to decision tree based learning." Thesis, Lyon 2, 2010. http://www.theses.fr/2010LYO20051/document.

Full text
Abstract:
Advances in data collection methods, storage and processing technology are providing a unique challenge and opportunity for automated data learning techniques which aim at producing high-level information, or models, from data. A typical knowledge discovery process consists of data selection, data preparation, data transformation, data mining and interpretation/validation of the results. Thus, we develop automatic learning techniques which contribute to the data preparation, transformation and mining tasks of knowledge discovery. In doing so, we try to improve the prediction accuracy of the overall learning process. Our work focuses on decision tree based learning, and we introduce various preprocessing and transformation techniques such as discretization, fuzzy partitioning and dimensionality reduction to improve this type of learning. However, these techniques can be used in other learning methods; discretization, for example, can also be used for naive Bayes classifiers. The data preparation step represents almost 80 percent of the problem and is both time consuming and critical for the quality of modeling. Discretization of continuous features is an important problem that affects the accuracy, complexity, variance and understandability of the induction models. In this thesis, we propose and develop resampling-based aggregation techniques that improve the quality of discretization, and we validate them by comparing with other discretization techniques and with an optimal partitioning method on 10 benchmark data sets. The second part of the thesis concerns automatic fuzzy partitioning for soft decision tree induction. A soft or fuzzy decision tree is an extension of classical crisp tree induction in which fuzzy logic is embedded into the induction process, yielding more accurate models with reduced variance while remaining interpretable and autonomous. We modify the above resampling-based partitioning method to generate fuzzy partitions. In addition, we propose, develop and validate another fuzzy partitioning method that improves the accuracy of the decision tree. Finally, we adopt a topological learning scheme and perform non-linear dimensionality reduction. We modify an existing manifold-learning-based technique and examine whether it can enhance the predictive power and interpretability of classification.
APA, Harvard, Vancouver, ISO, and other styles
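
The simplest baseline among the discretization methods discussed above is equal-frequency binning; a sketch (the thesis's resampling-based aggregation is considerably more involved):

```python
# Equal-frequency discretization sketch: quantile cut points place roughly the
# same number of samples in each bin -- a baseline that resampling-based
# aggregation methods refine.
import numpy as np

def equal_frequency_bins(values, n_bins=4):
    """Return interior cut points giving ~len(values)/n_bins samples per bin."""
    qs = np.linspace(0, 1, n_bins + 1)[1:-1]   # interior quantiles
    return np.quantile(values, qs)

rng = np.random.default_rng(0)
feature = rng.exponential(scale=2.0, size=1000)  # a skewed continuous feature
cuts = equal_frequency_bins(feature, n_bins=4)
codes = np.digitize(feature, cuts)               # bin index 0..3 per sample
print("cut points:", np.round(cuts, 3), "| bin counts:", np.bincount(codes))
```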
6

Ardeshir, G. "Decision tree simplification for classifier ensembles." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/843022/.

Full text
Abstract:
Design of ensemble classifiers involves three factors: 1) a learning algorithm to produce a classifier (base classifier), 2) an ensemble method to generate diverse classifiers, and 3) a combining method to combine decisions made by base classifiers. With regard to the first factor, a good choice for constructing a classifier is a decision tree learning algorithm. However, a possible problem with this learning algorithm is its complexity, which has previously been addressed only in the context of pruning methods for individual trees. Furthermore, the ensemble method may require the learning algorithm to produce a complex classifier. Considering the fact that the performance of simplification methods as well as ensemble methods changes from one domain to another, our main contribution is to address a simplification method (post-pruning) in the context of ensemble methods including Bagging, Boosting and Error-Correcting Output Code (ECOC). Using a statistical test, the performance of ensembles made by Bagging, Boosting and ECOC as well as five pruning methods in the context of ensembles is compared. In addition to the implementation, a supporting theory called margin is discussed, and the relationship of pruning to bias and variance is explained. For ECOC, the effect of parameters such as code length and size of training set on the performance of pruning methods is also studied. Decomposition methods such as ECOC are considered as a solution to reduce the complexity of multi-class problems in many real problems such as face recognition. Focusing on the decomposition methods, AdaBoost.OC, which is a combination of Boosting and ECOC, is compared with the pseudo-loss based version of Boosting, AdaBoost.M2. In addition, the influence of pruning on the performance of ensembles is studied. Motivated by the result that both pruned and unpruned ensembles made by AdaBoost.OC have similar accuracy, pruned ensembles are compared with ensembles of single-node decision trees. This results in the hypothesis that ensembles of simple classifiers may give better performance, as shown for AdaBoost.OC on the identification problem in face recognition. The implication is that in some problems, to achieve the best accuracy of an ensemble, it is necessary to select base classifier complexity.
APA, Harvard, Vancouver, ISO, and other styles
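
The base-classifier complexity question raised at the end of this abstract can be probed directly; a small illustrative experiment comparing bagged unpruned trees with bagged near-stumps (a stand-in for the thesis's C4.5-style pruning; Boosting and ECOC are not reproduced):

```python
# Sketch: does simplifying the base trees hurt a bagged ensemble? Illustrative
# only; the thesis compares five pruning methods across Bagging, Boosting, ECOC.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
for name, base in [("unpruned", DecisionTreeClassifier(random_state=0)),
                   ("near-stump", DecisionTreeClassifier(max_depth=1, random_state=0))]:
    # `estimator=` needs scikit-learn >= 1.2 (older releases call it base_estimator)
    bag = BaggingClassifier(estimator=base, n_estimators=100, random_state=0)
    print(name, "cv accuracy:", round(cross_val_score(bag, X, y, cv=5).mean(), 3))
```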
7

Ahmad, Amir. "Data Transformation for Decision Tree Ensembles." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.508528.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cai, Jingfeng. "Decision Tree Pruning Using Expert Knowledge." University of Akron / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=akron1158279616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wu, Shuning. "Optimal instance selection for improved decision tree." [Ames, Iowa : Iowa State University], 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sinnamon, Roslyn M. "Binary decision diagrams for fault tree analysis." Thesis, Loughborough University, 1996. https://dspace.lboro.ac.uk/2134/7424.

Full text
Abstract:
This thesis develops a new approach to fault tree analysis, namely the Binary Decision Diagram (BDD) method. Conventional qualitative fault tree analysis techniques such as the "top-down" or "bottom-up" approaches are now so well developed that further refinement is unlikely to result in vast improvements in terms of their computational capability. The BDD method has exhibited potential gains to be made in terms of speed and efficiency in determining the minimal cut sets. Further, the nature of the binary decision diagram is such that it is more suited to Boolean manipulation. The BDD method has been programmed and successfully applied to a number of benchmark fault trees. The analysis capabilities of the technique have been extended such that all quantitative fault tree top event parameters, which can be determined by conventional Kinetic Tree Theory, can now be derived directly from the BDD. Parameters such as the top event probability, frequency of occurrence and expected number of occurrences can be calculated exactly using this method, removing the need for the approximations previously required. Thus the BDD method is proven to have advantages in terms of both accuracy and efficiency. Initiator/enabler event analysis and importance measures have been incorporated to extend this method into a full analysis procedure.
APA, Harvard, Vancouver, ISO, and other styles
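
The exactness claimed above rests on Shannon decomposition, the principle underlying BDDs: conditioning on one basic event at a time makes repeated events harmless, so no minimal-cut-set approximation is needed. A sketch of that core idea on a toy fault tree (not the thesis's full BDD machinery):

```python
# Exact top-event probability of a fault tree via Shannon decomposition -- the
# principle behind the BDD method. A gate is a nested tuple ("and"/"or", ...)
# over basic-event names; repeated events are handled exactly.

def variables(f):
    if isinstance(f, str):
        return {f}
    _, *kids = f
    return set().union(*(variables(k) for k in kids))

def condition(f, var, val):
    """Substitute var := val (True/False) and simplify the gate structure."""
    if isinstance(f, str):
        return val if f == var else f
    op, *kids = f
    kids = [condition(k, var, val) for k in kids]
    if op == "or":
        if any(k is True for k in kids):
            return True
        kids = [k for k in kids if k is not False]
        return False if not kids else (kids[0] if len(kids) == 1 else (op, *kids))
    if any(k is False for k in kids):     # "and" gate
        return False
    kids = [k for k in kids if k is not True]
    return True if not kids else (kids[0] if len(kids) == 1 else (op, *kids))

def prob(f, p):
    """Shannon expansion: P(f) = p_x * P(f | x) + (1 - p_x) * P(f | not x)."""
    if f is True:
        return 1.0
    if f is False:
        return 0.0
    x = min(variables(f))                 # fixed variable ordering, as in a BDD
    return p[x] * prob(condition(f, x, True), p) + \
           (1 - p[x]) * prob(condition(f, x, False), p)

# Top event (A and B) or (A and C): note the repeated basic event A.
tree = ("or", ("and", "A", "B"), ("and", "A", "C"))
print(prob(tree, {"A": 0.1, "B": 0.2, "C": 0.3}))  # exact: 0.1 * (1 - 0.8*0.7) = 0.044
```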
More sources

Books on the topic "Decision tree"

1

Gladwin, Christina. Ethnographic decision tree modeling. Newbury Park: Sage, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gladwin, Christina H. Ethnographic decision tree modeling. Newbury Park: Sage, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Friedman, Ken. The decision tree: A novel. Rainier, Wash: Heart Pub., 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Euler, Bryan L. EDDT: Emotional Disturbance Decision Tree. Lutz, FL: Psychological Assessment Resources, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Grąbczewski, Krzysztof. Meta-Learning in Decision Tree Induction. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-00960-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

American Bankers Association. Analyzing financial statements: A decision tree approach. Washington, D.C: American Bankers Association, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Barros, Rodrigo C., André C. P. L. F. de Carvalho, and Alex A. Freitas. Automatic Design of Decision-Tree Induction Algorithms. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-14231-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Landgrebe, David, and United States National Aeronautics and Space Administration, eds. A survey of decision tree classifier methodology. West Lafayette, Ind.: School of Electrical Engineering, Purdue University, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

National Flood Proofing Committee (U.S.), ed. Flood proofing: How to evaluate your options : decision tree. [Fort Belvoir, Va.?]: US Army Corps of Engineers, National Flood Proofing Committee, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Decision tree"

1

Ayyadevara, V. Kishore. "Decision Tree." In Pro Machine Learning Algorithms, 71–103. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3564-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nahler, Gerhard. "decision tree." In Dictionary of Pharmaceutical Medicine, 48. Vienna: Springer Vienna, 2009. http://dx.doi.org/10.1007/978-3-211-89836-9_366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Webb, Geoffrey I., Johannes Fürnkranz, Geoffrey Hinton, Claude Sammut, Joerg Sander, et al. "Decision Tree." In Encyclopedia of Machine Learning, 263–67. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Berrar, Daniel, and Werner Dubitzky. "Decision Tree." In Encyclopedia of Systems Biology, 551–55. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Hang. "Decision Tree." In Machine Learning Methods, 77–102. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3917-6_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Panda, Rajendra Mohan, and B. S. Daya Sagar. "Decision Tree." In Encyclopedia of Mathematical Geosciences, 1–7. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-26050-7_81-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Panda, Rajendra Mohan, and B. S. Daya Sagar. "Decision Tree." In Encyclopedia of Mathematical Geosciences, 1–6. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-26050-7_81-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jo, Taeho. "Decision Tree." In Machine Learning Foundations, 141–65. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65900-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Fürnkranz, Johannes. "Decision Tree." In Encyclopedia of Machine Learning and Data Mining, 1–5. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7502-7_66-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fürnkranz, Johannes. "Decision Tree." In Encyclopedia of Machine Learning and Data Mining, 330–35. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_66.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Decision tree"

1

Agrawal, Kartikay, Ayon Borthakur, Ayush Kumar Singh, Perambuduri Srikaran, Digjoy Nandi, and Omkaradithya Pujari. "Neural Decision Tree for Bio-TinyML." In 2024 IEEE Biomedical Circuits and Systems Conference (BioCAS), 1–5. IEEE, 2024. https://doi.org/10.1109/biocas61083.2024.10798396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Lidong, Rui Huang, Xiaoqin Li, Yongbin Luo, and Rong Zhang. "Analysis of mini program user usage market based on decision tree classification and decision tree regression algorithm." In Fourth International Conference on Advanced Algorithms and Signal Image Processing (AASIP 2024), edited by Grigorios Beligiannis and Daniel-Ioan Curiac, 25. SPIE, 2024. http://dx.doi.org/10.1117/12.3045543.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vos, Daniël, and Sicco Verwer. "Optimal Decision Tree Policies for Markov Decision Processes." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/606.

Full text
Abstract:
Interpretability of reinforcement learning policies is essential for many real-world tasks but learning such interpretable policies is a hard problem. Particularly, rule-based policies such as decision trees and rule lists are difficult to optimize due to their non-differentiability. While existing techniques can learn verifiable decision tree policies, there is no guarantee that the learners generate a policy that performs optimally. In this work, we study the optimization of size-limited decision trees for Markov Decision Processes (MDPs) and propose OMDTs: Optimal MDP Decision Trees. Given a user-defined size limit and MDP formulation, OMDT directly maximizes the expected discounted return for the decision tree using Mixed-Integer Linear Programming. By training optimal tree policies for different MDPs we empirically study the optimality gap for existing imitation learning techniques and find that they perform sub-optimally. We show that this is due to an inherent shortcoming of imitation learning, namely that complex policies cannot be represented using size-limited trees. In such cases, it is better to directly optimize the tree for expected return. While there is generally a trade-off between the performance and interpretability of machine learning models, we find that on small MDPs, depth 3 OMDTs often perform close to optimally.
APA, Harvard, Vancouver, ISO, and other styles
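
Evaluating a fixed tree policy, the quantity OMDT maximizes, reduces to solving the linear Bellman system for that policy; a sketch on a toy MDP (the mixed-integer optimization itself is not reproduced here):

```python
# Exact evaluation of a decision-tree policy on a tiny MDP by solving the
# Bellman system V = R + gamma * P_pi V. OMDT *optimizes* the tree with MILP;
# here a hand-written depth-1 tree is merely evaluated.
import numpy as np

# Toy MDP: 3 states on a line; action 0 = step left, 1 = step right (clipped).
n_states, gamma = 3, 0.95
P = np.zeros((2, n_states, n_states))
for s in range(n_states):
    P[0, s, max(s - 1, 0)] = 1.0
    P[1, s, min(s + 1, n_states - 1)] = 1.0
R = np.array([0.5, 0.0, 1.0])            # reward collected in each state

def tree_policy(s):
    """Depth-1 decision tree over the state index: a single test 's <= 0'."""
    return 0 if s <= 0 else 1            # stay on the small reward, else walk right

P_pi = np.array([P[tree_policy(s), s] for s in range(n_states)])
V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R)
print("state values under the tree policy:", np.round(V, 3))  # [10. 19. 20.]
```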
4

Yawata, Koichiro, Yoshihiro Osakabe, Takuya Okuyama, and Akinori Asahara. "QUBO Decision Tree: Annealing Machine Extends Decision Tree Splitting." In 2022 IEEE International Conference on Knowledge Graph (ICKG). IEEE, 2022. http://dx.doi.org/10.1109/ickg55886.2022.00052.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Desai, Ankit, and Sanjay Chaudhary. "Distributed Decision Tree." In ACM COMPUTE '16: Ninth Annual ACM India Conference. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2998476.2998478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Nowozin, Sebastian, Carsten Rother, Shai Bagon, Toby Sharp, Bangpeng Yao, and Pushmeet Kohli. "Decision tree fields." In 2011 IEEE International Conference on Computer Vision (ICCV). IEEE, 2011. http://dx.doi.org/10.1109/iccv.2011.6126429.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gavankar, Sachin S., and Sudhirkumar D. Sawarkar. "Eager decision tree." In 2017 2nd International Conference for Convergence in Technology (I2CT). IEEE, 2017. http://dx.doi.org/10.1109/i2ct.2017.8226246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Meng, Qing-wu, Qiang He, Ning Li, Xiang-ran Du, and Li-na Su. "Crisp Decision Tree Induction Based on Fuzzy Decision Tree Algorithm." In 2009 First International Conference on Information Science and Engineering. IEEE, 2009. http://dx.doi.org/10.1109/icise.2009.440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Huang, Sieh-Chuen, Hsuan-Lei Shao, and Robert B. Leflar. "Applying decision tree analysis to family court decisions." In ICAIL '21: Eighteenth International Conference for Artificial Intelligence and Law. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3462757.3466076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Couvreur, Jean-Michel, and Duy-Tung Nguyen. "Tree Data Decision Diagrams." In Second International Workshop on Verification and Evaluation of Computer and Communication Systems (VECoS 2008). BCS Learning & Development, 2008. http://dx.doi.org/10.14236/ewic/vecos2008.3.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Decision tree"

1

Hamilton, Jill, and Tuan Nguyen. Asbestos Inspection/Reinspection Decision Tree. Fort Belvoir, VA: Defense Technical Information Center, October 1999. http://dx.doi.org/10.21236/ada370454.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Narlikar, Girija J. A Parallel, Multithreaded Decision Tree Builder. Fort Belvoir, VA: Defense Technical Information Center, December 1998. http://dx.doi.org/10.21236/ada363531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Quiller, Ryan. Decision Tree Technique for Particle Identification. Office of Scientific and Technical Information (OSTI), September 2003. http://dx.doi.org/10.2172/815649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mughal, Mohamed. Biological Weapons Response Template and Decision Tree. Fort Belvoir, VA: Defense Technical Information Center, April 2001. http://dx.doi.org/10.21236/ada385897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dakin, Gordon, and Sankar Virdhagriswaran. Misleading Information Detection Through Probabilistic Decision Tree Classifiers. Fort Belvoir, VA: Defense Technical Information Center, September 2002. http://dx.doi.org/10.21236/ada406823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kwon, Theresa Hyunjin, Erin Cho, and Youn-Kyung Kim. Identifying Sustainable Style Consumers with Decision Tree Predictive Model. Ames: Iowa State University, Digital Repository, November 2016. http://dx.doi.org/10.31274/itaa_proceedings-180814-1366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Eccleston, C. H. The decision-identification tree: A new EIS scoping tool. Office of Scientific and Technical Information (OSTI), April 1997. http://dx.doi.org/10.2172/16876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mikulski, Dariusz G. Rough Set Based Splitting Criterion for Binary Decision Tree Classifiers. Fort Belvoir, VA: Defense Technical Information Center, September 2006. http://dx.doi.org/10.21236/ada489077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Song, So Young, Erin Cho, Youn-Kyung Kim, and Theresa Hyunjin Kwon. Clothing Communication via Social Media: A Decision Tree Predictive Model. Ames: Iowa State University, Digital Repository, November 2015. http://dx.doi.org/10.31274/itaa_proceedings-180814-102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zaman, Md Mostafa, Theresa Hyunjin Kwon, Katrina Laemmerhirt, and Youn-Kyung Kim. Profiling Second-hand Clothing Shoppers with Decision Tree Predictive Model. Ames: Iowa State University, Digital Repository, 2017. http://dx.doi.org/10.31274/itaa_proceedings-180814-407.

Full text
APA, Harvard, Vancouver, ISO, and other styles