Journal articles on the topic 'DECISION TRESS'

Consult the top 50 journal articles for your research on the topic 'DECISION TRESS.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Koyuncugil, Ali Serhan, and Nermin Ozgulbas. "Detecting Road Maps for Capacity Utilization Decisions by Clustering Analysis and CHAID Decision Tress." Journal of Medical Systems 34, no. 4 (February 10, 2009): 459–69. http://dx.doi.org/10.1007/s10916-009-9258-9.

2

Arkin, Esther M., Henk Meijer, Joseph S. B. Mitchell, David Rappaport, and Steven S. Skiena. "Decision Trees for Geometric Models." International Journal of Computational Geometry & Applications 08, no. 03 (June 1998): 343–63. http://dx.doi.org/10.1142/s0218195998000175.

Abstract:
A fundamental problem in model-based computer vision is that of identifying which of a given set of geometric models is present in an image. Considering a "probe" to be an oracle that tells us whether or not a model is present at a given point, we study the problem of computing efficient strategies ("decision trees") for probing an image, with the goal of minimizing the number of probes necessary (in the worst case) to determine which single model is present. We show that a ⌈lg k⌉-height binary decision tree always exists for k polygonal models (in fixed position), provided (1) they are non-degenerate (do not share boundaries) and (2) they share a common point of intersection. Further, we give an efficient algorithm for constructing such decision trees when the models are given as a set of polygons in the plane. We show that constructing a minimum-height tree is NP-complete if either of the two assumptions is omitted. We provide an efficient greedy heuristic strategy and show that, in the general case, it yields a decision tree whose height is at most ⌈lg k⌉ times that of an optimal tree. Finally, we discuss some restricted cases whose special structure allows for improved results.
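The probing idea above lends itself to a compact illustration. Below is a minimal, hypothetical sketch (not the authors' algorithm or data) of a greedy probing strategy: each probe asks whether the unknown model covers a point, and the next probe is chosen so that its yes/no answer splits the remaining candidate models as evenly as possible. The models are toy point sets standing in for polygons.

```python
# Greedy probe selection for model identification: pick the probe whose
# worst-case branch (hit or miss) leaves the fewest candidate models.
MODELS = {
    "triangle": {(0, 0), (1, 0), (0, 1)},
    "square":   {(0, 0), (1, 0), (0, 1), (1, 1)},
    "bar":      {(0, 0), (1, 0), (2, 0)},
}
PROBES = set().union(*MODELS.values())          # candidate probe points

def best_probe(candidates):
    """Probe whose worst-case answer keeps the fewest candidate models."""
    def worst_case(p):
        hit = sum(1 for m in candidates if p in MODELS[m])
        return max(hit, len(candidates) - hit)
    return min(PROBES, key=worst_case)

def identify(oracle):
    """Probe repeatedly until a single model remains; oracle(p) answers a probe."""
    candidates = set(MODELS)
    while len(candidates) > 1:
        p = best_probe(candidates)
        answer = oracle(p)
        candidates = {m for m in candidates if (p in MODELS[m]) == answer}
    return candidates.pop()

# Example: the hidden model is the square.
hidden = MODELS["square"]
print(identify(lambda p: p in hidden))          # -> "square"
```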
3

Reddy, M. R. S. Surya Narayana, T. Narayana Reddy, and C. Viswanatha Reddy. "Decision Tress Analysis on Employee Job Satisfaction and HRD Climate: Role of Demographics." International Journal of Management Studies VI, no. 2(1) (April 30, 2019): 22. http://dx.doi.org/10.18843/ijms/v6i2(1)/03.

4

Yu, Tianyu, Xuandong Mo, Mingjun Chen, and Changfeng Yao. "Machine-learning-assisted microstructure–property linkages of carbon nanotube-reinforced aluminum matrix nanocomposites produced by laser powder bed fusion." Nanotechnology Reviews 10, no. 1 (January 1, 2021): 1410–24. http://dx.doi.org/10.1515/ntrev-2021-0093.

Abstract:
In this study, the cellular microstructural features, at a subgrain size of 0.5–1 μm, of carbon nanotube (CNT)-reinforced aluminum matrix nanocomposites produced by laser powder bed fusion (LPBF) were quantitatively extracted and calculated from scanning electron microscopy images by applying a cell segmentation method and various image analysis techniques. Over 80 geometric features for each cellular cell were extracted and statistically analyzed using machine learning techniques to explore the structure–property linkages of carbon nanotube reinforced AlSi10Mg nanocomposites. Predictive models for hardness and relative mass density were established using these subgrain cellular microstructural features. Data dimension reduction using principal component analysis was conducted to reduce the feature number to 3. The results showed that even though AlSi10Mg nanocomposite specimens produced using different laser parameters exhibited similar Al–Si eutectic microstructures, they displayed a large difference in mechanical properties, including hardness and relative mass density, due to cellular structure variance. For hardness prediction, the Extra Trees regression models showed a relative error of 2.47%. For relative mass density prediction, the Decision Trees regression models showed a relative error of 1.42%. The results demonstrate that the developed models deliver satisfactory performance for hardness and relative mass density prediction of AlSi10Mg nanocomposites. The framework established in this study can be applied to LPBF process optimization and mechanical property manipulation of AlSi10Mg-based alloys and other newly designed additively manufactured alloys or composites.
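As a rough illustration of this kind of workflow, the sketch below assumes scikit-learn and uses synthetic stand-ins for the microstructural features and hardness values (the real features, targets and results belong to the paper and are not reproduced): standardize, reduce to three principal components, then fit a tree-ensemble regressor.

```python
# A minimal sketch, assuming scikit-learn, of a PCA + tree-regression pipeline
# like the one the abstract describes. The feature matrix and "hardness" target
# below are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 80))                  # ~80 geometric features per specimen
hardness = 100 + X[:, :3] @ np.array([1.5, -2.0, 0.7]) + rng.normal(scale=0.1, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, hardness, random_state=0)

# Standardize, reduce the 80 features to 3 principal components, fit Extra Trees.
model = make_pipeline(StandardScaler(), PCA(n_components=3),
                      ExtraTreesRegressor(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)
print("relative error:", mean_absolute_percentage_error(y_te, model.predict(X_te)))

# For the second target (relative mass density) the abstract uses a plain
# decision tree; sklearn.tree.DecisionTreeRegressor drops into the same pipeline.
```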
5

Wawrzyk, Martyna. "Semi-supervised learning with the clustering and Decision Trees classifier for the task of cognitive workload study." Journal of Computer Sciences Institute 15 (June 30, 2020): 214–18. http://dx.doi.org/10.35784/jcsi.1725.

Abstract:
The paper focuses on the application of a clustering algorithm and a Decision Trees classifier (DTs) as a semi-supervised method for the task of cognitive workload level classification. The analyzed data were collected during a Digit Symbol Substitution Test (DSST) examination with the use of an eye-tracker device; 26 volunteers took part in the examination. Three parts of the DSST test with different levels of difficulty were conducted, yielding three versions of the data: low, middle and high levels of cognitive workload. The case study covered clustering of the collected data with the k-means algorithm to detect three or more clusters. The obtained clusters were evaluated with three internal indices measuring the quality of clustering. The Davies-Bouldin index indicated the best result for four clusters, which supports the hypothesis that four clusters exist. The obtained clusters were then adopted as classes for supervised learning and subjected to classification with the DTs, which yielded a mean accuracy of 0.85 for three-class classification and 0.73 for four-class classification.
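A minimal sketch of this semi-supervised scheme, assuming scikit-learn and synthetic data in place of the eye-tracking features: cluster with k-means, pick the cluster count by the Davies-Bouldin index, adopt the clusters as class labels, and cross-validate a decision tree on them.

```python
# Semi-supervised sketch: k-means clusters adopted as class labels for a tree.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import davies_bouldin_score
from sklearn.model_selection import cross_val_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=1)

# Try several cluster counts; keep the one with the best (lowest) Davies-Bouldin index.
scores = {}
for k in (3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)
best_k = min(scores, key=scores.get)

# Adopt the clusters as classes and evaluate a decision tree on them.
y = KMeans(n_clusters=best_k, n_init=10, random_state=1).fit_predict(X)
acc = cross_val_score(DecisionTreeClassifier(random_state=1), X, y, cv=5).mean()
print(best_k, round(acc, 2))
```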
6

Zaborski, Daniel, Witold Stanisław Proskura, Katarzyna Wojdak-Maksymiec, and Wilhelm Grzesiak. "Identification of Cows Susceptible to Mastitis based on Selected Genotypes by Using Decision Trees and A Generalized Linear Model." Acta Veterinaria 66, no. 3 (September 1, 2016): 317–35. http://dx.doi.org/10.1515/acve-2016-0028.

Abstract:
The aim of the present study was to: 1) check whether it would be possible to detect cows susceptible to mastitis at an early stage of their utilization based on selected genotypes and basic production traits in the first three lactations using ensemble data mining methods (boosted classification trees, BT, and random forest, RF), 2) find out whether the inclusion of additional production variables for subsequent lactations would improve the detection performance of the models, 3) identify the most significant predictors of susceptibility to mastitis, and 4) compare the results obtained by using BT and RF with those for the more traditional generalized linear model (GLZ). A total of 801 records for Polish Holstein-Friesian Black-and-White cows were analyzed. The maximum sensitivity, specificity and accuracy on the test set were 72.13%, 39.73%, 55.90% (BT), 86.89%, 17.81%, 59.49% (RF) and 90.16%, 8.22%, 58.97% (GLZ), respectively. Inclusion of additional variables did not have a significant effect on model performance. The most significant predictors of susceptibility to mastitis were milk yield, days in milk, sire's rank and percentage of Holstein-Friesian genes, whereas calving season and genotypes (lactoferrin, tumor necrosis factor alpha, lysozyme and defensins) were ranked much lower. The applied models (both the data mining ones and GLZ) showed low accuracy in detecting cows susceptible to mastitis, and therefore other, more discriminating predictors should be used in future research.
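A minimal sketch of this kind of model comparison, assuming scikit-learn and synthetic data: gradient-boosted trees and a random forest as the ensemble methods, logistic regression standing in for the GLZ, with sensitivity, specificity and accuracy computed from a confusion matrix.

```python
# Compare boosted trees, random forest and a GLM-style baseline on one test split.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=800, n_features=12, weights=[0.6], random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=2)

models = {
    "BT":  GradientBoostingClassifier(random_state=2),
    "RF":  RandomForestClassifier(random_state=2),
    "GLZ": LogisticRegression(max_iter=1000),   # logistic regression as the GLM here
}
for name, clf in models.items():
    tn, fp, fn, tp = confusion_matrix(y_te, clf.fit(X_tr, y_tr).predict(X_te)).ravel()
    print(name,
          "sensitivity=%.2f" % (tp / (tp + fn)),
          "specificity=%.2f" % (tn / (tn + fp)),
          "accuracy=%.2f" % ((tp + tn) / len(y_te)))
```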
7

Chauhan, Rahul. "Prediction of Employee Turnover based on Machine Learning Models." Mathematical Statistician and Engineering Applications 70, no. 2 (February 26, 2021): 1767–75. http://dx.doi.org/10.17762/msea.v70i2.2469.

Abstract:
Because decision making is a vital component of company management, a company's personnel are regarded as a valuable asset. Hiring them in the first place by making appropriate choices is therefore widely recognised as a difficult task for administrators. Employee turnover can be a time-consuming and costly process, because recruiting new workers requires not only additional time but also significant financial expenditure. In addition, a number of other factors play a role in selecting and hiring a qualified applicant who would, in turn, provide economic returns for an organisation. In this research, I propose building a model to predict the employee turnover rate using data from three datasets acquired from the Kaggle repository and a subset of their features. To analyse staff traits and forecast turnover and churn rate, the work summarised here employs machine learning approaches and pre-processing techniques. Logistic regression, AdaBoost, XGBoost, KNN, decision trees, and Naive Bayes are among the machine learning algorithms tried out on the extracted datasets in the report's implementation experiments. After thorough study and training on the selected attributes, the models are evaluated against parameters such as accuracy and precision.
8

Niapele, Sabaria, and Tamrin Salim. "Vegetation Analysis of the Tagafura Protected Forest in the City of Tidore Islands." Agrikan: Jurnal Agribisnis Perikanan 13, no. 2 (December 3, 2020): 426. http://dx.doi.org/10.29239/j.agrikan.13.2.426-434.

Abstract:
The Forest Park vegetation in Tagafura, as vegetation cover, is important to maintain and preserve, since it is vital for human life on earth. The functions of this forest park include protecting the surrounding area in several ways, such as sustaining the water cycle, preventing floods, controlling erosion and maintaining soil fertility. The Tagafura Forest Park has many natural resources, but the structure and composition of its vegetation have not yet been fully documented. Based on this, the researchers conducted the study "Vegetation Analysis of the Tagafura Forest Park in Tidore Island" to determine the structure and composition of the vegetation in the Tagafura Forest Park and to serve as input for government decisions about the forest park. The research used purposive sampling with a combination of transects and the double-plot method for plot placement. The data were then analysed using the density and relative density formulas, the dominance and relative dominance formulas, the frequency and relative frequency formulas, and the Importance Value Index (INP). The results showed that the forest has 25 recorded types across its vegetation structure: 15 types of seedlings, 10 types of stakes, 13 types of poles and 12 types of trees. The dominant species across the growth stages, based on the INP, are: (1) Augenia aromatic, with an INP of 45.49 for seedlings, 18.05 for stakes, 23.67 for poles and 132.08 for trees; (2) Myristica fragrans, with an INP of 31.44 for seedlings, 15.11 for stakes, 30.27 for poles and 47.25 for trees; (3) Gnetum gnemo, with an INP of 19.48 for seedlings, 24.21 for stakes, 49.92 for poles and 10.83 for trees; (4) Arenga pinnata, with an INP of 18.13 for seedlings, 36.11 for stakes, 24.04 for poles and 17.51 for trees; (5) Cinnamomum verum, with an INP of 11.84 for seedlings, 33.17 for stakes, 26.42 for poles and 7.36 for trees.
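For reference, the quantities named in the abstract combine into the Importance Value Index in the usual way (relative density + relative frequency + relative dominance). The short sketch below works through that arithmetic; the species and numbers are illustrative, not the study's field data.

```python
# Importance Value Index (INP) worked example on made-up plot data.
plots = 10  # number of sample plots
data = {
    #  species:   (individuals, plots_present, total_basal_area_m2)
    "species A": (50, 8, 1.2),
    "species B": (30, 6, 0.8),
    "species C": (20, 4, 0.5),
}

tot_n  = sum(n for n, _, _ in data.values())            # total individuals
tot_f  = sum(p / plots for _, p, _ in data.values())    # sum of frequencies
tot_ba = sum(ba for _, _, ba in data.values())          # total basal area

for sp, (n, p, ba) in data.items():
    rel_density   = 100 * n / tot_n
    rel_frequency = 100 * (p / plots) / tot_f
    rel_dominance = 100 * ba / tot_ba
    inp = rel_density + rel_frequency + rel_dominance
    print(f"{sp}: INP = {inp:.1f}")
```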
9

Azad, Mohammad, Igor Chikalov, and Mikhail Moshkov. "Representation of Knowledge by Decision Trees for Decision Tables with Multiple Decisions." Procedia Computer Science 176 (2020): 653–59. http://dx.doi.org/10.1016/j.procs.2020.09.037.

10

Mărginean, Nicolae, Janetta Sîrbu, and Dan Racoviţan. "Decision Trees – A Perspective Of Electronic Decisional Support." Annales Universitatis Apulensis Series Oeconomica 2, no. 12 (December 31, 2010): 631–37. http://dx.doi.org/10.29302/oeconomica.2010.12.2.15.

11

Nwosisi, Christopher, Sung Hyuk Cha, Yoo Jung An, Charles C. Tappert, and Evan Lipsitz. "Predicting Deep Venous Thrombosis Using Binary Decision Trees." International Journal of Engineering and Technology 3, no. 5 (2011): 467–72. http://dx.doi.org/10.7763/ijet.2011.v3.271.

12

TOFAN, Cezarina Adina. "Method of decision tree applied in adopting the decision for promoting a company." Annals of "Spiru Haret". Economic Series 15, no. 3 (September 30, 2015): 47. http://dx.doi.org/10.26458/1535.

Abstract:
The decision can be defined as the way chosen from several possible ways to achieve an objective. An important role in the functioning of the decisional-informational system is played by decision-making methods. Decision trees prove to be very useful tools for making financial or quantitative decisions where a large amount of complex information must be considered. They provide an effective structure in which alternative decisions and the implications of their choice can be assessed, and they help to form a correct and balanced view of the risks and rewards that may result from a certain choice. For these reasons, this communication reviews a series of decision-making criteria and analyses the benefits of using the decision tree method in the decision-making process by providing a numerical example. On this basis, it can be concluded that the procedure may prove useful in making decisions for companies operating on markets where the intensity of competition is differentiated.
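As a small worked example of the kind of numerical criterion such a tree typically applies (expected value at chance nodes, best alternative at the decision node), the figures below are invented for illustration and are not taken from the article.

```python
# Expected-value evaluation of a two-branch decision tree with made-up payoffs.
alternatives = {
    "launch campaign": [(0.6, 120_000), (0.4, -30_000)],   # (probability, payoff)
    "no campaign":     [(1.0, 20_000)],
}

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in alternatives.items():
    print(f"{name}: EV = {expected_value(outcomes):,.0f}")

best = max(alternatives, key=lambda a: expected_value(alternatives[a]))
print("choose:", best)   # the alternative with the highest expected value
```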
13

Coadou, Yann. "Decision trees." EPJ Web of Conferences 4 (2010): 02003. http://dx.doi.org/10.1051/epjconf/20100402003.

14

Barr, John, Michael Littman, and Marie desJardins. "Decision trees." ACM Inroads 10, no. 3 (August 6, 2019): 56. http://dx.doi.org/10.1145/3350749.

15

de Ville, Barry. "Decision trees." Wiley Interdisciplinary Reviews: Computational Statistics 5, no. 6 (October 4, 2013): 448–55. http://dx.doi.org/10.1002/wics.1278.

16

Azad, Mohammad, and Mikhail Moshkov. "Multi-stage optimization of decision and inhibitory trees for decision tables with many-valued decisions." European Journal of Operational Research 263, no. 3 (December 2017): 910–21. http://dx.doi.org/10.1016/j.ejor.2017.06.026.

17

Al-Barrak, Mashael A., and Muna Al-Razgan. "Predicting Students Final GPA Using Decision Trees: A Case Study." International Journal of Information and Education Technology 6, no. 7 (2016): 528–33. http://dx.doi.org/10.7763/ijiet.2016.v6.745.

18

Contreras Morales, Evelyn Francisca, Francisca Mercedes Ferreira Correa, and Mauricio A. Valle. "DISEÑO DE UN MODELO PREDICTIVO DE FUGA DE CLIENTES UTILIZANDO ÁRBOLES DE DECISIÓN." REVISTA INGENIERIA INDUSTRIAL 16, no. 1 (April 1, 2017): 07–23. http://dx.doi.org/10.22320/s07179103/2017.01.

19

Quinlan, J. R. "Decision trees and decision-making." IEEE Transactions on Systems, Man, and Cybernetics 20, no. 2 (1990): 339–46. http://dx.doi.org/10.1109/21.52545.

20

Zaborski, Daniel, Witold S. Proskura, and Wilhelm Grzesiak. "CLASSIFICATION OF CALVING DIFFICULTY SCORES USING DIFFERENT TYPES OF DECISION TREES." Acta Scientiarum Polonorum Zootechnica 15, no. 4 (January 10, 2017): 55–70. http://dx.doi.org/10.21005/asp.2016.15.4.05.

21

Mihai, Dana, and Mihai Mocanu. "Processing GIS Data Using Decision Trees and an Inductive Learning Method." International Journal of Machine Learning and Computing 11, no. 6 (November 2021): 393–98. http://dx.doi.org/10.18178/ijmlc.2021.11.6.1067.

22

Rada, Roy, and Hayden Wimmer. "Decision Trees and Financial Variables." International Journal of Decision Support System Technology 9, no. 1 (January 2017): 1–15. http://dx.doi.org/10.4018/ijdsst.2017010101.

Abstract:
A decision tree program for forecasting stock performance is applied to Compustat's Global financial statement data augmented with International Monetary Fund data. The hypothesis is that certain Compustat variables will be most used by the decision tree program and will provide insight as to how to make investing decisions. Surprisingly, the authors' experiments show that the most frequently used variables come from the International Monetary Fund and that variables provided exclusively for Financial Industry stocks were not useful for forecasting financial stock performance. These experiments might be part of a constellation of such experiments that help people map financial forecasting problems to the variables most useful for solving those problems. The research shows the value of using decision tree methodologies as applied to finance.
23

Barach, P., V. Levashenko, and E. Zaitseva. "Fuzzy Decision Trees in Medical Decision Making Support Systems." Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 8, no. 1 (September 2019): 37–42. http://dx.doi.org/10.1177/2327857919081009.

Abstract:
Fuzzy decision trees represent classification knowledge in a way closer to human thinking and are more robust in tolerating imprecise, conflicting, and missing information. Decision Making Support Systems are widely used in clinical medicine because decisions play an important role in diagnostic processes. Decision trees are a very suitable candidate for the induction of simple decision-making models with the possibility of automatic learning. The goal of this paper is to demonstrate a new approach for predictive data mining models in clinical medicine. This approach is based on the induction of fuzzy decision trees and allows us to build decision-making models with different properties (ordered, stable, etc.). Three new types of fuzzy decision trees (non-ordered, ordered and stable) are considered in the paper. Induction of these fuzzy decision trees is based on cumulative information estimates. Results of an experimental investigation are presented. Predictive data mining is becoming an essential instrument for researchers and clinical practitioners in medicine. Using new approaches based on fuzzy decision trees makes it possible to increase prediction accuracy. Decision trees are a very suitable candidate for the induction of simple decision-making models with the possibility of automatic and AI learning.
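The cumulative information estimates used for induction are specific to the paper; as a generic illustration of how a fuzzy split can be scored, the sketch below computes a membership-weighted entropy and the resulting information gain on made-up memberships and labels.

```python
# Membership-weighted entropy of a fuzzy split (generic illustration only).
import numpy as np

def fuzzy_entropy(memberships, labels):
    """Entropy of a fuzzy node: class probabilities are membership-weighted."""
    total = memberships.sum()
    probs = np.array([memberships[labels == c].sum() / total for c in np.unique(labels)])
    probs = probs[probs > 0]
    return float(-(probs * np.log2(probs)).sum())

labels = np.array([0, 0, 1, 1, 1])
low  = np.array([0.9, 0.7, 0.2, 0.1, 0.0])     # membership in branch "low"
high = 1.0 - low                               # membership in branch "high"

node = np.ones_like(low)                       # crisp parent node
gain = fuzzy_entropy(node, labels) - sum(
    b.sum() / node.sum() * fuzzy_entropy(b, labels) for b in (low, high))
print(round(gain, 3))
```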
24

Šprogar, Matej, Peter Kokol, Špela Hleb Babič, Vili Podgorelec, and Milan Zorman. "Vector decision trees." Intelligent Data Analysis 4, no. 3-4 (July 1, 2000): 305–21. http://dx.doi.org/10.3233/ida-2000-43-410.

25

Kargupta, K., B. H. Park, and H. Dutta. "Orthogonal decision trees." IEEE Transactions on Knowledge and Data Engineering 18, no. 8 (August 2006): 1028–42. http://dx.doi.org/10.1109/tkde.2006.127.

26

Yildiz, C. T., and E. Alpaydin. "Omnivariate decision trees." IEEE Transactions on Neural Networks 12, no. 6 (2001): 1539–46. http://dx.doi.org/10.1109/72.963795.

27

Li, Jiuyong, Saisai Ma, Thuc Le, Lin Liu, and Jixue Liu. "Causal Decision Trees." IEEE Transactions on Knowledge and Data Engineering 29, no. 2 (February 1, 2017): 257–71. http://dx.doi.org/10.1109/tkde.2016.2619350.

28

Quinlan, J. R. "Simplifying decision trees." International Journal of Man-Machine Studies 27, no. 3 (September 1987): 221–34. http://dx.doi.org/10.1016/s0020-7373(87)80053-6.

29

Brodley, Carla E., and Paul E. Utgoff. "Multivariate decision trees." Machine Learning 19, no. 1 (April 1995): 45–77. http://dx.doi.org/10.1007/bf00994660.

30

Coles, Susan, and Jennifer Rowley. "Revisiting decision trees." Management Decision 33, no. 8 (October 1995): 46–50. http://dx.doi.org/10.1108/00251749510093932.

31

QUINLAN, J. R. "Simplifying decision trees." International Journal of Human-Computer Studies 51, no. 2 (August 1999): 497–510. http://dx.doi.org/10.1006/ijhc.1987.0321.

32

Krupa, Tadeusz, and Teresa Ostrowska. "Decision-Making in Flat and Hierarchical Decision Problems." Foundations of Management 4, no. 2 (December 1, 2012): 23–36. http://dx.doi.org/10.2478/fman-2013-0008.

Abstract:
The article is dedicated to the modelling of the essence of decision-taking processes in flat and hierarchical decision problems. In flat decision problems particular attention is drawn to the effectiveness of strategies in seeking decision variants on solution decomposition trees, taking into account the strength of their predefined contradictions. For hierarchical decision processes, the issue of iterative balancing of global (hierarchical) decisions is expressed, based on the valuation of the significance of flat decisions.
33

Dortmans, Eric, and Teade Punter. "Behavior Trees for Smart Robots: Practical Guidelines for Robot Software Development." Journal of Robotics 2022 (September 7, 2022): 1–9. http://dx.doi.org/10.1155/2022/3314084.

Abstract:
Behavior Trees are a promising approach to modelling the autonomous behaviour of robots in dynamic environments. Behavior Trees represent action selection decisions as a tree of decision nodes. The hierarchy of these decision nodes provides the planning of the robot's actions, including its reactions to exceptions. Behavior Trees enable flexible planning and replanning of robot behavior while supporting more maintainable decision-making than traditional Finite State Machines. This paper presents an overview of lessons we have learned when applying Behavior Trees to various autonomous robots. We present these lessons as a sequence of steps meant to support robot software practitioners in developing their systems.
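To make the action-selection idea concrete, here is a minimal, hypothetical behavior-tree sketch with just Sequence and Fallback composite nodes; the conditions and actions are placeholders, not the robots or software discussed in the paper.

```python
# Tiny behavior tree: Sequence runs children until one fails,
# Fallback tries children until one succeeds.
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

class Leaf:
    def __init__(self, name, result): self.name, self.result = name, result
    def tick(self):
        print("tick:", self.name)
        return self.result

# "Deliver item": if the item is not already held, pick it first, then move and place.
tree = Sequence(
    Fallback(Leaf("item already held?", FAILURE), Leaf("pick item", SUCCESS)),
    Leaf("move to goal", SUCCESS),
    Leaf("place item", SUCCESS),
)
print(tree.tick())
```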
34

Rodríguez Garcés, Carlos, and Daniela Sandoval Muñoz. "Consumo tecnológico: Análisis de los determinantes del equipamiento doméstico mediante Arboles de Decisión." Revista Internacional de Investigación en Ciencias Sociales 11, no. 1 (August 3, 2015): 70–85. http://dx.doi.org/10.18004/riics.2015.julio.70-85.

35

Filipas, Ana Marija, Nenad Vretenar, and Ivan Prudky. "DECISION TREES DO NOT LIE: CURIOSITIES IN PREFERENCES OF CROATIAN ONLINE CONSUMERS." Zbornik radova Ekonomskog fakulteta u Rijeci: časopis za ekonomsku teoriju i praksu/Proceedings of Rijeka Faculty of Economics: Journal of Economics and Business 41, no. 1 (June 30, 2023): 157–81. http://dx.doi.org/10.18045/zbefri.2023.1.157.

Abstract:
Understanding consumers' preferences has always been important for economic theory and for business practitioners in operations management, supply chain management, marketing, etc. While preferences are often considered stable in simplified theoretical modelling, this is not the case in real-world decision-making. Therefore, it is crucial to understand consumers' preferences when a market disruption occurs. This research aims to recognise consumers' preferences with respect to online shopping after the COVID-19 outbreak hit the markets. For this purpose, we conducted an empirical study among Croatian consumers with prior experience in online shopping, using an online questionnaire. The questionnaire was completed by 350 respondents who met the criteria. We selected decision-tree models using the J48 algorithm to determine the influence of the identified shopping factors and demographic characteristics on a consumer's preference indicator. The main components of our indicators that influence consumer behaviour are the stimulators and destimulators of online shopping and the importance of social incidence. Our results show significant differences between men and women, with men tending to use fewer variables to make decisions. In addition, the analysis revealed that four product groups and a range of shopping-mode-specific influencing factors are required to evaluate consumers' purchase points when constructing the consumers' preference indicator.
36

AMPUŁA, Dariusz. "Prediction of Post-Diagnostic Decisions for Tested Hand Grenades’ Fuzes Using Decision Trees." Problems of Mechatronics Armament Aviation Safety Engineering 12, no. 2 (June 30, 2021): 39–54. http://dx.doi.org/10.5604/01.3001.0014.9332.

Abstract:
The article presents a brief history of the creation of decision trees and defines the purpose of the undertaken work. The process of building a classification tree according to the CHAID method is shown, paying particular attention to the disadvantages, advantages and characteristic features of this method, as well as to the formal requirements necessary to build this model. The tree-building method for UZRGM (Universal Modernised Fuze of Hand Grenades) fuzes is characterised, specifying the features of the tested hand grenade fuzes and the predictors necessary to create the correct tree model. A classification tree was built based on the test results, taking the accepted post-diagnostic decision as the qualitative dependent variable. A schema of the designed tree for the first diagnostic tests, its full structure and the sizes of the individual node classes are shown. The matrix of incorrect classifications was determined, which captures the incorrect predictions, i.e., the correctness of the performed classification. A sheet with the risk assessment and standard error for the learning sample and the v-fold cross-check is presented. On selected examples, the quality of the resulting predictive model was assessed by means of a graph of the cumulative value of the lift coefficient and the ROC curve.
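A minimal sketch of the kind of model-quality checks named in the abstract (misclassification matrix, v-fold cross-validation, ROC), assuming scikit-learn; an ordinary CART-style tree and synthetic data stand in for the CHAID tree and the fuze test records.

```python
# Confusion matrix, 10-fold cross-validation and ROC AUC for a small tree model.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score

X, y = make_classification(n_samples=600, n_features=10, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

tree = DecisionTreeClassifier(max_depth=4, random_state=3).fit(X_tr, y_tr)

print(confusion_matrix(y_te, tree.predict(X_te)))             # misclassification matrix
print(cross_val_score(tree, X, y, cv=10).mean())              # v-fold cross-check
print(roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]))    # area under the ROC curve
```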
37

Azad, Mohammad, Beata Zielosko, Mikhail Moshkov, and Igor Chikalov. "Decision Rules, Trees and Tests for Tables with Many-valued Decisions–comparative Study." Procedia Computer Science 22 (2013): 87–94. http://dx.doi.org/10.1016/j.procs.2013.09.084.

38

Kašćelan, Ljiljana, and Vladimir Kašćelan. "Component-Based Decision Trees." International Journal of Operations Research and Information Systems 6, no. 4 (October 2015): 1–18. http://dx.doi.org/10.4018/ijoris.2015100101.

Abstract:
Popular decision tree (DT) algorithms such as ID3, C4.5, CART, CHAID and QUEST may produce different results on the same data set. They consist of components with similar functionalities, but these components are implemented in different ways and have different performance. The best way to obtain an optimal DT for a data set is a component-based design, which enables the user to intelligently select already implemented components well suited to a specific data set. In this article the authors propose a component-based design of the optimal DT for the classification of securities account holders. The research results showed that the optimal algorithm is not one of the original DT algorithms, which confirms that the component design provided algorithms with better performance than the original ones. The authors also found how the specifics of the data influence the performance of DT components. The obtained classification results can be useful to future investors in the Montenegrin capital market.
39

Yang, Bin-Bin, Song-Qing Shen, and Wei Gao. "Weighted Oblique Decision Trees." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5621–27. http://dx.doi.org/10.1609/aaai.v33i01.33015621.

Abstract:
Decision trees have attracted much attention during the past decades. Previous decision trees include axis-parallel and oblique decision trees; both try to find the best splits via exhaustive search or heuristic algorithms in each iteration. Oblique decision trees generally simplify the tree structure and achieve better performance, but they are always accompanied by higher computation, as well as by initialization with the best axis-parallel splits. This work presents the Weighted Oblique Decision Tree (WODT) based on continuous optimization with random initialization. We consider different weights of each instance for the child nodes at every internal node, and then obtain a split by optimizing the continuous and differentiable objective function of weighted information entropy. Extensive experiments show the effectiveness of the proposed algorithm.
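As a generic illustration of the principle (not the paper's WODT implementation), the sketch below scores an oblique split by sigmoid-weighted child entropies and minimizes that continuous objective; a derivative-free optimizer is used here purely for brevity.

```python
# Soft oblique split: a linear projection w.x + b gives each instance a sigmoid
# weight for the right child; minimize the membership-weighted child entropy.
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=5, random_state=4)

def weighted_entropy(weights, y):
    total = weights.sum() + 1e-12
    probs = np.array([weights[y == c].sum() / total for c in np.unique(y)])
    probs = probs[probs > 0]
    return -(probs * np.log2(probs)).sum()

def objective(params):
    w, b = params[:-1], params[-1]
    z = np.clip(X @ w + b, -30, 30)            # avoid overflow in the sigmoid
    right = 1.0 / (1.0 + np.exp(-z))           # soft assignment to right child
    left = 1.0 - right
    n = len(y)
    return (right.sum() / n) * weighted_entropy(right, y) + \
           (left.sum() / n) * weighted_entropy(left, y)

rng = np.random.default_rng(4)
result = minimize(objective, rng.normal(size=X.shape[1] + 1), method="Nelder-Mead")
print("weighted child entropy after optimization:", round(result.fun, 3))
```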
40

Nogueira, Ana Rita, Carlos Abreu Ferreira, and João Gama. "Semi-causal decision trees." Progress in Artificial Intelligence 11, no. 1 (October 18, 2021): 105–19. http://dx.doi.org/10.1007/s13748-021-00262-2.

41

An, Shuang, Hong Shi, Qinghua Hu, and Jianwu Dang. "Fuzzy Rough Decision Trees." Fundamenta Informaticae 132, no. 3 (2014): 381–99. http://dx.doi.org/10.3233/fi-2014-1050.

42

Pedrycz, W., and Z. A. Sosnowski. "C–Fuzzy Decision Trees." IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews) 35, no. 4 (November 2005): 498–511. http://dx.doi.org/10.1109/tsmcc.2004.843205.

43

Basak, Jayanta. "Online Adaptive Decision Trees." Neural Computation 16, no. 9 (September 1, 2004): 1959–81. http://dx.doi.org/10.1162/0899766041336396.

Abstract:
Decision trees and neural networks are widely used tools for pattern classification. Decision trees provide a highly localized representation, whereas neural networks provide a distributed but compact representation of the decision space. Decision trees cannot be induced in the online mode and are not adaptive to changing environments, whereas neural networks are inherently capable of online learning and adaptivity. Here we provide a classification scheme called online adaptive decision trees (OADT), which is a tree-structured network like a decision tree and capable of online learning like neural networks. A new objective measure is derived for supervised learning with OADT. Experimental results validate the effectiveness of the proposed classification scheme. Also, on certain real-life data sets, we find that OADT performs better than two widely used models: the hierarchical mixture of experts and the multilayer perceptron.
44

Qian, Yuhua, Hang Xu, Jiye Liang, Bing Liu, and Jieting Wang. "Fusing Monotonic Decision Trees." IEEE Transactions on Knowledge and Data Engineering 27, no. 10 (October 1, 2015): 2717–28. http://dx.doi.org/10.1109/tkde.2015.2429133.

45

Sok, Hong Kuan, Melanie Po-Leen Ooi, Ye Chow Kuang, and Serge Demidenko. "Multivariate alternating decision trees." Pattern Recognition 50 (February 2016): 195–209. http://dx.doi.org/10.1016/j.patcog.2015.08.014.

46

Yıldız, Olcay Taner, and Onur Dikmen. "Parallel univariate decision trees." Pattern Recognition Letters 28, no. 7 (May 2007): 825–32. http://dx.doi.org/10.1016/j.patrec.2006.11.009.

47

Apolloni, Bruno, Giacomo Zamponi, and Anna Maria Zanaboni. "Learning fuzzy decision trees." Neural Networks 11, no. 5 (July 1998): 885–95. http://dx.doi.org/10.1016/s0893-6080(98)00030-6.

48

Oliver, Jonathan J., and David Hand. "Averaging over decision trees." Journal of Classification 13, no. 2 (September 1996): 281–97. http://dx.doi.org/10.1007/bf01246103.

49

BHATT, RAJEN B., and M. GOPAL. "NEURO-FUZZY DECISION TREES." International Journal of Neural Systems 16, no. 01 (February 2006): 63–78. http://dx.doi.org/10.1142/s0129065706000470.

Abstract:
Fuzzy decision trees are a powerful, top-down, hierarchical search methodology for extracting human-interpretable classification rules. However, they are often criticized for resulting in poor learning accuracy. In this paper, we propose Neuro-Fuzzy Decision Trees (N-FDTs): a fuzzy decision tree structure with a neural-like parameter adaptation strategy. In the forward cycle, we construct fuzzy decision trees using any of the standard induction algorithms, such as fuzzy ID3. In the feedback cycle, the parameters of the fuzzy decision trees are adapted using a stochastic gradient descent algorithm by traversing back from the leaf to the root nodes. With this strategy, the hierarchical structure of the fuzzy decision trees is kept intact during the parameter adaptation stage. The proposed approach of applying the backpropagation algorithm directly to the structure of fuzzy decision trees improves their learning accuracy without compromising comprehensibility (interpretability). The proposed methodology has been validated through computational experiments on real-world datasets.
50

Kingsford, Carl, and Steven L. Salzberg. "What are decision trees?" Nature Biotechnology 26, no. 9 (September 2008): 1011–13. http://dx.doi.org/10.1038/nbt0908-1011.
