To see the other types of publications on this topic, follow the link: DECISION TREE TECHNIQUE.

Journal articles on the topic "DECISION TREE TECHNIQUE"

Format the source in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "DECISION TREE TECHNIQUE".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever such details are provided in the source's metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Vijayarani, S., and M. Sangeetha. "An Efficient Technique for Privacy Preserving Decision Tree Learning." Indian Journal of Applied Research 3, no. 9 (October 1, 2011): 127–30. http://dx.doi.org/10.15373/2249555x/sept2013/40.

2

Cai, Yuliang, Huaguang Zhang, Qiang He, and Shaoxin Sun. "New classification technique: fuzzy oblique decision tree." Transactions of the Institute of Measurement and Control 41, no. 8 (June 11, 2018): 2185–95. http://dx.doi.org/10.1177/0142331218774614.

Abstract:
Based on axiomatic fuzzy set (AFS) theory and fuzzy information entropy, a novel fuzzy oblique decision tree (FODT) algorithm is proposed in this paper. Traditional axis-parallel decision trees only consider a single feature at each non-leaf node, while oblique decision trees partition the feature space with an oblique hyperplane. By contrast, the FODT takes dynamic mining fuzzy rules as a decision function. The main idea of the FODT is to use these fuzzy rules to construct leaf nodes for each class in each layer of the tree; the samples that cannot be covered by the fuzzy rules are then put into an additional node – the only non-leaf node in this layer. Construction of the FODT consists of four major steps: (a) generation of fuzzy membership functions automatically by AFS theory according to the raw data distribution; (b) extraction of dynamically fuzzy rules in each non-leaf node by the fuzzy rule extraction algorithm (FREA); (c) construction of the FODT by the fuzzy rules obtained from step (b); and (d) determination of the optimal threshold [Formula: see text] to generate a final tree. Compared with five traditional decision trees (C4.5, LADtree (LAD), Best-first tree (BFT), SimpleCart (SC) and NBTree (NBT)) and a recently obtained fuzzy rules decision tree (FRDT) on eight UCI machine learning data sets and one biomedical data set (ALLAML), the experimental results demonstrate that the proposed algorithm outperforms the other decision trees in both classification accuracy and tree size.
3

Maazouzi, Faiz, and Halima Bahi. "Using multi decision tree technique to improving decision tree classifier." International Journal of Business Intelligence and Data Mining 7, no. 4 (2012): 274. http://dx.doi.org/10.1504/ijbidm.2012.051712.

4

Kaur, Amanpreet. "IMAGE COMPRESSION USING DECISION TREE TECHNIQUE." International Journal of Advanced Research in Computer Science 8, no. 8 (August 30, 2017): 682–88. http://dx.doi.org/10.26483/ijarcs.v8i8.4812.

5

Olaru, Cristina, and Louis Wehenkel. "A complete fuzzy decision tree technique." Fuzzy Sets and Systems 138, no. 2 (September 2003): 221–54. http://dx.doi.org/10.1016/s0165-0114(03)00089-7.

6

Sharma, Dr Nirmla, and Sameera Iqbal Muhmmad Iqbal. "Applying Decision Tree Algorithm Classification and Regression Tree (CART) Algorithm to Gini Techniques Binary Splits." International Journal of Engineering and Advanced Technology 12, no. 5 (June 30, 2023): 77–81. http://dx.doi.org/10.35940/ijeat.e4195.0612523.

Abstract:
Decision tree analysis is a predictive modelling tool used across many fields. A tree is built by an algorithmic technique that splits the dataset in different ways depending on various conditions. Decision trees are among the most powerful algorithms that fall under the umbrella of supervised learning. Although decision trees look simple and intuitive, there is nothing simple about how the algorithm goes about deciding on splits or how tree pruning happens. The first thing to understand about decision trees is that they split the predictor space into distinct subsets that are relatively more homogeneous with respect to the target variable. The Gini index is the cost function applied to evaluate binary splits in the dataset and works with a categorical target variable such as "Success" or "Failure". Split creation essentially involves partitioning the dataset values. Decision trees follow a top-down, greedy approach known as recursive binary splitting. The study uses records of 15 student data points on passing or failing an online machine learning exam. Decision trees belong to the class of supervised machine learning and are widely applied because they are easy to implement, straightforward to interpret, able to handle quantitative, qualitative, continuous, and binary splits, and provide consistent outcomes. The CART regression technique is applied to predict values of continuous variables, and CART regression trees are a very accessible way of interpreting the results.
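The abstract above turns on two ideas, the Gini index of a candidate binary split and top-down recursive splitting. As a minimal sketch only (plain Python, with a made-up pass/fail sample standing in for the 15 student records), the Gini computation looks like this:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def gini_of_split(left, right):
    """Weighted Gini impurity of a binary split (lower is better)."""
    n = len(left) + len(right)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

# Toy example: pass/fail outcomes split on some candidate attribute value.
left = ["pass", "pass", "pass", "fail", "pass", "pass", "fail"]
right = ["fail", "fail", "fail", "pass", "fail", "fail", "fail", "fail"]
print(gini_of_split(left, right))   # ~0.31, versus ~0.48 for the unsplit set
```

A CART-style learner simply evaluates this weighted impurity for every candidate split and keeps the one with the lowest value, then recurses on each side.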
7

Amraee, Turaj, and Soheil Ranjbar. "Transient Instability Prediction Using Decision Tree Technique." IEEE Transactions on Power Systems 28, no. 3 (August 2013): 3028–37. http://dx.doi.org/10.1109/tpwrs.2013.2238684.

8

Bavirthi, Swathi Sowmya, and Supreethi K. P. "Systematic Review of Indexing Spatial Skyline Queries for Decision Support." International Journal of Decision Support System Technology 14, no. 1 (January 2022): 1–15. http://dx.doi.org/10.4018/ijdsst.286685.

Abstract:
Residing in the data age, researchers inferred that huge amount of geo-tagged data is available and identified the importance of Spatial Skyline queries. Spatial or geographic location in conjunction with textual relevance plays a key role in searching Point of Interest (POI) of the user. Efficient indexing techniques like R-Tree, Quad Tree, Z-order curve and variants of these trees are widely available in terms of spatial context. Inverted file is the popular indexing technique for textual data. As Spatial skyline query aims at analyzing both spatial and skyline dominance, there is a necessity for a hybrid indexing technique. This article presents the review of spatial skyline queries evaluation that include a range of indexing techniques which concentrates on disk access, I/O time, CPU time. The investigation and analysis of studies related to skyline queries based upon the indexing model and research gaps are presented in this review.
9

Cho, Sung-bin. "Corporate Bankruptcy Prediction using Decision Tree Ensemble Technique." Journal of the Korea Management Engineers Society 25, no. 4 (December 31, 2020): 63–71. http://dx.doi.org/10.35373/kmes.25.4.5.

10

Divyashree, S., and H. R. Divakar. "Prediction of Human Health using Decision Tree Technique." International Journal of Computer Sciences and Engineering 6, no. 6 (June 30, 2018): 805–8. http://dx.doi.org/10.26438/ijcse/v6i6.805808.

11

Kirchner, K., K. H. Tölle, and J. Krieter. "Decision tree technique applied to pig farming datasets." Livestock Production Science 90, no. 2-3 (November 2004): 191–200. http://dx.doi.org/10.1016/j.livprodsci.2004.04.003.

12

Farghal, S. A., R. M. El-Dewieny, and M. Roshdy Abdel Aziz. "Generation Expansion Planning Using the Decision Tree Technique." Electric Power Systems Research 13, no. 1 (August 1987): 59–70. http://dx.doi.org/10.1016/0378-7796(87)90051-4.

13

Zhai, Jun-hai. "Fuzzy decision tree based on fuzzy-rough technique." Soft Computing 15, no. 6 (March 10, 2010): 1087–96. http://dx.doi.org/10.1007/s00500-010-0584-0.

14

Olaru, Cristina, and Louis Wehenkel. "Erratum to “A complete fuzzy decision tree technique”." Fuzzy Sets and Systems 140, no. 3 (December 2003): 563–65. http://dx.doi.org/10.1016/j.fss.2003.08.002.

15

YIM, J. "Introducing a decision tree-based indoor positioning technique." Expert Systems with Applications 34, no. 2 (February 2008): 1296–302. http://dx.doi.org/10.1016/j.eswa.2006.12.028.

16

Fakir, Y., M. Azalmad, and R. Elaychi. "Study of The ID3 and C4.5 Learning Algorithms." Journal of Medical Informatics and Decision Making 1, no. 2 (April 23, 2020): 29–43. http://dx.doi.org/10.14302/issn.2641-5526.jmid-20-3302.

Abstract:
Data mining is a process of exploring large data to find patterns for decision-making. One of the techniques in decision-making is classification. Data classification is a form of data analysis used to extract models describing important data classes. There are many classification algorithms, and each classifier encompasses an algorithm in order to classify objects into predefined classes. The decision tree is one such important technique, which builds a tree structure by incrementally breaking down the dataset into smaller subsets. Decision trees can be implemented by using popular algorithms such as ID3, C4.5, and CART. The present study considers the ID3 and C4.5 algorithms to build a decision tree by using the "entropy" and "information gain" measures that are the basic components behind the construction of a classifier model.
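Because the study above builds its ID3/C4.5 trees from the entropy and information gain measures, a short self-contained sketch of both quantities may help; the toy outlook/label values below are invented for illustration and are not the paper's data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H = -sum(p_k * log2(p_k)) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """Parent entropy minus the weighted entropy of the partitions induced
    by one categorical feature (the ID3 splitting criterion)."""
    n = len(labels)
    remainder = 0.0
    for value in set(feature_values):
        subset = [lab for lab, v in zip(labels, feature_values) if v == value]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy(labels) - remainder

labels  = ["yes", "yes", "no", "no", "yes", "no"]
outlook = ["sunny", "overcast", "sunny", "rain", "overcast", "rain"]
print(information_gain(labels, outlook))   # ~0.67 bits for this toy feature
```

ID3 picks the attribute with the largest gain at each node; C4.5 normalizes the gain by the split information to avoid favouring many-valued attributes.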
17

M.U. Noormanshah, Wan, Puteri N.E. Nohuddin, and Zuraini Zainol. "Document Categorization Using Decision Tree: Preliminary Study." International Journal of Engineering & Technology 7, no. 4.34 (December 13, 2018): 437. http://dx.doi.org/10.14419/ijet.v7i4.34.26907.

Abstract:
This preliminary study aims to propose a good classification technique capable of classifying documents based on text mining, and to create an algorithm that automatically assigns a document to its folder according to the document's content while also performing sentiment analysis on the data sets and summarizing them. The objective of this paper is to identify an efficient text mining classification technique that yields the highest accuracy in classifying documents into document folders, is capable of extracting valuable information from context-based terms that can be used as input for automatic classification, and to evaluate the classification technique. The methodology of this study comprises five modules: 1) document collection, 2) pre-processing stage, 3) term frequency-inverse document frequency, 4) classification technique and algorithm, and 5) evaluation and visualization of the classification result. The proposed framework utilizes Term Frequency-Inverse Document Frequency (TF-IDF) and the decision tree technique: TF-IDF is used to rank all the terms from most frequent to least frequent, while the decision tree acts as the decision maker for deciding which folder a document belongs to.
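As a rough illustration of the TF-IDF-plus-decision-tree pipeline described above, here is a scikit-learn sketch; the four toy documents and folder labels are invented here and only stand in for the authors' corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

docs = [
    "invoice for quarterly revenue report",   # finance folder
    "invoice and annual budget summary",      # finance folder
    "server outage incident report",          # IT folder
    "server backup and patch schedule",       # IT folder
]
folders = ["finance", "finance", "it", "it"]

# TF-IDF weights the terms; the tree then decides which folder a
# document belongs to based on those weighted terms.
model = make_pipeline(TfidfVectorizer(), DecisionTreeClassifier(random_state=0))
model.fit(docs, folders)
print(model.predict(["overdue invoice for office supplies"]))  # expected: ['finance']
```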
18

Lazarescu, Mihai Mugurel, Terry Caelli, and Svetha Venkatesh. "Extracting Common Subtrees from Decision Trees." International Journal of Pattern Recognition and Artificial Intelligence 12, no. 06 (September 1998): 867–79. http://dx.doi.org/10.1142/s0218001498000476.

Abstract:
This paper explores an efficient technique for the extraction of common subtrees in decision trees. The method is based on a Suffix Tree string matching process and the algorithm is applied to the problem of finding common decision rules in path planning.
19

Luna, José Marcio, Efstathios D. Gennatas, Lyle H. Ungar, Eric Eaton, Eric S. Diffenderfer, Shane T. Jensen, Charles B. Simone, Jerome H. Friedman, Timothy D. Solberg, and Gilmer Valdes. "Building more accurate decision trees with the additive tree." Proceedings of the National Academy of Sciences 116, no. 40 (September 16, 2019): 19887–93. http://dx.doi.org/10.1073/pnas.1816748116.

Abstract:
The expansion of machine learning to high-stakes application domains such as medicine, finance, and criminal justice, where making informed decisions requires clear understanding of the model, has increased the interest in interpretable machine learning. The widely used Classification and Regression Trees (CART) have played a major role in health sciences, due to their simple and intuitive explanation of predictions. Ensemble methods like gradient boosting can improve the accuracy of decision trees, but at the expense of the interpretability of the generated model. Additive models, such as those produced by gradient boosting, and full interaction models, such as CART, have been investigated largely in isolation. We show that these models exist along a spectrum, revealing previously unseen connections between these approaches. This paper introduces a rigorous formalization for the additive tree, an empirically validated learning technique for creating a single decision tree, and shows that this method can produce models equivalent to CART or gradient boosted stumps at the extremes by varying a single parameter. Although the additive tree is designed primarily to provide both the model interpretability and predictive performance needed for high-stakes applications like medicine, it also can produce decision trees represented by hybrid models between CART and boosted stumps that can outperform either of these approaches.
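The additive-tree paper above frames CART and gradient-boosted stumps as the two extremes of a single spectrum. The sketch below reproduces only those two familiar endpoints on synthetic data (scikit-learn), not the additive tree itself.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# One extreme: a single CART-style tree (full interaction model).
cart = DecisionTreeClassifier(max_depth=4, random_state=0)
# Other extreme: gradient boosting restricted to stumps (purely additive model).
stumps = GradientBoostingClassifier(max_depth=1, n_estimators=100, random_state=0)

for name, model in [("CART", cart), ("boosted stumps", stumps)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```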
20

Lavanya, J., M. Ramesh, J. Sravan Kumar, G. Rajaramesh, and Subhani Shaik. "Hate Speech Detection Using Decision Tree Algorithm." Journal of Advances in Mathematics and Computer Science 38, no. 8 (June 19, 2023): 66–75. http://dx.doi.org/10.9734/jamcs/2023/v38i81791.

Abstract:
With the advancement of the internet and social media, people have access to various platforms to freely share their thoughts and opinions on various topics. However, this freedom of expression is abused to incite hatred against individuals or groups of people based on race, religion, gender, and so on. Therefore, to address this emerging problem on social media sites, recent studies have used various feature engineering techniques and machine learning algorithms to automatically detect hate speech posts on different datasets. Advances in machine learning have intrigued researchers seeking and implementing solutions to the problem of hate speech. In this work, we use the decision tree algorithm to detect hate speech in text data.
21

Koli, Sakshi. "Sentiment Analysis with Machine Learning Techniques and Improved J48 Decision Tree Technique." International Journal of Computer Sciences and Engineering 9, no. 6 (June 30, 2021): 77–82. http://dx.doi.org/10.26438/ijcse/v9i6.7782.

22

Khoshgoftaar, Taghi M., and Naeem Seliya. "Software Quality Classification Modeling Using the SPRINT Decision Tree Algorithm." International Journal on Artificial Intelligence Tools 12, no. 03 (September 2003): 207–25. http://dx.doi.org/10.1142/s0218213003001204.

Abstract:
Predicting the quality of system modules prior to software testing and operations can benefit the software development team. Such a timely reliability estimation can be used to direct cost-effective quality improvement efforts to the high-risk modules. Tree-based software quality classification models based on software metrics are used to predict whether a software module is fault-prone or not fault-prone. They are white box quality estimation models with good accuracy, and are simple and easy to interpret. An in-depth study of calibrating classification trees for software quality estimation using the SPRINT decision tree algorithm is presented. Many classification algorithms have memory limitations including the requirement that datasets be memory resident. SPRINT removes all of these limitations and provides a fast and scalable analysis. It is an extension of a commonly used decision tree algorithm, CART, and provides a unique tree pruning technique based on the Minimum Description Length (MDL) principle. Combining the MDL pruning technique and the modified classification algorithm, SPRINT yields classification trees with useful accuracy. The case study used consists of software metrics collected from a very large telecommunications system. It is observed that classification trees built by SPRINT are more balanced and demonstrate better stability than those built by CART.
23

Bala, Kanchan, and Deepinder Kaur. "Image Compression using Decision Tree Based SVD-ASWDR Technique." International Journal of Signal Processing, Image Processing and Pattern Recognition 10, no. 1 (January 31, 2017): 9–16. http://dx.doi.org/10.14257/ijsip.2017.10.1.02.

24

Matsumoto, Noboru, Kenneth J. Mackin, and Eiichiro Tazaki. "Emergence of Learning Rule in Neural Networks Using Genetic Programming Combined with Decision Trees." Journal of Advanced Computational Intelligence and Intelligent Informatics 3, no. 4 (August 20, 1999): 223–33. http://dx.doi.org/10.20965/jaciii.1999.p0223.

Abstract:
Genetic Programming (GP) combined with Decision Trees is used to evolve the structure and weights for Artificial Neural Networks (ANN). The learning rule of the decision tree is defined as a function of global information using a divide-and-conquer strategy. Learning rules with lower fitness values are replaced by new ones generated by GP techniques. The reciprocal connection between decision tree and GP emerges from the coordination of learning rules. Since there is no constraint on initial network, a more suitable network is found for a given task. Fitness values are improved using a Hybrid GP technique combining GP and Back Propagation. The proposed method is applied to medical diagnosis and results demonstrate that effective learning rules evolve.
25

Gerjets, Imke, Imke Traulsen, Kerstin Reiners, and Nicole Kemper. "Application of decision-tree technique to assess herd specific risk factors for coliform mastitis in sows." Veterinary Science Development 1, no. 1 (June 6, 2011): 6. http://dx.doi.org/10.4081/vsd.2011.2479.

Abstract:
The aim of the study was to investigate factors associated with coliform mastitis in sows, determined at herd level, by applying the decision-tree technique. Coliform mastitis represents an economically important disease in sows after farrowing that also affects the health, welfare and performance of the piglets. The decision-tree technique, a data mining method, may be an effective tool for making large datasets accessible and different sow herd information comparable. It is based on the C4.5 algorithm, which generates trees in a top-down recursive strategy. The technique can be used to detect weak points in farm management. Two datasets of two farms in Germany, consisting of sow-related parameters, were analysed and compared by decision-tree algorithms. Data were collected over the period of April 2007 to August 2010 from 987 sows (499 CM-positive sows and 488 CM-negative sows) and 596 sows (322 CM-positive sows and 274 CM-negative sows), respectively. Depending on the dataset, different graphical trees were built showing relevant factors at the herd level which may lead to coliform mastitis. To our understanding, this is the first time decision-tree modeling was used to assess risk factors for coliform mastitis. Herd-specific risk factors for the disease were illustrated, which could prove beneficial in disease and herd management.
26

Sreenath, P. G., Gopalakrishnan Praveen Kumare, Sundar Pravin, K. N. Vikram, and M. Saimurugan. "Automobile Gearbox Fault Diagnosis Using Naive Bayes and Decision Tree Algorithm." Applied Mechanics and Materials 813-814 (November 2015): 943–48. http://dx.doi.org/10.4028/www.scientific.net/amm.813-814.943.

Abstract:
Gearbox plays a vital role in various fields in the industries. Failure of any component in the gearbox will lead to machine downtime. Vibration monitoring is the technique used for condition based maintenance of gearbox. This paper discusses the use of machine learning techniques for automating the fault diagnosis of automobile gearbox. Our experimental study monitors the vibration signals of actual automobile gearbox with simulated fault conditions in the gear and bearing. Statistical features are extracted and classified for identifying the faults using decision tree and Naïve bayes technique. Comparison of the techniques for determining the classification accuracy is discussed.
27

Bacchieri, Antonella, and Ermanno Attanasio. "L’analisi delle decisioni negli studi di farmacoeconomia." Farmeconomia. Health economics and therapeutic pathways 6, no. 2 (June 15, 2005): 141–52. http://dx.doi.org/10.7175/fe.v6i2.831.

Abstract:
This paper is a review of the decision tree methodology. This is a very useful technique in complex decision making, when the consequences of the decisions are distant in time and the information upon which we can rely is uncertain. Decision trees are the basic structure underlying most applications of decision analysis in medicine. However, in this review we only cover their application to the pharmaco-economic field. The main steps of this decision analysis are explained. Thereafter, a case study from the literature is used as an example, i.e. an application of the decision tree analysis to a study aimed at comparing two different drugs in the treatment of gastro-esophageal reflux. The main focus of our paper is on the statistical aspects, which include the definition and quantification of the outcome variables, the definition and quantification of the probabilities of occurrence of the uncertain events considered in the decision tree, and the sensitivity analysis. The knowledge of the basic laws of the probability theory is mandatory for assigning correct values to the parameters of the decision tree (outcomes and probabilities). Finally, the sensitivity analysis is an important part of the work to be performed in the last stage of the decision analysis in order to measure the degree of robustness of the results when varying the assumptions.
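Since the review above centres on attaching outcomes and probabilities to a decision tree and rolling them back into expected values, a minimal roll-back sketch follows; the two hypothetical drugs and all cost and probability figures are invented purely for illustration.

```python
# Expected-cost roll-back for a two-branch decision tree.
# Each option: treatment cost, probability of success, and the extra cost
# of managing a failure (all figures are hypothetical).
options = {
    "drug_A": {"cost": 400.0, "p_success": 0.80, "failure_cost": 900.0},
    "drug_B": {"cost": 250.0, "p_success": 0.65, "failure_cost": 900.0},
}

def expected_cost(o):
    return o["cost"] + (1.0 - o["p_success"]) * o["failure_cost"]

for name, o in options.items():
    print(name, expected_cost(o))       # drug_A: 580.0, drug_B: 565.0

# One-way sensitivity analysis on drug_A's success probability:
for p in (0.70, 0.75, 0.80, 0.85):
    print("drug_A, p_success =", p, "->", 400.0 + (1.0 - p) * 900.0)
```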
28

Ranzato, Francesco, and Marco Zanella. "Abstract Interpretation of Decision Tree Ensemble Classifiers." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5478–86. http://dx.doi.org/10.1609/aaai.v34i04.5998.

Abstract:
We study the problem of formally and automatically verifying robustness properties of decision tree ensemble classifiers such as random forests and gradient boosted decision tree models. A recent stream of works showed how abstract interpretation, which is ubiquitously used in static program analysis, can be successfully deployed to formally verify (deep) neural networks. In this work we push forward this line of research by designing a general and principled abstract interpretation-based framework for the formal verification of robustness and stability properties of decision tree ensemble models. Our abstract interpretation-based method may induce complete robustness checks of standard adversarial perturbations and output concrete adversarial attacks. We implemented our abstract verification technique in a tool called silva, which leverages an abstract domain of not necessarily closed real hyperrectangles and is instantiated to verify random forests and gradient boosted decision trees. Our experimental evaluation on the MNIST dataset shows that silva provides a precise and efficient tool which advances the current state of the art in tree ensembles verification.
29

Naseem, Rashid, Bilal Khan, Arshad Ahmad, Ahmad Almogren, Saima Jabeen, Bashir Hayat, and Muhammad Arif Shah. "Investigating Tree Family Machine Learning Techniques for a Predictive System to Unveil Software Defects." Complexity 2020 (November 30, 2020): 1–21. http://dx.doi.org/10.1155/2020/6688075.

Abstract:
Software defect prediction at the initial stage of the software development life cycle remains a critical and important task. Defect prediction and correction lead to assurance of the quality of software systems and have remained an integral subject of study in previous years. Quickly forecasting faulty or defective modules in software development can help the development team use existing assets competently and effectively to deliver remarkable software products within a short timeline. To date, several researchers have developed defect prediction models by utilizing statistical and machine learning techniques, which are effective approaches to pinpointing defective modules. Tree-family machine learning techniques are considered to be among the finest and most commonly used supervised learning methods. In this study, different tree-family machine learning techniques are employed for software defect prediction using ten benchmark datasets. These techniques include Credal Decision Tree (CDT), Cost-Sensitive Decision Forest (CS-Forest), Decision Stump (DS), Forest by Penalizing Attributes (Forest-PA), Hoeffding Tree (HT), Decision Tree (J48), Logistic Model Tree (LMT), Random Forest (RF), Random Tree (RT), and REP-Tree (REP-T). The performance of each technique is evaluated using different measures, i.e., mean absolute error (MAE), relative absolute error (RAE), root mean squared error (RMSE), root relative squared error (RRSE), specificity, precision, recall, F-measure (FM), G-measure (GM), Matthew's correlation coefficient (MCC), and accuracy. The overall outcomes of this paper favour the RF technique, which produced the best results in terms of reducing error rates as well as increasing accuracy on five datasets, i.e., AR3, PC1, PC2, PC3, and PC4. The average accuracy achieved by RF is 90.2238%. The comprehensive outcomes of this study can be used as a reference point for other researchers, and any assertion concerning an enhancement in prediction through a new model, technique, or framework can be benchmarked and verified against them.
30

Luthfiarta, Ardytha, Junta Zeniarja, Edi Faisal, and Wibowo Wicaksono. "Prediction on Deposit Subscription of Customer based on Bank Telemarketing using Decision Tree with Entropy Comparison." Journal of Applied Intelligent System 4, no. 2 (March 6, 2020): 57–66. http://dx.doi.org/10.33633/jais.v4i2.2772.

Abstract:
Banking systems collect enormous amounts of data every day. This data can take the form of customer information, transaction details, risk profiles, credit card details, limits and collateral details, compliance and Anti-Money-Laundering (AML) related information, trade finance data, and SWIFT and telex messages. In addition, thousands of decisions are made in a banking system: for example, banks make credit decisions, relationship start-ups, investment decisions, and AML and illegal-financing-related decisions every day. Making these decisions requires a comprehensive review of various reports and drill-down tools provided by the banking systems. However, this is a manual process that is error-prone and time-consuming due to the large volume of transactional and historical data available. Hence, automatic knowledge mining is needed to ease the decision-making process. This research focuses on data mining techniques to handle the mentioned problem; the technique centres on a classification method using decision tree algorithms. The paper provides an overview of the data mining techniques and procedures to be performed, and an insight into how these techniques can be used for deposit subscription in a banking system to make the decision-making process easier and more productive. Keywords - Telemarketing, bank deposit, decision tree, classification, data mining, entropy.
31

Mia, Mia, Anis Fitri Nur Masruriyah, and Adi Rizky Pratama. "The Utilization of Decision Tree Algorithm In Order to Predict Heart Disease." JURNAL SISFOTEK GLOBAL 12, no. 2 (September 30, 2022): 138. http://dx.doi.org/10.38101/sisfotek.v12i2.551.

Abstract:
Data on heart disease patients obtained from the Ministry of Health of the Republic of Indonesia in 2020 show that heart disease has increased every year and ranks as the highest cause of death in Indonesia, especially at productive ages. If people with heart disease are not treated properly, they can die prematurely during their productive years. Thus, a predictive model that can help medical personnel solve health problems is built. This study employed the Random Forest and Decision Tree classification algorithms on cardiac patient data to create a predictive model; the data obtained showed that the heart disease data were not balanced. To overcome the imbalance, an oversampling technique was carried out using ADASYN and SMOTE. This study showed that the performance of the ADASYN and SMOTE oversampling techniques with the C4.5 algorithm and the Random Forest classifier had a significant effect on the prediction results. The usage of oversampling techniques aims to handle unbalanced datasets, and the confusion matrix is used for testing Precision, Recall, and F1-score, as well as Accuracy. Based on the results of research carried out with the 10-fold testing technique and oversampling, SMOTE + RF is one of the best combinations, with a greater Accuracy of 93.58% compared with Random Forest without SMOTE at 90.51% and with ADASYN at 93.55%. The application of the SMOTE technique was proven to be able to overcome the problem of data imbalance and obtain better classification results than the application of the ADASYN technique.
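A rough sketch of the SMOTE-plus-Random-Forest setup reported above, assuming the third-party imbalanced-learn package; synthetic imbalanced data stands in for the heart-disease records, which are not reproduced here.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline     # resamples inside CV training folds only
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Imbalanced stand-in data: roughly 10% positive class.
X, y = make_classification(n_samples=1000, n_features=13, weights=[0.9, 0.1],
                           random_state=0)

model = make_pipeline(SMOTE(random_state=0),
                      RandomForestClassifier(random_state=0))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print(cross_val_score(model, X, y, cv=cv, scoring="f1").mean())
```

Using the imbalanced-learn pipeline keeps the synthetic oversampling confined to each training fold, so the evaluation folds stay untouched.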
32

Ramana, P. V. "Naïve Bayesto Machine Learning Approach for Structural Dynamic Complications." Proceedings of the 12th Structural Engineering Convention, SEC 2022: Themes 1-2 1, no. 1 (December 19, 2022): 1283–91. http://dx.doi.org/10.38208/acp.v1.652.

Abstract:
Learning is one of the most powerful concepts in artificial intelligence research. It allows a system to learn from its environment and automatically modify its behavior to suit its needs. On par with human champions, the world’s best computer backgammon player is a computer program that learns by playing against itself. The learning algorithm and available data set limit computer learning. Several techniques are available to perform machine learning. Decision trees, the naïve Bayes approach, and the more general Bayes net approach are a few choices. The naïve Bayes approach is an instance of the more general Bayes nets. This paper examines and analyzes the naïve Bayes and decision tree approaches to learning. Various techniques to avoid over-fitting, such as ensemble construction and cross-validation, are also implemented and analyzed. A novel hybrid between the naïve Bayes approach and the decision tree method is presented. The hybrid system produces a spectrum of options that could be used for learning by merely changing parameter values. At one end lies the naïve Bayes approach, while at the other lies the decision tree technique. The proposed hybrid scheme solves the problem of poor naïve Bayes performance in a domain with dependent attributes and the memory consumption problem of the decision tree. One can analyze this idea and show encouraging experimental data that backs the need for such a solution.
33

Muqasqas, Saed A., Qasem A. Al Radaideh, and Bilal A. Abul-Huda. "A Hybrid Classification Approach Based on Decision Tree and Naïve Bays Methods." International Journal of Information Retrieval Research 4, no. 4 (October 2014): 61–72. http://dx.doi.org/10.4018/ijirr.2014100104.

Abstract:
Data classification as one of the main tasks of data mining has an important role in many fields. Classification techniques differ mainly in the accuracy of their models, which depends on the method adopted during the learning phase. Several researchers attempted to enhance the classification accuracy by combining different classification methods in the same learning process; resulting in a hybrid-based classifier. In this paper, the authors propose and build a hybrid classifier technique based on Naïve Bayes and C4.5 classifiers. The main goal of the proposed model is to reduce the complexity of the NBTree technique, which is a well known hybrid classification technique, and to improve the overall classification accuracy. Thirty six samples of UCI datasets were used in evaluation. Results have shown that the proposed technique significantly outperforms the NBTree technique and some other classifiers proposed in the literature in term of classification accuracy. The proposed classification approach yields an overall average accuracy equal to 85.70% over the 36 datasets.
34

Varade, Rashmi V., and Blessy Thankanchan. "Academic Performance Prediction of Undergraduate Students using Decision Tree Algorithm." SAMRIDDHI : A Journal of Physical Sciences, Engineering and Technology 13, SUP 1 (June 30, 2021): 97–100. http://dx.doi.org/10.18090/samriddhi.v13is1.22.

Abstract:
Data mining is a technique for extracting meaningful information or patterns from large amounts of data. These techniques are frequently utilised for analysis and prediction in practically all fields around the world. It's employed in a variety of fields, including education, business, health care, fraud detection, financial banking, and manufacturing engineering. This study explores the Decision Tree data mining methodology for predicting undergraduate students' academic performance.
35

B, Adeyemi. "Comparative Evaluation of SOM-Ward Clustering and Decision Tree for Conducting Customer-Portfolio Analysis." Advances in Multidisciplinary and scientific Research Journal Publication 8, no. 1 (March 30, 2018): 1161–128. http://dx.doi.org/10.22624/aims/cisdi/v8n1p11.

Abstract:
Analyzing the customer base for the purpose of retaining and attracting the most valuable customers remains the main problem facing companies in the modern age. Owing to the implementation techniques used, the process of conducting customer portfolio analysis (CPA) in most existing customer relationship management systems (CRMS) lacks the ability to extract hidden information and knowledge from the pool of data stored in customer databases or data warehouses in order to conduct market segmentation. In this paper, a two-level hybrid approach that combines Self-Organizing Maps with Ward's clustering, denoted SOM-Ward, and the decision tree data mining technique is proposed. The dataset used in this study was acquired from a loyalty card system, containing 1,480,662 customers, and sales information from several department stores. SOM-Ward clustering was used to conduct customer segmentation by dividing the customer base into distinct segments of customers with similar characteristics and behaviour. A decision tree was used to partition a large collection of data into smaller sets in order to identify the characteristics that tell high-spending customers from low-spending ones. The prediction performance of the decision tree shows that the overall accuracy of the model is 72.6%: 82.8% of the high-spending customers are correctly classified, while only 62.4% of the low-spending customers are correctly classified. This study revealed that the approach combining SOM-Ward clustering and the decision tree data mining technique outperforms the SOM-based approach and decision trees individually for market segmentation, classification, and data exploration problems. Keywords: Customer relationship management (CRM), customer portfolio analysis (CPA), Self-Organizing Maps (SOM), Ward's clustering, decision trees.
36

Chae, Deok-Jin, Ye-Ho Sin, Tae-Yeong Cheon, Heung-Seon Go, Geun-Ho Ryu, and Bu-Hyeon Hwang. "The Training Data Generation and a Technique of Phylogenetic Tree Generation using Decision Tree." KIPS Transactions:PartD 10D, no. 6 (January 1, 2003): 897–906. http://dx.doi.org/10.3745/kipstd.2003.10d.6.897.

37

R N, Rithesh, Vignesh R., and Anala M. R. "Autonomous Traffic Signal Control using Decision Tree." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 3 (June 1, 2018): 1522. http://dx.doi.org/10.11591/ijece.v8i3.pp1522-1529.

Abstract:
The objective of this paper is to introduce an effective and efficient way of traffic signal light control to optimize the traffic signal duration across each lane and thereby minimize or completely eliminate traffic congestion. This paper introduces a new approach to resolve the traffic congestion problem at junctions by making use of decision trees. The vehicle count in the real-time traffic video is determined by an image processing technique. This information is fed to the decision tree, based on which the decision is made regarding the status of the traffic signal lights of each lane at the junction at any given instant of time.
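The paper above feeds per-lane vehicle counts from image processing into a decision tree that picks the signal phase. Purely as a hedged illustration (hand-made counts and labels, not the authors' data), the same idea can be sketched in scikit-learn as:

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: vehicle counts in the [north, south, east, west] lanes (hypothetical).
counts = [
    [25, 20,  3,  2],
    [30, 28,  5,  4],
    [ 4,  3, 22, 26],
    [ 2,  5, 30, 24],
    [10, 12, 11,  9],
]
# Which axis gets the green phase next.
phase = ["north-south", "north-south", "east-west", "east-west", "north-south"]

controller = DecisionTreeClassifier(random_state=0).fit(counts, phase)
print(controller.predict([[6, 7, 28, 31]]))   # expected: ['east-west']
```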
38

Adedayo, Adejumoke, Benjamin Aribisala, and Taofik Ajagbe. "Breast Cancer Diagnosis Using Shape Analysis and Decision Tree Technique." Journal of Computer Science and Its Application 28, no. 2 (August 25, 2022): 19–25. http://dx.doi.org/10.4314/jcsia.v28i2.2.

39

Sumbaly, Ronak, N. Vishnusri, and S. Jeyalatha. "Diagnosis of Breast Cancer using Decision Tree Data Mining Technique." International Journal of Computer Applications 98, no. 10 (July 18, 2014): 16–24. http://dx.doi.org/10.5120/17219-7456.

40

Jung, Kwang Young, and Jaeheon Lee. "Multivariate process control procedure using a decision tree learning technique." Journal of the Korean Data and Information Science Society 26, no. 3 (May 31, 2015): 639–52. http://dx.doi.org/10.7465/jkdi.2015.26.3.639.

41

Cho, Yerim, Yeon-Choel Kim, and Yoonseok Shin. "Prediction Model of Construction Safety Accidents using Decision Tree Technique." Journal of the Korea Institute of Building Construction 17, no. 3 (June 20, 2017): 295–303. http://dx.doi.org/10.5345/jkibc.2017.17.3.295.

42

Shirazi, Syed Atir Raza, Sania Shamim, Abdul Hannan Khan, and Aqsa Anwar. "Intrusion detection using decision tree classifier with feature reduction technique." Mehran University Research Journal of Engineering and Technology 42, no. 2 (March 28, 2023): 30. http://dx.doi.org/10.22581/muet1982.2302.04.

Abstract:
The number of internet users and network services has been increasing rapidly over the recent decade. A large volume of data is produced and transmitted over the network, and the number of security threats to the network has also increased. Although many machine learning approaches and methods are used in intrusion detection systems to detect attacks, they are generally not efficient for large datasets and real-time detection. Machine learning classifiers that use all features of a dataset reduce detection accuracy. A reduced feature selection technique that selects the most relevant features for detecting an attack has therefore been used with a machine learning approach to obtain higher accuracy. In this paper, we used the recursive feature elimination technique and selected the most relevant features with machine learning approaches for big data to meet the challenge of detecting attacks. We applied this technique and classifier to the NSL-KDD dataset. The results showed that selecting all features for detection can maximize complexity in the context of large data, and that the performance of the classifier can be improved by feature selection, in terms of both efficiency and accuracy.
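A minimal sketch of recursive feature elimination wrapped around a decision tree, in the spirit of the paper above; synthetic data replaces the NSL-KDD records, and keeping 10 features is an arbitrary choice.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Stand-in for NSL-KDD: 40 features, only some of them informative.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           random_state=0)

# RFE repeatedly fits the tree and drops the weakest features until 10 remain.
selector = RFE(estimator=DecisionTreeClassifier(random_state=0),
               n_features_to_select=10)
model = make_pipeline(selector, DecisionTreeClassifier(random_state=0))
print(cross_val_score(model, X, y, cv=5).mean())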
43

NGUYEN, LE MINH, XUAN HIEU PHAN, SUSUMU HORIGUCHI, and AKIRA SHIMAZU. "A NEW SENTENCE REDUCTION TECHNIQUE BASED ON A DECISION TREE MODEL." International Journal on Artificial Intelligence Tools 16, no. 01 (February 2007): 129–37. http://dx.doi.org/10.1142/s0218213007003242.

Abstract:
This paper addresses a novel sentence reduction algorithm based on a decision tree model in which semantic information is used to enhance the accuracy of sentence reduction. The proposed algorithm is able to deal with the changeable order problem in sentence reduction. Furthermore, the use of decision list to solve the fragment problem in sentence reduction based decision tree model is also discussed. Our experimental results show an improvement when compared with earlier methods.
44

SYNKO, ANNA, and PAVLO ZHEZHNYCH. "METHOD OF AUTOMATED DETECTION OF ARTICLE TERMS USING A DECISION TREE." Herald of Khmelnytskyi National University. Technical sciences 319, no. 2 (April 27, 2023): 338–43. http://dx.doi.org/10.31891/2307-5732-2023-319-1-338-343.

Abstract:
Every day the number of users of virtual communities increases, and so does the data generated during communication between them. The posted data can contain valuable information, because it reflects not only the manufacturer's opinion but also consumer experience with a certain product. However, because virtual communities are weakly structured in terms of how information is provided and are more focused on entertaining content, they may contain data that carries no meaningful load, and when posting data not all users apply techniques that would increase the relevance of searches for that data. Therefore, searching for target data requires significant time. To improve data search, the article proposes a method that analyzes the content of posted messages and identifies keywords from a certain subject area. The method is automated and works on the basis of a previously developed dictionary of key phrases or regular expressions with weighting coefficients of belonging to one or another term. As a result, a decision tree is built for each term, which determines the weight of the term with respect to the content of the post or article. The level at which a post sits in a discussion is also taken into account, because a discussion contains a set of chronologically ordered posts: posts placed at higher levels receive a higher coefficient in the calculation, while posts at lower levels receive lower weighting factors. The identified key phrases for the specified term are ordered in descending order of weight, and at each level of the tree the total weight of the key phrases must equal one. To process the data from the virtual communities, they were downloaded using a data consolidation technique; for this, the concept of a consolidated data store was introduced, which allows data to be collected from disparate sources. The paper presents the weight calculation for one term from part of a CodeProject community post.
45

Mabuni, D., and S. Aquter Babu. "High Accurate and a Variant of k-fold Cross Validation Technique for Predicting the Decision Tree Classifier Accuracy." International Journal of Innovative Technology and Exploring Engineering 10, no. 2 (January 10, 2021): 105–10. http://dx.doi.org/10.35940/ijitee.c8403.0110321.

Abstract:
In machine learning, data usage matters even more than the logic of the program. With very large and moderately sized datasets it is possible to obtain robust, high classification accuracies, but not with small and very small datasets; in particular, only large training datasets can produce robust decision tree classification results. Classification results obtained using only one training and one testing dataset pair are not reliable. The cross-validation technique uses many random folds of the same dataset for training and validation. In order to obtain reliable and statistically correct classification results, the same algorithm must be applied to different pairs of training and validation datasets. To overcome the problem of using only a single training dataset and a single testing dataset, the existing k-fold cross-validation technique uses a cross-validation plan to obtain improved decision tree classification accuracy. In this paper a new cross-validation technique called prime fold is proposed; it is thoroughly tested experimentally and verified using many benchmark UCI machine learning datasets. It is observed that the prime-fold-based decision tree classification accuracies obtained in the experiments are far better than those from existing techniques.
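For reference alongside the cross-validation discussion above, this is what a plain stratified 10-fold accuracy estimate for a decision tree looks like in scikit-learn; the proposed prime-fold variant itself is not reproduced here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # a UCI benchmark dataset

tree = DecisionTreeClassifier(random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(tree, X, y, cv=cv)

# Mean and spread over the 10 folds; far more reliable than a single
# train/test split of the same data.
print(scores.mean(), scores.std())
```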
46

Villani, Maria Teresa, Daria Morini, Giorgia Spaggiari, Chiara Furini, Beatrice Melli, Alessia Nicoli, Francesca Iannotti, et al. "The (decision) tree of fertility: an innovative decision-making algorithm in assisted reproduction technique." Journal of Assisted Reproduction and Genetics 39, no. 2 (January 27, 2022): 395–408. http://dx.doi.org/10.1007/s10815-021-02353-4.

47

G., Girish Kumar. "Analysis of Accuracy in Heart Disease Diagnosis System Using Decision Tree Classifier Over Logistic Regression Based on Recursive Feature Selection." ECS Transactions 107, no. 1 (April 24, 2022): 15661–74. http://dx.doi.org/10.1149/10701.15661ecst.

Abstract:
The aim of the research paper is to compare the accuracy of the Decision Tree with Logistic Regression when using the Recursive Feature Elimination technique. Materials and Methods: In this study there are two groups, namely Decision Tree and Logistic Regression. Accuracy was computed for a dataset with a sample size of 40. The innovative method used is Recursive Feature Elimination, which finds the subset of features that gives higher accuracy. Result: It was observed that the Decision Tree algorithm obtains an accuracy of 85.5% and Logistic Regression an accuracy of 83.4%. The Decision Tree appears to perform significantly better than the Logistic Regression technique, with p<0.05; the significance value obtained from the statistical analysis is 0.01. Conclusion: The result shows that the Decision Tree classifier achieves better accuracy than the Logistic Regression classifier based on Recursive Feature Selection.
48

B., Lalithadevi. "Novel Technique for Price Prediction by Using Logistic, Linear and Decision Tree Algorithm on Deep Belief Network." International Journal of Psychosocial Rehabilitation 24, no. 5 (March 31, 2020): 1751–61. http://dx.doi.org/10.37200/ijpr/v24i5/pr201846.

49

Mansi, Mansi, Sukhdeep S. Dhami, and Vanraj Vanraj. "MODWT, PCA and Decision Tree based Fault diagnosis of Gear." Journal of University of Shanghai for Science and Technology 23, no. 07 (July 7, 2021): 376–86. http://dx.doi.org/10.51201/jusst/21/07161.

Abstract:
A gearbox is an important piece of power transmission equipment, and its maintenance is a top requirement because it is prone to a variety of failures. For gearbox fault diagnosis, techniques such as vibration monitoring have been widely used. When it comes to machine condition monitoring and fault diagnostics, feature extraction is the crucial step: for a classifier to perform accurately, it must have the appropriate discriminative information or features. Hence, this paper proposes a signal processing methodology based on the maximal overlap discrete wavelet transform (MODWT) and a dimensionality reduction technique, i.e. principal component analysis (PCA), to reduce the dimensionality of the feature space and obtain an ideal subspace for machine fault classification. First, the raw vibration signature is denoised with the help of the state-of-the-art MODWT signal processing technique to reveal the hidden fault signatures. Then various traditional statistical features are extracted from this denoised signal. These multi-dimensional features are processed with PCA, and the decision tree is then used for fault classification. A performance comparison of the proposed method with traditional raw analysis and with the method without PCA is presented, and the proposed method outperforms at every level.
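A hedged sketch of the PCA-plus-decision-tree back end of the pipeline above: the MODWT denoising stage is omitted, and random numbers stand in for the real statistical vibration features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Stand-in feature matrix: 300 vibration segments x 12 statistical features
# (RMS, kurtosis, crest factor, ...), with three fault classes.
X = rng.normal(size=(300, 12))
y = rng.integers(0, 3, size=300)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),   # dimensionality reduction step
                      DecisionTreeClassifier(random_state=0))
print(cross_val_score(model, X, y, cv=5).mean())   # ~chance level on random data
```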
50

D N, Ashwini, and Soumya Dass B. "Role of Data Mining Technique: A Boon to Society." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 657–60. http://dx.doi.org/10.22214/ijraset.2022.43782.

Abstract:
Data mining is a method of finding interesting patterns in huge volumes of data. Data mining techniques help to make business decisions by analysing information from multiple sources such as data marts and databases. In this paper, we focus on data mining tasks and their variety of applications in different fields, which is a boon to society. Keywords: KDD, Decision Tree, OLAP servers, Cube API, ODBC, Frequent patterns