A selection of scholarly literature on the topic "DECISION TRESS"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Browse the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "DECISION TRESS".

Next to each work in the list of references you will find an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference for the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the corresponding details are present in the record's metadata.

Journal articles on the topic "DECISION TRESS"

1

Koyuncugil, Ali Serhan, and Nermin Ozgulbas. "Detecting Road Maps for Capacity Utilization Decisions by Clustering Analysis and CHAID Decision Tress." Journal of Medical Systems 34, no. 4 (February 10, 2009): 459–69. http://dx.doi.org/10.1007/s10916-009-9258-9.

2

Arkin, Esther M., Henk Meijer, Joseph S. B. Mitchell, David Rappaport, and Steven S. Skiena. "Decision Trees for Geometric Models." International Journal of Computational Geometry & Applications 08, no. 03 (June 1998): 343–63. http://dx.doi.org/10.1142/s0218195998000175.

Abstract:
A fundamental problem in model-based computer vision is that of identifying which of a given set of geometric models is present in an image. Considering a "probe" to be an oracle that tells us whether or not a model is present at a given point, we study the problem of computing efficient strategies ("decision trees") for probing an image, with the goal of minimizing the number of probes necessary (in the worst case) to determine which single model is present. We show that a ⌈lg k⌉-height binary decision tree always exists for k polygonal models (in fixed position), provided (1) they are non-degenerate (do not share boundaries) and (2) they share a common point of intersection. Further, we give an efficient algorithm for constructing such decision trees when the models are given as a set of polygons in the plane. We show that constructing a minimum-height tree is NP-complete if either of the two assumptions is omitted. We provide an efficient greedy heuristic strategy and show that, in the general case, it yields a decision tree whose height is at most ⌈lg k⌉ times that of an optimal tree. Finally, we discuss some restricted cases whose special structure allows for improved results.
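
A toy sketch of the greedy probing strategy the abstract describes: at each step, pick the probe point that best halves the set of surviving models, and recurse. The model/probe encoding below (a boolean matrix contains[i, j] saying whether model i covers probe point j) and the data are invented for illustration; this is not the authors' algorithm verbatim.

```python
import numpy as np

def greedy_probe_tree(models, probes, contains):
    """Build a probe ("decision") tree for a set of candidate models.

    contains[i, j] is True iff model i is present at probe point j. Returns a
    nested dict {'probe': j, 'yes': ..., 'no': ...} or a leaf (model index).
    """
    if len(models) == 1:
        return models[0]
    best_j, best_worst = None, None
    for j in probes:
        n_yes = sum(contains[m, j] for m in models)
        if n_yes == 0 or n_yes == len(models):
            continue  # probe j does not discriminate among these models
        worst = max(n_yes, len(models) - n_yes)
        if best_worst is None or worst < best_worst:
            best_j, best_worst = j, worst
    if best_j is None:
        return models  # indistinguishable by the available probes
    yes = [m for m in models if contains[m, best_j]]
    no = [m for m in models if not contains[m, best_j]]
    return {'probe': best_j,
            'yes': greedy_probe_tree(yes, probes, contains),
            'no': greedy_probe_tree(no, probes, contains)}

# Four toy "models" over five candidate probe points.
contains = np.array([[1, 1, 0, 0, 1],
                     [1, 0, 1, 0, 0],
                     [0, 1, 1, 1, 0],
                     [0, 0, 0, 1, 1]], dtype=bool)
print(greedy_probe_tree([0, 1, 2, 3], list(range(5)), contains))
```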
3

Reddy, M. R. S. Surya Narayana, T. Narayana Reddy, and C. Viswanatha Reddy. "Decision Tress Analysis on Employee Job Satisfaction and HRD Climate: Role of Demographics." International Journal of Management Studies VI, no. 2(1) (April 30, 2019): 22. http://dx.doi.org/10.18843/ijms/v6i2(1)/03.

4

Yu, Tianyu, Xuandong Mo, Mingjun Chen, and Changfeng Yao. "Machine-learning-assisted microstructure–property linkages of carbon nanotube-reinforced aluminum matrix nanocomposites produced by laser powder bed fusion." Nanotechnology Reviews 10, no. 1 (January 1, 2021): 1410–24. http://dx.doi.org/10.1515/ntrev-2021-0093.

Abstract:
In this study, the cellular microstructural features, at a subgrain size of 0.5–1 μm, of carbon nanotube (CNT)-reinforced aluminum matrix nanocomposites produced by laser powder bed fusion (LPBF) were quantitatively extracted and calculated from scanning electron microscopy images by applying a cell segmentation method and various image analysis techniques. Over 80 geometric features for each cellular cell were extracted and statistically analyzed using machine learning techniques to explore the structure–property linkages of carbon nanotube reinforced AlSi10Mg nanocomposites. Predictive models for hardness and relative mass density were established using these subgrain cellular microstructural features. Data dimension reduction using principal component analysis was conducted to reduce the feature number to 3. The results showed that even though AlSi10Mg nanocomposite specimens produced using different laser parameters exhibited similar Al–Si eutectic microstructures, they displayed a large difference in their mechanical properties, including hardness and relative mass density, due to cellular structure variance. For hardness prediction, the Extra Trees regression model achieved a relative prediction error of 2.47%. For relative mass density prediction, the Decision Tree regression model achieved a relative prediction error of 1.42%. The results demonstrate that the developed models deliver satisfactory performance for hardness and relative mass density prediction of AlSi10Mg nanocomposites. The framework established in this study can be applied to LPBF process optimization and mechanical property manipulation of AlSi10Mg-based alloys and other newly designed alloys or composites for additive manufacturing.
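
The pipeline sketched in this abstract (dozens of extracted features, PCA down to 3 components, then a tree-ensemble regressor) maps naturally onto scikit-learn. The sketch below uses random stand-in data for the cell features, so only the pipeline structure follows the paper, not its numbers.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 80))       # stand-in for 80 geometric cell features
# Hypothetical hardness-like target driven by a few latent directions.
y = 100.0 + X[:, :3] @ np.array([5.0, -3.0, 2.0]) + rng.normal(scale=0.5, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=3),   # reduce the feature number to 3
                      ExtraTreesRegressor(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)
rel_err = 100 * np.mean(np.abs(model.predict(X_te) - y_te) / y_te)
print(f"mean relative error on held-out data: {rel_err:.2f}%")
```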
5

Wawrzyk, Martyna. "Semi-supervised learning with the clustering and Decision Trees classifier for the task of cognitive workload study." Journal of Computer Sciences Institute 15 (June 30, 2020): 214–18. http://dx.doi.org/10.35784/jcsi.1725.

Abstract:
The paper focuses on the application of a clustering algorithm and a Decision Trees classifier (DTs) as a semi-supervised method for the task of cognitive workload level classification. The analyzed data were collected during a Digit Symbol Substitution Test (DSST) administered with an eye-tracker device. Twenty-six volunteers took part in the examination. Three parts of the DSST test were conducted at different levels of difficulty, yielding three versions of data: low, middle, and high cognitive workload. The case study covered clustering of the collected data using the k-means algorithm to detect three or more clusters. The obtained clusters were evaluated by three internal indices to measure the quality of the clustering. The Davies-Bouldin index indicated the best result for four clusters. Based on this information, it is possible to formulate the hypothesis that four clusters exist. The obtained clusters were adopted as classes in supervised learning and subjected to classification with DTs. A mean accuracy of 0.85 was obtained for three-class classification and 0.73 for four-class classification.
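
The semi-supervised recipe above (cluster, pick the cluster count with the best internal index, then treat the clusters as labels for a supervised tree) is easy to sketch with scikit-learn; the features below are random stand-ins for the eye-tracking data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(260, 6))        # stand-in for eye-tracking features

# Score candidate cluster counts with the Davies-Bouldin index (lower = better).
db = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(X)
    db[k] = davies_bouldin_score(X, labels)
best_k = min(db, key=db.get)
print({k: round(v, 2) for k, v in db.items()}, "-> best k:", best_k)

# Adopt the clusters as classes and train a Decision Tree on the pseudo-labels.
y = KMeans(n_clusters=best_k, n_init=10, random_state=1).fit_predict(X)
acc = cross_val_score(DecisionTreeClassifier(random_state=1), X, y, cv=5)
print("mean DT accuracy on pseudo-labels:", acc.mean().round(2))
```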
6

Zaborski, Daniel, Witold Stanisław Proskura, Katarzyna Wojdak-Maksymiec, and Wilhelm Grzesiak. "Identification of Cows Susceptible to Mastitis based on Selected Genotypes by Using Decision Trees and A Generalized Linear Model." Acta Veterinaria 66, no. 3 (September 1, 2016): 317–35. http://dx.doi.org/10.1515/acve-2016-0028.

Abstract:
The aim of the present study was to: 1) check whether it would be possible to detect cows susceptible to mastitis at an early stage of their utilization based on selected genotypes and basic production traits in the first three lactations using ensemble data mining methods (boosted classification trees – BT and random forest – RF), 2) find out whether the inclusion of additional production variables for subsequent lactations would improve the detection performance of the models, 3) identify the most significant predictors of susceptibility to mastitis, and 4) compare the results obtained by using BT and RF with those for the more traditional generalized linear model (GLZ). A total of 801 records for Polish Holstein-Friesian Black-and-White cows were analyzed. The maximum sensitivity, specificity and accuracy on the test set were 72.13%, 39.73%, 55.90% (BT), 86.89%, 17.81%, 59.49% (RF) and 90.16%, 8.22%, 58.97% (GLZ), respectively. Inclusion of additional variables did not have a significant effect on model performance. The most significant predictors of susceptibility to mastitis were: milk yield, days in milk, sire's rank, and percentage of Holstein-Friesian genes, whereas calving season and genotypes (lactoferrin, tumor necrosis factor alpha, lysozyme and defensins) were ranked much lower. The applied models (both the data mining ones and GLZ) showed low accuracy in detecting cows susceptible to mastitis, and therefore other, more discriminating predictors should be used in future research.
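
The model comparison reported above can be approximated as follows. GradientBoostingClassifier stands in for boosted classification trees and a logistic regression for the GLZ, and the data are synthetic, so the sketch mirrors only the shape of the experiment, not its findings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=801, n_features=12, weights=[0.6],
                           random_state=0)   # stand-in for the cow records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {"BT": GradientBoostingClassifier(random_state=0),
          "RF": RandomForestClassifier(n_estimators=300, random_state=0),
          "GLZ": LogisticRegression(max_iter=1000)}   # logit GLM analogue
for name, m in models.items():
    tn, fp, fn, tp = confusion_matrix(y_te, m.fit(X_tr, y_tr).predict(X_te)).ravel()
    print(f"{name}: sensitivity={tp / (tp + fn):.2%}, specificity={tn / (tn + fp):.2%}")
```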
7

Chauhan, Rahul. "Prediction of Employee Turnover based on Machine Learning Models." Mathematical Statistician and Engineering Applications 70, no. 2 (February 26, 2021): 1767–75. http://dx.doi.org/10.17762/msea.v70i2.2469.

Abstract:
Because decision making constitutes a vital component of company management, a company's personnel are regarded as a valuable asset. The process of hiring them by making appropriate choices is widely recognised as a serious challenge by administrative authorities. Employee turnover can be a time-consuming and difficult process, because recruiting new workers requires not only additional time but also significant financial expenditure. In addition, a number of other factors play a role in selecting and hiring a qualified applicant who would, in turn, generate economic returns for an organisation. In this research, I propose building a model to predict the employee turnover rate using data from three datasets acquired from the Kaggle repository and a subset of their features. To analyse staff traits and forecast turnover and churn rate, the work summarised here employs machine learning approaches and pre-processing techniques. Logistic regression, AdaBoost, XGBoost, KNN, decision trees, and Naive Bayes are among the machine learning algorithms tried on the extracted datasets in the report's implementation experiments. After thorough analysis and training on the selected attributes, the models are evaluated against parameters such as accuracy and precision.
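
A sketch of the kind of classifier bake-off the abstract describes, on synthetic stand-in data rather than the Kaggle datasets; scikit-learn's random forest replaces XGBoost to keep the sketch dependency-free.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(random_state=42),
    "KNN": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=42),
    "naive Bayes": GaussianNB(),
    "random forest": RandomForestClassifier(random_state=42),  # XGBoost stand-in
}
for name, clf in classifiers.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:20s} accuracy={accuracy_score(y_te, pred):.3f} "
          f"precision={precision_score(y_te, pred):.3f}")
```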
8

Niapele, Sabaria, and Tamrin Salim. "Vegetation Analysis of the Tagafura Protected Forest in the City of Tidore Islands." Agrikan: Jurnal Agribisnis Perikanan 13, no. 2 (December 3, 2020): 426. http://dx.doi.org/10.29239/j.agrikan.13.2.426-434.

Abstract:
The vegetation of the Tagafura forest park is important vegetation cover to maintain and preserve, since it benefits human life: the forest park protects the surrounding area through the water cycle, flood prevention, erosion control, and the preservation of soil fertility. The Tagafura forest park holds abundant natural resources, but the structure and composition of its vegetation have not yet been fully documented. For this reason, this study, "Vegetation Analysis of the Tagafura Forest Park in Tidore Island", was conducted to determine the structure and composition of the vegetation in the forest park and to provide input for government decision-making about it. The research used purposive sampling with a combination of the track method and double plots for plot placement. The data were then analyzed using formulas for density and relative density, dominance and relative dominance, frequency and relative frequency, and the Importance Value Index (INP). The results showed that the forest contains 25 species: 15 types of seedlings, 10 types of stakes, 13 types of poles and 12 types of trees. The dominant species by INP are: (1) Eugenia aromatica, with an INP of 45.49 for seedlings, 18.05 for stakes, 23.67 for poles and 132.08 for trees; (2) Myristica fragrans, with 31.44 for seedlings, 15.11 for stakes, 30.27 for poles and 47.25 for trees; (3) Gnetum gnemon, with 19.48, 24.21, 49.92 and 10.83, respectively; (4) Arenga pinnata, with 18.13, 36.11, 24.04 and 17.51; (5) Cinnamomum verum, with 11.84, 33.17, 26.42 and 7.36.
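
The Importance Value Index (INP/IVI) used above is conventionally the sum of relative density, relative dominance and relative frequency; a minimal sketch with invented plot totals (not the paper's data):

```python
# INP = relative density + relative dominance + relative frequency, each in %.
# The density / basal-area / frequency values below are hypothetical.
species = {
    "Eugenia aromatica":  (120, 3.1, 0.9),
    "Myristica fragrans": ( 45, 1.2, 0.6),
    "Gnetum gnemon":      ( 10, 0.2, 0.3),
}
totals = [sum(v[i] for v in species.values()) for i in range(3)]
for name, vals in species.items():
    inp = 100 * sum(v / t for v, t in zip(vals, totals))
    print(f"{name:20s} INP = {inp:6.2f}")   # maximum possible is 300
```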
9

Azad, Mohammad, Igor Chikalov, and Mikhail Moshkov. "Representation of Knowledge by Decision Trees for Decision Tables with Multiple Decisions." Procedia Computer Science 176 (2020): 653–59. http://dx.doi.org/10.1016/j.procs.2020.09.037.

10

Mărginean, Nicolae, Janetta Sîrbu, and Dan Racoviţan. "Decision Trees – A Perspective Of Electronic Decisional Support." Annales Universitatis Apulensis Series Oeconomica 2, no. 12 (December 31, 2010): 631–37. http://dx.doi.org/10.29302/oeconomica.2010.12.2.15.


Dissertations on the topic "DECISION TRESS"

1

Kustra, Rafal. "Soft decision trees." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq28745.pdf.

2

Máša, Petr. "Finding Optimal Decision Trees." Doctoral thesis, Vysoká škola ekonomická v Praze, 2006. http://www.nusl.cz/ntk/nusl-456.

Abstract:
Decision trees are a widespread technique for describing data and are often used for prediction as well. An interesting problem is that a particular distribution can be described by one or more decision trees. Usually we are interested in the simplest decision tree (which we will also call the optimal decision tree). This thesis proposes an extension of the pruning phase of decision-tree algorithms that allows more pruning. Both the theoretical and the practical properties of this extended algorithm were investigated. As the main theoretical result, it was proven that for a certain class of distributions the algorithm finds the optimal decision tree (i.e., the smallest decision tree that represents the given distribution). Practical tests examined how well the algorithm can reconstruct a known tree from data. We were interested in whether our extension improves the number of correctly reconstructed trees, especially when the data are sufficiently large (in terms of the number of records). This conjecture was confirmed by the practical tests. A similar result was proven for Bayesian networks a few years ago. The algorithm proposed in this dissertation is polynomial in the number of leaves of the tree produced by the greedy tree-growing algorithm, which is an improvement over the naive algorithm that searches all possible trees and is exponential.
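
The dissertation's extended pruning is not reproduced here, but the standard cost-complexity pruning it builds on can be sketched with scikit-learn: grow a full tree, walk the pruning path, and keep the smallest tree whose cross-validated accuracy is still near the best (a generic sketch, not the thesis algorithm).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=8, random_state=0)

# Candidate pruning strengths along the cost-complexity pruning path.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
results = []
for alpha in path.ccp_alphas:
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha)
    score = cross_val_score(tree, X, y, cv=5).mean()
    results.append((score, tree.fit(X, y).get_n_leaves(), alpha))

best = max(r[0] for r in results)
score, leaves, alpha = min((r for r in results if r[0] >= best - 0.01),
                           key=lambda r: r[1])   # smallest near-best tree
print(f"{leaves} leaves, cv accuracy {score:.3f}, ccp_alpha {alpha:.4f}")
```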
3

Minguillón, Alfonso Julià. "On cascading small decision trees." Doctoral thesis, Universitat Autònoma de Barcelona, 2002. http://hdl.handle.net/10803/3027.

Abstract:
This thesis is about using small decision trees for classification and data mining. The intuitive idea behind this thesis is that a sequence of small decision trees may perform better than a large decision tree, reducing both training and exploitation costs.
Our first goal was to develop a system capable of recognizing several kinds of elements present in a document, such as background, text, horizontal and vertical lines, line drawings and images. Each element would then be treated according to its characteristics. For example, background regions would be removed and not processed at all, while the other regions would be compressed using an appropriate algorithm, for instance the lossy JPEG standard operation mode for images and a lossless method for the rest. Our first experiments using decision trees showed that the decision trees we built were too large and suffered from overfitting. We then tried to take advantage of the spatial redundancy present in images, using a multi-resolution approach: if a large block cannot be correctly classified, split it into four subblocks and repeat the process recursively for each subblock, using all previously computed knowledge about that block. Blocks that could not be processed at a given block size were labeled as mixed, which is where the word progressive comes in: a first low-resolution version of the classified image is obtained with the first classifier, and it is refined by the second one, the third one, etc., until a final version is obtained with the last classifier in the ensemble. Furthermore, the progressive scheme leads to the use of smaller decision trees, as we no longer need a complex classifier. Instead of building a large and complex classifier for classifying the whole input training set, we only try to solve the easiest part of the classification problem, delaying the rest for a second classifier, and so on.
The basic idea in this thesis is, therefore, a trade-off between cost and accuracy under a confidence constraint. A first classification is performed at a low cost; if an element is classified with high confidence, it is accepted; if not, it is rejected and a second classification is performed, and so on. It is, basically, a variation of the cascading paradigm, where a first classifier is used to compute additional information from each input sample, information that will be used to improve the classification accuracy of a second classifier, and so on. What we present in this thesis is, basically, an extension of the cascading paradigm and an exhaustive empirical evaluation of the parameters involved in the creation of progressive decision trees. Some basic theoretical issues related to progressive decision trees, such as system complexity, are also addressed.
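
A minimal sketch of the confidence-gated cascade described above, using scikit-learn and synthetic data. Unlike the thesis, every stage here is trained on the full training set rather than on the samples deferred by earlier stages.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

threshold = 0.9                     # accept a prediction only above this
stages = [DecisionTreeClassifier(max_depth=d, random_state=0) for d in (2, 4, 8)]
pred = np.full(len(y_te), -1)
pending = np.arange(len(y_te))      # samples not yet confidently classified
for i, stage in enumerate(stages):
    stage.fit(X_tr, y_tr)
    proba = stage.predict_proba(X_te[pending])
    conf = proba.max(axis=1)
    last = i == len(stages) - 1
    accept = np.ones(len(pending), bool) if last else conf >= threshold
    pred[pending[accept]] = proba[accept].argmax(axis=1)
    pending = pending[~accept]
print("cascade accuracy:", (pred == y_te).mean().round(3))
```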
4

Pisetta, Vincent. "New Insights into Decision Trees Ensembles." Thesis, Lyon 2, 2012. http://www.theses.fr/2012LYO20018/document.

Abstract:
Decision tree ensembles are among the most popular tools in machine learning. Nevertheless, their theoretical properties as well as their empirical performances remain under active investigation to date. In this thesis, we propose to shed light on these methods. More precisely, after having described the current theoretical aspects of three main ensemble schemes (chapter 1), we give an analysis supporting the existence of common reasons for the success of these three principles (chapter 2). The latter takes into account the first two moments of the margin as an essential ingredient for obtaining strong learning abilities. Starting from this rejoinder, we propose a new ensemble algorithm called OSS (Oriented Sub-Sampling) whose steps are in perfect accordance with the point of view we introduce. The empirical performance of OSS is superior to that of currently popular algorithms such as Random Forests and AdaBoost. In a third chapter (chapter 3), we analyze Random Forests from a "kernel" point of view. This view allows us to understand and observe the underlying regularization mechanism of these kinds of methods. Adopting the kernel point of view also enables us to improve the predictive performance of Random Forests using popular post-processing techniques such as SVM and multiple kernel learning. In conjunction with Random Forests, these techniques show greatly improved performance and are able to realize a pruning of the ensemble by conserving only a small fraction of the initial base learners.
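
The "kernel" view of a Random Forest can be made concrete through its proximity kernel, the fraction of trees that send two samples to the same leaf, and the post-processing idea can then be sketched as an SVM on that precomputed kernel. This illustrates the flavor of the approach, not the thesis's exact construction.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

def rf_proximity(A, B, forest):
    """Fraction of trees routing two samples to the same leaf (the RF kernel)."""
    la, lb = forest.apply(A), forest.apply(B)   # leaf ids, one column per tree
    return np.mean(la[:, None, :] == lb[None, :, :], axis=2)

# Post-process the forest with an SVM on the proximity (kernel) matrix.
svm = SVC(kernel="precomputed").fit(rf_proximity(X_tr, X_tr, rf), y_tr)
print("RF accuracy     :", rf.score(X_te, y_te))
print("RF->SVM accuracy:", svm.score(rf_proximity(X_te, X_tr, rf), y_te))
```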
5

Wickramarachchi, Darshana Chitraka. "Oblique decision trees in transformed spaces." Thesis, University of Canterbury. Mathematics and Statistics, 2015. http://hdl.handle.net/10092/11051.

Abstract:
Decision trees (DTs) play a vital role in statistical modelling. Simplicity and interpretability of the solution structure have made the method popular in a wide range of disciplines. In data classification problems, DTs recursively partition the feature space into disjoint sub-regions until each sub-region becomes homogeneous with respect to a particular class. Axis-parallel splits, the simplest form of splits, partition the feature space parallel to the feature axes. However, for some problem domains DTs with axis-parallel splits can produce complicated boundary structures. As an alternative, oblique splits are used to partition the feature space, potentially simplifying the boundary structure. Various approaches have been explored to find optimal oblique splits. One approach is based on optimisation techniques. This is considered the benchmark approach; however, its major limitation is that the tree induction algorithm is computationally expensive. On the other hand, split-finding approaches based on heuristic arguments have gained popularity and have improved on benchmark methods. This thesis proposes a methodology to induce oblique decision trees in transformed spaces based on a heuristic argument. As the first goal of the thesis, a new oblique decision tree algorithm, called HHCART (HouseHolder Classification and Regression Tree), is proposed. The proposed algorithm utilises a series of Householder matrices to reflect the training data at each non-terminal node during tree construction. Householder matrices are constructed using the eigenvectors of each class's covariance matrix. Axis-parallel splits in the reflected (or transformed) spaces provide an efficient way of finding oblique splits in the original space. Experimental results show that the accuracy and size of HHCART trees are comparable with some benchmark methods in the literature. The appealing features of HHCART are that it can handle both qualitative and quantitative features in the same oblique split, and that it is conceptually simple and computationally efficient. Data mining applications often come with massive example sets, and inducing oblique DTs for such example sets often consumes considerable time. HHCART is a serial-computing, memory-resident algorithm, which may be ineffective when handling massive example sets. As the second goal of the thesis, parallel computing and disk-resident versions of the HHCART algorithm are presented so that HHCART can be used irrespective of the size of the problem. HHCART is a flexible algorithm, and the eigenvectors defining the Householder matrices can be replaced by other vectors deemed effective in oblique split finding. The third endeavour of this thesis explores this aspect of HHCART. HHCART can be used with other vectors in order to improve classification results. For example, a normal vector of the angular bisector, introduced in the Geometric Decision Tree (GDT) algorithm, is used to construct the Householder reflection matrix. The proposed method produces better results than GDT for some problem domains. In the second case, Class Representative Vectors are introduced and used to construct Householder reflection matrices. The results of this experiment show that these oblique trees produce classification results competitive with those achieved by some benchmark decision trees. DTs are constructed using two approaches, namely top-down and bottom-up. HHCART is a top-down tree, which is the most common approach.
As the fourth idea of the thesis, the concept of HHCART is used to induce a new DT, HHBUT, using the bottom-up approach. The bottom-up approach performs cluster analysis prior to the tree building to identify the terminal nodes. The use of the Bayesian Information Criterion (BIC) to determine the number of clusters leads to accurate and compact trees when compared with Cross Validation (CV) based bottom-up trees. We suggest that HHBUT is a good alternative to the existing bottom-up tree especially when the number of examples is much higher than the number of features.
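
The core HHCART step, reflecting the data with a Householder matrix so that a dominant eigenvector of a class covariance matrix becomes axis-aligned, fits in a few lines of NumPy; this is a toy illustration of that single step, not the full tree-induction algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], [[3.0, 2.0], [2.0, 3.0]], size=200)
y = (X @ np.array([1.0, -1.0]) > 0).astype(int)

# Dominant eigenvector of one class's covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(X[y == 1], rowvar=False))
d = eigvecs[:, np.argmax(eigvals)]

# Householder matrix H with H @ d = e1 (assumes d is not already e1).
e1 = np.zeros_like(d)
e1[0] = 1.0
u = e1 - d
u /= np.linalg.norm(u)
H = np.eye(len(d)) - 2.0 * np.outer(u, u)

print(np.round(H @ d, 3))    # -> [1. 0.]: the eigenvector is now axis-aligned
X_reflected = X @ H          # axis-parallel splits on X_reflected correspond
                             # to oblique splits in the original space
```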
6

Han, Qian. "Mining Shared Decision Trees between Datasets." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1274807201.

7

Parkhe, Vidyamani. "Randomized decision trees for data mining." [Florida] : State University System of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/ane5962/thesis.pdf.

Abstract:
Thesis (M.S.)--University of Florida, 2000.
Title from first page of PDF file. Document formatted into pages; contains vi, 54 p.; also contains graphics. Vita. Includes bibliographical references (p. 52-53).
8

Boujari, Tahereh. "Instance-based ontology alignment using decision trees." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-84918.

Abstract:
Using ontologies is a key technology in the semantic web. The semantic web helps people store their data on the web, build vocabularies, and write rules for handling those data; it also helps search engines distinguish more easily between the pieces of information they want to access on the web. In order to use multiple ontologies created by different experts, we need matchers that find the similar concepts in them so that the ontologies can be merged. Text-based searches use string similarity functions to find equivalent concepts inside ontologies using their names. This is the method used in lexical matchers. But a global standard for naming concepts in different research areas does not exist or has not been adopted: the same name may refer to different concepts, while different names may describe the same concept. To solve this problem, we can use another approach for calculating the similarity value between concepts, the one used in structural and constraint-based matchers. It relies on relations between concepts, synonyms, and other information stored in the ontologies. Another category of matchers is instance-based, which uses additional information, such as documents related to the concepts of the ontologies (the corpus), to calculate the similarity value of the concepts. Decision trees, in the area of data mining, are used for various kinds of classification for different purposes. Using decision trees in an instance-based matcher is the main concept of this thesis. The results of this matcher, implemented using the C4.5 algorithm, are discussed. The matcher is also compared with other matchers, and combined with other matchers to obtain better results.
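
A toy sketch of an instance-based matcher in the spirit described above: each candidate concept pair is reduced to a few similarity features computed from the concepts' instances, and a decision tree learns which pairs align. The features and labels are invented, and scikit-learn's CART stands in for C4.5.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400                                  # candidate concept pairs
cosine_sim = rng.uniform(size=n)         # similarity of instance documents
instance_overlap = rng.uniform(size=n)   # shared-instance ratio
name_sim = rng.uniform(size=n)           # string similarity of concept names
X = np.column_stack([cosine_sim, instance_overlap, name_sim])
# Hypothetical ground truth: alignment is driven by the instance evidence.
y = (0.6 * cosine_sim + 0.4 * instance_overlap > 0.55).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
print("cv accuracy:", cross_val_score(tree, X, y, cv=5).mean().round(3))
```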
9

Lee, Hong, and 李匡. "Model-based decision trees for ranking data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45149707.

10

Beck, Jason. "Implementation and Experimentation with C4.5 Decision Trees." Honors in the Major Thesis, University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1157.

Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf.edu/Systems/DigitalInitiatives/DigitalCollections/InternetDistributionConsentAgreementForm.pdf You may also contact the project coordinator, Kerri Bottorff, at kerri.bottorff@ucf.edu for more information.
Bachelors
Engineering and Computer Science
Computer Engineering

Books on the topic "DECISION TRESS"

1

Alsolami, Fawaz, Mohammad Azad, Igor Chikalov, and Mikhail Moshkov. Decision and Inhibitory Trees and Rules for Decision Tables with Many-valued Decisions. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-12854-8.

2

Kustra, Rafal. Soft decision trees. Ottawa: National Library of Canada = Bibliothèque nationale du Canada, 1999.

3

McNellis, Ryan Thomas. Training Decision Trees for Optimal Decision-Making. [New York, N.Y.?]: [publisher not identified], 2020.

4

Azad, Mohammad, Igor Chikalov, Shahid Hussain, Mikhail Moshkov, and Beata Zielosko. Decision Trees with Hypotheses. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-08585-7.

5

Authority, Financial Services. Stakeholder pensions decision trees. London: Financial Services Authority, 2000.

6

Authority, Financial Services. Stakeholder Pensions and Decision Trees. London: Financial Services Authority, 2000.

7

Authority, Financial Services, ed. Stakeholder pensions and decision trees. London: Financial Services Authority, 2000.

8

Authority, Financial Services, ed. Stakeholder pensions and decision trees. London: Financial Services Authority, 2002.

9

Authority, Financial Services, ed. Stakeholder pensions and decision trees. London: Financial Services Authority, 2000.

10

SpringerLink (Online service), ed. Average Time Complexity of Decision Trees. Berlin, Heidelberg: Springer-Verlag GmbH Berlin Heidelberg, 2011.


Book chapters on the topic "DECISION TRESS"

1

Grąbczewski, Krzysztof. "Validated Decision Trees versus Collective Decisions." In Computational Collective Intelligence. Technologies and Applications, 342–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23938-0_35.

2

Murty, M. Narasimha, and V. Susheela Devi. "Decision Trees." In Undergraduate Topics in Computer Science, 123–46. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-495-1_6.

3

Suzuki, Joe. "Decision Trees." In Statistical Learning with Math and R, 147–70. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7568-6_8.

4

Zhou, Hong. "Decision Trees." In Learn Data Mining Through Excel, 125–48. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5982-5_9.

5

Pérez Castaño, Arnaldo. "Decision Trees." In Practical Artificial Intelligence, 367–410. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3357-3_10.

6

Jukna, Stasys. "Decision Trees." In Algorithms and Combinatorics, 405–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24508-4_14.

7

Dobra, Alin. "Decision Trees." In Encyclopedia of Database Systems, 1–2. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_553-2.

8

Grosan, Crina, and Ajith Abraham. "Decision Trees." In Intelligent Systems Reference Library, 269–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21004-4_11.

9

Kubat, Miroslav. "Decision Trees." In An Introduction to Machine Learning, 113–35. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20010-1_6.

10

Dobra, Alin. "Decision Trees." In Encyclopedia of Database Systems, 769. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_553.


Conference papers on the topic "DECISION TRESS"

1

Waksman, Peter. "Isolating causes of yield excursions with decision tress and commonality." In Design, Process Integration, and Characterization for Microelectronics, edited by Alexander Starikov and Kenneth W. Tobin, Jr. SPIE, 2002. http://dx.doi.org/10.1117/12.475647.

2

Sokolov, Andrey Pavlovich. "On expressive abilities of ensembles of decision trees." In Academician O.B. Lupanov 14th International Scientific Seminar "Discrete Mathematics and Its Applications". Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/dms-2022-72.

Abstract:
Decision trees and their ensembles are widely used in machine learning, statistics and data analysis. Predictive models based on decision trees show outstanding results in terms of quality and learning time, especially on heterogeneous tabular data. Speed, simplicity and reliability make this family one of the most popular predictive models in machine learning. Important parameters when training ensembles of decision trees (random forest, gradient boosting, etc.) are the number of trees and their maximum depth. These parameters are usually selected by exhaustive enumeration of all possible options on the training sample. The report proves a theorem on the expressive power of ensembles of decision trees of limited depth. It follows from this result that the depth of the decision trees cannot be traded for the size of the ensemble.
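
The theorem's message, that ensemble size cannot substitute for tree depth, is easy to illustrate on 3-bit parity: no single feature carries any information on its own, so depth-1 trees fail no matter how many of them are averaged (a demonstration of the phenomenon, not the seminar's proof).

```python
import itertools
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.array(list(itertools.product([0, 1], repeat=3)))
y = X.sum(axis=1) % 2          # 3-bit parity

for depth in (1, 3):
    rf = RandomForestClassifier(n_estimators=500, max_depth=depth,
                                max_features=None, bootstrap=False,
                                random_state=0).fit(X, y)
    print(f"max_depth={depth}: training accuracy = {rf.score(X, y):.2f}")
# 500 stumps stay at chance level; a single depth-3 tree is already exact.
```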
3

Fleischer, Rudolf. "Decision trees." In the twenty-fifth annual ACM symposium. New York, New York, USA: ACM Press, 1993. http://dx.doi.org/10.1145/167088.167216.

4

Björner, Anders, László Lovász, and Andrew C. C. Yao. "Linear decision trees." In the twenty-fourth annual ACM symposium. New York, New York, USA: ACM Press, 1992. http://dx.doi.org/10.1145/129712.129730.

5

Ignatov, Dmitry, and Andrey Ignatov. "Decision Stream: Cultivating Deep Decision Trees." In 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2017. http://dx.doi.org/10.1109/ictai.2017.00140.

6

Vos, Daniël, and Sicco Verwer. "Optimal Decision Tree Policies for Markov Decision Processes." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/606.

Abstract:
Interpretability of reinforcement learning policies is essential for many real-world tasks but learning such interpretable policies is a hard problem. Particularly, rule-based policies such as decision trees and rule lists are difficult to optimize due to their non-differentiability. While existing techniques can learn verifiable decision tree policies, there is no guarantee that the learners generate a policy that performs optimally. In this work, we study the optimization of size-limited decision trees for Markov Decision Processes (MDPs) and propose OMDTs: Optimal MDP Decision Trees. Given a user-defined size limit and MDP formulation, OMDT directly maximizes the expected discounted return for the decision tree using Mixed-Integer Linear Programming. By training optimal tree policies for different MDPs we empirically study the optimality gap for existing imitation learning techniques and find that they perform sub-optimally. We show that this is due to an inherent shortcoming of imitation learning, namely that complex policies cannot be represented using size-limited trees. In such cases, it is better to directly optimize the tree for expected return. While there is generally a trade-off between the performance and interpretability of machine learning models, we find that on small MDPs, depth 3 OMDTs often perform close to optimally.
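
OMDT itself optimizes the tree with Mixed-Integer Linear Programming; the sketch below shows only the evaluation side: scoring fixed depth-1 threshold policies on a toy corridor MDP by exact policy evaluation. All MDP details are invented for illustration.

```python
import numpy as np

n_states, gamma = 6, 0.95
P = {a: np.zeros((n_states, n_states)) for a in (0, 1)}  # 0 = left, 1 = right
R = np.zeros(n_states)
R[-1] = 1.0                                              # reward at the right end
for s in range(n_states):
    P[0][s, max(s - 1, 0)] = 1.0
    P[1][s, min(s + 1, n_states - 1)] = 1.0

def evaluate(threshold):
    """Depth-1 tree policy: go right iff the state index >= threshold."""
    actions = (np.arange(n_states) >= threshold).astype(int)
    P_pi = np.array([P[a][s] for s, a in enumerate(actions)])
    # Bellman: V = P_pi R + gamma P_pi V (reward collected on entering a state).
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, P_pi @ R)
    return V.mean()

for t in range(n_states):
    print(f"threshold {t}: mean discounted return = {evaluate(t):.3f}")
```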
7

Jimenez-Roa, Lisandro A., Tom Heskes, and Marielle Stoelinga. "Fault Trees, Decision Trees, And Binary Decision Diagrams: A Systematic Comparison." In Proceedings of the 31st European Safety and Reliability Conference. Singapore: Research Publishing Services, 2021. http://dx.doi.org/10.3850/978-981-18-2016-8_241-cd.

8

Abu-halaweh, Na'el M., and Robert W. Harrison. "Practical fuzzy decision trees." In 2009 IEEE Symposium on Computational Intelligence and Data Mining (CIDM). IEEE, 2009. http://dx.doi.org/10.1109/cidm.2009.4938651.

9

Struharik, R., V. Vranjkovic, S. Dautovic, and L. Novak. "Inducing oblique decision trees." In 2014 IEEE 12th International Symposium on Intelligent Systems and Informatics (SISY 2014). IEEE, 2014. http://dx.doi.org/10.1109/sisy.2014.6923596.

10

Gopalan, Parikshit, Adam Tauman Kalai, and Adam R. Klivans. "Agnostically learning decision trees." In the 40th annual ACM symposium. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1374376.1374451.


Reports of organizations on the topic "DECISION TRESS"

1

Wei, Yin-Loh. Decision Trees for Prediction and Data Mining. Fort Belvoir, VA: Defense Technical Information Center, February 2005. http://dx.doi.org/10.21236/ada430178.

2

Kegelmeyer, W. P. Jr, B. Groshong, M. Allmen, and K. Woods. Decision trees and integrated features for computer aided mammographic screening. Office of Scientific and Technical Information (OSTI), February 1997. http://dx.doi.org/10.2172/501540.

3

Kim, Joungbum, Sarah E. Schwarm, and Mari Ostendorf. Detecting Structural Metadata with Decision Trees and Transformation-Based Learning. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada457891.

4

Aha, David W., and Leonard A. Breslow. Comparing Simplification Procedures for Decision Trees on an Economics Classification. Fort Belvoir, VA: Defense Technical Information Center, May 1998. http://dx.doi.org/10.21236/ada343512.

5

Linenger, Jerry M., William B. Long, and William J. Sacco. Combat Surgery: Medical Decision Trees for Treatment of Naval Combat Casualties. Fort Belvoir, VA: Defense Technical Information Center, February 1991. http://dx.doi.org/10.21236/ada374992.

6

Barber, James. Using Boosted Decision Trees to Separate Signal and Background in B to XsGamma Decays. Office of Scientific and Technical Information (OSTI), September 2006. http://dx.doi.org/10.2172/892609.

7

Edmunds, Thomas A., Jeffrey S. Garrett, and Craig R. Wuest. Decision Trees for Analysis of Strategies to Deter Limited Nuclear Use in a Generic Scenario. Office of Scientific and Technical Information (OSTI), October 2018. http://dx.doi.org/10.2172/1477147.

8

Zio, Enrico, and Nicola Pedroni. Uncertainty characterization in risk analysis for decision-making practice. Fondation pour une culture de sécurité industrielle, May 2012. http://dx.doi.org/10.57071/155chr.

Abstract:
This document provides an overview of sources of uncertainty in probabilistic risk analysis. For each phase of the risk analysis process (system modeling, hazard identification, estimation of the probability and consequences of accident sequences, risk evaluation), the authors describe and classify the types of uncertainty that can arise. The document provides: a description of the risk assessment process, as used in hazardous industries such as nuclear power and offshore oil and gas extraction; a classification of sources of uncertainty (both epistemic and aleatory) and a description of techniques for uncertainty representation; a description of the different steps involved in a Probabilistic Risk Assessment (PRA) or Quantitative Risk Assessment (QRA), and an analysis of the types of uncertainty that can affect each of these steps; annexes giving an overview of a number of tools used during probabilistic risk assessment, including the HAZID technique, fault trees and event tree analysis.
9

Liu, Zhiyi. Measurement of single top quark production in the tau+jets channel using boosted decision trees at D0. Office of Scientific and Technical Information (OSTI), December 2009. http://dx.doi.org/10.2172/970067.

10

Harter, Rachel M., Pinliang (Patrick) Chen, Joseph P. McMichael, Edgardo S. Cureg, Samson A. Adeshiyan, and Katherine B. Morton. Constructing Strata of Primary Sampling Units for the Residential Energy Consumption Survey. RTI Press, May 2017. http://dx.doi.org/10.3768/rtipress.2017.op.0041.1705.

Abstract:
The 2015 Residential Energy Consumption Survey design called for stratification of primary sampling units to improve estimation. Two methods of defining strata from multiple stratification variables were proposed, leading to this investigation. All stratification methods use stratification variables available for the entire frame. We reviewed textbook guidance on the general principles and desirable properties of stratification variables and the assumptions on which the two methods were based. Using principal components combined with cluster analysis on the stratification variables to define strata focuses on relationships among stratification variables. Decision trees, regressions, and correlation approaches focus more on relationships between the stratification variables and prior outcome data, which may be available for just a sample of units. Using both principal components/cluster analysis and decision trees, we stratified primary sampling units for the 2009 Residential Energy Consumption Survey and compared the resulting strata.
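
The principal-components-plus-clustering route to strata can be sketched as follows; the frame variables are synthetic stand-ins, not RECS frame data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
frame = np.column_stack([
    rng.lognormal(10.0, 1.0, size=1000),  # e.g., housing-unit counts
    rng.normal(55.0, 15.0, size=1000),    # e.g., mean winter temperature
    rng.uniform(0.0, 1.0, size=1000),     # e.g., urban household share
])

Z = StandardScaler().fit_transform(frame)        # put variables on one scale
pcs = PCA(n_components=2).fit_transform(Z)       # principal components
strata = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pcs)
print("primary sampling units per stratum:", np.bincount(strata))
```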