Academic literature on the topic 'Random Decision Forests'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Random Decision Forests.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Random Decision Forests"

1

Jeong, Hoyeon, Youngjune Kim, and So Yeong Lim. "A Predictive Model for Farmland Purchase/Rent Using Random Forests." Korean Agricultural Economics Association 63, no. 3 (September 30, 2022): 153–68. http://dx.doi.org/10.24997/kjae.2022.63.3.153.

Abstract:
This study contributes guidance for understanding farmland purchase and rent decisions in Korea via an analysis using Random Forests, a supervised machine learning algorithm. The Farm Household Economy Survey is employed to predict the relationship between farmland acquisition and farm household economic characteristics. Our main findings are twofold. First, a farmland purchase decision is positively related to transfer incomes, the value of inventory and fixed assets, and the value of farmland that farmers own. Second, a farmland rent decision is also positively associated with rent paid in the prior year, revenue from field crops, inventory and agricultural assets, and transfer incomes.
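As a minimal illustration of the kind of classifier this study applies, the sketch below fits a random forest to a binary purchase decision. The data are entirely synthetic, and the three columns are stand-ins for the survey variables named in the abstract (transfer income, inventory and fixed assets, owned-farmland value), not the authors' actual features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Three made-up covariates standing in for the survey measures.
X = rng.normal(size=(n, 3))
# Assume purchase propensity rises with all three covariates, plus noise.
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)
train_acc = model.score(X, y)  # accuracy on the training set
```

In a real analysis, the fit would of course be evaluated on held-out data rather than the training set.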
2

Wu, David J., Tony Feng, Michael Naehrig, and Kristin Lauter. "Privately Evaluating Decision Trees and Random Forests." Proceedings on Privacy Enhancing Technologies 2016, no. 4 (October 1, 2016): 335–55. http://dx.doi.org/10.1515/popets-2016-0043.

Abstract:
Decision trees and random forests are common classifiers with widespread use. In this paper, we develop two protocols for privately evaluating decision trees and random forests. We operate in the standard two-party setting where the server holds a model (either a tree or a forest), and the client holds an input (a feature vector). At the conclusion of the protocol, the client learns only the model’s output on its input and a few generic parameters concerning the model; the server learns nothing. The first protocol we develop provides security against semi-honest adversaries. We then give an extension of the semi-honest protocol that is robust against malicious adversaries. We implement both protocols and show that both variants are able to process trees with several hundred decision nodes in just a few seconds and a modest amount of bandwidth. Compared to previous semi-honest protocols for private decision tree evaluation, we demonstrate a tenfold improvement in computation and bandwidth.
3

Kumano, So, and Tatsuya Akutsu. "Comparison of the Representational Power of Random Forests, Binary Decision Diagrams, and Neural Networks." Neural Computation 34, no. 4 (March 23, 2022): 1019–44. http://dx.doi.org/10.1162/neco_a_01486.

Abstract:
In this letter, we compare the representational power of random forests, binary decision diagrams (BDDs), and neural networks in terms of the number of nodes. We assume that an axis-aligned function on a single variable is assigned to each edge in random forests and BDDs, and that the activation functions of the neural networks are sigmoid, rectified linear unit, or similar functions. Based on existing studies, we show that for any random forest, there exists an equivalent depth-3 neural network with a linear number of nodes. We also show that for any BDD with balanced width, there exists an equivalent shallow-depth neural network with a polynomial number of nodes. These results suggest that even shallow neural networks have the same or higher representational power than deep random forests and deep BDDs. We also show that in some cases an exponential number of nodes is required to express a given random forest by a random forest with far fewer trees, which suggests that many trees are required for random forests to represent some specific knowledge efficiently.
4

Zhang, Heng-Ru, Fan Min, and Xu He. "Aggregated Recommendation through Random Forests." Scientific World Journal 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/649596.

Abstract:
Aggregated recommendation refers to the process of suggesting one kind of item to a group of users. Compared to user-oriented or item-oriented approaches, it is more general and, therefore, more appropriate for cold-start recommendation. In this paper, we propose a random forest approach to create aggregated recommender systems. The approach is used to predict the rating a group of users gives to a kind of item. In the preprocessing stage, we merge user, item, and rating information to construct an aggregated decision table, where rating information serves as the decision attribute. We also model the data conversion process corresponding to the new-user, new-item, and both-new problems. In the training stage, a forest is built for the aggregated training set, where each leaf is assigned a distribution over discrete ratings. In the testing stage, we present four prediction approaches to compute evaluation values based on the distribution of each tree. Experimental results on the well-known MovieLens dataset show that the aggregated approach maintains an acceptable level of accuracy.
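The aggregation step described here, where each leaf holds a distribution over discrete ratings and the forest combines the per-tree distributions, can be sketched with scikit-learn. This is a generic illustration on synthetic data, not the paper's implementation; it simply verifies that averaging each tree's leaf class distribution reproduces the forest's soft vote:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Treat three discrete "ratings" as class labels on synthetic data.
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=1)
forest = RandomForestClassifier(n_estimators=50, random_state=1).fit(X, y)

# Average the per-tree leaf distributions by hand...
per_tree = np.stack([t.predict_proba(X[:5]) for t in forest.estimators_])
manual = per_tree.mean(axis=0)
# ...and compare with the forest's own aggregated distribution.
auto = forest.predict_proba(X[:5])
```

scikit-learn's forests average the trees' class probabilities in exactly this way, so `manual` and `auto` coincide.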
5

Audemard, Gilles, Steve Bellart, Louènas Bounia, Frédéric Koriche, Jean-Marie Lagniez, and Pierre Marquis. "Trading Complexity for Sparsity in Random Forest Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5461–69. http://dx.doi.org/10.1609/aaai.v36i5.20484.

Abstract:
Random forests have long been considered as powerful model ensembles in machine learning. By training multiple decision trees, whose diversity is fostered through data and feature subsampling, the resulting random forest can lead to more stable and reliable predictions than a single decision tree. This however comes at the cost of decreased interpretability: while decision trees are often easily interpretable, the predictions made by random forests are much more difficult to understand, as they involve a majority vote over multiple decision trees. In this paper, we examine different types of reasons that explain "why" an input instance is classified as positive or negative by a Boolean random forest. Notably, as an alternative to prime-implicant explanations taking the form of subset-minimal implicants of the random forest, we introduce majoritary reasons which are subset-minimal implicants of a strict majority of decision trees. For these abductive explanations, the tractability of the generation problem (finding one reason) and the optimization problem (finding one minimum-sized reason) are investigated. Unlike prime-implicant explanations, majoritary reasons may contain redundant features. However, in practice, prime-implicant explanations - for which the identification problem is DP-complete - are slightly larger than majoritary reasons that can be generated using a simple linear-time greedy algorithm. They are also significantly larger than minimum-sized majoritary reasons which can be approached using an anytime Partial MaxSAT algorithm.
6

Ho, Tin Kam. "The random subspace method for constructing decision forests." IEEE Transactions on Pattern Analysis and Machine Intelligence 20, no. 8 (1998): 832–44. http://dx.doi.org/10.1109/34.709601.
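Ho's random subspace idea, training each tree on a randomly chosen subset of the feature dimensions and voting over the resulting trees, can be sketched as follows. This is a minimal illustration on synthetic data; the tree count and subspace size are arbitrary choices, not values from the paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=42)

def fit_subspace_forest(X, y, n_trees=25, subspace_dim=8):
    # Every tree sees the full training set but only a random
    # subset of the feature dimensions.
    forest = []
    for _ in range(n_trees):
        dims = rng.choice(X.shape[1], size=subspace_dim, replace=False)
        tree = DecisionTreeClassifier(random_state=0).fit(X[:, dims], y)
        forest.append((dims, tree))
    return forest

def predict(forest, X):
    # Majority vote over the per-subspace trees (binary labels).
    votes = np.stack([t.predict(X[:, d]) for d, t in forest])
    return (votes.mean(axis=0) > 0.5).astype(int)

forest = fit_subspace_forest(X, y)
acc = (predict(forest, X) == y).mean()
```

Modern random forest implementations fold the same idea into per-split feature sampling (e.g. a `max_features` parameter) rather than fixing one subspace per tree.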

7

Fröhlich, B., E. Rodner, M. Kemmler, and J. Denzler. "Efficient Gaussian process classification using random decision forests." Pattern Recognition and Image Analysis 21, no. 2 (June 2011): 184–87. http://dx.doi.org/10.1134/s1054661811020337.

8

Fletcher, Sam, and Md Zahidul Islam. "Differentially private random decision forests using smooth sensitivity." Expert Systems with Applications 78 (July 2017): 16–31. http://dx.doi.org/10.1016/j.eswa.2017.01.034.

9

Thongkam, Jaree, and Vatinee Sukmak. "Enhancing Decision Tree with AdaBoost for Predicting Schizophrenia Readmission." Advanced Materials Research 931-932 (May 2014): 1467–71. http://dx.doi.org/10.4028/www.scientific.net/amr.931-932.1467.

Abstract:
A psychiatric readmission is argued to be an adverse outcome because it is costly and occurs when relapse of the illness is severe. An analysis of systematic models of readmission data can provide useful insight into "quicker and sicker" patients with schizophrenia. This research aims to develop and investigate schizophrenia readmission prediction models using data mining techniques including decision tree, Random Tree, Random Forests, AdaBoost, Bagging, and combinations of AdaBoost with decision tree, AdaBoost with Random Tree, AdaBoost with Random Forests, Bagging with decision tree, Bagging with Random Tree and Bagging with Random Forests. The experimental results showed that AdaBoost with decision tree has the highest precision, recall and F-measure, up to 98.11%, 98.79% and 98.41%, respectively.
10

Fröhlich, B., E. Rodner, M. Kemmler, and J. Denzler. "Large-scale Gaussian process classification using random decision forests." Pattern Recognition and Image Analysis 22, no. 1 (March 2012): 113–20. http://dx.doi.org/10.1134/s1054661812010166.


Dissertations / Theses on the topic "Random Decision Forests"

1

Julock, Gregory Alan. "The Effectiveness of a Random Forests Model in Detecting Network-Based Buffer Overflow Attacks." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/190.

Abstract:
Buffer Overflows are a common type of network intrusion attack that continues to plague the networked community. Unfortunately, this type of attack is not well detected with current data mining algorithms. This research investigated the use of Random Forests, an ensemble technique that creates multiple decision trees and then votes for the best tree. The research investigated Random Forests' effectiveness in detecting buffer overflows compared to other data mining methods such as CART and Naïve Bayes. Random Forests was used for variable reduction, cost-sensitive classification was applied, and each method's detection performance was compared and reported along with the receiver operating characteristics. The experiment was able to show that Random Forests outperformed CART and Naïve Bayes in classification performance. Using a technique to obtain the most important Buffer Overflow variables, Random Forests was also able to improve upon its Buffer Overflow classification performance.
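The variable-reduction step described here, ranking features by forest importance and retaining only the strongest ones, might look like the following in outline. The data are synthetic and the cutoff of five features is arbitrary; this is not the dissertation's actual pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 30 features, only the first 5 informative (shuffle=False keeps them first).
X, y = make_classification(n_samples=500, n_features=30, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=7)
forest = RandomForestClassifier(n_estimators=100, random_state=7).fit(X, y)

# Rank features by impurity-based importance and keep the top k.
ranking = np.argsort(forest.feature_importances_)[::-1]
top_k = ranking[:5]
X_reduced = X[:, top_k]
```

A downstream classifier would then be retrained on `X_reduced`, which is the sense in which the forest serves for variable reduction.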
2

Rosales, Elisa Renee. "Predicting Patient Satisfaction With Ensemble Methods." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/595.

Abstract:
Health plans are constantly seeking ways to assess and improve the quality of patient experience in various ambulatory and institutional settings. Standardized surveys are a common tool used to gather data about patient experience, and a useful measurement taken from these surveys is known as the Net Promoter Score (NPS). This score represents the extent to which a patient would, or would not, recommend his or her physician on a scale from 0 to 10, where 0 corresponds to "Extremely unlikely" and 10 to "Extremely likely". A large national health plan utilized automated calls to distribute such a survey to its members and was interested in understanding what factors contributed to a patient's satisfaction. Additionally, they were interested in whether or not NPS could be predicted using responses from other questions on the survey, along with demographic data. When the distribution of various predictors was compared between the less satisfied and highly satisfied members, there was significant overlap, indicating that not even the Bayes Classifier could successfully differentiate between these members. Moreover, the highly imbalanced proportion of NPS responses resulted in initial poor prediction accuracy. Thus, due to the non-linear structure of the data, and high number of categorical predictors, we have leveraged flexible methods, such as decision trees, bagging, and random forests, for modeling and prediction. We further altered the prediction step in the random forest algorithm in order to account for the imbalanced structure of the data.
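One common way to adjust a random forest for the kind of class imbalance described here is to reweight classes inversely to their frequency. The thesis altered the prediction step of the algorithm itself; the sketch below shows the simpler, related off-the-shelf adjustment on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# 95% / 5% class split to mimic a highly imbalanced survey outcome.
X, y = make_classification(n_samples=1000, weights=[0.95, 0.05],
                           random_state=3)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=3)

plain = RandomForestClassifier(n_estimators=100, random_state=3).fit(Xtr, ytr)
weighted = RandomForestClassifier(n_estimators=100, random_state=3,
                                  class_weight="balanced").fit(Xtr, ytr)

# Recall on the minority class is what imbalance typically hurts.
recall_plain = recall_score(yte, plain.predict(Xte), pos_label=1)
recall_weighted = recall_score(yte, weighted.predict(Xte), pos_label=1)
```

Whether reweighting helps on a given data set is an empirical question; the point of the sketch is only where the adjustment plugs in.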
3

Varatharajah, Thujeepan, and Eriksson Victor. "A comparative study on artificial neural networks and random forests for stock market prediction." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186452.

Abstract:
This study investigates the predictive performance of two different machine learning (ML) models on the stock market and compares the results. The chosen models are based on artificial neural networks (ANN) and random forests (RF). The models are trained on two separate data sets, and predictions are made for the next-day closing price. The input vectors of the models consist of 6 different financial indicators based on the closing prices of the past 5, 10 and 20 days. The performance evaluation is done by analyzing and comparing values such as the root mean squared error (RMSE) and mean average percentage error (MAPE) for the test period. Specific behavior in subsets of the test period is also analyzed to evaluate the consistency of the models. The results showed that the ANN model performed better than the RF model, as it had lower errors relative to the actual prices throughout the test period and thus made more accurate predictions overall.
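The two evaluation metrics used in the thesis can be computed as follows; the closing prices here are invented, illustrative numbers only:

```python
import numpy as np

actual = np.array([100.0, 102.0, 101.0, 105.0])     # observed closing prices
predicted = np.array([101.0, 101.5, 102.0, 104.0])  # model outputs

# Root mean squared error: penalizes large errors quadratically.
rmse = np.sqrt(np.mean((actual - predicted) ** 2))   # ≈ 0.9014

# Mean average percentage error: scale-free, expressed in percent.
mape = np.mean(np.abs((actual - predicted) / actual)) * 100  # ≈ 0.858 %
```

RMSE is in the units of the price itself, while MAPE allows comparison across instruments trading at different price levels.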
4

Pisetta, Vincent. "New Insights into Decision Trees Ensembles." Thesis, Lyon 2, 2012. http://www.theses.fr/2012LYO20018/document.

Abstract:
Decision tree ensembles are among the most popular tools in machine learning. Nevertheless, their theoretical properties as well as their empirical performances are subject to strong investigation up to date. In this thesis, we propose to shed light on these methods. More precisely, after having described the current theoretical aspects of three main ensemble schemes (chapter 1), we give an analysis supporting the existence of common reasons for the success of these three principles (chapter 2). The latter takes into account the first two moments of the margin as an essential ingredient to obtain strong learning abilities. Starting from this observation, we propose a new ensemble algorithm called OSS (Oriented Sub-Sampling) whose steps are in perfect accordance with the point of view we introduce. The empirical performances of OSS are superior to those of currently popular algorithms such as Random Forests and AdaBoost. In a third chapter (chapter 3), we analyze Random Forests from a "kernel" point of view. This allows us to understand and observe the underlying regularization mechanism of these kinds of methods. Adopting the kernel point of view also enables us to improve the predictive performance of Random Forests using popular post-processing techniques such as SVM and multiple kernel learning. In conjunction with Random Forests, they show greatly improved performances and are able to realize a pruning of the ensemble by conserving only a small fraction of the initial base learners.
5

Funiok, Ondřej. "Využití statistických metod při oceňování nemovitostí." Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359241.

Abstract:
The thesis deals with the valuation of real estate in the Czech Republic using statistical methods. The work focuses on a complex task based on data from an advertising web portal. The aim of the thesis is to create a prototype statistical prediction model for residential property valuation in Prague and to further evaluate the dissemination of its possibilities. The structure of the work follows the CRISP-DM methodology. Regression trees and random forests are tested on the pre-processed data and used to predict real estate prices.
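A prototype of the kind described, a random forest regressor predicting listing prices, might be sketched as follows. The features (floor area, rooms, distance to centre) and the price formula are invented for illustration and are not the thesis's actual variables:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 400
area = rng.uniform(30, 120, n)       # m^2, hypothetical
rooms = rng.integers(1, 5, n)        # room count, hypothetical
dist = rng.uniform(0, 15, n)         # km to centre, hypothetical
# Made-up linear price process with noise, just to have a target.
price = 50_000 * rooms + 2_000 * area - 3_000 * dist \
        + rng.normal(0, 10_000, n)

X = np.column_stack([area, rooms, dist])
model = RandomForestRegressor(n_estimators=200, random_state=5).fit(X, price)
r2 = model.score(X, price)  # coefficient of determination on training data
```

In the CRISP-DM cycle the thesis follows, this fits in the modeling phase; evaluation would use held-out listings rather than the training R².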
6

Jánoš, Andrej. "Vývoj kredit skóringových modelov s využitím vybraných štatistických metód v R." Master's thesis, Vysoká škola ekonomická v Praze, 2016. http://www.nusl.cz/ntk/nusl-262242.

Abstract:
Credit scoring is an important and rapidly developing discipline. The aim of this thesis is to describe the basic methods used for building and interpreting credit scoring models, with an example application of these methods using the statistical software R. This thesis is organized into five chapters. In chapter one, the term credit scoring is explained, with the main examples of its application and the motivation for studying this topic. The next chapters introduce the three methods most often used in financial practice for building credit scoring models. Chapter two discusses the most developed one, logistic regression. The main emphasis is put on the logistic regression model, which is characterized from a mathematical point of view, and various ways to assess the quality of the model are presented. The other two methods presented in this thesis are decision trees and random forests, covered in chapters three and four. An important part of this thesis is a detailed application of the described models to a specific data set, Default, using the R program. The final fifth chapter is a practical demonstration of building credit scoring models, their diagnostics and the subsequent evaluation of their applicability in practice using R. The appendices include the R code used, as well as functions developed for testing the final model and code used throughout the thesis. The key aspect of the work is to provide enough theoretical knowledge and practical skills for the reader to fully understand the mentioned models and be able to apply them in practice.
7

Heckman, Derek J. "A Comparison of Classification Methods in Predicting the Presence of DNA Profiles in Sexual Assault Kits." Bowling Green State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1513703948257233.

8

Hellsing, Edvin, and Joel Klingberg. "It’s a Match: Predicting Potential Buyers of Commercial Real Estate Using Machine Learning." Thesis, Uppsala universitet, Institutionen för informatik och media, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-445229.

Abstract:
This thesis has explored the development and potential effects of an intelligent decision support system (IDSS) to predict potential buyers of commercial real estate. The overarching need for an IDSS of this type stems from information overload, which the IDSS aims to reduce. By shortening the time needed to process data, time can be allocated to making sense of the environment with colleagues. The system architecture explored consisted of clustering commercial real estate buyers into groups based on their characteristics, and training a prediction model on historical transaction data for the Swedish market from the cadastral and land registration authority. The prediction model was trained to predict which of the cluster groups is most likely to buy a given property. For the clustering, three different clustering algorithms were used and evaluated: one density based, one centroid based and one hierarchy based. The best performing clustering model was the centroid based one (K-means). For the predictions, three supervised machine learning algorithms were used and evaluated: Naive Bayes, Random Forests and Support Vector Machines. The model based on Random Forests performed best, with an accuracy of 99.9%.
9

Федоров, Д. П. "Comparison of classifiers based on the decision tree." Thesis, ХНУРЕ, 2021. https://openarchive.nure.ua/handle/document/16430.

Abstract:
The main purpose of this work is to compare classifiers. Random Forest and XGBoost are two popular machine learning algorithms. In this paper, we look at how they work, compare their features, and report on the accuracy of the results obtained from running them.
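A comparison of this kind can be sketched with scikit-learn. Its `GradientBoostingClassifier` is used below as a stand-in for XGBoost, which implements the same gradient-boosting idea; the data are synthetic and the hyperparameters are library defaults:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=11)

# 5-fold cross-validated accuracy for each ensemble method.
rf_acc = cross_val_score(RandomForestClassifier(random_state=11),
                         X, y, cv=5).mean()
gb_acc = cross_val_score(GradientBoostingClassifier(random_state=11),
                         X, y, cv=5).mean()
```

The key methodological point is identical resampling and scoring for both models; which one wins depends on the data set and tuning.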
10

Boshoff, Wiehan. "Use of Adaptive Mobile Applications to Improve Mindfulness." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1527174546252577.


Books on the topic "Random Decision Forests"

1

Smith, Chris, and Mark Koning. Decision Trees and Random Forests: A Visual Introduction For Beginners. Independently published, 2017.

2

Critelli, Christian. Machine Learning : How Decision Trees Work and How They Can Be Combined into a Random Forest: Random Forests and Decision Trees Comparison. Independently Published, 2021.

3

Youngberg, Casimira. Machine Learning for Beginners Book : Decision Trees and Random Forests Work: Classification Machine Learning Algorithms. Independently Published, 2021.

4

López, César Pérez. DATA MINING and MACHINE LEARNING. PREDICTIVE TECHNIQUES : ENSEMBLE METHODS, BOOSTING, BAGGING, RANDOM FOREST, DECISION TREES and REGRESSION TREES.: Examples with MATLAB. Lulu Press, Inc., 2021.


Book chapters on the topic "Random Decision Forests"

1

Buhmann, M. D., Prem Melville, Vikas Sindhwani, Novi Quadrianto, Wray L. Buntine, Luís Torgo, Xinhua Zhang, et al. "Random Decision Forests." In Encyclopedia of Machine Learning, 827. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_694.

2

Hasija, Yasha, and Rajkumar Chakraborty. "Decision Trees and Random Forests." In Hands-On Data Science for Biologists Using Python, 209–17. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003090113-11-11.

3

Shahzad, Raja Khurram, Mehwish Fatima, Niklas Lavesson, and Martin Boldt. "Consensus Decision Making in Random Forests." In Lecture Notes in Computer Science, 347–58. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-27926-8_31.

4

Lepetit, V., and P. Fua. "Keypoint Recognition Using Random Forests and Random Ferns." In Decision Forests for Computer Vision and Medical Image Analysis, 111–24. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4929-3_9.

5

Dudyrev, Egor, and Sergei O. Kuznetsov. "Decision Concept Lattice vs. Decision Trees and Random Forests." In Formal Concept Analysis, 252–60. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77867-5_16.

6

Cui, Limeng, Zhiquan Qi, Zhensong Chen, Fan Meng, and Yong Shi. "Pavement Distress Detection Using Random Decision Forests." In Data Science, 95–102. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24474-7_14.

7

Hassaballah, M., and Mourad Ahmed. "A Random Decision Forests Approach to Face Detection." In Computational Modeling of Objects Presented in Images. Fundamentals, Methods, and Applications, 375–86. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09994-1_37.

8

Papoušková, Monika, and Petr Hajek. "Modelling Loss Given Default in Peer-to-Peer Lending Using Random Forests." In Intelligent Decision Technologies 2019, 133–41. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8311-3_12.

9

Do, Thanh-Nghi. "Using Local Rules in Random Forests of Decision Trees." In Future Data and Security Engineering, 32–45. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-26135-5_3.

10

Bonfietti, Alessio, Michele Lombardi, and Michela Milano. "Embedding Decision Trees and Random Forests in Constraint Programming." In Integration of AI and OR Techniques in Constraint Programming, 74–90. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-18008-3_6.


Conference papers on the topic "Random Decision Forests"

1

Ma, Juanjuan, Quan Pan, Jinwen Hu, Chunhui Zhao, Yaning Guo, and Dong Wang. "Small object detection with random decision forests." In 2017 IEEE International Conference on Unmanned Systems (ICUS). IEEE, 2017. http://dx.doi.org/10.1109/icus.2017.8278409.

2

Tsipouras, Markos G., Dimosthenis C. Tsouros, Panagiotis N. Smyrlis, Nikolaos Giannakeas, and Alexandros T. Tzallas. "Random Forests with Stochastic Induction of Decision Trees." In 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2018. http://dx.doi.org/10.1109/ictai.2018.00087.

3

Bernard, Simon, Laurent Heutte, and Sebastien Adam. "On the selection of decision trees in Random Forests." In 2009 International Joint Conference on Neural Networks (IJCNN 2009 - Atlanta). IEEE, 2009. http://dx.doi.org/10.1109/ijcnn.2009.5178693.

4

Alencar, Francisco A. R., Carlos Massera Filho, Diego Gomes da Silva, and Denis F. Wolf. "Pedestrian Classification Using K-means and Random Decision Forests." In 2014 Joint Conference on Robotics: SBR-LARS Robotics Symposium and Robocontrol (SBR LARS Robocontrol). IEEE, 2014. http://dx.doi.org/10.1109/sbr.lars.robocontrol.2014.38.

5

Wang, Ruigang, and Jie Bao. "Advanced-step Stochastic Model Predictive Control using Random Forests." In 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018. http://dx.doi.org/10.1109/cdc.2018.8619533.

6

Xue, Jian, and Yunxin Zhao. "Random-forests-based phonetic decision trees for conversational speech recognition." In ICASSP 2008 - 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/icassp.2008.4518573.

7

Nanfack, Géraldin, Valentin Delchevalerie, and Benoit Frénay. "Boundary-Based Fairness Constraints in Decision Trees and Random Forests." In ESANN 2021 - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Louvain-la-Neuve (Belgium): Ciaco - i6doc.com, 2021. http://dx.doi.org/10.14428/esann/2021.es2021-69.

8

Audemard, Gilles, Steve Bellart, Louenas Bounia, Frederic Koriche, Jean-Marie Lagniez, and Pierre Marquis. "On Preferred Abductive Explanations for Decision Trees and Random Forests." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/91.

Abstract:
Abductive explanations take a central place in eXplainable Artificial Intelligence (XAI) by clarifying with few features the way data instances are classified. However, instances may have exponentially many minimum-size abductive explanations, and this source of complexity holds even for "intelligible" classifiers, such as decision trees. When the number of such abductive explanations is huge, computing only one of them is often not informative enough. Especially, better explanations than the one that is derived may exist. As a way to circumvent this issue, we propose to leverage a model of the explainee, making precise her / his preferences about explanations, and to compute only preferred explanations. In this paper, several models are pointed out and discussed. For each model, we present and evaluate an algorithm for computing preferred majoritary reasons, where majoritary reasons are specific abductive explanations suited to random forests. We show that in practice the preferred majoritary reasons for an instance can be far less numerous than its majoritary reasons.
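The notion of an abductive explanation used in the abstract above — a subset of feature values that by itself forces the classifier's prediction — can be sketched for a toy decision tree over binary features. This is an illustrative brute-force sketch only, not the IJCAI-22 algorithm; `TREE`, `is_sufficient`, and `greedy_abductive` are hypothetical names, and the greedy pass yields a subset-minimal (not necessarily minimum-size) explanation.

```python
from itertools import product

# Toy binary decision tree: ("feature", subtree_if_0, subtree_if_1) or a leaf class.
TREE = ("x0", ("x1", 0, 1), 1)

def predict(tree, inst):
    while isinstance(tree, tuple):
        feat, if0, if1 = tree
        tree = if1 if inst[feat] else if0
    return tree

def is_sufficient(tree, inst, kept, features):
    # True iff fixing only the `kept` features already forces the prediction:
    # every completion of the remaining (free) features yields the same class.
    target = predict(tree, inst)
    free = [f for f in features if f not in kept]
    return all(
        predict(tree, {**inst, **dict(zip(free, vals))}) == target
        for vals in product([0, 1], repeat=len(free))
    )

def greedy_abductive(tree, inst):
    # Greedily drop features whose values are not needed to keep the
    # prediction fixed; the result is one of possibly many abductive
    # explanations, which is why preferences over them matter.
    features = sorted(inst)
    kept = set(features)
    for f in features:
        if is_sufficient(tree, inst, kept - {f}, features):
            kept.discard(f)
    return kept
```

On the toy tree, `greedy_abductive(TREE, {"x0": 1, "x1": 0, "x2": 0})` returns `{"x0"}`: knowing `x0 = 1` alone already forces the class, so the other feature values are redundant in the explanation.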
9

Shah, Najeebullah, Sheikh Ziauddin, and Ahmad R. Shahid. "Brain tumor segmentation and classification using cascaded random decision forests." In 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, 2017. http://dx.doi.org/10.1109/ecticon.2017.8096339.

10

Pahl, Eric S., W. Nick Street, Hans J. Johnson, and Alan I. Reed. "A Predictive Model for Kidney Transplant Graft Survival using Machine Learning." In 4th International Conference on Computer Science and Information Technology (COMIT 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101609.

Abstract:
Kidney transplantation is the best treatment for end-stage renal failure patients. The predominant method used for kidney quality assessment is the Cox regression-based, kidney donor risk index. A machine learning method may provide improved prediction of transplant outcomes and help decision-making. A popular tree-based machine learning method, random forest, was trained and evaluated with the same data originally used to develop the risk index (70,242 observations from 1995-2005). The random forest successfully predicted an additional 2,148 transplants than the risk index with equal type II error rates of 10%. Predicted results were analyzed with follow-up survival outcomes up to 240 months after transplant using Kaplan-Meier analysis and confirmed that the random forest performed significantly better than the risk index (p<0.05). The random forest predicted significantly more successful and longer-surviving transplants than the risk index. Random forests and other machine learning models may improve transplant decisions.
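The Kaplan-Meier analysis mentioned in the abstract above is a product-limit estimate of the survival function from possibly censored follow-up times. A minimal stdlib sketch (the function name and the toy times/event flags below are illustrative, not the study's data):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  -- follow-up time for each subject (e.g. months after transplant)
    events -- 1 if the event (e.g. graft failure) occurred, 0 if censored
    Returns [(t, S(t))] at each distinct event time.
    """
    surv, curve = 1.0, []
    for t in sorted({ti for ti, ei in zip(times, events) if ei}):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei)  # events at t
        n = sum(1 for ti in times if ti >= t)                          # still at risk
        surv *= 1 - d / n                                              # product-limit step
        curve.append((t, surv))
    return curve
```

For example, `kaplan_meier([1, 2, 2, 3], [1, 0, 1, 1])` gives survival 0.75 after time 1 and 0.5 after time 2; the subject censored at time 2 leaves the risk set without counting as an event.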

Reports on the topic "Random Decision Forests"

1

Liu, Hongrui, and Rahul Ramachandra Shetty. Analytical Models for Traffic Congestion and Accident Analysis. Mineta Transportation Institute, November 2021. http://dx.doi.org/10.31979/mti.2021.2102.

Abstract:
In the US, over 38,000 people die in road crashes each year, and 2.35 million are injured or disabled, according to the statistics report from the Association for Safe International Road Travel (ASIRT) in 2020. In addition, traffic congestion keeping Americans stuck on the road wastes millions of hours and billions of dollars each year. Using statistical techniques and machine learning algorithms, this research developed accurate predictive models for traffic congestion and road accidents to increase understanding of the complex causes of these challenging issues. The research used US Accidents data consisting of 49 variables describing 4.2 million accident records from February 2016 to December 2020, as well as logistic regression, tree-based techniques such as Decision Tree Classifier and Random Forest Classifier (RF), and Extreme Gradient boosting (XG-boost) to process and train the models. These models will assist people in making smart real-time transportation decisions to improve mobility and reduce accidents.
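The random-forest idea behind the tree-based models in the abstract above — bootstrap resampling plus a random feature subset per tree, combined by majority vote — can be sketched with a stdlib-only forest of one-split decision stumps. This is a hedged teaching sketch, not the report's actual RF/XGBoost pipeline; `stump_fit`, `forest_fit`, `forest_predict`, and the data shapes are illustrative assumptions.

```python
import random
from collections import Counter

def stump_fit(X, y, features):
    # Exhaustively pick the (feature, threshold) split minimising misclassifications.
    best = None
    for f in features:
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            l_lab = Counter(left).most_common(1)[0][0]
            r_lab = Counter(right).most_common(1)[0][0]
            err = sum(v != l_lab for v in left) + sum(v != r_lab for v in right)
            if best is None or err < best[0]:
                best = (err, f, t, l_lab, r_lab)
    if best is None:  # degenerate bootstrap sample: fall back to the majority class
        lab = Counter(y).most_common(1)[0][0]
        return features[0], X[0][features[0]], lab, lab
    return best[1:]

def forest_fit(X, y, n_trees=25, seed=0):
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]            # bootstrap sample
        feats = rng.sample(range(d), max(1, round(d ** 0.5))) # random feature subset
        forest.append(stump_fit([X[i] for i in idx], [y[i] for i in idx], feats))
    return forest

def forest_predict(forest, x):
    # Majority vote over the stumps' individual predictions.
    votes = Counter(lab_l if x[f] <= t else lab_r for f, t, lab_l, lab_r in forest)
    return votes.most_common(1)[0][0]
```

Individual stumps trained on different bootstrap samples and feature subsets make uncorrelated errors, so the vote is more accurate than any single stump — the same averaging effect that full random forests exploit with deeper trees.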