Academic literature on the topic 'ENSEMBLE LEARNING TECHNIQUE'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'ENSEMBLE LEARNING TECHNIQUE.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "ENSEMBLE LEARNING TECHNIQUE"

1

Acosta-Mendoza, Niusvel, Alicia Morales-Reyes, Hugo Jair Escalante, and Andrés Gago-Alonso. "Learning to Assemble Classifiers via Genetic Programming." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 07 (October 14, 2014): 1460005. http://dx.doi.org/10.1142/s0218001414600052.

Full text
Abstract:
This paper introduces a novel approach for building heterogeneous ensembles based on genetic programming (GP). Ensemble learning is a paradigm that aims at combining individual classifiers' outputs to improve overall performance. Commonly, classifier outputs are combined by a weighted sum or a voting strategy. However, linear fusion functions may not effectively exploit individual models' redundancy and diversity. In this research, a GP-based approach to learn fusion functions that combine classifier outputs is proposed. The study targets heterogeneous ensembles, whose individual classifiers are based on different principles (e.g. decision trees and similarity-based techniques). A detailed empirical assessment is carried out to validate the effectiveness of the proposed approach. Results show that the proposed method is successful at building very effective classification models, outperforming alternative ensemble methodologies. The proposed ensemble technique is also applied to fuse homogeneous models' outputs, with results again showing its effectiveness. Finally, an in-depth analysis of the proposed ensemble-building strategy is presented from different perspectives, with strong experimental support.
APA, Harvard, Vancouver, ISO, and other styles
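
The abstract above contrasts GP-evolved fusion functions with the common linear baseline of combining classifier outputs by a weighted sum or vote. The following is a minimal sketch of that baseline only, assuming scikit-learn and synthetic data; the GP-based fusion itself is not reproduced here.

```python
# Hedged sketch: weighted soft-voting fusion of heterogeneous classifiers,
# i.e. the linear baseline the paper's GP-evolved fusion functions improve on.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Heterogeneous base models built on different principles, as in the abstract
# (a tree-based model, a similarity-based model, and a linear model).
base = [
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=7)),
    ("logreg", LogisticRegression(max_iter=1000)),
]

# Weighted sum of predicted class probabilities (soft voting); weights are assumed.
fusion = VotingClassifier(estimators=base, voting="soft", weights=[2, 1, 1])
print("weighted-vote accuracy:", cross_val_score(fusion, X, y, cv=5).mean())
```
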
2

Reddy, S. Pavan Kumar, and U. Sesadri. "A Bootstrap Aggregating Technique on Link-Based Cluster Ensemble Approach for Categorical Data Clustering." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 10, no. 8 (August 30, 2013): 1913–21. http://dx.doi.org/10.24297/ijct.v10i8.1468.

Full text
Abstract:
Although attempts have been made to solve the problem of clustering categorical data via cluster ensembles, with results competitive with conventional algorithms, these techniques unfortunately generate a final data partition based on incomplete information. The underlying ensemble-information matrix presents only cluster-data point relations, with many entries left unknown. The paper presents an analysis suggesting that this problem degrades the quality of the clustering result, and it introduces BSA (Bootstrap Aggregating), a machine learning ensemble meta-algorithm designed to improve stability and accuracy, along with a new link-based approach that improves the conventional matrix by discovering unknown entries through similarity between clusters in an ensemble. In particular, an efficient BSA and link-based algorithm is proposed for the underlying similarity assessment. Afterward, to obtain the final clustering result, a graph partitioning technique is applied to a weighted bipartite graph formulated from the refined matrix. Experimental results on multiple real data sets suggest that the proposed link-based method almost always outperforms both conventional clustering algorithms for categorical data and well-known cluster ensemble techniques.
APA, Harvard, Vancouver, ISO, and other styles
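
To illustrate the bootstrap-aggregated cluster-ensemble idea described above, here is a minimal sketch that combines many clusterings through a co-association matrix. KMeans on one-hot encodings stands in for a categorical clusterer, the data are synthetic, and the paper's link-based refinement of unknown entries is not reproduced.

```python
# Hedged sketch: bagging-style cluster ensemble fused via a co-association matrix.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
X_cat = rng.integers(0, 3, size=(200, 4))           # toy categorical data
X = OneHotEncoder().fit_transform(X_cat).toarray()

n, n_runs, k = X.shape[0], 20, 3
coassoc = np.zeros((n, n))
for run in range(n_runs):
    idx = rng.choice(n, size=n, replace=True)       # bootstrap sample (bagging)
    km = KMeans(n_clusters=k, n_init=5, random_state=run).fit(X[idx])
    labels = km.predict(X)                          # assign every object a cluster
    coassoc += labels[:, None] == labels[None, :]   # accumulate co-membership votes

coassoc /= n_runs                                   # fraction of co-assignments
dist = 1.0 - coassoc                                # consensus distance matrix
Z = linkage(squareform(dist, checks=False), method="average")
final = fcluster(Z, t=k, criterion="maxclust")
print(np.bincount(final))                           # sizes of the consensus clusters
```
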
3

Goyal, Jyotsana. "IMPROVING CLASSIFICATION PERFORMANCE USING ENSEMBLE LEARNING APPROACH." BSSS Journal of Computer 14, no. 1 (June 30, 2023): 63–75. http://dx.doi.org/10.51767/jc1409.

Full text
Abstract:
Data mining techniques are used to evaluate data and to represent it in a form from which applications can benefit. Different kinds of computational algorithms and models are therefore incorporated for analyzing the data; these algorithms help to understand data patterns and their utility for applications. Data mining algorithms support supervised as well as unsupervised techniques of data analysis. This work investigates supervised learning, specifically performance improvements for classification techniques. The proposed classification model includes multiple classifiers, namely a Bayesian classifier, k-nearest neighbor, and the C4.5 decision tree algorithm. Because of the nature of their outcomes and their modeling of the data, these algorithms function differently from each other. Thus, a weight-based classification technique is introduced in this work. The weight is a combination of the outcomes provided by the three implemented classifiers in terms of their predicted class labels; using the weighted outcomes, the final class label for the input data instance is decided. The proposed working model is implemented with the help of JAVA and WEKA classes. The results obtained by experimenting with the proposed approach on the vehicle data set demonstrate highly accurate classification. Thus, the proposed model is an effective classification technique compared to a single-model implementation of the classification task.
APA, Harvard, Vancouver, ISO, and other styles
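
The weighted label-combination mechanism described in this abstract can be sketched as follows. This is not the authors' Java/WEKA implementation: scikit-learn models are used as stand-ins (DecisionTreeClassifier in place of C4.5), the data are synthetic, and training accuracy is an assumed choice of weight.

```python
# Hedged sketch: weight-based combination of three classifiers' predicted labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_classes=3, n_informative=6, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = [GaussianNB(), KNeighborsClassifier(), DecisionTreeClassifier(random_state=1)]
weights, preds = [], []
for m in models:
    m.fit(X_tr, y_tr)
    weights.append(m.score(X_tr, y_tr))      # assumed weight: training accuracy
    preds.append(m.predict(X_te))

n_classes = len(np.unique(y))
votes = np.zeros((X_te.shape[0], n_classes))
for w, p in zip(weights, preds):
    votes[np.arange(len(p)), p] += w         # accumulate weighted label votes
final = votes.argmax(axis=1)                 # final class = highest weighted vote
print("ensemble accuracy:", (final == y_te).mean())
```
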
4

Cawood, Pieter, and Terence Van Zyl. "Evaluating State-of-the-Art, Forecasting Ensembles and Meta-Learning Strategies for Model Fusion." Forecasting 4, no. 3 (August 18, 2022): 732–51. http://dx.doi.org/10.3390/forecast4030040.

Full text
Abstract:
The techniques of hybridisation and ensemble learning are popular model fusion techniques for improving the predictive power of forecasting methods. With limited research that investigates combining these two promising approaches, this paper focuses on the utility of the Exponential Smoothing-Recurrent Neural Network (ES-RNN) in the pool of base learners for different ensembles. We compare against some state-of-the-art ensembling techniques and arithmetic model averaging as a benchmark. We experiment with the M4 forecasting dataset of 100,000 time-series, and the results show that the Feature-Based FORecast Model Averaging (FFORMA), on average, is the best technique for late data fusion with the ES-RNN. However, considering the M4's Daily subset of data, stacking was the only ensemble able to deal with the case where all base learner performances were similar. Our experimental results indicate that we attain state-of-the-art forecasting results compared to Neural Basis Expansion Analysis (N-BEATS) as a benchmark. We conclude that model averaging is a more robust ensembling technique than model selection and stacking strategies. Further, the results show that gradient boosting is superior for implementing ensemble learning strategies.
APA, Harvard, Vancouver, ISO, and other styles
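
The late-fusion idea discussed above can be illustrated with a very small forecast-combination sketch: arithmetic model averaging versus inverse-validation-error weighting over a pool of simple base forecasters. The toy series and the three base learners are assumptions, and neither ES-RNN nor FFORMA is reproduced.

```python
# Hedged sketch: late fusion of point forecasts by simple and weighted averaging.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)
train, valid, test = series[:96], series[96:108], series[108:]

def naive(history, h):                 # repeat the last observed value
    return np.full(h, history[-1])

def seasonal_naive(history, h, m=12):  # repeat the last seasonal cycle
    return np.array([history[-m + (i % m)] for i in range(h)])

def drift(history, h):                 # extrapolate the overall trend
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope * np.arange(1, h + 1)

learners = [naive, seasonal_naive, drift]
val_fc = np.array([f(train, len(valid)) for f in learners])
val_err = np.abs(val_fc - valid).mean(axis=1)      # validation MAE per base learner
w = (1 / val_err) / (1 / val_err).sum()            # inverse-error fusion weights

hist = series[:108]
test_fc = np.array([f(hist, len(test)) for f in learners])
print("simple average MAE  :", np.abs(test_fc.mean(axis=0) - test).mean())
print("weighted average MAE:", np.abs(w @ test_fc - test).mean())
```
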
5

Lenin, Thingbaijam, and N. Chandrasekaran. "Learning from Imbalanced Educational Data Using Ensemble Machine Learning Algorithms." Webology 18, Special Issue 01 (April 29, 2021): 183–95. http://dx.doi.org/10.14704/web/v18si01/web18053.

Full text
Abstract:
A student's academic performance is one of the most important parameters for evaluating the standard of any institute. It has become of paramount importance for any institute to identify students at risk of underperforming, failing, or even dropping out of a course. Machine learning techniques may be used to develop a model for predicting a student's performance as early as at the time of admission. The task, however, is challenging, as the educational data available for modelling are usually imbalanced. We explore ensemble machine learning techniques, namely a bagging algorithm, random forest (rf), and boosting algorithms, adaptive boosting (adaboost), stochastic gradient boosting (gbm), and extreme gradient boosting (xgbTree), in an attempt to develop a model for predicting the performance of students of a private university in Meghalaya using three categories of data: demographic, prior academic record, and personality. The collected data are highly imbalanced and also contain missing values. We employ the k-nearest neighbor (knn) data imputation technique to tackle the missing values. The models are developed on the imputed data with a 10-fold cross-validation technique and are evaluated using precision, specificity, recall, and kappa metrics. As the data are imbalanced, we avoid using accuracy as the metric for evaluating the model and instead use balanced accuracy and F-score. We compare the ensemble techniques with the single classifier C4.5. The best results are provided by random forest and adaboost, with an F-score of 66.67%, balanced accuracy of 75%, and accuracy of 96.94%.
APA, Harvard, Vancouver, ISO, and other styles
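
A minimal sketch of the workflow described above: kNN imputation of missing values, ensemble classifiers, and imbalance-aware metrics. It uses scikit-learn on synthetic data rather than the authors' R pipeline or their educational dataset.

```python
# Hedged sketch: kNN imputation + ensemble classifiers on an imbalanced dataset,
# scored with balanced accuracy and F1 instead of plain accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan        # inject ~5% missing values

for clf in (RandomForestClassifier(random_state=0), AdaBoostClassifier(random_state=0)):
    pipe = make_pipeline(KNNImputer(n_neighbors=5), clf)
    scores = cross_validate(pipe, X, y, cv=10, scoring=["balanced_accuracy", "f1"])
    print(type(clf).__name__,
          "balanced acc = %.3f" % scores["test_balanced_accuracy"].mean(),
          "F1 = %.3f" % scores["test_f1"].mean())
```
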
6

Arora, Madhur, Sanjay Agrawal, and Ravindra Patel. "Machine Learning Technique for Predicting Location." International Journal of Electrical and Electronics Research 11, no. 2 (June 30, 2023): 639–45. http://dx.doi.org/10.37391/ijeer.110254.

Full text
Abstract:
In the current era of internet and mobile phone usage, the prediction of a person's location at a specific moment has become a subject of great interest among researchers. As a result, there has been a growing focus on developing more effective techniques to accurately identify the precise location of a user at a given instant in time. The quality of GPS data plays a crucial role in obtaining high-quality results. Numerous algorithms are available that leverage user movement patterns and historical data for this purpose. This research presents a location prediction model that incorporates data from multiple users. To achieve the most accurate predictions, regression techniques are utilized for user trajectory prediction, and ensemble algorithmic procedures, such as the random forest approach, the AdaBoost method, and the XGBoost method, are employed. The primary goal is to improve prediction accuracy. The proposed ensemble method yields roughly a 21.2% decrease in errors, a much greater improvement than comparable earlier systems. Compared to previous comparable systems, the proposed system demonstrates an approximately 15% increase in accuracy when utilizing the ensemble methodology.
APA, Harvard, Vancouver, ISO, and other styles
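
The ensemble-regression comparison mentioned above can be sketched as follows, assuming scikit-learn and synthetic features in place of real GPS trajectories; GradientBoostingRegressor stands in for XGBoost so the example stays within one library.

```python
# Hedged sketch: comparing ensemble regressors for a location-style prediction task.
from sklearn.datasets import make_regression
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.model_selection import cross_val_score

# Toy stand-in for trajectory features (e.g. recent positions, time of day).
X, y = make_regression(n_samples=800, n_features=10, noise=5.0, random_state=0)

models = {
    "RandomForest": RandomForestRegressor(random_state=0),
    "AdaBoost": AdaBoostRegressor(random_state=0),
    "GradientBoosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.2f}")
```
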
7

Rahimi, Nouf, Fathy Eassa, and Lamiaa Elrefaei. "An Ensemble Machine Learning Technique for Functional Requirement Classification." Symmetry 12, no. 10 (September 25, 2020): 1601. http://dx.doi.org/10.3390/sym12101601.

Full text
Abstract:
In Requirement Engineering, software requirements are classified into two main categories: Functional Requirement (FR) and Non-Functional Requirement (NFR). FR describes user and system goals. NFR includes all constraints on services and functions. Deeper classification of those two categories facilitates the software development process. There are many techniques for classifying FR; some of them are Machine Learning (ML) techniques, and others are traditional. To date, the classification accuracy has not been satisfactory. In this paper, we introduce a new ensemble ML technique for classifying FR statements to improve their accuracy and availability. This technique combines different ML models and uses enhanced accuracy as a weight in the weighted ensemble voting approach. The five combined models are Naïve Bayes, Support Vector Machine (SVM), Decision Tree, Logistic Regression, and Support Vector Classification (SVC). The technique was implemented, trained, and tested using a collected dataset. The accuracy of classifying FR was 99.45%, and the required time was 0.7 s.
APA, Harvard, Vancouver, ISO, and other styles
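
The accuracy-as-weight voting idea described in this abstract is sketched below for a text-classification setting. The toy requirement sentences, the held-out-accuracy weighting, and the exact model set are assumptions, not the authors' dataset or implementation.

```python
# Hedged sketch: accuracy-weighted voting over several text classifiers.
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

texts = ["The system shall export reports as PDF",
         "Users can reset their password via email",
         "Response time must stay under two seconds",
         "The service shall log every failed login"] * 25
labels = [0, 0, 1, 0] * 25                       # toy FR / NFR-style labels

base = [("nb", MultinomialNB()),
        ("svm", SVC(probability=True)),
        ("tree", DecisionTreeClassifier()),
        ("logreg", LogisticRegression(max_iter=1000))]

# Estimate each model's accuracy on held-out data and reuse it as its vote weight.
X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.3, random_state=0)
vec = TfidfVectorizer().fit(X_tr)
weights = [clf.fit(vec.transform(X_tr), y_tr).score(vec.transform(X_te), y_te)
           for _, clf in base]

ensemble = make_pipeline(TfidfVectorizer(),
                         VotingClassifier(base, voting="soft", weights=weights))
print("CV accuracy:", cross_val_score(ensemble, texts, labels, cv=5).mean())
```
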
8

Hartono, Opim Salim Sitompul, Erna Budhiarti Nababan, Tulus, Dahlan Abdullah, and Ansari Saleh Ahmar. "A New Diversity Technique for Imbalance Learning Ensembles." International Journal of Engineering & Technology 7, no. 2.14 (April 8, 2018): 478. http://dx.doi.org/10.14419/ijet.v7i2.11251.

Full text
Abstract:
Data mining and machine learning techniques designed to solve classification problems require a balanced class distribution. In reality, however, datasets sometimes contain a class represented by a large number of instances alongside classes with far fewer instances. This problem is known as the class imbalance problem. Classifier ensembles are a method often used to overcome class imbalance problems, and data diversity is one of the cornerstones of ensembles. An ideal ensemble system should have accurate individual classifiers, and when errors occur they should occur on different objects or instances. This research presents the results of an overview and experimental study using the Hybrid Approach Redefinition (HAR) method for handling class imbalance, which is at the same time expected to yield better data diversity. The study is conducted using 6 datasets with different imbalance ratios and is compared with SMOTEBoost, one of the re-weighting methods often used for handling class imbalance. This study shows that data diversity is related to performance in imbalance learning ensembles and that the proposed methods can obtain better data diversity.
APA, Harvard, Vancouver, ISO, and other styles
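
Since the abstract ties ensemble performance to data diversity, here is a minimal sketch of one common diversity proxy, the average pairwise disagreement between ensemble members, computed for a bagged ensemble on imbalanced synthetic data. The HAR method itself is not reproduced.

```python
# Hedged sketch: measuring ensemble diversity via average pairwise disagreement.
import itertools
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=15,
                        random_state=0).fit(X_tr, y_tr)
preds = np.array([est.predict(X_te) for est in bag.estimators_])

pairs = list(itertools.combinations(range(len(preds)), 2))
disagreement = np.mean([(preds[i] != preds[j]).mean() for i, j in pairs])
print("ensemble accuracy:", bag.score(X_te, y_te))
print("average pairwise disagreement:", round(disagreement, 3))
```
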
9

Teoh, Chin-Wei, Sin-Ban Ho, Khairi Shazwan Dollmat, and Chuie-Hong Tan. "Ensemble-Learning Techniques for Predicting Student Performance on Video-Based Learning." International Journal of Information and Education Technology 12, no. 8 (2022): 741–45. http://dx.doi.org/10.18178/ijiet.2022.12.8.1679.

Full text
Abstract:
The transformation of education norms from the face-to-face teaching era to the Massive Open Online Courses (MOOCs) era has promoted the rise of the big data era in educational data. This situation has created an opportunity for educators to utilize the available data from MOOCs to facilitate student learning and performance. Therefore, this research study introduces three types of ensemble learning methods, namely stacking, boosting, and bagging, to predict student performance. These techniques combine the advantages of a feature selection method and the Synthetic Minority Oversampling Technique (SMOTE) algorithm, used to balance the class distribution, to build the ensemble learning model. As a result, the proposed AdaBoost-type ensemble classifier has shown the highest prediction accuracy of more than 90% and an Area Under the Curve (AUC) of approximately 0.90. Results by the AdaBoost classifier outperformed the other ensemble classifiers, stacking and bagging, as well as the base classifiers.
APA, Harvard, Vancouver, ISO, and other styles
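
A minimal sketch of the feature selection + SMOTE + boosting pipeline described above, assuming the third-party imbalanced-learn package is installed and using synthetic data instead of MOOC records.

```python
# Hedged sketch: feature selection + SMOTE oversampling + AdaBoost, scored by AUC.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)
pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),    # feature selection step
    ("smote", SMOTE(random_state=0)),            # oversample the minority class
    ("ada", AdaBoostClassifier(random_state=0)), # boosting-type ensemble
])
auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
print(f"AUC = {auc:.2f}, accuracy = {acc:.2f}")
```
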
10

Hussein, Salam Allawi, Alyaa Abduljawad Mahmood, and Emaan Oudah Oraby. "Network Intrusion Detection System Using Ensemble Learning Approaches." Webology 18, SI05 (October 30, 2021): 962–74. http://dx.doi.org/10.14704/web/v18si05/web18274.

Full text
Abstract:
To mitigate modern network intruders in rapidly growing, fast-changing network traffic data, a single classifier is not sufficient. In this study, the Chi-Square feature selection technique is used to select the most important features of network traffic data, and then AdaBoost, Random Forest (RF), and XGBoost ensemble classifiers are used to classify the data in both binary-class and multi-class settings. The aim of this study is to improve detection accuracy for every individual attack type and for all types of attacks together, which helps to identify attacks and particular categories of attacks. The proposed method is evaluated using k-fold cross-validation, and the experimental results of all three classifiers, with and without feature selection, are compared. We used two different datasets in our experiments to evaluate model performance: NSL-KDD and UNSW-NB15.
APA, Harvard, Vancouver, ISO, and other styles
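
The pipeline described above (chi-square feature selection, then ensemble classifiers under k-fold cross-validation) can be sketched as follows. GradientBoostingClassifier stands in for XGBoost, and synthetic data replaces NSL-KDD/UNSW-NB15.

```python
# Hedged sketch: Chi-Square feature selection + three ensemble classifiers, k-fold CV.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=1500, n_features=40, n_informative=12,
                           random_state=0)
for clf in (AdaBoostClassifier(random_state=0),
            RandomForestClassifier(random_state=0),
            GradientBoostingClassifier(random_state=0)):
    # chi2 requires non-negative inputs, hence the MinMaxScaler.
    pipe = make_pipeline(MinMaxScaler(), SelectKBest(chi2, k=15), clf)
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(type(clf).__name__, "accuracy = %.3f" % score)
```
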

Dissertations / Theses on the topic "ENSEMBLE LEARNING TECHNIQUE"

1

King, Michael Allen. "Ensemble Learning Techniques for Structured and Unstructured Data." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/51667.

Full text
Abstract:
This research provides an integrated approach of applying innovative ensemble learning techniques that has the potential to increase the overall accuracy of classification models. Actual structured and unstructured data sets from industry are utilized during the research process, analysis and subsequent model evaluations. The first research section addresses the consumer demand forecasting and daily capacity management requirements of a nationally recognized alpine ski resort in the state of Utah, in the United States of America. A basic econometric model is developed, and three classic predictive models are evaluated for their effectiveness. These predictive models are subsequently used as input for four ensemble modeling techniques, and the ensemble learning techniques are shown to be effective. The second research section discusses the opportunities and challenges faced by a leading firm providing sponsored search marketing services. The goal for sponsored search marketing campaigns is to create advertising campaigns that better attract and motivate a target market to purchase. This research develops a method for classifying profitable campaigns and maximizing overall campaign portfolio profits. Four traditional classifiers are utilized, along with four ensemble learning techniques, to build classifier models to identify profitable pay-per-click campaigns. A MetaCost ensemble configuration, having the ability to integrate unequal classification costs, produced the highest campaign portfolio profit. The third research section addresses the management challenges of online consumer reviews encountered by service industries and shows how these textual reviews can be used for service improvements. A service improvement framework is introduced that integrates traditional text mining techniques and second-order feature derivation with ensemble learning techniques. The concept of GLOW and SMOKE words is introduced and is shown to be an objective text analytic source of service defects or service accolades.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
2

Nguyen, Thanh Tien. "Ensemble Learning Techniques and Applications in Pattern Classification." Thesis, Griffith University, 2017. http://hdl.handle.net/10072/366342.

Full text
Abstract:
It is widely known that the best classifier for a given problem is often problem dependent and that no single classification algorithm is best for all classification tasks. A natural question that arises is: can we combine multiple classification algorithms to achieve higher classification accuracy than a single one? That is the idea behind a class of methods called ensemble methods. An ensemble method is defined as the combination of several classifiers with the aim of achieving a lower classification error rate than a single classifier. Ensemble methods have been applied to various applications ranging from computer-aided medical diagnosis, computer vision, and software engineering to information retrieval. In this study, we focus on heterogeneous ensemble methods in which a fixed set of diverse learning algorithms is trained on the same training set to generate different classifiers, and the class prediction is then made based on the output of these classifiers (called Level-1 data or meta-data). Research on heterogeneous ensemble methods is mainly focused on two aspects: (i) proposing efficient methods for combining classifiers on meta-data to achieve high accuracy, and (ii) optimizing the ensemble by performing feature and classifier selection. Although various approaches related to heterogeneous ensemble methods have been proposed, some research gaps still exist. First, in ensemble learning, the meta-data of an observation reflects the agreement and disagreement between the different base classifiers.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Science, Environment, Engineering and Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
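
The Level-1 meta-data idea described in this abstract corresponds to stacking: base learners produce out-of-fold predictions that a meta-classifier then combines. A minimal sketch with scikit-learn's StackingClassifier and synthetic data follows; it is a generic illustration, not the thesis's own methods.

```python
# Hedged sketch: heterogeneous stacking, where out-of-fold base predictions
# (the Level-1 meta-data) are fed to a meta-classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=800, random_state=0)
stack = StackingClassifier(
    estimators=[("nb", GaussianNB()),
                ("knn", KNeighborsClassifier()),
                ("tree", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,                        # out-of-fold predictions form the meta-data
)
print("stacking accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```
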
3

Valenzuela, Russell. "Predicting National Basketball Association Game Outcomes Using Ensemble Learning Techniques." Thesis, California State University, Long Beach, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10980443.

Full text
Abstract:

There have been a number of studies that try to predict sporting event outcomes. Most previous research has involved results in football and college basketball, and recent years have seen similar approaches carried out in professional basketball. This thesis attempts to build upon existing statistical techniques and apply them to the National Basketball Association using a synthesis of algorithms as motivation. A number of ensemble learning methods will be utilized and compared in hopes of improving the accuracy of single models. Individual models used in this thesis will be derived from Logistic Regression, Naïve Bayes, Random Forests, Support Vector Machines, and Artificial Neural Networks, while aggregation techniques include Bagging, Boosting, and Stacking. Data from previous seasons and games, from both players and teams, will be used to train models in R.

APA, Harvard, Vancouver, ISO, and other styles
4

Johansson, Alfred. "Ensemble approach to code smell identification : Evaluating ensemble machine learning techniques to identify code smells within a software system." Thesis, Tekniska Högskolan, Jönköping University, JTH, Datateknik och informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-49319.

Full text
Abstract:
The need for automated methods for identifying refactoring items is prevalent in many software projects today. Code smells within a software system are a symptom of refactoring needs. Recent studies have used single-model machine learning to combat this issue. This study aims to test the possibility of improving machine learning code smell detection using ensemble methods, thereby identifying the strongest ensemble model in the context of code smells and the relative sensitivity of the strongest performing ensemble identified. The ensemble models' performance was studied by performing experiments using WekaNose to create datasets of code smells and Weka to train and test the models on the datasets. The datasets created were based on Qualitas Corpus curated Java projects. Each tested ensemble method was then compared to all the other ensembles, using F-measure, accuracy, and AUC ROC scores. The tested ensemble methods were stacking, voting, bagging, and boosting. The models used to implement the ensemble methods were those that previous studies had identified as the strongest performers for code smell identification: JRip, J48, Naive Bayes, and SMO. The findings showed that, compared to previous studies, bagging J48 improved results by 0.5%, and that the nominal implementation of bagging J48 in Weka follows best practices yet impacted the model negatively. However, due to the complexity of stacking and voting ensembles, further work is needed regarding stacking and voting ensemble models in the context of code smell identification.
APA, Harvard, Vancouver, ISO, and other styles
5

Recamonde-Mendoza, Mariana. "Exploring ensemble learning techniques to optimize the reverse engineering of gene regulatory networks." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/95693.

Full text
Abstract:
In this thesis we are concerned about the reverse engineering of gene regulatory networks from post-genomic data, a major challenge in Bioinformatics research. Gene regulatory networks are intricate biological circuits responsible for governing the expression levels (activity) of genes, thereby playing an important role in the control of many cellular processes, including cell differentiation, cell cycle and metabolism. Unveiling the structure of these networks is crucial to gain a systems-level understanding of organisms development and behavior, and eventually shed light on the mechanisms of diseases caused by the deregulation of these cellular processes. Due to the increasing availability of high-throughput experimental data and the large dimension and complexity of biological systems, computational methods have been essential tools in enabling this investigation. Nonetheless, their performance is much deteriorated by important computational and biological challenges posed by the scenario. In particular, the noisy and sparse features of biological data turn the network inference into a challenging combinatorial optimization problem, to which current methods fail in respect to the accuracy and robustness of predictions. This thesis aims at investigating the use of ensemble learning techniques as means to overcome current limitations and enhance the inference process by exploiting the diversity among multiple inferred models. To this end, we develop computational methods both to generate diverse network predictions and to combine multiple predictions into an ensemble solution, and apply this approach to a number of scenarios with different sources of diversity in order to understand its potential in this specific context. We show that the proposed solutions are competitive with traditional algorithms in the field and improve our capacity to accurately reconstruct gene regulatory networks. Results obtained for the inference of transcriptional and post-transcriptional regulatory networks, two adjacent and complementary layers of the overall gene regulatory network, evidence the efficiency and robustness of our approach, encouraging the consolidation of ensemble systems as a promising methodology to decipher the structure of gene regulatory networks.
APA, Harvard, Vancouver, ISO, and other styles
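
One simple way to combine diverse network predictions into an ensemble solution, in the spirit of the abstract above, is rank-average aggregation of edge confidence scores. The sketch below uses random score matrices as placeholders for real inference methods and is not the thesis's code.

```python
# Hedged sketch: consensus network inference by averaging per-method edge ranks.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
n_genes, n_methods = 20, 3
# Each method scores every regulator -> target edge (higher = more confident).
edge_scores = [rng.random((n_genes, n_genes)) for _ in range(n_methods)]

# Convert scores to ranks per method (rank 1 = most confident), then average.
ranks = [rankdata(-s.ravel()).reshape(s.shape) for s in edge_scores]
consensus = np.mean(ranks, axis=0)

# Report the ten candidate edges with the best (lowest) average rank.
for idx in np.argsort(consensus, axis=None)[:10]:
    i, j = divmod(idx, n_genes)
    print(f"edge {i} -> {j}, average rank {consensus[i, j]:.1f}")
```
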
6

Luong, Vu A. "Advanced techniques for classification of non-stationary streaming data and applications." Thesis, Griffith University, 2022. http://hdl.handle.net/10072/420554.

Full text
Abstract:
Today we are going through Industry 4.0, where not only are people connected via social networks, but an enormous number of electronic devices are also connected via the Internet of Things (IoT). With the rapid development of modern technologies like blockchains, 5G, computing chips, and software infrastructures, people and application programs are interacting with each other at a very fast pace. As a result, massive amounts of data are generated in real time, posing many interesting but challenging problems to the machine learning community. According to the International Data Corporation [73], the total amount of data created, captured, copied, and consumed globally is forecast to increase rapidly, reaching more than 180 zettabytes by 2025, approximately 90 times the data volume of 2010. However, only a minor subset of this newly created data is saved, as just about two percent of the data created and consumed in 2020 was retained into 2021. One of the reasons for this is that data from numerous real-world applications are often collected in the form of unbounded data streams, including sensor networks, video services, event logs, and traffic monitoring systems, which would eventually exceed physical storage. Therefore, the two major concerns of unlimited data volume and fast velocity remain unsolved when dealing with data streams. Traditional offline machine learning methods have successfully solved many intelligence tasks in recent years, most notably in computer vision and natural language processing. However, they are not efficient when dealing with data streams generated dynamically in real time from the above-mentioned applications. In particular, the offline learning paradigm suffers from many limitations in this context: (1) it is not practical to store the entire data stream in memory; (2) traditional algorithms need to be retrained when new training data instances are available; and (3) the slow training time makes it almost impossible to adapt instantly to real-time data. In this study, we focus on developing new ensemble methods to solve the problems of data stream classification. Although several studies related to ensemble learning have been proposed in the literature, some research gaps still exist. First, most ensemble algorithms developed for evolving data streams are homogeneous, which means that all the base classifiers are generated from the same learning algorithm, most frequently Hoeffding Trees. The data stream literature lacks heterogeneous ensembles, which benefit from having far fewer base learners while obtaining prediction performance comparable to homogeneous ensembles. Therefore, we introduce the HEterogeneous Ensemble Selection (HEES) method, which dynamically determines an appropriate subset of base learners to make predictions for non-stationary data streams. Though HEES only uses 8 base classifiers, our experiments on 50 datasets show that its prediction accuracy is higher than other homogeneous ensembles with 40 base classifiers, including OzaBagAdwin, OzaBoostAdwin, BOLE, and LNSE. Second, most existing models in the literature have low expressive capability; hence, we propose a Streaming Deep Forest (SDF) method to fill this gap. An active learning strategy is also introduced to save label query costs and to speed up SDF. As a result, SDF obtains state-of-the-art accuracy in both immediate and delayed settings.
Next, we present a multi-layer heterogeneous ensemble called SMiLE and a selection method to tackle the real-world problem of insect stream classification. In our experiments, SMiLE achieves the best performance on 10/11 datasets and second-best performance on the remaining dataset in comparison to benchmark algorithms. Finally, we propose an incremental framework to combine different segmentation models for medical images. The proposed framework is about 16 times faster than the second-fastest method MLR on both CVC_ColonDB and MICCAI2015 datasets in our experiments.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
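
To make the streaming setting above concrete, here is a small prequential (test-then-train) sketch with a heterogeneous pool of incremental learners, where only members with recently good accuracy vote. It is a simplified stand-in for the dynamic selection idea behind HEES, not the thesis's algorithm, and it uses a stationary synthetic stream.

```python
# Hedged sketch: prequential evaluation of a heterogeneous pool with dynamic selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron, SGDClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=3000, random_state=0)
classes = np.unique(y)
pool = [GaussianNB(), SGDClassifier(random_state=0), Perceptron(random_state=0)]
recent = np.ones(len(pool))              # running accuracy estimate per member
correct, seen, batch = 0, 0, 100

for start in range(0, len(X), batch):
    Xb, yb = X[start:start + batch], y[start:start + batch]
    if start > 0:                        # test first (prequential evaluation)
        active = [m for m, r in zip(pool, recent) if r >= recent.mean()]
        votes = np.array([m.predict(Xb) for m in active])
        y_hat = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
        correct += int((y_hat == yb).sum())
        seen += len(yb)
        recent = 0.8 * recent + 0.2 * np.array(
            [(m.predict(Xb) == yb).mean() for m in pool])
    for m in pool:                       # then update every member incrementally
        m.partial_fit(Xb, yb, classes=classes)

print("prequential accuracy:", correct / seen)
```
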
7

Wang, Xian Bo. "A novel fault detection and diagnosis framework for rotating machinery using advanced signal processing techniques and ensemble extreme learning machines." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3951596.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Etienam, Clement. "Structural and shape reconstruction using inverse problems and machine learning techniques with application to hydrocarbon reservoirs." Thesis, University of Manchester, 2019. https://www.research.manchester.ac.uk/portal/en/theses/structural-and-shape-reconstruction-using-inverse-problems-and-machine-learning-techniques-with-application-to-hydrocarbon-reservoirs(e21f1030-64e7-4267-b708-b7f0165a5f53).html.

Full text
Abstract:
This thesis introduces novel ideas in subsurface reservoir model calibration, known as history matching in the reservoir engineering community. The target of history matching is to mimic historical pressure and production data from the producing wells with the output from the reservoir simulator, for the sole purpose of reducing uncertainty in such models and improving confidence in the production forecast. Ensemble-based methods such as the Ensemble Kalman Filter (EnKF) and the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) have been proposed for history matching in the literature. EnKF/ES-MDA is a Monte Carlo ensemble-type filter where the representation of the covariance is located at the mean of the ensemble of the distribution instead of the uncertain true model. In EnKF/ES-MDA, calculation of the gradients is not required, and the mean of the ensemble of realisations provides the best estimates, with the ensemble on its own estimating the probability density. However, because of the inherent assumptions of linearity and Gaussianity of the petrophysical property distribution, EnKF/ES-MDA does not provide an acceptable history match and characterisation of uncertainty when tasked with calibrating reservoir models with channel-like structures. One of the novel methods introduced in this thesis combines successive parameter and shape reconstruction using level set functions (EnKF/ES-MDA-level set), where the spatial permeability fields' indicator functions are transformed into signed distances. These signed distance functions (better suited to the Gaussian requirement of EnKF/ES-MDA) are then updated during the EnKF/ES-MDA inversion. The method outperforms standard EnKF/ES-MDA in retaining the geological realism of channels during and after history matching and also yields a lower root-mean-square (RMS) error than standard EnKF/ES-MDA. To improve on the petrophysical reconstruction attained with the EnKF/ES-MDA-level set technique, a novel parametrisation incorporating an unsupervised machine learning method for the recovery of the permeability and porosity field is developed. The permeability and porosity fields are posed as a sparse field recovery problem, and a novel SELE (Sparsity-Ensemble optimization-Level-set Ensemble optimisation) approach is proposed for the history matching. In SELE, some realisations are learned using K-means clustering Singular Value Decomposition (K-SVD) to generate an overcomplete codebook or dictionary. This dictionary is combined with Orthogonal Matching Pursuit (OMP) to ease the ill-posed nature of the production data inversion, converting the permeability/porosity field into a sparse domain. SELE enforces prior structural information on the model during the history matching and reduces the computational complexity of the Kalman gain matrix, leading to faster attainment of the minimum of the cost function. From the results shown in the thesis, SELE outperforms conventional EnKF/ES-MDA in matching the historical production data, evident in the lower RMS value and a high geological realism/similarity to the true reservoir model.
APA, Harvard, Vancouver, ISO, and other styles
9

Taylor, Farrell R. "Evaluation of Supervised Machine Learning for Classifying Video Traffic." NSUWorks, 2016. http://nsuworks.nova.edu/gscis_etd/972.

Full text
Abstract:
Operational deployment of machine learning based classifiers in real-world networks has become an important area of research to support automated real-time quality of service decisions by Internet service providers (ISPs) and more generally, network administrators. As the Internet has evolved, multimedia applications, such as voice over Internet protocol (VoIP), gaming, and video streaming, have become commonplace. These traffic types are sensitive to network perturbations, e.g. jitter and delay. Automated quality of service (QoS) capabilities offer a degree of relief by prioritizing network traffic without human intervention; however, they rely on the integration of real-time traffic classification to identify applications. Accordingly, researchers have begun to explore various techniques to incorporate into real-world networks. One method that shows promise is the use of machine learning techniques trained on sub-flows – a small number of consecutive packets selected from different phases of the full application flow. Generally, research on machine learning classifiers was based on statistics derived from full traffic flows, which can limit their effectiveness (recall and precision) if partial data captures are encountered by the classifier. In real-world networks, partial data captures can be caused by unscheduled restarts/reboots of the classifier or data capture capabilities, network interruptions, or application errors. Research on the use of machine learning algorithms trained on sub-flows to classify VoIP and gaming traffic has shown promise, even when partial data captures are encountered. This research extends that work by applying machine learning algorithms trained on multiple sub-flows to classification of video streaming traffic. Results from this research indicate that sub-flow classifiers have much higher and more consistent recall and precision than full flow classifiers when applied to video traffic. Moreover, the application of ensemble methods, specifically Bagging and adaptive boosting (AdaBoost) further improves recall and precision for sub-flow classifiers. Findings indicate sub-flow classifiers based on AdaBoost in combination with the C4.5 algorithm exhibited the best performance with the most consistent results for classification of video streaming traffic.
APA, Harvard, Vancouver, ISO, and other styles
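
The best-performing configuration reported above, AdaBoost combined with a C4.5-style decision tree, can be sketched as follows. Scikit-learn's DecisionTreeClassifier stands in for C4.5, and the sub-flow statistics are synthetic placeholders rather than real traffic captures.

```python
# Hedged sketch: AdaBoost over shallow decision trees for (sub-)flow classification,
# reported with the recall and precision metrics emphasised in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Pretend features: per-sub-flow statistics such as mean packet size and
# inter-arrival-time variance (assumed names, synthetic values).
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                           n_estimators=200, random_state=0).fit(X_tr, y_tr)
y_hat = model.predict(X_te)
print("precision:", precision_score(y_te, y_hat))
print("recall   :", recall_score(y_te, y_hat))
```
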
10

Vandoni, Jennifer. "Ensemble Methods for Pedestrian Detection in Dense Crowds." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS116/document.

Full text
Abstract:
This study deals with pedestrian detection in high-density crowds from a mono-camera system. The detections can then be used both to obtain robust density estimation and to initialize a tracking algorithm. One of the most difficult challenges is that usual pedestrian detection methodologies do not scale well to high-density crowds, for reasons such as absence of background, high visual homogeneity, small size of the objects, and heavy occlusions. We cast the detection problem as a Multiple Classifier System (MCS), composed of two different ensembles of classifiers, the first one based on SVM (SVM-ensemble) and the second one based on CNN (CNN-ensemble), combined relying on the Belief Function Theory (BFT) to exploit their strengths for pixel-wise classification. The SVM-ensemble is composed of several SVM detectors based on different gradient, texture and orientation descriptors, able to tackle the problem from different perspectives. BFT allows us to take into account the imprecision in addition to the uncertainty value provided by each classifier, which we consider coming from possible errors in the calibration procedure and from pixel neighbors' heterogeeneity in the image space. However, scarcity of labeled data for specific dense crowd contexts makes it impossible to obtain robust training and validation sets. By exploiting belief functions directly derived from the classifiers' combination, we propose an evidential Query-by-Committee (QBC) active learning algorithm to automatically select the most informative training samples. On the other side, we explore deep learning techniques by casting the problem as a segmentation task with soft labels, with a fully convolutional network designed to recover small objects thanks to a tailored use of dilated convolutions. In order to obtain a pixel-wise measure of reliability about the network's predictions, we create a CNN-ensemble by means of dropout at inference time, and we combine the different obtained realizations in the context of BFT. Finally, we show that the output map given by the MCS can be employed to perform people counting. We propose an evaluation method that can be applied at every scale, also providing uncertainty bounds on the estimated density.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "ENSEMBLE LEARNING TECHNIQUE"

1

Ensemble Machine Learning Cookbook: Over 35 Practical Recipes to Explore Ensemble Machine Learning Techniques Using Python. Packt Publishing, Limited, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shaw, Brian P. Music Assessment for Better Ensembles. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190603144.001.0001.

Full text
Abstract:
Assessment is central to ensemble music. Yet, teachers do not always have the expertise to harness its potential to improve rehearsals and performances, and promote and document student learning. Written specifically for band, choir, and orchestra teachers at all levels, this book contains all of the information necessary to design and use assessment in a thriving music classroom. The first section addresses foundations such as learning targets, metacognition, and growth mindset. Assessment jargon such as formative assessment, summative assessment, Assessment for Learning, self and peer assessment, and authentic assessment is clarified and illustrated with music examples. Readers will learn practical strategies for choosing which concepts to assess, which methods to use, and how to use results to provide accurate and effective feedback to students. The second section brings assessment fundamentals into the music room. Filled with practical advice, each chapter examines a different facet of musicianship. Sample assessments in all performance areas are provided, including concert preparation, music literacy, fundamentals and technique, terminology, interpretation, evaluation and critique, composition and improvisation, beliefs and attitudes, and more. The final section is an examination of grading practices in music classes. Readers will gain information about ensemble grades that communicate what students know and are able to do. The book concludes with ways for music educators to take their first steps toward implementing these strategies in their own teaching, including the use of instructional technology. Assessing like an expert is possible, and this book is just what teachers need to get started.
APA, Harvard, Vancouver, ISO, and other styles
3

Tattar, Prabhanjan Narayanachar. Hands-On Ensemble Learning with R: A beginner's guide to combining the power of machine learning algorithms using ensemble techniques. Packt Publishing - ebooks Account, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rardin, Paul. Building Sound and Skills in the Men’s Chorus at Colleges and Universities in the United States. Edited by Frank Abrahams and Paul D. Head. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199373369.013.26.

Full text
Abstract:
Conductors of collegiate men’s choruses face unique challenges in building excellent choirs. They are likely to lead ensembles with disproportionately wide gaps between their most- and least-experienced singers, with a plurality or even majority of non-music majors—and may need to teach voice as much as they conduct. This chapter offers rehearsal techniques for these conductors which involve learning and utilizing vocal pedagogy, imparting basic phonation, and utilizing vocal tone exercises to build foundation and sound in a choir or glee club. They must then create a sense of community within their musically and vocally diverse choir; instill habits that lead to effective “core singing,” combining alignment, breathing technique, and resonance; and help male singers navigate shifts between their vocal registers.
APA, Harvard, Vancouver, ISO, and other styles
5

López, César Pérez. Data Mining and Machine Learning. Predictive Techniques: Ensemble Methods, Boosting, Bagging, Random Forest, Decision Trees and Regression Trees. Examples with MATLAB. Lulu Press, Inc., 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

McPherson, Gary E., ed. The Oxford Handbook of Music Performance, Volume 2. Oxford University Press, 2022. http://dx.doi.org/10.1093/oxfordhb/9780190058869.001.0001.

Full text
Abstract:
Volume 2 of the Oxford Handbook of Music Performance is designed around four distinct parts: Enhancements, Health and Wellbeing, Science, and Innovations. Chapters on the popular Feldenkrais method and Alexander technique open the volume, and these lead to chapters on peak performance and mindfulness, stage behavior, impression management and charisma, enhancing music performance appraisal, and how to build a career and the skills and competencies needed to be successful. The part dealing with health and wellbeing surveys the brain mechanisms involved in music learning and performing and musical activities in people with disabilities, performance anxiety, diseases and health risks in instrumentalists, hearing and voice, and finally, a discussion of how to promote a healthy related lifestyle. The first six chapters of the Science part cover the basic science underlying the operation of wind, brass, string instruments, and the piano, and two chapters covering the solo voice and vocal ensembles. The final two chapters explain digital musical instruments and the practical issues that researchers and performers face when using motion capture technology to study movement during musical performances. The four chapters of the Innovations part address the types of technological and social and wellbeing innovations that are reshaping how musicians conceive their performances in the twenty-first century.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "ENSEMBLE LEARNING TECHNIQUE"

1

Prasomphan, Sathit. "Ensemble Classification Technique for Cultural Heritage Image." In Machine Learning and Intelligent Communications, 17–27. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04409-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ansari, Arsalan Ahmed, Amaan Iqbal, and Bibhudatta Sahoo. "Heterogeneous Defect Prediction Using Ensemble Learning Technique." In Advances in Intelligent Systems and Computing, 283–93. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0199-9_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Marndi, Ashapurna, and G. K. Patra. "Chlorophyll Prediction Using Ensemble Deep Learning Technique." In Advances in Intelligent Systems and Computing, 341–49. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-2414-1_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cristin, Rajan, Aravapalli Rama Satish, Tamal Kr Kundu, and Balajee Maram. "Malaria Disease Prediction with Ensemble Learning Technique." In Innovations in Computer Science and Engineering, 519–27. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8987-1_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rozza, Alessandro, Gabriele Lombardi, Matteo Re, Elena Casiraghi, Giorgio Valentini, and Paola Campadelli. "A Novel Ensemble Technique for Protein Subcellular Location Prediction." In Ensembles in Machine Learning Applications, 151–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22910-7_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Marndi, Ashapurna, and G. K. Patra. "Atmospheric Temperature Prediction Using Ensemble Deep Learning Technique." In Advances in Intelligent Systems and Computing, 209–21. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-6984-9_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ganachari, Sreenidhi, and Srinivasa Rao Battula. "Stroke Disease Prediction Using Adaboost Ensemble Learning Technique." In Communication and Intelligent Systems, 247–60. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2100-3_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jegadeeswari, K., R. Ragunath, and R. Rathipriya. "Missing Data Imputation Using Ensemble Learning Technique: A Review." In Advances in Intelligent Systems and Computing, 223–36. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3590-9_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Guidolin, Massimo, and Manuela Pedio. "Sharpening the Accuracy of Credit Scoring Models with Machine Learning Algorithms." In Data Science for Economics and Finance, 89–115. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66891-4_5.

Full text
Abstract:
The big data revolution and recent advancements in computing power have increased the interest in credit scoring techniques based on artificial intelligence. This has found easy leverage in the fact that the accuracy of credit scoring models has a crucial impact on the profitability of lending institutions. In this chapter, we survey the most popular supervised credit scoring classification methods (and their combinations through ensemble methods) in an attempt to identify a superior classification technique in the light of the applied literature. There are at least three key insights that emerge from surveying the literature. First, as far as individual classifiers are concerned, linear classification methods often display a performance that is at least as good as that of machine learning methods. Second, ensemble methods tend to outperform individual classifiers. However, a dominant ensemble method cannot be easily identified in the empirical literature. Third, despite the possibility that machine learning techniques could fail to outperform linear classification methods when standard accuracy measures are considered, in the end they lead to significant cost savings compared to the financial implications of using different scoring models.
APA, Harvard, Vancouver, ISO, and other styles
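
The kind of linear-versus-ensemble comparison this chapter surveys can be sketched as follows, using a synthetic, imbalanced credit-default-style dataset and ROC AUC; the dataset, model set, and metric choice are assumptions for illustration only.

```python
# Hedged sketch: comparing a linear scorer with ensemble classifiers by ROC AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=3000, n_features=20, weights=[0.93, 0.07],
                           random_state=0)
models = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: ROC AUC = {auc:.3f}")
```
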
10

Hussain, Farwa Maqbool, and Farhan Hassan Khan. "An Improved Ensemble Based Machine Learning Technique for Efficient Malware Classification." In Communications in Computer and Information Science, 651–62. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5232-8_56.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "ENSEMBLE LEARNING TECHNIQUE"

1

Roy, A., R. Mukherjee, S. Moulik, and A. Chakrabarti. "Human Fall Prediction Using Ensemble Learning Technique." In 2022 IEEE International Conference on Consumer Electronics - Taiwan. IEEE, 2022. http://dx.doi.org/10.1109/icce-taiwan55306.2022.9868977.

2

Junaid, Md Iman, and Samit Ari. "Gait Identification using Ensemble Deep Learning Technique." In 2022 IEEE Silchar Subsection Conference (SILCON). IEEE, 2022. http://dx.doi.org/10.1109/silcon55242.2022.10028846.

3

Chawla, Namit, and Mukul Bedwa. "Optimized Ensemble Learning Technique on Wrist Radiographs using Deep Learning." In 2022 2nd International Conference on Technological Advancements in Computational Sciences (ICTACS). IEEE, 2022. http://dx.doi.org/10.1109/ictacs56270.2022.9988045.

4

Vats, Saanidhya, and VNAD Chivukula. "Plant Disease Detection Using DeepNets and Ensemble Technique." In 2022 IEEE International Conference on Machine Learning and Applied Network Technologies (ICMLANT). IEEE, 2022. http://dx.doi.org/10.1109/icmlant56191.2022.9996468.

5

Jayachitra, J., and N. Umarkathaf. "Blood Cancer Identification using Hybrid Ensemble Deep Learning Technique." In 2023 Second International Conference on Electronics and Renewable Systems (ICEARS). IEEE, 2023. http://dx.doi.org/10.1109/icears56392.2023.10084996.

6

Vergos, George, Lazaros Alexios Iliadis, Paraskevi Kritopoulou, Achilleas Papatheodorou, Achilles D. Boursianis, Konstantinos-Iraklis D. Kokkinidis, Maria S. Papadopoulou, and Sotirios K. Goudos. "Ensemble Learning Technique for Artificial Intelligence Assisted IVF Applications." In 2023 12th International Conference on Modern Circuits and Systems Technologies (MOCAST). IEEE, 2023. http://dx.doi.org/10.1109/mocast57943.2023.10176690.

7

Chen, Hantao, Xiaodong Zhang, Jane You, Guoqiang Han, and Le Li. "Dual neural gas based structure ensemble with the bagging technique." In 2012 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2012. http://dx.doi.org/10.1109/icmlc.2012.6359570.

8

Rana, Md Shohel, and Andrew H. Sung. "DeepfakeStack: A Deep Ensemble-based Learning Technique for Deepfake Detection." In 2020 7th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2020 6th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). IEEE, 2020. http://dx.doi.org/10.1109/cscloud-edgecom49738.2020.00021.

9

Taha, Wasf A., and Suhad A. Yousif. "Enhancement of text categorization results via an ensemble learning technique." In The Second International Scientific Conference (SISC2021): College of Science, Al-Nahrain University. AIP Publishing, 2023. http://dx.doi.org/10.1063/5.0122942.

10

Shah, Rishi, Harsh Shah, Swarnendu Bhim, Leena Heistrene, and Vivek Pandya. "Short-term Electricity Price Forecasting using Ensemble Machine Learning Technique." In 2021 1st International Conference in Information and Computing Research (iCORE). IEEE, 2021. http://dx.doi.org/10.1109/icore54267.2021.00045.


Reports on the topic "ENSEMBLE LEARNING TECHNIQUE"

1

Hart, Carl R., D. Keith Wilson, Chris L. Pettit, and Edward T. Nykaza. Machine-Learning of Long-Range Sound Propagation Through Simulated Atmospheric Turbulence. U.S. Army Engineer Research and Development Center, July 2021. http://dx.doi.org/10.21079/11681/41182.

Abstract:
Conventional numerical methods can capture the inherent variability of long-range outdoor sound propagation. However, computational memory and time requirements are high. In contrast, machine-learning models provide very fast predictions. This comes by learning from experimental observations or surrogate data. Yet, it is unknown what type of surrogate data is most suitable for machine-learning. This study used a Crank-Nicholson parabolic equation (CNPE) for generating the surrogate data. The CNPE input data were sampled by the Latin hypercube technique. Two separate datasets comprised 5000 samples of model input. The first dataset consisted of transmission loss (TL) fields for single realizations of turbulence. The second dataset consisted of average TL fields for 64 realizations of turbulence. Three machine-learning algorithms were applied to each dataset, namely, ensemble decision trees, neural networks, and cluster-weighted models. Observational data come from a long-range (out to 8 km) sound propagation experiment. In comparison to the experimental observations, regression predictions have 5–7 dB in median absolute error. Surrogate data quality depends on an accurate characterization of refractive and scattering conditions. Predictions obtained through a single realization of turbulence agree better with the experimental observations.
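As a minimal sketch of the workflow the abstract outlines, assuming made-up input ranges and a toy surrogate in place of the CNPE transmission-loss fields, the following code draws a Latin hypercube sample of propagation inputs and fits an ensemble of decision trees (a random forest) to the surrogate targets:

```python
# Hedged sketch of the abstract's workflow: Latin hypercube sampling of
# propagation inputs, a surrogate transmission-loss (TL) target, and an
# ensemble of decision trees as the regression model. The input ranges and
# the toy surrogate below are assumptions for illustration only.
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import median_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Draw 5000 input vectors (e.g. range, refraction, turbulence strength)
# with a Latin hypercube design, then scale to assumed physical bounds.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=5000)
X = qmc.scale(unit_samples, l_bounds=[100.0, -0.1, 0.0], u_bounds=[8000.0, 0.1, 1.0])

# Stand-in surrogate for the parabolic-equation TL output (dB); the noise
# term plays the role of turbulence-induced variability.
tl = (20.0 * np.log10(X[:, 0]) + 50.0 * X[:, 1] + 5.0 * X[:, 2]
      + rng.normal(scale=2.0, size=len(X)))

X_train, X_test, y_train, y_test = train_test_split(X, tl, random_state=0)

# Ensemble of decision trees, one of the three learners named in the abstract.
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

print(f"median absolute error: {median_absolute_error(y_test, model.predict(X_test)):.2f} dB")
```

The 5–7 dB figure in the abstract refers to comparison against the field experiment; the error printed here only illustrates the evaluation metric on toy data.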
2

Maher, Nicola, Pedro DiNezio, Antonietta Capotondi, and Jennifer Kay. Identifying precursors of daily to seasonal hydrological extremes over the USA using deep learning techniques and climate model ensembles. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769719.

3

Lasko, Kristofer, and Elena Sava. Semi-automated land cover mapping using an ensemble of support vector machines with moderate resolution imagery integrated into a custom decision support tool. Engineer Research and Development Center (U.S.), November 2021. http://dx.doi.org/10.21079/11681/42402.

Abstract:
Land cover type is a fundamental remote sensing-derived variable for terrain analysis and environmental mapping applications. The currently available products are produced only for a single season or a specific year. Some of these products have a coarse resolution and quickly become outdated, as land cover type can undergo significant change over a short time period. In order to enable on-demand generation of timely and accurate land cover type products, we developed a sensor-agnostic framework leveraging pre-trained machine learning models. We also generated land cover models for Sentinel-2 (20m) and Landsat 8 imagery (30m) using either a single date of imagery or two dates of imagery for mapping land cover type. The two-date model includes 11 land cover type classes, whereas the single-date model contains 6 classes. The models’ overall accuracies were 84% (Sentinel-2 single date), 82% (Sentinel-2 two date), and 86% (Landsat 8 two date) across the continental United States. The three different models were built into an ArcGIS Pro Python toolbox to enable a semi-automated workflow for end users to generate their own land cover type maps on demand. The toolboxes were built using parallel processing and image-splitting techniques to enable faster computation and for use on less-powerful machines.
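As a minimal sketch of the ensemble-of-SVMs idea, assuming synthetic spectral features and six placeholder classes rather than the Sentinel-2 or Landsat 8 training data described above, the following code bags RBF-kernel support vector classifiers with scikit-learn:

```python
# Hedged sketch: an ensemble of support vector machines (bagged SVCs) for
# multi-class land cover classification. The synthetic "pixel" features and
# six classes are placeholders, not the Sentinel-2/Landsat 8 data or the
# pre-trained models referenced in the report.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic spectral features (e.g. band reflectances and indices) for six
# land cover classes, mimicking the single-date model with 6 classes.
X, y = make_classification(n_samples=3000, n_features=10, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Each ensemble member is an RBF-kernel SVM trained on a bootstrap sample
# of pixels and a random subset of features.
svm_ensemble = BaggingClassifier(
    make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale")),
    n_estimators=25, max_samples=0.8, max_features=0.8, random_state=0)

svm_ensemble.fit(X_train, y_train)
print(f"overall accuracy: {svm_ensemble.score(X_test, y_test):.2%}")
```

Bagging over bootstrap samples and feature subsets is one common way to assemble an SVM ensemble; the report's sensor-agnostic framework and ArcGIS Pro toolbox are not reproduced here.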