Academic literature on the topic 'ENSEMBLE LEARNING MODELS'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'ENSEMBLE LEARNING MODELS.'


You can also download the full text of each academic publication as PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "ENSEMBLE LEARNING MODELS"

1

GURBYCH, A. "METHOD SUPER LEARNING FOR DETERMINATION OF MOLECULAR RELATIONSHIP." Herald of Khmelnytskyi National University. Technical sciences 307, no. 2 (May 2, 2022): 14–24. http://dx.doi.org/10.31891/2307-5732-2022-307-2-14-24.

Full text
Abstract:
This paper uses the Super Learning principle to predict the molecular affinity between a receptor (a large biomolecule) and ligands (small organic molecules). Meta-models learn the optimal combination of individual base models in two consecutive ensembles: classification and regression. Each ensemble contains six machine learning models combined by stacking. The base models include support vector machines, random forests, gradient boosting, graph neural networks, feedforward networks, and transformers. The first ensemble predicts binding probability and classifies all candidate molecules for the selected receptor into active and inactive. Ligands recognized as active by the first ensemble are fed to the second ensemble, which estimates the degree of their affinity for the receptor as an inhibition constant (Ki). A feature of the method is that it does not use the atomic coordinates of individual molecules and their complexes, thus eliminating experimental errors in sample preparation and coordinate measurement, and allowing the affinity of biomolecules with unknown spatial configurations to be determined. It is shown that meta-learning increases the recall of the classification ensemble by 34.9% and the coefficient of determination (R2) of the regression ensemble by 21% compared to the average values. The paper shows that an ensemble with meta-stacking is an asymptotically optimal learning system. A feature of Super Learning is the use of k-fold cross-validation to form first-level predictions that train second-level models, or meta-models, which combine the first-level models optimally. The ability of the six machine learning models to predict molecular affinity is studied, and the efficiency improvement comes from combining the models into the two consecutive ensembles by stacking.
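The k-fold stacking scheme this abstract describes (out-of-fold first-level predictions used to train a meta-model that combines the base models) can be sketched in plain Python. The toy base learners and the inverse-error weighting used as the meta-model below are illustrative stand-ins, not the paper's actual models:

```python
import random

def kfold_indices(n, k, seed=0):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]  # k disjoint validation folds

# toy base learners: each "fit" call returns a predict function
def mean_learner(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

def slope_learner(xs, ys):
    # least-squares line through the origin
    den = sum(x * x for x in xs) or 1.0
    w = sum(x * y for x, y in zip(xs, ys)) / den
    return lambda x: w * x

def super_learner(xs, ys, learners, k=5):
    n = len(xs)
    oof = [[0.0] * len(learners) for _ in range(n)]  # out-of-fold predictions
    for fold in kfold_indices(n, k):
        fold_set = set(fold)
        train = [i for i in range(n) if i not in fold_set]
        fitted = [L([xs[i] for i in train], [ys[i] for i in train]) for L in learners]
        for i in fold:
            for j, f in enumerate(fitted):
                oof[i][j] = f(xs[i])
    # meta-model: weight each base model by its inverse out-of-fold squared error
    # (a simple stand-in for a meta-learner fitted on the level-one predictions)
    errs = [sum((oof[i][j] - ys[i]) ** 2 for i in range(n)) for j in range(len(learners))]
    inv = [1.0 / (e + 1e-9) for e in errs]
    total = sum(inv)
    weights = [v / total for v in inv]
    full = [L(xs, ys) for L in learners]  # refit base models on all data
    return lambda x: sum(w * f(x) for w, f in zip(weights, full))

xs = [float(i) for i in range(1, 21)]
ys = [2.0 * x for x in xs]  # true relation: y = 2x
model = super_learner(xs, ys, [mean_learner, slope_learner])
```

With this toy data the slope learner has near-zero out-of-fold error, so the meta-weights concentrate on it and the combined model recovers y ≈ 2x.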
APA, Harvard, Vancouver, ISO, and other styles
2

ACOSTA-MENDOZA, NIUSVEL, ALICIA MORALES-REYES, HUGO JAIR ESCALANTE, and ANDRÉS GAGO-ALONSO. "LEARNING TO ASSEMBLE CLASSIFIERS VIA GENETIC PROGRAMMING." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 07 (October 14, 2014): 1460005. http://dx.doi.org/10.1142/s0218001414600052.

Full text
Abstract:
This paper introduces a novel approach for building heterogeneous ensembles based on genetic programming (GP). Ensemble learning is a paradigm that aims at combining individual classifiers' outputs to improve performance. Commonly, classifier outputs are combined by a weighted sum or a voting strategy. However, linear fusion functions may not effectively exploit individual models' redundancy and diversity. In this research, a GP-based approach is proposed to learn fusion functions that combine classifier outputs. The study focuses on heterogeneous ensembles, whose individual classifiers are based on different principles (e.g. decision trees and similarity-based techniques). A detailed empirical assessment validates the effectiveness of the proposed approach. Results show that the proposed method builds very effective classification models, outperforming alternative ensemble methodologies. The proposed technique is also applied to fuse homogeneous models' outputs, again with effective results. Finally, an in-depth analysis of the proposed ensemble-building strategy is presented from different perspectives, with strong experimental support.
3

Siswoyo, Bambang, Zuraida Abal Abas, Ahmad Naim Che Pee, Rita Komalasari, and Nano Suryana. "Ensemble machine learning algorithm optimization of bankruptcy prediction of bank." IAES International Journal of Artificial Intelligence (IJ-AI) 11, no. 2 (June 1, 2022): 679. http://dx.doi.org/10.11591/ijai.v11.i2.pp679-686.

Full text
Abstract:
An ensemble consists of a set of individually trained models whose predictions are combined when classifying new cases; building a good classification model requires diversity among the single models. Logistic regression, support vector machines, random forests, and neural networks serve as the single models and as alternative sources of diversity. Previous research has shown that ensembles are more accurate than single models. We study single models and a modified ensemble bagging model in this paper, experimenting with financial ratios from the banking industry. Our observations are as follows. First, an ensemble is consistently more accurate than a single model. Second, the modified ensemble bagging model shows improved classification performance on balanced datasets, as it can adjust its behavior to suit relatively small datasets. The bagging ensemble learning model reaches an accuracy of 97%, an increase of up to 16% over other models that use unbalanced datasets.
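As a minimal illustration of the bagging idea this abstract relies on (not the paper's modified bagging model), the sketch below trains decision stumps on bootstrap resamples and combines them by majority vote; the stump learner and toy data are invented for the example:

```python
import random

def fit_stump(data):
    # one-feature threshold classifier: predict 1 when x > threshold
    best_thr, best_err = 0.0, float("inf")
    for thr, _ in data:
        err = sum(int(x > thr) != y for x, y in data)
        if err < best_err:
            best_thr, best_err = thr, err
    return lambda x: int(x > best_thr)

def bagged_classifier(data, n_models=25, seed=0):
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]  # bootstrap resample
        stumps.append(fit_stump(sample))
    # majority vote over the individually trained stumps
    return lambda x: int(sum(s(x) for s in stumps) > n_models / 2)

# toy binary data: class 1 iff x >= 5
data = [(float(x), int(x >= 5)) for x in range(10)]
model = bagged_classifier(data)
```

Each stump sees a slightly different resample, so the stumps disagree near the decision boundary; the vote averages out those differences.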
4

Huang, Haifeng, Lei Huang, Rongjia Song, Feng Jiao, and Tao Ai. "Bus Single-Trip Time Prediction Based on Ensemble Learning." Computational Intelligence and Neuroscience 2022 (August 11, 2022): 1–24. http://dx.doi.org/10.1155/2022/6831167.

Full text
Abstract:
The prediction of bus single-trip time is essential for passenger travel decision-making and bus scheduling. Since many factors can influence bus operations, accurately predicting the bus single-trip time is a great challenge; moreover, bus single-trip time has obvious nonlinear and seasonal characteristics. Hence, in order to improve the accuracy of bus single-trip time prediction, five prediction algorithms, LSTM (Long Short-Term Memory), LR (Linear Regression), KNN (K-Nearest Neighbor), XGBoost (Extreme Gradient Boosting), and GRU (Gated Recurrent Unit), are examined as base models, and three ensemble models are further constructed using various ensemble methods: Random Forest (bagging), AdaBoost (boosting), and Linear Regression (stacking). A data-driven bus single-trip time prediction framework is then proposed, consisting of three phases: traffic data analysis, feature extraction, and ensemble model prediction. Finally, the data features and the proposed ensemble models are analyzed using real-world datasets collected from the Beijing Transportation Operations Coordination Center (TOCC). Comparing the prediction results leads to the following conclusions: (1) the three constructed ensemble models predict more accurately than the five base models; (2) the Random Forest ensemble model, constructed with the bagging method, has the best prediction accuracy among the three ensemble models; and (3) among the five base models, LR predicts more accurately than the other four.
5

Ruaud, Albane, Niklas Pfister, Ruth E. Ley, and Nicholas D. Youngblut. "Interpreting tree ensemble machine learning models with endoR." PLOS Computational Biology 18, no. 12 (December 14, 2022): e1010714. http://dx.doi.org/10.1371/journal.pcbi.1010714.

Full text
Abstract:
Tree ensemble machine learning models are increasingly used in microbiome science as they are compatible with the compositional, high-dimensional, and sparse structure of sequence-based microbiome data. While such models are often good at predicting phenotypes based on microbiome data, they only yield limited insights into how microbial taxa may be associated. We developed endoR, a method to interpret tree ensemble models. First, endoR simplifies the fitted model into a decision ensemble. Then, it extracts information on the importance of individual features and their pairwise interactions, displaying them as an interpretable network. Both the endoR network and importance scores provide insights into how features, and interactions between them, contribute to the predictive performance of the fitted model. Adjustable regularization and bootstrapping help reduce the complexity and ensure that only essential parts of the model are retained. We assessed endoR on both simulated and real metagenomic data. We found endoR to have comparable accuracy to other common approaches while easing and enhancing model interpretation. Using endoR, we also confirmed published results on gut microbiome differences between cirrhotic and healthy individuals. Finally, we utilized endoR to explore associations between human gut methanogens and microbiome components. Indeed, these hydrogen consumers are expected to interact with fermenting bacteria in a complex syntrophic network. Specifically, we analyzed a global metagenome dataset of 2203 individuals and confirmed the previously reported association between Methanobacteriaceae and Christensenellales. Additionally, we observed that Methanobacteriaceae are associated with a network of hydrogen-producing bacteria. Our method accurately captures how tree ensembles use features and interactions between them to predict a response. 
As demonstrated by our applications, the resultant visualizations and summary outputs facilitate model interpretation and enable the generation of novel hypotheses about complex systems.
6

Khanna, Samarth, and Kabir Nagpal. "Sign Language Interpretation using Ensembled Deep Learning Models." ITM Web of Conferences 53 (2023): 01003. http://dx.doi.org/10.1051/itmconf/20235301003.

Full text
Abstract:
Communication is an integral part of our day-to-day lives. People who have difficulty speaking or hearing often feel neglected in our society. While automatic speech recognition systems have now progressed to the point of being commercially viable, sign language recognition systems are still in the early stages, and such interpretation is currently administered by humans. Here, we present an ensembled architecture for the classification of sign language characters. The novel ensemble of InceptionV3 and ResNet101 achieved an accuracy of 97.24% on the ASL dataset.
7

Alazba, Amal, and Hamoud Aljamaan. "Software Defect Prediction Using Stacking Generalization of Optimized Tree-Based Ensembles." Applied Sciences 12, no. 9 (April 30, 2022): 4577. http://dx.doi.org/10.3390/app12094577.

Full text
Abstract:
Software defect prediction refers to the automatic identification of defective parts of software through machine learning techniques. Ensemble learning has exhibited excellent prediction outcomes in comparison with individual classifiers. However, most of the previous work utilized ensemble models in the context of software defect prediction with the default hyperparameter values, which are considered suboptimal. In this paper, we investigate the applicability of a stacking ensemble built with fine-tuned tree-based ensembles for defect prediction. We used grid search to optimize the hyperparameters of seven tree-based ensembles: random forest, extra trees, AdaBoost, gradient boosting, histogram-based gradient boosting, XGBoost and CatBoost. Then, a stacking ensemble was built utilizing the fine-tuned tree-based ensembles. The ensembles were evaluated using 21 publicly available defect datasets. Empirical results showed large impacts of hyperparameter optimization on extra trees and random forest ensembles. Moreover, our results demonstrated the superiority of the stacking ensemble over all fine-tuned tree-based ensembles.
8

Sonawane, Deepkanchan Nanasaheb. "Ensemble Learning For Increasing Accuracy Data Models." IOSR Journal of Computer Engineering 9, no. 1 (2013): 35–37. http://dx.doi.org/10.9790/0661-0913537.

Full text
9

Li, Ziyue, Kan Ren, Yifan Yang, Xinyang Jiang, Yuqing Yang, and Dongsheng Li. "Towards Inference Efficient Deep Ensemble Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8711–19. http://dx.doi.org/10.1609/aaai.v37i7.26048.

Full text
Abstract:
Ensemble methods can deliver surprising performance gains but also bring significantly higher computational costs, e.g., up to 2048× in large-scale ensemble tasks. However, we found that the majority of computations in ensemble methods are redundant. For instance, over 77% of samples in the CIFAR-100 dataset can be correctly classified with a single ResNet-18 model, which indicates that only around 23% of the samples need an ensemble of extra models. To this end, we propose an inference-efficient ensemble learning method that simultaneously optimizes for effectiveness and efficiency in ensemble learning. More specifically, we regard the ensemble of models as a sequential inference process and learn the optimal halting event for inference on a specific sample. At each timestep of the inference process, a common selector judges whether the current ensemble has reached sufficient effectiveness and halts further inference; otherwise, it passes this challenging sample on to the subsequent models to conduct a more powerful ensemble. Both the base models and the common selector are jointly optimized to dynamically adjust ensemble inference for samples of varying hardness, through novel optimization goals including sequential ensemble boosting and computation saving. Experiments with different backbones on real-world datasets illustrate that our method can bring up to 56% inference cost reduction while maintaining performance comparable to the full ensemble, achieving significantly better ensemble utility than other baselines. Code and supplemental materials are available at https://seqml.github.io/irene.
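The sequential halt-or-continue inference this abstract describes can be illustrated with a minimal sketch. The scorers and the confidence rule below are invented placeholders; in the paper, both the base models and the selector are learned jointly:

```python
def cascade_predict(models, halts, x):
    """Run models one by one, averaging their scores, and stop as soon
    as the selector judges the running ensemble confident enough."""
    total = 0.0
    for t, m in enumerate(models, start=1):
        total += m(x)
        avg = total / t
        if halts(avg, t):      # selector: halt further inference
            return avg, t      # models actually evaluated: t
    return avg, len(models)    # hard sample: full ensemble used

# placeholder scorers in [0, 1]; halt when the running mean is far from 0.5
models = [lambda x: 0.9 if x > 0 else 0.55,
          lambda x: 0.8 if x > 0 else 0.50,
          lambda x: 0.7 if x > 0 else 0.95]
halts = lambda avg, t: abs(avg - 0.5) > 0.3

easy_score, easy_used = cascade_predict(models, halts, x=1.0)   # confident after 1 model
hard_score, hard_used = cascade_predict(models, halts, x=-1.0)  # needs all 3
```

Easy samples exit after one model while ambiguous ones consume the whole cascade, which is where the inference savings come from.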
10

Abdillah, Abid Famasya, Cornelius Bagus Purnama Putra, Apriantoni Apriantoni, Safitri Juanita, and Diana Purwitasari. "Ensemble-based Methods for Multi-label Classification on Biomedical Question-Answer Data." Journal of Information Systems Engineering and Business Intelligence 8, no. 1 (April 26, 2022): 42–50. http://dx.doi.org/10.20473/jisebi.8.1.42-50.

Full text
Abstract:
Background: Question-answer (QA) is a popular method to seek health-related information and biomedical data. Such questions can refer to more than one medical entity (multi-label), so determining the correct tags is not easy. The question classification (QC) mechanism in a QA system can narrow down the answers we are seeking. Objective: This study develops a multi-label classification using the heterogeneous ensembles method to improve accuracy on biomedical data with long texts. Methods: We used an ensemble of heterogeneous deep learning and machine learning models for multi-label long-text classification. There are 15 single models, consisting of three deep learning algorithms (CNN, LSTM, and BERT) and four machine learning algorithms (SVM, kNN, Decision Tree, and Naïve Bayes) with various text representations (TF-IDF, Word2Vec, and FastText). We used the bagging approach with a hard voting mechanism for decision-making. Results: The results show that deep learning is more powerful than machine learning as a single multi-label biomedical data classification method. Moreover, we found that three was the best number of base learners to combine in the ensemble method. Heterogeneous ensembles with three learners achieved an F1-score of 82.3%, better than the best single model (CNN, with an F1-score of 80%). Conclusion: Multi-label classification of biomedical QA using ensemble models outperforms single models. The results show that heterogeneous ensembles are more potent than homogeneous ensembles on biomedical QA data with long texts. Keywords: Biomedical Question Classification, Ensemble Method, Heterogeneous Ensembles, Multi-Label Classification, Question Answering
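The hard voting mechanism mentioned in the Methods can be sketched as a per-instance majority vote, shown here for the single-label case for brevity; the label names and model outputs are made up for illustration:

```python
from collections import Counter

def hard_vote(predictions):
    """Majority vote over label lists from heterogeneous classifiers.
    predictions: list of per-model label lists, one label per instance."""
    n = len(predictions[0])
    out = []
    for i in range(n):
        votes = Counter(p[i] for p in predictions)
        out.append(votes.most_common(1)[0][0])  # most frequent label wins
    return out

# three hypothetical models labelling four questions
cnn  = ["disease", "drug", "gene", "drug"]
lstm = ["disease", "drug", "drug", "drug"]
bert = ["gene",    "drug", "gene", "disease"]
final = hard_vote([cnn, lstm, bert])  # → ["disease", "drug", "gene", "drug"]
```

For the multi-label setting the same vote would be taken per (instance, label) pair on binary indicators rather than on a single label.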

Dissertations / Theses on the topic "ENSEMBLE LEARNING MODELS"

1

He, Wenbin. "Exploration and Analysis of Ensemble Datasets with Statistical and Deep Learning Models." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1574695259847734.

Full text
2

Kim, Jinhan. "J-model : an open and social ensemble learning architecture for classification." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7672.

Full text
Abstract:
Ensemble learning is a promising direction of research in machine learning, in which an ensemble classifier gives better predictive and more robust performance for classification problems by combining other learners. Meanwhile, agent-based systems provide frameworks to share knowledge among multiple agents in an open context. This thesis combines multi-agent knowledge sharing with ensemble methods to produce a new style of learning system for open environments. We are now surrounded by many smart objects such as wireless sensors, ambient communication devices, mobile medical devices and even information supplied via other humans. When we coordinate smart objects properly, we can produce a form of collective intelligence from their collaboration. Traditional ensemble methods and agent-based systems have complementary advantages and disadvantages in this context. Traditional ensemble methods show better classification performance, while agent-based systems cannot guarantee classification performance; traditional ensemble methods work as closed, centralised systems (so they cannot handle classifiers in an open context), while agent-based systems are natural vehicles for classifiers in an open context. We designed an open and social ensemble learning architecture, named J-model, to merge the conflicting benefits of the two research domains. The J-model architecture is based on a service choreography approach for coordinating classifiers. Coordination protocols are defined by interaction models that describe how classifiers interact with one another in a peer-to-peer manner. The peer ranking algorithm recommends the more appropriate classifiers to participate in an interaction model, boosting the success rate of their interactions. Coordinated participant classifiers recommended by the peer ranking algorithm become an ensemble classifier within J-model.
We evaluated J-model's classification performance on 13 UCI machine learning benchmark data sets and on a virtual screening problem as a realistic classification task. J-model achieved better accuracy than 8 other representative traditional ensemble methods on 9 of the 13 benchmark sets, and better specificity on 7 of them. In the virtual screening problem, J-model gave better results for 12 out of 16 bioassays than previously published results. We defined a different interaction model for each specific classification task, while the peer ranking algorithm was used across all interaction models. Our research contributions are as follows. First, we showed that service choreography can be an effective ensemble coordination method for classifiers in an open context. Second, we used interaction models that implement task-specific coordination of classifiers to solve a variety of representative classification problems. Third, we designed the peer ranking algorithm, which is generally and independently applicable to recommending appropriate member classifiers from a classifier pool based on an open pool of interaction models and classifiers.
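A heavily simplified sketch of the peer-ranking idea follows: member classifiers are recommended from a pool by their observed success rates. The thesis' actual algorithm operates over interaction models and is considerably more involved; the classifier names and counts here are invented:

```python
def peer_rank(records, top_k):
    """records maps classifier name -> (successes, trials); rank by success rate."""
    rate = {c: s / t for c, (s, t) in records.items()}
    return sorted(rate, key=rate.get, reverse=True)[:top_k]

# hypothetical interaction history for four classifiers in an open pool
records = {"svm": (45, 50), "tree": (30, 50), "knn": (40, 50), "nb": (20, 50)}
team = peer_rank(records, top_k=2)  # → ["svm", "knn"]
```

The recommended top-k classifiers would then participate in an interaction model and act together as the ensemble.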
3

Gharroudi, Ouadie. "Ensemble multi-label learning in supervised and semi-supervised settings." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1333/document.

Full text
Abstract:
Multi-label learning is a supervised learning problem where each instance can be associated with multiple target labels simultaneously. It is ubiquitous in machine learning and arises naturally in many real-world applications such as document classification, automatic music tagging and image annotation. In this thesis, we formulate multi-label learning as an ensemble learning problem in order to provide satisfactory solutions for both the multi-label classification and the feature selection tasks, while remaining consistent with respect to any type of objective loss function. We first discuss why state-of-the-art multi-label algorithms using a committee of multi-label models suffer from certain practical drawbacks. We then propose a novel strategy to build and aggregate k-labelsets-based committees in the context of ensemble multi-label classification. We then analyze the effect of the aggregation step within ensemble multi-label approaches in depth and investigate how this aggregation impacts prediction performance with respect to the objective multi-label loss metric. Next, we address the specific problem of identifying relevant subsets of features, among potentially irrelevant and redundant ones, in the multi-label context based on the ensemble paradigm. Three wrapper multi-label feature selection methods based on the Random Forest paradigm are proposed; these methods differ in the way they consider label dependence within the feature selection process. Finally, we extend the multi-label classification and feature selection problems to the semi-supervised setting, where only a few labelled instances are available, and propose a new semi-supervised multi-label feature selection approach based on the ensemble paradigm.
The proposed model combines ideas from co-training and multi-label k-labelsets committee construction in tandem with an inner out-of-bag label feature importance evaluation. Satisfactorily tested on several benchmark data sets, the approaches developed in this thesis show promise for a variety of applications in supervised and semi-supervised multi-label learning.
4

Henriksson, Aron. "Ensembles of Semantic Spaces : On Combining Models of Distributional Semantics with Applications in Healthcare." Doctoral thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-122465.

Full text
Abstract:
Distributional semantics allows models of linguistic meaning to be derived from observations of language use in large amounts of text. By modeling the meaning of words in semantic (vector) space on the basis of co-occurrence information, distributional semantics permits a quantitative interpretation of (relative) word meaning in an unsupervised setting, i.e., human annotations are not required. The ability to obtain inexpensive word representations in this manner helps to alleviate the bottleneck of fully supervised approaches to natural language processing, especially since models of distributional semantics are data-driven and hence agnostic to both language and domain. All that is required to obtain distributed word representations is a sizeable corpus; however, the composition of the semantic space is not only affected by the underlying data but also by certain model hyperparameters. While these can be optimized for a specific downstream task, there are currently limitations to the extent the many aspects of semantics can be captured in a single model. This dissertation investigates the possibility of capturing multiple aspects of lexical semantics by adopting the ensemble methodology within a distributional semantic framework to create ensembles of semantic spaces. To that end, various strategies for creating the constituent semantic spaces, as well as for combining them, are explored in a number of studies. The notion of semantic space ensembles is generalizable across languages and domains; however, the use of unsupervised methods is particularly valuable in low-resource settings, in particular when annotated corpora are scarce, as in the domain of Swedish healthcare. The semantic space ensembles are here empirically evaluated for tasks that have promising applications in healthcare. 
It is shown that semantic space ensembles – created by exploiting various corpora and data types, as well as by adjusting model hyperparameters such as the size of the context window and the strategy for handling word order within the context window – are able to outperform the use of any single constituent model on a range of tasks. The semantic space ensembles are used both directly for k-nearest neighbors retrieval and for semi-supervised machine learning. Applying semantic space ensembles to important medical problems facilitates the secondary use of healthcare data, which, despite its abundance and transformative potential, is grossly underutilized.
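The core retrieval idea in this dissertation, averaging similarities across several semantic spaces built with different data or hyperparameters and then taking nearest neighbours, can be sketched as follows; the toy vectors and vocabulary are invented for illustration:

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)) or 1.0
    return num / den

def ensemble_nearest(spaces, query, vocab):
    """Average cosine similarity across semantic spaces, then return
    the word in vocab nearest to `query` under the combined score."""
    def sim(word):
        return sum(cosine(s[query], s[word]) for s in spaces) / len(spaces)
    return max((w for w in vocab if w != query), key=sim)

# two toy spaces built with different "hyperparameters" (here, dimensionality)
space_a = {"fever": [1.0, 0.1], "cough": [0.9, 0.2], "table": [0.0, 1.0]}
space_b = {"fever": [0.8, 0.0, 0.1], "cough": [0.7, 0.1, 0.2], "table": [0.0, 0.9, 0.9]}
nearest = ensemble_nearest([space_a, space_b], "fever", ["fever", "cough", "table"])
```

Averaging over spaces lets models that capture different aspects of meaning compensate for each other's weaknesses, which is the intuition behind the ensemble outperforming any single constituent model.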

At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 4 and 5: Unpublished conference papers.


5

Chakraborty, Debaditya. "Detection of Faults in HVAC Systems using Tree-based Ensemble Models and Dynamic Thresholds." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1543582336141076.

Full text
6

Li, Qiongzhu. "Study of Single and Ensemble Machine Learning Models on Credit Data to Detect Underlying Non-performing Loans." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-297080.

Full text
Abstract:
In this paper, we compare the performance of two feature dimension reduction methods, the LASSO and PCA. Both a simulation study and an empirical study show that the LASSO is superior to PCA at selecting significant variables. We apply logistic regression (LR), artificial neural networks (ANN), support vector machines (SVM), decision trees (DT), and their corresponding ensemble machines constructed by bagging and adaptive boosting (AdaBoost). Three experiments explore the impact of class-unbalanced data sets on all models. The empirical study indicates that when the percentage of performing loans exceeds 83.3%, the trained models should be applied with caution. With a class-balanced data set, ensemble machines indeed outperform single machines; the weaker the single machine, the more obvious the improvement.
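For intuition on why the LASSO performs variable selection (coefficients of weak features are soft-thresholded exactly to zero, unlike PCA components, which mix all features), here is a minimal coordinate-descent sketch on invented data; it is not the study's actual pipeline:

```python
def lasso_cd(X, y, lam, iters=100):
    """Coordinate descent for min 0.5 * ||y - Xw||^2 + lam * ||w||_1 (toy version)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the residual excluding feature j
            rho, z = 0.0, 0.0
            for i in range(n):
                r_ij = y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                rho += X[i][j] * r_ij
                z += X[i][j] ** 2
            # soft-thresholding: weakly correlated features are zeroed out exactly
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w

# invented data: y depends only on the first feature (y = 3 * x1)
X = [[-2.0, 1.0], [-1.0, -1.0], [0.0, 1.0], [1.0, -1.0], [2.0, 0.0]]
y = [3.0 * row[0] for row in X]
w = lasso_cd(X, y, lam=1.0)  # second coefficient is driven exactly to 0
```

The first coefficient is shrunk slightly below its least-squares value of 3 by the penalty, while the irrelevant feature is eliminated outright, which is the "significant variable selection" behaviour the paper exploits.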
7

Franch, Gabriele. "Deep Learning for Spatiotemporal Nowcasting." Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/295096.

Full text
Abstract:
Nowcasting – short-term forecasting using current observations – is a key challenge that human activities face on a daily basis. We heavily rely on short-term meteorological predictions in domains such as aviation, agriculture, mobility, and energy production. One of the most important and challenging tasks in meteorology is the nowcasting of extreme events, whose anticipation is needed to mitigate risk in terms of social and economic costs and human safety. The goal of this thesis is to contribute new machine learning methods that improve the spatio-temporal precision of nowcasting of extreme precipitation events. This work builds on recent advances in deep learning for nowcasting, adding methods aimed at improving nowcasting with ensembles and trained on novel data resources. A new curated multi-year radar scan dataset (TAASRAD19) is introduced, containing more than 350,000 labelled precipitation records over 10 years, to provide a baseline benchmark and foster reproducibility of machine learning modeling. A TrajGRU model is applied to TAASRAD19 and implemented in an operational prototype. The thesis also introduces a novel method for fast analog search based on manifold learning: the tool leverages the entire dataset history in less than 5 seconds and demonstrates the feasibility of predictive ensembles. In the final part of the thesis, the new deep learning architecture ConvSG, based on stacked generalization, is presented, introducing novel concepts for deep learning in precipitation nowcasting: ConvSG is specifically designed to improve predictions of extreme precipitation regimes over published methods, and shows a 117% skill improvement on extreme rain regimes over a single member. Moreover, ConvSG shows superior or equal skill compared to Lagrangian extrapolation models for all rain rates, achieving a 49% average improvement in predictive skill over extrapolation on the higher precipitation regimes.
APA, Harvard, Vancouver, ISO, and other styles
8

Franch, Gabriele. "Deep Learning for Spatiotemporal Nowcasting." Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/295096.

Full text
Abstract:
Nowcasting – short-term forecasting using current observations – is a key challenge that human activities have to face on a daily basis. We heavily rely on short-term meteorological predictions in domains such as aviation, agriculture, mobility, and energy production. One of the most important and challenging tasks for meteorology is the nowcasting of extreme events, whose anticipation is highly needed to mitigate risk in terms of social or economic costs and human safety. The goal of this thesis is to contribute new machine learning methods that improve the spatio-temporal precision of nowcasting of extreme precipitation events. This work relies on recent advances in deep learning for nowcasting, adding methods targeted at improving nowcasting using ensembles and trained on novel original data resources. A new curated multi-year radar scan dataset (TAASRAD19) is introduced, containing more than 350,000 labelled precipitation records over 10 years, to provide a baseline benchmark and foster reproducibility of machine learning modeling. A TrajGRU model is applied to TAASRAD19 and implemented in an operational prototype. The thesis also introduces a novel method for fast analog search based on manifold learning: the tool searches the entire dataset history in less than 5 seconds and demonstrates the feasibility of predictive ensembles. In the final part of the thesis, the new deep learning architecture ConvSG, based on stacked generalization, is presented, introducing novel concepts for deep learning in precipitation nowcasting: ConvSG is specifically designed to improve predictions of extreme precipitation regimes over published methods, and shows a 117% skill improvement on extreme rain regimes over a single member. Moreover, ConvSG shows superior or equal skill compared to Lagrangian extrapolation models for all rain rates, achieving a 49% average improvement in predictive skill over extrapolation on the higher precipitation regimes.
APA, Harvard, Vancouver, ISO, and other styles
9

Ekström, Linus, and Andreas Augustsson. "A comparative study of text classification models on invoices : The feasibility of different machine learning algorithms and their accuracy." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15647.

Full text
Abstract:
Text classification is becoming more important for companies in a world where an increasing amount of digital data is made available. The aim is to research whether five different machine learning algorithms can be used to automate the classification of invoice data, and to see which one achieves the highest accuracy. In a later stage, algorithms are combined in an attempt to achieve higher results. N-grams are used, and results are compared in the form of total classification accuracy for each algorithm. The Python library scikit-learn, which implements the chosen algorithms, was used. Data was collected and generated to represent the data present on a real invoice after extraction. Results from this thesis show that it is possible to use machine learning for this type of problem. The highest-scoring algorithm (LinearSVC from scikit-learn) classifies 86% of all samples correctly, a margin of 16% above the acceptable level of 70%.
APA, Harvard, Vancouver, ISO, and other styles
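The pipeline this thesis describes (TF-IDF n-gram features feeding scikit-learn's LinearSVC) can be sketched roughly as follows. The toy invoice-field strings and labels below are invented stand-ins, since the thesis's generated dataset is not reproduced here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented invoice-like snippets standing in for extracted invoice fields.
texts = [
    "invoice number 1042", "total amount 99.50 EUR", "due date 2018-05-01",
    "invoice no 2291", "amount due 120.00 EUR", "payment date 2018-06-15",
]
labels = ["id", "amount", "date", "id", "amount", "date"]

# Word n-grams (unigrams and bigrams) feed a linear SVM.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["invoice number 3377"]))
```

On real invoice data the same pipeline would simply be fitted on the extracted field texts and their assigned classes.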
10

Lundberg, Jacob. "Resource Efficient Representation of Machine Learning Models : investigating optimization options for decision trees in embedded systems." Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162013.

Full text
Abstract:
Combining embedded systems and machine learning models is an exciting prospect. However, to target even the embedded systems with the most stringent resource requirements, the models have to be designed with care so as not to overwhelm them. Decision tree ensembles are targeted in this thesis. A benchmark model is created with LightGBM, a popular framework for gradient-boosted decision trees. This model is first transformed and regularized with RuleFit, a LASSO regression framework. It is then further optimized with quantization and weight sharing, techniques used when compressing neural networks. The entire process is combined into a novel framework called ESRule. The data used comes from the domain of frequency measurements in cellular networks, where there is a clear use case for embedded systems running the resulting resource-optimized models. Compared with LightGBM, ESRule uses 72× less internal memory on average while simultaneously increasing predictive performance. The models use 4 kilobytes on average. The serialized variant of ESRule uses 104× less hard disk space than LightGBM. ESRule is also clearly faster at predicting a single sample.
APA, Harvard, Vancouver, ISO, and other styles
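The weight-sharing step this thesis borrows from neural network compression can be illustrated in isolation. The sketch below is a deliberate simplification (plain 1-D k-means over simulated leaf values, NumPy only), not the ESRule implementation:

```python
import numpy as np

def share_weights(leaf_values, k, iters=20):
    """Replace each leaf value by the nearest of k shared centroids (1-D
    k-means), so the model stores only k floats plus small integer codes."""
    vals = np.asarray(leaf_values, dtype=float)
    centroids = np.quantile(vals, np.linspace(0, 1, k))  # quantile init
    for _ in range(iters):
        codes = np.argmin(np.abs(vals[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(codes == j):
                centroids[j] = vals[codes == j].mean()
    codes = np.argmin(np.abs(vals[:, None] - centroids[None, :]), axis=1)
    return centroids, codes

rng = np.random.default_rng(0)
leaves = rng.normal(size=1000)            # stand-in for ensemble leaf values
centroids, codes = share_weights(leaves, k=16)
approx = centroids[codes]                 # reconstructed (quantised) leaves
print(float(np.mean(np.abs(approx - leaves))))
```

Storing 16 shared floats plus 4-bit codes in place of 1000 full-precision leaf values is where the memory saving comes from, at the cost of a small quantisation error.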

Books on the topic "ENSEMBLE LEARNING MODELS"

1

Kyriakides, George, and Konstantinos G. Margaritis. Hands-On Ensemble Learning with Python: Build Highly Optimized Ensemble Machine Learning Models Using Scikit-Learn and Keras. Packt Publishing, Limited, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Head, Paul D. The Choral Experience. Edited by Frank Abrahams and Paul D. Head. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199373369.013.3.

Full text
Abstract:
Much has changed in the choral rehearsal room over the past two generations, particularly in regard to the role the choral conductor assumes—or commands—in the rehearsal process. This chapter discusses the ever-evolving stereotypical roles of the conductor, while examining alternatives to traditional leadership models with particular emphasis on the encouragement of student engagement and peer-based learning. In addition to the facilitation of collaborative learning exercises, the chapter outlines a specific process of written interaction with the choral ensemble. This section is inspired by the renowned “Dear People” letters of Robert Shaw. Finally, in response to the recently revised National Standards for Music Education in the United States, the author discusses possible implementation of the Standards in a performance-based classroom. In the shadow of the relatively recent phenomenon of collegiate a cappella groups, these student ensembles have created a new paradigm for peer-led instruction.
APA, Harvard, Vancouver, ISO, and other styles
3

Summerson, Samantha R., and Caleb Kemere. Multi-electrode Recording of Neural Activity in Awake Behaving Animals. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199939800.003.0004.

Full text
Abstract:
Systems neuroscience is being revolutionized by the ability to record the activity of large numbers of neurons simultaneously. Chronic recording with multi-electrode arrays in animal models is a critical tool for studies of learning and memory, sensory processing, motor control, emotion, and decision-making. The experimental process for gathering large amounts of neural ensemble data can be very time-consuming; however, the resulting data can be incredibly rich. We present a detailed overview of the process of acquiring multichannel neural data, with a particular focus on chronic tetrode recording in rodents.
APA, Harvard, Vancouver, ISO, and other styles
4

Wheelahan, Leesa. Rethinking Skills Development. Edited by John Buchanan, David Finegold, Ken Mayhew, and Chris Warhurst. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199655366.013.30.

Full text
Abstract:
This article critiques models of competency-based training in vocational education and training in Anglophone countries and contrasts it to ‘kompetenz’ in Germanic countries. It identifies six key problems with Competency-Based Training (CBT): first, CBT is tied to specific ensembles of workplace roles and requirements; second, the outcomes of learning are tied to descriptions of work as it currently exists; third, CBT does not provide adequate access to underpinning knowledge; fourth, CBT is based on the simplistic and behaviourist notion that processes of learning are identical with the skills that are to be learnt; fifth, the credibility of a qualification is based on trust, not what it says a person can do; and sixth, CBT is based on a notion of the human actor as the supervised worker. The article argues generic skills are not the alternative, and it uses a ‘modified’ version of the capabilities approach as the conceptual basis for qualifications.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "ENSEMBLE LEARNING MODELS"

1

Coqueret, Guillaume, and Tony Guida. "Ensemble models." In Machine Learning for Factor Investing, 173–86. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003121596-14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kumar, Alok, and Mayank Jain. "Mixing Models." In Ensemble Learning for AI Developers, 31–48. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5940-5_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bisong, Ekaba. "Ensemble Methods." In Building Machine Learning and Deep Learning Models on Google Cloud Platform, 269–86. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4470-8_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hennicker, Rolf, Alexander Knapp, and Martin Wirsing. "Epistemic Ensembles." In Leveraging Applications of Formal Methods, Verification and Validation. Adaptation and Learning, 110–26. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19759-8_8.

Full text
Abstract:
An ensemble consists of a set of computing entities which collaborate to reach common goals. We introduce epistemic ensembles that use shared knowledge for collaboration between agents. Collaboration is achieved by different kinds of knowledge announcements. For specifying epistemic ensemble behaviours we use formulas of dynamic logic with compound ensemble actions. Our semantics relies on an epistemic notion of ensemble transition systems as behavioural models. These transition systems describe control flow over epistemic states for expressing knowledge-based collaboration of agents. Specifications are implemented by epistemic processes that are composed in parallel to form ensemble realisations. We give a formal operational semantics of these processes that generates an epistemic ensemble transition system. A realisation is correct w.r.t. an ensemble specification if its semantics is a model of the specification.
APA, Harvard, Vancouver, ISO, and other styles
5

Juniper, Matthew P. "Machine Learning for Thermoacoustics." In Lecture Notes in Energy, 307–37. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-16248-0_11.

Full text
Abstract:
This chapter demonstrates three promising ways to combine machine learning with physics-based modelling in order to model, forecast, and avoid thermoacoustic instability. The first method assimilates experimental data into candidate physics-based models and is demonstrated on a Rijke tube. It uses Bayesian inference to select the most likely model, turning qualitatively-accurate models into quantitatively-accurate models that can extrapolate, which can be combined powerfully with automated design. The second method assimilates experimental data into level set numerical simulations of a premixed bunsen flame and a bluff-body stabilized flame. This uses either an Ensemble Kalman filter, which requires no prior simulation but is slow, or a Bayesian Neural Network Ensemble, which is fast but requires prior simulation. This method deduces the simulations’ parameters that best reproduce the data and quantifies their uncertainties. The third method recognises precursors of thermoacoustic instability from pressure measurements. It is demonstrated on a turbulent bunsen flame, an industrial fuel spray nozzle, and full-scale aeroplane engines. With this method, Bayesian Neural Network Ensembles determine how far each system is from instability. The trained BayNNEs outperform physics-based methods on a given system. This method will be useful for practical avoidance of thermoacoustic instability.
APA, Harvard, Vancouver, ISO, and other styles
6

Brazdil, Pavel, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren. "Metalearning in Ensemble Methods." In Metalearning, 189–200. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-67024-5_10.

Full text
Abstract:
This chapter discusses some approaches that exploit metalearning methods in ensemble learning. It starts by presenting a set of issues, such as the ensemble method used, which affect the process of ensemble learning and the resulting ensemble. In this chapter we discuss various lines of research that were followed. Some approaches seek an ensemble-based solution for the whole dataset, others for individual instances. Regarding the first group, we focus on metalearning in the construction, pruning and integration phase. Modeling the interdependence of models plays an important part in this process. In the second group, the dynamic selection of models is carried out for each instance. A separate section is dedicated to hierarchical ensembles and some methods used in their design. As this area involves potentially very large configuration spaces, recourse to advanced methods, including metalearning, is advantageous. It can be exploited to define the competence regions of different models and the dependencies between them.
APA, Harvard, Vancouver, ISO, and other styles
7

Dritsas, Elias, Maria Trigka, and Phivos Mylonas. "Ensemble Machine Learning Models for Breast Cancer Identification." In IFIP Advances in Information and Communication Technology, 303–11. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-34171-7_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Di Napoli, Mariano, Giuseppe Bausilio, Andrea Cevasco, Pierluigi Confuorto, Andrea Mandarino, and Domenico Calcaterra. "Landslide Susceptibility Assessment by Ensemble-Based Machine Learning Models." In Understanding and Reducing Landslide Disaster Risk, 225–31. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60227-7_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mokeev, Vladimir. "An Ensemble of Learning Machine Models for Plant Recognition." In Communications in Computer and Information Science, 256–62. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-39575-9_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Singh, Divjot, and Ashutosh Mishra. "Early Prediction of Alzheimer’s Disease Using Ensemble Learning Models." In Springer Proceedings in Mathematics & Statistics, 459–77. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-15175-0_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "ENSEMBLE LEARNING MODELS"

1

Celikyilmaz, Asli, and Dilek Hakkani-Tur. "Investigation of ensemble models for sequence learning." In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kordik, Pavel, and Jan Cerny. "Building predictive models in two stages with meta-learning templates optimized by genetic programming." In 2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL). IEEE, 2014. http://dx.doi.org/10.1109/ciel.2014.7015740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kotary, James, Vincenzo Di Vito, and Ferdinando Fioretto. "Differentiable Model Selection for Ensemble Learning." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/217.

Full text
Abstract:
Model selection is a strategy aimed at creating accurate and robust models by identifying the optimal model for classifying any particular input sample. This paper proposes a novel framework for differentiable selection of groups of models by integrating machine learning and combinatorial optimization. The framework is tailored for ensemble learning with a strategy that learns to combine the predictions of appropriately selected pre-trained ensemble models. It does so by modeling the ensemble learning task as a differentiable selection program trained end-to-end over a pretrained ensemble to optimize task performance. The proposed framework demonstrates its versatility and effectiveness, outperforming conventional and advanced consensus rules across a variety of classification tasks.
APA, Harvard, Vancouver, ISO, and other styles
4

K P, Saranyanath, Wei Shi, and Jean-Pierre Corriveau. "Cyberbullying Detection using Ensemble Method." In 3rd International Conference on Data Science and Machine Learning (DSML 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121507.

Full text
Abstract:
Cyberbullying is a form of bullying that occurs across social media platforms using electronic messages. This paper proposes three approaches and five models to identify cyberbullying on a generated social media dataset derived from multiple online platforms. Our initial approach consists of enhancing Support Vector Machines. Our second approach is based on DistilBERT, a lighter and faster Transformer model than BERT. Stacking the first three models, we obtain two further ensemble models. Contrasting the ensemble models with the other three, we observe that the ensemble models outperform the base models on all evaluation metrics except precision. While the highest accuracy, 89.6%, was obtained using an ensemble model, the lowest accuracy, 85.53%, was achieved by the SVM model. The DistilBERT model exhibited the highest precision, at 91.17%. The model developed using features of different granularities outperformed the simple TF-IDF model.
APA, Harvard, Vancouver, ISO, and other styles
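Stacking base classifiers under a meta-learner, as this paper does with its SVM-based models, can be sketched with scikit-learn's StackingClassifier. The synthetic features and the particular base learners below are illustrative stand-ins, not the paper's models:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic numeric features standing in for vectorised social-media posts.
X, y = make_classification(n_samples=600, n_features=20, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

# Three base classifiers; their out-of-fold predictions train a meta-learner.
base = [("svm", SVC(probability=True)),
        ("nb", GaussianNB()),
        ("lr", LogisticRegression(max_iter=1000))]
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
stack.fit(X_tr, y_tr)
acc = accuracy_score(y_te, stack.predict(X_te))
print(round(acc, 3))
```

The meta-learner sees only the base models' predictions, which is what lets a stack outperform any single member when the members err in different ways.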
5

Cheung, Catherine, and Zouhair Hamaimou. "Ensemble Integration Methods for Load Estimation." In Vertical Flight Society 78th Annual Forum & Technology Display. The Vertical Flight Society, 2022. http://dx.doi.org/10.4050/f-0078-2022-17553.

Full text
Abstract:
Helicopter component load estimation can be achieved through a variety of machine learning techniques and algorithms. To increase confidence in the load estimation process, ensemble methods are employed, combining multiple individual load estimators to increase predictive stability across flights and add robustness to noisy data. In this work, several load estimation methods are applied to a variety of machine learning algorithms to build a large library of individual load estimation models for main rotor yoke loads from 28 flight state and control system parameters. This paper explores several ensemble integration methods, including simple averaging, weighted averaging using rank sum, and forward selection. From the 426 individual models, the top 25 models were selected based on four ranking metrics: root mean squared error (RMSE), correlation coefficient, and the interquartile ranges of these two metrics. All ensembles achieved improved performance on these four metrics compared to the best individual model, with the forward selection ensemble obtaining the lowest RMSE, the highest correlation, and visually the closest load signal prediction of all models.
APA, Harvard, Vancouver, ISO, and other styles
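The three integration schemes named in the abstract (simple averaging, rank-sum weighted averaging, and forward selection) can be sketched on simulated estimator outputs. The toy load signal and noise levels are invented, and the forward-selection loop follows the common greedy selection-with-replacement procedure rather than the authors' exact implementation:

```python
import numpy as np

def forward_selection(preds, y, max_members=5):
    """Greedily add (with replacement) the member whose inclusion most
    lowers the RMSE of the running ensemble average."""
    chosen = []
    for _ in range(max_members):
        best_i, best_rmse = None, np.inf
        for i in range(preds.shape[0]):
            trial = np.mean(preds[chosen + [i]], axis=0)
            rmse = np.sqrt(np.mean((trial - y) ** 2))
            if rmse < best_rmse:
                best_i, best_rmse = i, rmse
        chosen.append(best_i)
    return np.mean(preds[chosen], axis=0)

rng = np.random.default_rng(1)
y = np.sin(np.linspace(0, 6, 200))        # stand-in load signal
# Simulated library of load estimators: truth plus varying amounts of noise.
preds = np.stack([y + rng.normal(scale=s, size=y.size)
                  for s in (0.05, 0.1, 0.2, 0.4, 0.8)])

simple = preds.mean(axis=0)
# Rank-sum weighting: better-ranked (lower-RMSE) members get larger weights.
rmses = np.sqrt(np.mean((preds - y) ** 2, axis=1))
ranks = np.argsort(np.argsort(rmses))     # 0 = best member
counts = len(rmses) - ranks
w = counts / counts.sum()
weighted = np.average(preds, axis=0, weights=w)
fwd = forward_selection(preds, y)

for name, p in [("simple", simple), ("rank-sum", weighted), ("forward", fwd)]:
    print(name, round(float(np.sqrt(np.mean((p - y) ** 2))), 4))
```

With members of unequal quality, the rank-sum and forward-selection ensembles down-weight or exclude the noisy members, which is why they beat the plain average here.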
6

Hoppe, F., and G. Sommer. "Ensemble Learning for Hierarchies of Locally Arranged Models." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.247246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Byeon, Yeong-Hyeon, Sung-Bum Pan, and Keun-Chang Kwak. "Ensemble Deep Learning Models for ECG-based Biometrics." In 2020 Cybernetics & Informatics (K&I). IEEE, 2020. http://dx.doi.org/10.1109/ki48306.2020.9039871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

K, Fahmida Minna, and Maya Mohan. "Ensemble Learning Models for Drug Target Interaction Prediction." In 2022 International Conference on Applied Artificial Intelligence and Computing (ICAAIC). IEEE, 2022. http://dx.doi.org/10.1109/icaaic53929.2022.9793081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Panyushkin, Georgy, and Vitalii Varkentin. "Network Traffic and Ensemble Models in Machine Learning." In 2021 International Conference on Quality Management, Transport and Information Security, Information Technologies (IT&QM&IS). IEEE, 2021. http://dx.doi.org/10.1109/itqmis53292.2021.9642907.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

E M, Roopa Devi, R. Shanthakumari, R. Rajadevi, Anoj Roshan M, Hari V, and Lakshmanan S. "Forecasting Air Quality Pollutants using Ensemble Learning Models." In 2023 2nd International Conference on Vision Towards Emerging Trends in Communication and Networking Technologies (ViTECoN). IEEE, 2023. http://dx.doi.org/10.1109/vitecon58111.2023.10157087.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "ENSEMBLE LEARNING MODELS"

1

de Luis, Mercedes, Emilio Rodríguez, and Diego Torres. Machine learning applied to active fixed-income portfolio management: a Lasso logit approach. Madrid: Banco de España, September 2023. http://dx.doi.org/10.53479/33560.

Full text
Abstract:
The use of quantitative methods constitutes a standard component of the institutional investors’ portfolio management toolkit. In the last decade, several empirical studies have employed probabilistic or classification models to predict stock market excess returns, model bond ratings and default probabilities, and forecast yield curves. To the authors’ knowledge, little research exists on their application to active fixed-income management. This paper contributes to filling this gap by comparing a machine learning algorithm, the Lasso logit regression, with a passive (buy-and-hold) investment strategy in the construction of a duration management model for high-grade bond portfolios, specifically focusing on US Treasury bonds. Additionally, a two-step procedure is proposed, together with a simple ensemble averaging aimed at minimising the potential overfitting of traditional machine learning algorithms. A method to select thresholds that translate probabilities into signals based on conditional probability distributions is also introduced.
APA, Harvard, Vancouver, ISO, and other styles
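A minimal sketch of the core ingredient, a Lasso logit (L1-penalised logistic regression) that zeroes out uninformative predictors and whose probabilities are thresholded into signals, is shown below on synthetic features. The data, penalty strength, and threshold are illustrative assumptions, not the paper's specification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))            # stand-in macro/market features
# Only the first two features drive the (binary) excess-return direction.
latent = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)
y = (latent > 0).astype(int)

# Lasso logit: the L1 penalty shrinks irrelevant coefficients to exactly
# zero, so the model doubles as a variable selector. C is an illustrative
# choice; the paper selects its own regularisation.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
clf.fit(X, y)
print(np.round(clf.coef_[0], 2))

# Thresholding predicted probabilities turns the model into a long/short signal.
signal = np.where(clf.predict_proba(X)[:, 1] > 0.5, 1, -1)
```

Inspecting `clf.coef_` shows which predictors survive the penalty; the surviving sparse model is then easier to interpret and less prone to overfitting.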
2

Hart, Carl R., D. Keith Wilson, Chris L. Pettit, and Edward T. Nykaza. Machine-Learning of Long-Range Sound Propagation Through Simulated Atmospheric Turbulence. U.S. Army Engineer Research and Development Center, July 2021. http://dx.doi.org/10.21079/11681/41182.

Full text
Abstract:
Conventional numerical methods can capture the inherent variability of long-range outdoor sound propagation. However, their computational memory and time requirements are high. In contrast, machine-learning models provide very fast predictions by learning from experimental observations or surrogate data. Yet it is unknown what type of surrogate data is most suitable for machine learning. This study used a Crank-Nicholson parabolic equation (CNPE) to generate the surrogate data. The CNPE input data were sampled by the Latin hypercube technique. Two separate datasets comprised 5000 samples of model input. The first dataset consisted of transmission loss (TL) fields for single realizations of turbulence. The second dataset consisted of average TL fields for 64 realizations of turbulence. Three machine-learning algorithms were applied to each dataset, namely ensemble decision trees, neural networks, and cluster-weighted models. Observational data come from a long-range (out to 8 km) sound propagation experiment. In comparison to the experimental observations, regression predictions have a median absolute error of 5–7 dB. Surrogate data quality depends on an accurate characterization of refractive and scattering conditions. Predictions obtained through a single realization of turbulence agree better with the experimental observations.
APA, Harvard, Vancouver, ISO, and other styles
3

Lasko, Kristofer, and Elena Sava. Semi-automated land cover mapping using an ensemble of support vector machines with moderate resolution imagery integrated into a custom decision support tool. Engineer Research and Development Center (U.S.), November 2021. http://dx.doi.org/10.21079/11681/42402.

Full text
Abstract:
Land cover type is a fundamental remote sensing-derived variable for terrain analysis and environmental mapping applications. The currently available products are produced only for a single season or a specific year. Some of these products have a coarse resolution and quickly become outdated, as land cover type can undergo significant change over a short time period. In order to enable on-demand generation of timely and accurate land cover type products, we developed a sensor-agnostic framework leveraging pre-trained machine learning models. We also generated land cover models for Sentinel-2 (20m) and Landsat 8 imagery (30m) using either a single date of imagery or two dates of imagery for mapping land cover type. The two-date model includes 11 land cover type classes, whereas the single-date model contains 6 classes. The models’ overall accuracies were 84% (Sentinel-2 single date), 82% (Sentinel-2 two date), and 86% (Landsat 8 two date) across the continental United States. The three different models were built into an ArcGIS Pro Python toolbox to enable a semi-automated workflow for end users to generate their own land cover type maps on demand. The toolboxes were built using parallel processing and image-splitting techniques to enable faster computation and for use on less-powerful machines.
APA, Harvard, Vancouver, ISO, and other styles
4

Pettit, Chris, and D. Wilson. A physics-informed neural network for sound propagation in the atmospheric boundary layer. Engineer Research and Development Center (U.S.), June 2021. http://dx.doi.org/10.21079/11681/41034.

Full text
Abstract:
We describe what we believe is the first effort to develop a physics-informed neural network (PINN) to predict sound propagation through the atmospheric boundary layer. The PINN is a recent innovation in the application of deep learning to simulate physics. The motivation is to combine the strengths of data-driven models and physics models, thereby producing a regularized surrogate model using less data than a purely data-driven model. In a PINN, the data-driven loss function is augmented with penalty terms for deviations from the underlying physics, e.g., a governing equation or a boundary condition. Training data are obtained from Crank-Nicholson solutions of the parabolic equation with homogeneous ground impedance and Monin-Obukhov similarity theory for the effective sound speed in the moving atmosphere. Training data are random samples from an ensemble of solutions for combinations of parameters governing the impedance and the effective sound speed. PINN output is processed to produce realizations of transmission loss that look much like the Crank-Nicholson solutions. We describe the framework for implementing a PINN for outdoor sound, and we outline practical matters related to network architecture, the size of the training set, the physics-informed loss function, and the challenge of managing the spatial complexity of the complex pressure.
APA, Harvard, Vancouver, ISO, and other styles
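The composite PINN loss described above (data misfit plus a physics penalty) can be illustrated on a much simpler problem than the report's parabolic equation. Here a 1-D Helmholtz equation u'' + k²u = 0 and a two-parameter ansatz are assumed purely for illustration:

```python
import numpy as np

k = 2.0
x = np.linspace(0.0, np.pi, 50)
# "Observations": a noisy exact solution u(x) = sin(k x) of u'' + k^2 u = 0.
data = np.sin(k * x) + 0.01 * np.random.default_rng(3).normal(size=x.size)

def pinn_loss(params, weight=1.0):
    """Data misfit plus a physics penalty on the Helmholtz residual,
    mirroring the augmented loss structure described in the abstract."""
    a, b = params
    u = a * np.sin(k * x) + b * np.cos(k * x)
    # Second derivative via central finite differences (interior points only).
    uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / (x[1] - x[0]) ** 2
    data_loss = np.mean((u - data) ** 2)
    physics_loss = np.mean((uxx + k**2 * u[1:-1]) ** 2)
    return data_loss + weight * physics_loss

print(pinn_loss((1.0, 0.0)), pinn_loss((0.5, 0.5)))
```

In an actual PINN the candidate solution is a neural network and the derivatives come from automatic differentiation, but the loss has exactly this two-term structure, with `weight` balancing data fidelity against physics consistency.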
5

Pedersen, Gjertrud. Symphonies Reframed. Norges Musikkhøgskole, August 2018. http://dx.doi.org/10.22501/nmh-ar.481294.

Full text
Abstract:
Symphonies Reframed recreates symphonies as chamber music. The project aims to capture the features that are unique to chamber music, at the juncture between the “soloistic small” and the “orchestral large”. A new ensemble model, the “triharmonic ensemble” with 7-9 musicians, has been created to serve this purpose. By choosing this size range, we are looking to facilitate group interplay without the need of a conductor. We also want to facilitate a richness of sound colours by involving piano, strings and winds. The exact combination of instruments is chosen in accordance with the features of the original score. The ensemble setup may take two forms: nonet with piano, wind quartet and string quartet (with double bass) or septet with piano, wind trio and string trio. As a group, these instruments have a rich tonal range with continuous and partly overlapping registers. This paper will illuminate three core questions: What artistic features emerge when changing from large orchestral structures to mid-sized chamber groups? How do the performers reflect on their musical roles in the chamber ensemble? What educational value might the reframing hold? Since its inception in 2014, the project has evolved to include works with vocal, choral and soloistic parts, as well as sonata literature. Ensembles of students and professors have rehearsed, interpreted and performed our transcriptions of works by Brahms, Schumann and Mozart. We have also carried out interviews and critical discussions with the students, on their experiences of the concrete projects and on their reflections on their own learning processes in general. Chamber ensembles and orchestras are exponents of different original repertoire. The difference in artistic output thus hinges upon both ensemble structure and the composition at hand. Symphonies Reframed seeks to enable an assessment of the qualities that are specific to the performing corpus and not beholden to any particular piece of music.
Our transcriptions have enabled comparisons and reflections, using original compositions as a reference point. Some of our ensemble musicians have had first-hand experience with performing the original works as well. Others have encountered the works for the first time through our productions. This has enabled a multi-angled approach to the three central themes of our research. This text is produced in 2018.
APA, Harvard, Vancouver, ISO, and other styles
6

Maher, Nicola, Pedro DiNezio, Antonietta Capotondi, and Jennifer Kay. Identifying precursors of daily to seasonal hydrological extremes over the USA using deep learning techniques and climate model ensembles. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Douglas, Thomas, and Caiyun Zhang. Machine learning analyses of remote sensing measurements establish strong relationships between vegetation and snow depth in the boreal forest of Interior Alaska. Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41222.

Full text
Abstract:
The seasonal snowpack plays a critical role in Arctic and boreal hydrologic and ecologic processes. Though snow depth can differ from one season to another, there are repeated relationships between ecotype and snowpack depth. Alterations to the seasonal snowpack, which plays a critical role in regulating wintertime soil thermal conditions, have major ramifications for near-surface permafrost. Therefore, relationships between vegetation and snowpack depth are critical for identifying how present and projected future changes in winter season processes or land cover will affect permafrost. Vegetation and snow cover areal extent can be assessed rapidly over large spatial scales with remote sensing methods; however, measuring snow depth remotely has proven difficult. This makes snow depth–vegetation relationships a potential means of assessing snowpack characteristics. In this study, we combined airborne hyperspectral and LiDAR data with machine learning methods to characterize relationships between ecotype and end-of-winter snowpack depth. Our results show hyperspectral measurements account for two thirds or more of the variance in the relationship between ecotype and snow depth. An ensemble analysis of model outputs using hyperspectral and LiDAR measurements yields the strongest relationships between ecotype and snow depth. Our results can be applied across the boreal biome to model the coupling effects between vegetation and snowpack depth.
APA, Harvard, Vancouver, ISO, and other styles