
Journal articles on the topic 'Ensemble non dominé'

Consult the top 50 journal articles for your research on the topic 'Ensemble non dominé.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Moffette, David. "Propositions pour une sociologie pragmatique des frontières : multiples acteurs, pratiques spatio-temporelles et jeux de juridictions." Cahiers de recherche sociologique, no. 59-60 (June 15, 2016): 61–78. http://dx.doi.org/10.7202/1036786ar.

Abstract:
Although sociologists have done considerable work on related objects, the study of borders remains a research field dominated by geographers and political scientists. It is they who proposed that the border be considered not as a spatially situated physical object, but rather as a set of practices carried out by dispersed actors. We argue that by adopting a pragmatic approach to borders that emphasizes the multiplicity of actors involved, their socio-temporal practices, and their jurisdictional games, sociologists can push the limits of this field of research. Moreover, by encouraging sociologists to reflect on the spatial, temporal, and jurisdictional dimensions of social practices, the "sociology of borders" proposed here can facilitate a renewal of sociological analysis and help us not only to avoid reifying the social, but also to avoid distinguishing it a priori from the spatial, the temporal, and the juridical.
2

Hsu, Kuo-Wei. "A Theoretical Analysis of Why Hybrid Ensembles Work." Computational Intelligence and Neuroscience 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/1930702.

Abstract:
Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use a mixture of two different types of classification algorithms to create a hybrid ensemble. Why such an ensemble works, however, has remained an open question. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created with the decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm often used to create non-hybrid ensembles. Through this paper, we therefore provide a complement to the theoretical foundation of creating and using hybrid ensembles.
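The two-algorithm hybrid described in this abstract can be illustrated with a small, hedged sketch (scikit-learn names; the synthetic dataset and settings are illustrative assumptions, not the paper's setup): a soft-voting combination of a decision tree and naïve Bayes.

```python
# Illustrative sketch, NOT the paper's code: a hybrid ensemble mixing
# two different learning algorithms (decision tree + naive Bayes).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Diversity here comes from mixing two algorithm families,
# not just from resampling the data for one algorithm.
hybrid = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",  # average the predicted class probabilities
)

score = cross_val_score(hybrid, X, y, cv=5).mean()
print(f"5-fold CV accuracy: {score:.3f}")
```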
3

Bogaert, Matthias, and Lex Delaere. "Ensemble Methods in Customer Churn Prediction: A Comparative Analysis of the State-of-the-Art." Mathematics 11, no. 5 (February 24, 2023): 1137. http://dx.doi.org/10.3390/math11051137.

Abstract:
In the past, several single classifiers as well as homogeneous and heterogeneous ensembles have been proposed to detect the customers who are most likely to churn. Despite the popularity and accuracy of heterogeneous ensembles in various domains, they have not yet been taken up in customer churn prediction. Moreover, there are other developments at the level of performance evaluation and model comparison that have not been introduced in a systematic way. Therefore, the aim of this study is to perform a large-scale benchmark study in customer churn prediction implementing these novel methods. To do so, we benchmark 33 classifiers, including 6 single classifiers, 14 homogeneous ensembles, and 13 heterogeneous ensembles, across 11 datasets. Our findings indicate that heterogeneous ensembles are consistently ranked higher than homogeneous ensembles and single classifiers. A heterogeneous ensemble with simulated annealing classifier selection is ranked the highest in terms of AUC and expected maximum profits. For accuracy, F1 measure and top-decile lift, a heterogeneous ensemble optimized by non-negative binomial likelihood and a stacked heterogeneous ensemble are, respectively, the top-ranked classifiers. Our study contributes to the literature by being the first to include such an extensive set of classifiers, performance metrics, and statistical tests in a benchmark study of customer churn.
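The stacked heterogeneous ensemble mentioned above can be sketched minimally as follows (the base learners and synthetic churn-like data are assumptions for illustration, not the benchmark's actual 33-classifier setup):

```python
# Illustrative sketch, NOT the study's setup: a stacked heterogeneous
# ensemble on an imbalanced, churn-like binary classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Imbalanced classes mimic the churn setting (few churners).
X, y = make_classification(n_samples=600, weights=[0.8, 0.2], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# Heterogeneous base learners; a meta-learner stacks their predictions.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=1)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)

# AUC is one of the metrics the benchmark ranks classifiers by.
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```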
4

Kioutsioukis, Ioannis, Ulas Im, Efisio Solazzo, Roberto Bianconi, Alba Badia, Alessandra Balzarini, Rocío Baró, et al. "Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data." Atmospheric Chemistry and Physics 16, no. 24 (December 20, 2016): 15629–52. http://dx.doi.org/10.5194/acp-16-15629-2016.

Abstract:
Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as to those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or on constraining the ensemble to those members that meet certain conditions in the time or frequency domain. The two datasets were created for the first and second phases of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground-level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with skill superior to that of the single models and the ensemble mean. Verification statistics show that the deterministic models simulate O3 better than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill than each station's best deterministic model at no more than 60% of the sites, indicating a combination of members with unbalanced skill difference and error dependence for the rest. Promoting the right amount of accuracy and diversity within the ensemble results in an average additional skill of up to 31% compared to using the full ensemble in an unconditional way.
The skill improvements were higher for O3 and lower for PM10, associated with the extent of potential changes in the joint distribution of accuracy and diversity in the ensembles. The skill enhancement was superior using the weighting scheme, but the training period required to acquire representative weights was longer compared to the sub-selecting schemes. Further development of the method is discussed in the conclusion.
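The weighting idea described in the abstract (optimum weights on ensemble members versus an unconditional mean) can be illustrated with toy data; the inverse-MSE weights below are an illustrative assumption, not the AQMEII implementation:

```python
# Toy numpy sketch, NOT AQMEII code: compare an unconditional ensemble
# mean with a skill-weighted mean, where weights are proportional to the
# inverse of each member's training-period mean squared error.
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 4 * np.pi, 200))   # stand-in "observations"
biases = np.array([0.05, 0.3, -0.6, 1.0])        # members of unequal quality
members = truth + biases[:, None] + 0.1 * rng.standard_normal((4, 200))

train, test = slice(0, 100), slice(100, 200)     # training vs forecast period
mse = ((members[:, train] - truth[train]) ** 2).mean(axis=1)
w = (1.0 / mse) / (1.0 / mse).sum()              # normalized inverse-MSE weights

plain = members[:, test].mean(axis=0)            # unconditional ensemble mean
weighted = (w[:, None] * members[:, test]).sum(axis=0)

rmse = lambda f: float(np.sqrt(((f - truth[test]) ** 2).mean()))
print(f"unweighted RMSE: {rmse(plain):.3f}, weighted RMSE: {rmse(weighted):.3f}")
```

Down-weighting the heavily biased members lowers the forecast-period error relative to the plain mean, which is the effect the abstract quantifies as extra skill.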
5

Parshin, Alexander M. "Modulation of Light Transmission in Self-Organized Ensembles of Nematic Domains." Liquid Crystals and their Application 23, no. 4 (December 26, 2023): 49–57. http://dx.doi.org/10.18083/lcappl.2023.4.49.

Abstract:
Using an electric field, the transmission of light passing through self-organized ensembles of nematic domains is studied. The modulation characteristics of ensembles containing domains with non-oriented and magnetic-field-oriented disclination lines are compared. A significant decrease in light scattering is shown when the disclination lines are oriented. The calculated dependences of light transmission on the electric voltage applied to the liquid crystal cells are obtained and presented, and the experimental dependences agree well with them. Oscillations in the light transmission curves are studied. Superpositions of ordinary and extraordinary waves propagating through the domain ensembles and homogeneous planar liquid crystal layers are considered. The spectral characteristics of the ensembles are presented.
6

Karim, Zainoolabadien, and Terence L. van Zyl. "Deep/Transfer Learning with Feature Space Ensemble Networks (FeatSpaceEnsNets) and Average Ensemble Networks (AvgEnsNets) for Change Detection Using DInSAR Sentinel-1 and Optical Sentinel-2 Satellite Data Fusion." Remote Sensing 13, no. 21 (October 31, 2021): 4394. http://dx.doi.org/10.3390/rs13214394.

Abstract:
Differential interferometric synthetic aperture radar (DInSAR) coherence, phase, and displacement are derived from processing SAR images to monitor geological phenomena and urban change. Previously, Sentinel-1 SAR data combined with Sentinel-2 optical imagery have improved classification accuracy in various domains. However, the fusion of Sentinel-1 DInSAR-processed imagery with Sentinel-2 optical imagery has not been thoroughly investigated. Thus, we explored this fusion in urban change detection by creating a verified, balanced binary classification dataset comprising 1440 blobs. Machine learning models using feature descriptors and non-deep learning classifiers, including a two-layer convolutional neural network (ConvNet2), were used as baselines. Transfer learning by feature extraction (TLFE) using various pre-trained models, deep learning from random initialization, and transfer learning by fine-tuning (TLFT) were all evaluated. We introduce a feature space ensemble family (FeatSpaceEnsNet), an average ensemble family (AvgEnsNet), and a hybrid ensemble family (HybridEnsNet) of TLFE neural networks. The FeatSpaceEnsNets combine TLFE features directly in the feature space using logistic regression. AvgEnsNets combine TLFEs at the decision level by aggregation. HybridEnsNets are a combination of FeatSpaceEnsNets and AvgEnsNets. Several FeatSpaceEnsNets, AvgEnsNets, and HybridEnsNets, comprising a heterogeneous mixture of models of different depths and architectures, are defined and evaluated. We show that, in general, TLFE outperforms both TLFT and classic deep learning for the small dataset used, and that larger ensembles of TLFE models do not always improve accuracy. The best performing ensemble is an AvgEnsNet (84.862%) composed of a ResNet50, a ResNeXt50, and an EfficientNet B4. It was matched by a similarly composed FeatSpaceEnsNet whose F1 score was 0.001 lower and whose variance was 0.266 lower. The best performing HybridEnsNet had an accuracy of 84.775%.
All of the ensembles evaluated outperform the best performing single model, ResNet50 with TLFE (83.751%), except for AvgEnsNet 3, AvgEnsNet 6, and FeatSpaceEnsNet 5. Five of the seven similarly composed FeatSpaceEnsNets outperform the corresponding AvgEnsNet.
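The decision-level (AvgEnsNet-style) and feature-space (FeatSpaceEnsNet-style) combination strategies can be contrasted in a small sketch; the shallow models and the use of class probabilities as stand-in "features" are assumptions for illustration, not the paper's deep networks:

```python
# Schematic sketch, NOT the paper's networks: decision-level averaging
# (AvgEnsNet-style) versus combining per-model outputs in a feature
# space with logistic regression (FeatSpaceEnsNet-style).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

m1 = RandomForestClassifier(n_estimators=50, random_state=2).fit(X_tr, y_tr)
m2 = SVC(probability=True, random_state=2).fit(X_tr, y_tr)

# Decision-level ensemble: aggregate (average) class-probability outputs.
avg_prob = (m1.predict_proba(X_te) + m2.predict_proba(X_te)) / 2
avg_acc = (avg_prob.argmax(axis=1) == y_te).mean()

# Feature-space ensemble: concatenate each model's outputs as features
# and fit a logistic regression on top of the combined feature vector.
feats_tr = np.hstack([m1.predict_proba(X_tr), m2.predict_proba(X_tr)])
feats_te = np.hstack([m1.predict_proba(X_te), m2.predict_proba(X_te)])
lr = LogisticRegression().fit(feats_tr, y_tr)
feat_acc = lr.score(feats_te, y_te)
print(f"avg ensemble: {avg_acc:.3f}, feature-space ensemble: {feat_acc:.3f}")
```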
7

Santana-Falcón, Yeray, Pierre Brasseur, Jean Michel Brankart, and Florent Garnier. "Assimilation of chlorophyll data into a stochastic ensemble simulation for the North Atlantic Ocean." Ocean Science 16, no. 5 (October 29, 2020): 1297–315. http://dx.doi.org/10.5194/os-16-1297-2020.

Abstract:
Satellite-derived surface chlorophyll data are assimilated daily into a three-dimensional, 24-member ensemble configuration of an online-coupled NEMO (Nucleus for European Modeling of the Ocean)–PISCES (Pelagic Interaction Scheme of Carbon and Ecosystem Studies) model for the North Atlantic Ocean. A 1-year multivariate assimilation experiment is performed to evaluate the impacts on analyses and forecast ensembles. Our results demonstrate that the integration of the data improves the surface analysis and forecast chlorophyll representation in a major part of the model domain, where the assimilated simulation outperforms the probabilistic skill of a non-assimilated analogous simulation. However, the improvements depend on the reliability of the prior free ensemble. A regional diagnosis shows that surface chlorophyll is overestimated at the northern limit of the subtropical North Atlantic, where the prior ensemble spread does not cover the observations' variability. There, the system cannot deal with corrections that alter the equilibrium between the observed and unobserved state variables, producing instabilities that propagate into the forecast. To alleviate these inconsistencies, a 1-month sensitivity experiment in which the assimilation process is applied only to model fluctuations is performed. The results suggest that this methodology may decrease the effect of corrections on the correlations between state vectors. Overall, the experiments presented here demonstrate the need to refine the description of model uncertainties according to the biogeochemical characteristics of each oceanic region.
8

Kotlarski, S., K. Keuler, O. B. Christensen, A. Colette, M. Déqué, A. Gobiet, K. Goergen, et al. "Regional climate modeling on European scales: a joint standard evaluation of the EURO-CORDEX RCM ensemble." Geoscientific Model Development Discussions 7, no. 1 (January 14, 2014): 217–93. http://dx.doi.org/10.5194/gmdd-7-217-2014.

Abstract:
EURO-CORDEX is an international climate downscaling initiative that aims to provide high-resolution climate scenarios for Europe. Here, an evaluation of the ERA-Interim-driven EURO-CORDEX regional climate model (RCM) ensemble is presented. The study documents the performance of the individual models in representing the basic spatio-temporal patterns of the European climate for the period 1989–2008. Model evaluation focuses on near-surface air temperature and precipitation, and uses the E-OBS dataset as the observational reference. The ensemble consists of 17 simulations carried out by seven different models at grid resolutions of 12 km (nine experiments) and 50 km (eight experiments). Several performance metrics computed from monthly and seasonal mean values are used to assess model performance over eight sub-domains of the European continent. Results are compared to those for the ERA40-driven ENSEMBLES simulations. The analysis confirms the ability of RCMs to capture the basic features of the European climate, including its variability in space and time. But it also identifies non-negligible deficiencies of the simulations for selected metrics, regions and seasons. Seasonally and regionally averaged temperature biases are mostly smaller than 1.5 °C, while precipitation biases are typically within the ±40% range. Some bias characteristics, such as a predominant cold and wet bias in most seasons and over most parts of Europe and a warm and dry summer bias over southern and south-eastern Europe, reflect common model biases. For seasonal mean quantities averaged over large European sub-domains, no clear benefit of an increased spatial resolution (12 km vs. 50 km) can be identified. The bias ranges of the EURO-CORDEX ensemble mostly correspond to those of the ENSEMBLES simulations, but some improvements in model performance can be identified (e.g., a less pronounced southern European warm summer bias).
The temperature bias spread across different configurations of one individual model can be of similar magnitude to the spread across different models, demonstrating a strong influence of the specific choices in physical parameterizations and experimental setup on model performance. Based on a number of simply reproducible metrics, the present study quantifies the currently achievable accuracy of RCMs used for regional climate simulations over Europe and provides a quality standard for future model developments.
9

Akemann, Gernot, Markus Ebke, and Iván Parra. "Skew-Orthogonal Polynomials in the Complex Plane and Their Bergman-Like Kernels." Communications in Mathematical Physics 389, no. 1 (October 27, 2021): 621–59. http://dx.doi.org/10.1007/s00220-021-04230-8.

Abstract:
Non-Hermitian random matrices with symplectic symmetry provide examples for Pfaffian point processes in the complex plane. These point processes are characterised by a matrix-valued kernel of skew-orthogonal polynomials. We develop their theory by providing an explicit construction of skew-orthogonal polynomials in terms of orthogonal polynomials that satisfy a three-term recurrence relation, for general weight functions in the complex plane. New examples for symplectic ensembles are provided, based on recent developments in orthogonal polynomials on planar domains or curves in the complex plane. Furthermore, Bergman-like kernels of skew-orthogonal Hermite and Laguerre polynomials are derived, from which the conjectured universality of the elliptic symplectic Ginibre ensemble and its chiral partner follows in the limit of strong non-Hermiticity at the origin. A Christoffel perturbation of skew-orthogonal polynomials, as it appears in applications to quantum field theory, is provided.
10

Adler, Mark, and Pierre van Moerbeke. "Double interlacing in random tiling models." Journal of Mathematical Physics 64, no. 3 (March 1, 2023): 033509. http://dx.doi.org/10.1063/5.0093542.

Abstract:
Random tilings of very large domains will typically lead to a solid, a liquid, and a gas phase. In the two-phase case, the solid–liquid boundary (arctic curve) is smooth, possibly with singularities. At the point of tangency of the arctic curve with the domain boundary, for large-sized domains, the tiles of a certain shape form a singly interlacing set, fluctuating according to the eigenvalues of the principal minors of a Gaussian unitary ensemble-matrix. Introducing non-convexities in large domains may lead to the appearance of several interacting liquid regions: They can merely touch, leading to either a split tacnode (hard tacnode), with two distinct adjacent frozen phases descending into the tacnode, or a soft tacnode. For appropriate scaling of the non-convex domains and probing about such split tacnodes, filaments, evolving in a bricklike sea of dimers of another type, will connect the liquid patches. Nearby, the tiling fluctuations are governed by a discrete tacnode kernel—i.e., a determinantal point process on a doubly interlacing set of dots belonging to a discrete array of parallel lines. This kernel enables us to compute the joint distribution of the dots along those lines. This kernel appears in two very different models: (i) domino tilings of skew-Aztec rectangles and (ii) lozenge tilings of hexagons with cuts along opposite edges. Soft tacnodes appear when two arctic curves gently touch each other amid a bricklike sea of dimers of one type, unlike the split tacnode. We hope that this largely expository paper will provide a view on the subject and be accessible to a wider audience.
11

Stephen, Okeke, Samaneh Madanian, and Minh Nguyen. "A Robust Deep Learning Ensemble-Driven Model for Defect and Non-Defect Recognition and Classification Using a Weighted Averaging Sequence-Based Meta-Learning Ensembler." Sensors 22, no. 24 (December 17, 2022): 9971. http://dx.doi.org/10.3390/s22249971.

Abstract:
The need to overcome the challenges of visual inspections conducted by domain experts drives the recent surge in visual inspection research. Typical manual industrial data analysis and inspection for defects conducted by trained personnel are expensive, time-consuming, and prone to mistakes. Thus, an efficient intelligence-driven model is needed to eliminate, or reduce to the barest minimum, the challenges of defect identification in industrial processes. This paper presents a robust method for recognizing and classifying defects in industrial products using a deep-learning architectural ensemble approach integrated with a weighted sequence meta-learning unification framework. In the proposed method, a unique base model is constructed and fused with other co-learning pretrained models using a sequence-driven meta-learning ensembler that aggregates the best features learned from the various contributing models for superior performance. During experimentation, different publicly available industrial product datasets consisting of defect and non-defect samples were used to train, validate, and test the introduced model, with remarkable results that demonstrate the viability of the proposed method in tackling the challenges of the manual visual inspection approach.
12

Tang, Yifeng, and Rayhaneh Akhavan. "Computations of equilibrium and non-equilibrium turbulent channel flows using a nested-LES approach." Journal of Fluid Mechanics 793 (March 22, 2016): 709–48. http://dx.doi.org/10.1017/jfm.2016.137.

Abstract:
A new nested-LES approach for computation of high Reynolds number, equilibrium, and non-equilibrium, wall-bounded turbulent flows is presented. The method couples coarse-resolution LES in the full computational domain with fine-resolution LES in a minimal flow unit to retain the accuracy of well-resolved LES throughout the computational domain, including in the near-wall region, while significantly reducing the computational cost. The two domains are coupled by renormalizing the instantaneous velocity fields in each domain dynamically during the course of the simulation to match the wall-normal profiles of single-time, ensemble-averaged kinetic energies of the components of ‘mean’ and fluctuating velocities in the inner layer to those of the minimal flow unit, and in the outer layer to those of the full domain. This simple renormalization procedure is shown to correct the energy spectra and wall shear stresses in both domains, thus leading to accurate turbulence statistics. The nested-LES approach has been applied to computation of equilibrium turbulent channel flow at $Re_{{\it\tau}}\approx 1000$, 2000, 5000, 10 000, and non-equilibrium, strained turbulent channel flow at $Re_{{\it\tau}}\approx 2000$. In both flows, nested-LES predicts the skin friction coefficient, first- and higher-order turbulence statistics, spectra and structure of the flow in agreement with available DNS and experimental data. Nested-LES can be applied to any flow with at least one direction of local or global homogeneity, while reducing the required number of grid points from $O(Re_{{\it\tau}}^{2})$ of conventional LES to $O(\log Re_{{\it\tau}})$ or $O(Re_{{\it\tau}})$ in flows with two or one locally or globally homogeneous directions, respectively.
13

Mahmood, Zafar, Naveed Anwer Butt, Ghani Ur Rehman, Muhammad Zubair, Muhammad Aslam, Afzal Badshah, and Syeda Fizzah Jilani. "Generation of Controlled Synthetic Samples and Impact of Hyper-Tuning Parameters to Effectively Classify the Complex Structure of Overlapping Region." Applied Sciences 12, no. 16 (August 22, 2022): 8371. http://dx.doi.org/10.3390/app12168371.

Abstract:
The classification of imbalanced and overlapping data has attracted considerable attention over the last decade, as most real-world applications comprise multiple classes with an imbalanced distribution of samples. Samples from different classes overlap near class boundaries, creating a complex structure for the underlying classifier. Due to the imbalanced distribution of samples, the underlying classifier favors samples from the majority class and ignores samples from the minority class. The imbalanced nature of the data, resulting in overlapping regions, greatly affects the learning of various machine learning classifiers, as most machine learning classifiers are designed to handle balanced datasets and perform poorly when applied to imbalanced data. To improve learning on multi-class problems, more expertise is required in both traditional classifiers and problem-domain datasets, along with some experimentation with, and knowledge of, hyper-tuning the parameters of the classifier under consideration. Several techniques for learning from multi-class problems have been reported in the literature, such as sampling techniques, algorithm adaptation methods, transformation methods, hybrid methods, and ensemble techniques. In the current research work, we first analyzed the learning behavior of state-of-the-art ensemble and non-ensemble classifiers on imbalanced and overlapping multi-class data. After this analysis, we used grid search techniques to optimize key parameters (by hyper-tuning) of ensemble and non-ensemble classifiers to determine the optimal set of parameters for learning from multi-class imbalanced classification problems, performed on 15 public datasets. After hyper-tuning, 20% of the dataset samples are synthetically generated and added to the majority class of each respective dataset to make it more overlapped (a more complex structure).
After the synthetic samples are added, the hyper-tuned ensemble and non-ensemble classifiers are tested on that complex structure. This paper also includes a brief description of the tuned parameters and their effects on imbalanced data, followed by a detailed comparison of ensemble and non-ensemble classifiers with the default and tuned parameters for both the original and the synthetically overlapped datasets. We believe that this paper is the first effort of its kind in this domain and will open up various research directions with a greater focus on the parameters of the classifier in the field of learning from imbalanced data with machine-learning algorithms.
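The grid-search hyper-tuning step described above can be sketched as follows (the parameter grid, scoring choice, and dataset are illustrative assumptions, not the study's 15-dataset protocol):

```python
# Minimal grid-search sketch, NOT the study's protocol: hyper-tuning an
# ensemble classifier on an imbalanced, partially overlapping dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Imbalanced two-class problem with some class overlap (class_sep < 1).
X, y = make_classification(n_samples=500, weights=[0.9, 0.1],
                           class_sep=0.8, random_state=3)

grid = GridSearchCV(
    RandomForestClassifier(random_state=3),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    scoring="f1",  # plain accuracy is misleading under class imbalance
    cv=3,
)
grid.fit(X, y)
print("best params:", grid.best_params_,
      "best F1:", round(grid.best_score_, 3))
```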
14

Aribowo, Agus Sasmito, Halizah Basiron, Noor Fazilla Abd Yusof, and Siti Khomsah. "Cross-domain sentiment analysis model on Indonesian YouTube comment." International Journal of Advances in Intelligent Informatics 7, no. 1 (March 31, 2021): 12. http://dx.doi.org/10.26555/ijain.v7i1.554.

Abstract:
A cross-domain sentiment analysis (CDSA) study in the Indonesian language using tree-based ensemble machine learning is quite interesting. CDSA is useful to support the labeling of cross-domain sentiment and to reduce dependence on experts; however, how to handle opinions left unstructured by stop words, language expressions, and Indonesian slang words has not yet been identified. This study aimed to obtain the best CDSA model for opinions in the Indonesian language, which are commonly full of stop words and slang words in the Indonesian dialect. It set out to observe the benefits of stop-word cleaning and slang-word conversion for CDSA in Indonesian, and to find out which machine learning method is best suited to this model. The study started by crawling five datasets of YouTube comments from five different domains. Each dataset was copied into two groups: one without stop-word cleaning and slang-word conversion, and one with stop-word cleaning and slang-word conversion. A CDSA model was built for each dataset group and then tested using two types of tree-based ensemble machine learning, i.e., Random Forest (RF) and Extra Trees (ET) classifiers, and three types of non-ensemble machine learning, namely Naïve Bayes (NB), SVM, and Decision Tree (DT), as a comparison. The results suggest that the accuracy of CDSA in the Indonesian language increases when stop words are removed and slang words are converted. The best classifier model was built using tree-based ensemble machine learning, particularly ET, which in this study achieved the highest accuracy, 91.19%. This model is expected to be an alternative CDSA technique for the Indonesian language.
15

Pal, Tanmoy Kumar, and Subhrangsu Santra. "PANDEMIC-INDUCED LOCKDOWN'S SHOCK TO HOUSEHOLD LEVEL FOOD SECURITY." ENSEMBLE SP-1, no. 1 (April 15, 2021): 120–28. http://dx.doi.org/10.37948/ensemble-2021-sp1-a014.

Abstract:
The impact of the lockdown induced by the COVID-19 pandemic was devastating for the farm as well as the non-farm sectors of the Indian economy. Even though many authors expressed apprehensions of hunger, and journalistic accounts of hunger appeared in newspapers, very few studies were undertaken to investigate the nature and extent of lockdown-induced food insecurity experienced by households and to understand the management strategies adopted by those households. This study was undertaken in a village located in the Birbhum district of West Bengal during the unlock-I phase to fill the above-stated gap. Data for this study were collected from 40 households using a standardized tool known as the Household Food Insecurity Access Scale (HFIAS) and a semi-structured questionnaire. Results showed that inaccessibility of food was experienced by households in three domains: anxiety and uncertainty (82.5% of households), unsatisfactory quality (100% of households), and insufficient quantity (77.5% of households). However, quantitative scale scores of food insecurity showed that none of the households experienced the highest possible degree of food insecurity. The public distribution system and mid-day meal programs were most effective in reducing the food insecurity of many families, but the level of support extended was not enough. More than half of the households reported a reduction in animal protein consumption, higher expenditure on vegetables and fruits, and an increase in taking loans. Based on the findings of the study, two specific suggestions were provided for facilitating the management of disruptions caused by lockdown-like emergency conditions.
16

Vich, M., and R. Romero. "Multiphysics superensemble forecast applied to Mediterranean heavy precipitation situations." Natural Hazards and Earth System Sciences 10, no. 11 (November 25, 2010): 2371–77. http://dx.doi.org/10.5194/nhess-10-2371-2010.

Abstract:
The high-impact precipitation events that regularly affect the western Mediterranean coastal regions are still difficult to predict with current prediction systems. Bearing this in mind, this paper focuses on the superensemble technique applied to the precipitation field. Encouraged by the skill shown by a previous multiphysics ensemble prediction system applied to western Mediterranean precipitation events, the superensemble is fed with this ensemble. The training phase of the superensemble contributes to the actual forecast with weights obtained by comparing the past performance of the ensemble members against the corresponding observed states. The non-hydrostatic MM5 mesoscale model is used to run the multiphysics ensemble. Simulations are performed on a 22.5 km resolution domain (Domain 1 in http://mm5forecasts.uib.es) nested in the ECMWF forecast fields. The period between September and December 2001 is used to train the superensemble, and a collection of 19 MEDEX cyclones is used to test it. The verification procedure involves testing the superensemble performance and comparing it with that of the poor man's and bias-corrected ensemble means and the multiphysics EPS control member. The results emphasize the need for a well-behaved training phase to obtain good results with the superensemble technique. A strategy to obtain this improved training phase is already outlined.
APA, Harvard, Vancouver, ISO, and other styles
17

Vanetik, Natalia, and Marina Litvak. "Definition Extraction from Generic and Mathematical Domains with Deep Ensemble Learning." Mathematics 9, no. 19 (October 6, 2021): 2502. http://dx.doi.org/10.3390/math9192502.

Full text
Abstract:
Definitions are extremely important for efficient learning of new materials. In particular, mathematical definitions are necessary for understanding mathematics-related areas. Automated extraction of definitions could be very useful for automated indexing of educational materials, building taxonomies of relevant concepts, and more. For definitions that are contained within a single sentence, this problem can be viewed as a binary classification of sentences into definitions and non-definitions. In this paper, we focus on automatic detection of one-sentence definitions in mathematical and general texts. We experiment with different classification models, arranged in an ensemble and applied to a sentence representation containing syntactic and semantic information. Our ensemble model is applied to the data adjusted with oversampling. Our experiments demonstrate the superiority of our approach over state-of-the-art methods in both general and mathematical domains.
APA, Harvard, Vancouver, ISO, and other styles
18

Rahman, Md Mosheyur, Mohammed Imamul Hassan Bhuiyan, and Anindya Bijoy Das. "Classification of focal and non-focal EEG signals in VMD-DWT domain using ensemble stacking." Biomedical Signal Processing and Control 50 (April 2019): 72–82. http://dx.doi.org/10.1016/j.bspc.2019.01.012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Gong, Xuan, Abhishek Sharma, Srikrishna Karanam, Ziyan Wu, Terrence Chen, David Doermann, and Arun Innanje. "Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 11891–99. http://dx.doi.org/10.1609/aaai.v36i11.21446.

Full text
Abstract:
Federated Learning (FL) is a machine learning paradigm where local nodes collaboratively train a central model while the training data remains decentralized. Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution. However, they suffer from communication bottlenecks. More importantly, they risk privacy leakage. In this work, we develop a privacy-preserving and communication-efficient method in a FL framework with one-shot offline knowledge distillation using unlabeled, cross-domain, non-sensitive public data. We propose a quantized and noisy ensemble of local predictions from completely trained local models for stronger privacy guarantees without sacrificing accuracy. Based on extensive experiments on image classification and text classification tasks, we show that our method outperforms baseline FL algorithms with superior performance in both accuracy and data privacy preservation.
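The quantise-then-perturb release of ensemble predictions can be sketched as below; the bin count, noise distribution, and scale are illustrative assumptions, not the paper's calibrated privacy parameters:

```python
import numpy as np

def private_ensemble_votes(local_logits, n_bins=8, noise_scale=0.5, seed=0):
    """local_logits: (n_models, n_samples, n_classes) raw scores."""
    rng = np.random.default_rng(seed)
    avg = local_logits.mean(axis=0)                 # ensemble of local predictions
    lo, hi = avg.min(), avg.max()
    step = (hi - lo) / n_bins
    quantised = lo + step * np.round((avg - lo) / step)  # snap to a coarse grid
    noisy = quantised + rng.laplace(0, noise_scale, avg.shape)  # add noise
    return noisy.argmax(axis=1)                     # release hard labels only

# two local models, two public samples, two classes
logits = np.array([[[5.0, 1.0], [0.5, 4.0]],
                   [[4.0, 0.5], [1.0, 5.0]]])
labels = private_ensemble_votes(logits)
```

Only the noisy, quantised labels would leave the local nodes; the raw logits (and therefore the local data they encode) stay private.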
APA, Harvard, Vancouver, ISO, and other styles
20

Jha, Paritosh Navinchandra, and Marco Cucculelli. "A New Model Averaging Approach in Predicting Credit Risk Default." Risks 9, no. 6 (June 8, 2021): 114. http://dx.doi.org/10.3390/risks9060114.

Full text
Abstract:
The paper introduces a novel approach to ensemble modeling as a weighted model average technique. The proposed idea is prudent, simple to understand, and easy to implement compared to the Bayesian and frequentist approaches. The paper provides both theoretical and empirical contributions for assessing credit risk (probability of default) effectively in a new way by creating an ensemble model as a weighted linear combination of machine learning models. The idea can be generalized to any classification problem in other domains where ensemble-type modeling is of interest and is not limited to unbalanced datasets or credit risk assessment. The results suggest better forecasting performance compared to the single best well-known machine learning models, whether parametric, non-parametric, or ensemble-based. The approach can be extended to alternative ways of estimating the weights that may further enhance the performance of the model average, a direction for future research.
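The weighted linear combination idea can be sketched as follows; weighting each model by its validation skill is one simple instantiation for illustration, not necessarily the paper's exact weighting rule:

```python
import numpy as np

def ensemble_pd(prob_matrix, val_scores):
    """prob_matrix: (n_obligors, n_models) default probabilities;
    val_scores: (n_models,) validation skill used to set the weights."""
    w = np.asarray(val_scores, dtype=float)
    w = w / w.sum()                  # normalise so the weights sum to 1
    return prob_matrix @ w           # convex combination stays in [0, 1]

probs = np.array([[0.10, 0.20, 0.15],
                  [0.80, 0.70, 0.90]])   # three models' predicted PDs
scores = [0.70, 0.65, 0.75]              # e.g. validation accuracies
pd_hat = ensemble_pd(probs, scores)
```

Because the weights are non-negative and sum to one, the averaged probability of default always stays between the most optimistic and most pessimistic individual model.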
APA, Harvard, Vancouver, ISO, and other styles
21

Emamipour, Sajad, Rasoul Sali, and Zahra Yousefi. "A Multi-Objective Ensemble Method for Class Imbalance Learning." International Journal of Big Data and Analytics in Healthcare 2, no. 1 (January 2017): 16–34. http://dx.doi.org/10.4018/ijbdah.2017010102.

Full text
Abstract:
This article describes how class imbalance learning has attracted great attention in recent years, as many real-world domain applications suffer from this problem. Imbalanced class distribution occurs when the number of training examples for one class far surpasses the training examples of the other class, often the one that is of more interest. This problem may produce an important deterioration of the classifier performance, in particular with patterns belonging to the less represented classes. Toward this end, the authors developed a hybrid model to address class imbalance learning with a focus on binary class problems. This model combines the benefits of ensemble classifiers with a multi-objective feature selection technique to achieve higher classification performance. The authors' model also proposes non-dominated sets of features. They then evaluate the performance of the proposed model by comparing its results with notable algorithms for solving the imbalanced data problem. Finally, the authors apply the proposed model in the medical domain of predicting life expectancy in post-operative thoracic surgery patients.
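The non-dominated (Pareto) filtering that such multi-objective feature selection relies on can be sketched as follows, with accuracy to maximise and feature count to minimise as illustrative objectives:

```python
# Keep the feature subsets for which no other subset is at least as good
# on both objectives and strictly better on at least one.

def non_dominated(candidates):
    """candidates: (accuracy, n_features) pairs; accuracy up, features down."""
    front = []
    for i, (acc_i, nf_i) in enumerate(candidates):
        dominated = any(
            acc_j >= acc_i and nf_j <= nf_i and (acc_j > acc_i or nf_j < nf_i)
            for j, (acc_j, nf_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((acc_i, nf_i))
    return front

subsets = [(0.90, 10), (0.88, 4), (0.85, 4), (0.91, 12), (0.80, 2)]
pareto = non_dominated(subsets)
```

Here (0.85, 4) is dominated by (0.88, 4), which matches it on feature count but beats it on accuracy; every other subset survives into the non-dominated front.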
APA, Harvard, Vancouver, ISO, and other styles
22

Huijnen, V., H. J. Eskes, A. Poupkou, H. Elbern, K. F. Boersma, G. Foret, M. Sofiev, et al. "Comparison of OMI NO2 tropospheric columns with an ensemble of global and European regional air quality models." Atmospheric Chemistry and Physics 10, no. 7 (April 6, 2010): 3273–96. http://dx.doi.org/10.5194/acp-10-3273-2010.

Full text
Abstract:
Abstract. We present a comparison of tropospheric NO2 from OMI measurements to the median of an ensemble of Regional Air Quality (RAQ) models, and an intercomparison of the contributing RAQ models and two global models for the period July 2008–June 2009 over Europe. The model forecasts were produced routinely on a daily basis in the context of the European GEMS ("Global and regional Earth-system (atmosphere) Monitoring using Satellite and in-situ data") project. The tropospheric vertical column of the RAQ ensemble median shows a spatial distribution which agrees well with the OMI NO2 observations, with a correlation r=0.8. This is higher than the correlations from any one of the individual RAQ models, which supports the use of a model ensemble approach for regional air pollution forecasting. The global models show high correlations compared to OMI, but with significantly less spatial detail, due to their coarser resolution. Deviations in the tropospheric NO2 columns of individual RAQ models from the mean were in the range of 20–34% in winter and 40–62% in summer, suggesting that the RAQ ensemble prediction is relatively more uncertain in the summer months. The ensemble median shows a stronger seasonal cycle of NO2 columns than OMI, and the ensemble is on average 50% below the OMI observations in summer, whereas in winter the bias is small. On the other hand the ensemble median shows a somewhat weaker seasonal cycle than NO2 surface observations from the Dutch Air Quality Network, and on average a negative bias of 14%. Full profile information was available for two RAQ models and for the global models. For these models the retrieval averaging kernel was applied. Minor differences are found for area-averaged model columns with and without applying the kernel, which shows that the impact of replacing the a priori profiles by the RAQ model profiles is on average small. 
However, the contrast between major hotspots and rural areas is stronger for the direct modeled vertical columns than the columns where the averaging kernels are applied, related to a larger relative contribution of the free troposphere and the coarse horizontal resolution in the a priori profiles compared to the RAQ models. In line with validation results reported in the literature, summertime concentrations in the lowermost boundary layer in the a priori profiles from the DOMINO product are significantly larger than the RAQ model concentrations and surface observations over the Netherlands. This affects the profile shape, and contributes to a high bias in OMI tropospheric columns over polluted regions. The global models indicate that the upper troposphere may contribute significantly to the total column and it is important to account for this in comparisons with RAQ models. A combination of upper troposphere model biases, the a priori profile effects and DOMINO product retrieval issues could explain the discrepancy observed between the OMI observations and the ensemble median in summer.
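Applying an averaging kernel before such comparisons amounts to a kernel-weighted sum over model partial columns rather than a plain vertical sum; a toy sketch (layer values and kernel illustrative):

```python
import numpy as np

def column_with_kernel(model_partial_cols, averaging_kernel):
    """Kernel-weighted model column from per-layer partial columns."""
    return float(np.sum(averaging_kernel * model_partial_cols))

layers = np.array([4.0, 2.0, 1.0, 0.5])  # toy partial columns, surface first
ak = np.array([0.6, 0.9, 1.1, 1.3])      # kernel: <1 near surface, >1 aloft
direct = float(layers.sum())             # direct modelled vertical column
kernel_col = column_with_kernel(layers, ak)
```

With a kernel that down-weights the surface layers, the kernel-weighted column is smaller than the direct one for a surface-heavy profile, which is why applying the kernels flattens the contrast between hotspots and rural areas noted above.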
APA, Harvard, Vancouver, ISO, and other styles
23

You, Weizhen, Alexandre Saidi, Abdel-malek Zine, and Mohamed Ichchou. "Mechanical Reliability Assessment by Ensemble Learning." Vehicles 2, no. 1 (February 14, 2020): 126–41. http://dx.doi.org/10.3390/vehicles2010007.

Full text
Abstract:
Reliability assessment plays a significant role in mechanical design and improvement processes. Uncertainties in structural properties as well as those in the stochastic excitations have made reliability analysis more difficult to apply. In fact, reliability evaluations involve estimations of the so-called conditional failure probability (CFP), which can be seen as a regression problem taking the structural uncertainties as input and the CFPs as output. As powerful ensemble learning methods in the machine learning (ML) domain, random forest (RF) and its variants, gradient boosting (GB) and extra-trees (ETs), always show good performance in handling non-parametric regressions. However, no systematic studies of such methods in mechanical reliability are found in the current published research. Another more complex ensemble method, i.e., stacking (stacked generalization), tries to build the regression model hierarchically, resulting in a meta-learner induced from various base learners. This research aims to build a framework that integrates ensemble learning theories in mechanical reliability estimations and explore their performance on structures of different complexity. In numerical simulations, the proposed methods are tested on different ensemble models, and their performance is compared and analyzed from different perspectives. The simulation results show that, with much less analysis of structural samples, the ensemble learning methods achieve estimations highly comparable with those from direct Monte Carlo simulation (MCS).
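Stacked generalisation as described can be sketched on a toy regression; polynomial base learners and a linear meta-learner stand in here for the paper's RF/GB/ET learners, so the sketch shows the hierarchy, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)

tr, ho = np.arange(100), np.arange(100, 200)   # train / held-out split

# base learners of different capacity, fit on the training split only
coefs = [np.polyfit(x[tr], y[tr], deg) for deg in (1, 5)]

# meta-learner: linear combination (plus intercept) of base predictions,
# fit on the held-out split so it sees the bases' out-of-sample behaviour
Z = np.stack([np.polyval(c, x[ho]) for c in coefs], axis=1)
A = np.column_stack([Z, np.ones(len(ho))])
meta_w, *_ = np.linalg.lstsq(A, y[ho], rcond=None)

def stacked_predict(xs):
    z = np.stack([np.polyval(c, xs) for c in coefs], axis=1)
    return np.column_stack([z, np.ones(len(xs))]) @ meta_w
```

The meta-learner effectively learns how much to trust each base learner, which is the mechanism that lets stacking match expensive direct sampling with fewer structural evaluations.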
APA, Harvard, Vancouver, ISO, and other styles
24

Rachappa, Chetana, Mahantesh Kapanaiah, and Vidhyashree Nagaraju. "Hybrid ensemble learning framework for epileptic seizure detection using electroencephalograph signals." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 3 (October 7, 2022): 1502. http://dx.doi.org/10.11591/ijeecs.v28.i3.pp1502-1509.

Full text
Abstract:
An automated method for accurate prediction of seizures is critical to enhancing the quality of life of epileptic patients. While numerous existing studies develop models and methods for efficient feature selection and classification of electroencephalograph (EEG) data, recent studies emphasize the development of ensemble learning methods to efficiently classify EEG signals for effective detection of epileptic seizures. Since EEG signals are non-stationary, traditional machine learning approaches may not suffice for effective identification of epileptic seizures. The paper proposes a hybrid ensemble learning framework that systematically combines pre-processing methods with ensemble machine learning algorithms. Specifically, principal component analysis (PCA) or t-distributed stochastic neighbor embedding (t-SNE) is combined with k-means clustering, followed by ensemble learning such as extreme gradient boosting (XGBoost) or random forest. The selection of ensemble learning methods is justified by comparing the mean average precision score with well-known methodologies in the epileptic seizure detection domain when applied to a real data set. The proposed hybrid framework is also compared with other simple supervised machine learning algorithms with training sets of varying size. Results suggest that the proposed approach achieves significant improvement in accuracy compared with other algorithms and shows stable classification accuracy even with small data sets.
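The preprocessing stage of such a hybrid pipeline can be sketched as follows; the downstream ensemble classifier (e.g., XGBoost) is omitted, and the from-scratch PCA and k-means on synthetic data are illustrative stand-ins for the paper's EEG features:

```python
import numpy as np

def pca(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns a cluster label per row."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# two well-separated synthetic "EEG feature" blobs
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
Z = pca(X, 2)                            # dimensionality reduction
labels = kmeans(Z, 2)                    # unsupervised grouping
features = np.column_stack([Z, labels])  # augmented input for a classifier
```

The augmented matrix (reduced components plus cluster id) is what a boosted-tree or random-forest classifier would then be trained on.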
APA, Harvard, Vancouver, ISO, and other styles
25

Miyazaki, K., H. J. Eskes, and K. Sudo. "Global NOx emission estimates derived from an assimilation of OMI tropospheric NO2 columns." Atmospheric Chemistry and Physics Discussions 11, no. 12 (December 2, 2011): 31523–83. http://dx.doi.org/10.5194/acpd-11-31523-2011.

Full text
Abstract:
Abstract. A data assimilation system has been developed to estimate global nitrogen oxides (NOx) emissions using OMI tropospheric NO2 columns (DOMINO product) and a global chemical transport model (CTM), CHASER. The data assimilation system, based on an ensemble Kalman filter approach, was applied to optimize daily NOx emissions with a horizontal resolution of 2.8° during the years 2005 and 2006. The background error covariance estimated from the ensemble CTM forecasts explicitly represents non-direct relationships between the emissions and tropospheric columns caused by atmospheric transport and chemical processes. In comparison to the a priori emissions based on bottom-up inventories, the optimized emissions were higher over Eastern China, the Eastern United States, Southern Africa, and Central-Western Europe, suggesting that the anthropogenic emissions are mostly underestimated in the inventories. In addition, the seasonality of the estimated emissions differed from that of the a priori emission over several biomass burning regions, with a large increase over Southeast Asia in April and over South America in October. The data assimilation results were validated against independent data: SCIAMACHY tropospheric NO2 columns and vertical NO2 profiles obtained from aircraft and lidar measurements. The emission correction greatly improved the agreement between the simulated and observed NO2 fields; this implies that the data assimilation system efficiently derives NOx emissions from concentration observations. We also demonstrated that biases in the satellite retrieval and model settings used in the data assimilation largely affect the magnitude of estimated emissions. These dependences should be carefully considered for better understanding NOx sources from top-down approaches.
APA, Harvard, Vancouver, ISO, and other styles
26

Napoles, Courtney, Maria Nădejde, and Joel Tetreault. "Enabling Robust Grammatical Error Correction in New Domains: Data Sets, Metrics, and Analyses." Transactions of the Association for Computational Linguistics 7 (November 2019): 551–66. http://dx.doi.org/10.1162/tacl_a_00282.

Full text
Abstract:
Until now, grammatical error correction (GEC) has been primarily evaluated on text written by non-native English speakers, with a focus on student essays. This paper enables GEC development on text written by native speakers by providing a new data set and metric. We present a multiple-reference test corpus for GEC that includes 4,000 sentences in two new domains (formal and informal writing by native English speakers) and 2,000 sentences from a diverse set of non-native student writing. We also collect human judgments of several GEC systems on this new test set and perform a meta-evaluation, assessing how reliable automatic metrics are across these domains. We find that commonly used GEC metrics have inconsistent performance across domains, and therefore we propose a new ensemble metric that is robust on all three domains of text.
APA, Harvard, Vancouver, ISO, and other styles
27

Miyazaki, K., H. J. Eskes, and K. Sudo. "Global NOx emission estimates derived from an assimilation of OMI tropospheric NO2 columns." Atmospheric Chemistry and Physics 12, no. 5 (March 1, 2012): 2263–88. http://dx.doi.org/10.5194/acp-12-2263-2012.

Full text
Abstract:
Abstract. A data assimilation system has been developed to estimate global nitrogen oxides (NOx) emissions using OMI tropospheric NO2 columns (DOMINO product) and a global chemical transport model (CTM), the Chemical Atmospheric GCM for Study of Atmospheric Environment and Radiative Forcing (CHASER). The data assimilation system, based on an ensemble Kalman filter approach, was applied to optimize daily NOx emissions with a horizontal resolution of 2.8° during the years 2005 and 2006. The background error covariance estimated from the ensemble CTM forecasts explicitly represents non-direct relationships between the emissions and tropospheric columns caused by atmospheric transport and chemical processes. In comparison to the a priori emissions based on bottom-up inventories, the optimized emissions were higher over eastern China, the eastern United States, southern Africa, and central-western Europe, suggesting that the anthropogenic emissions are mostly underestimated in the inventories. In addition, the seasonality of the estimated emissions differed from that of the a priori emission over several biomass burning regions, with a large increase over Southeast Asia in April and over South America in October. The data assimilation results were validated against independent data: SCIAMACHY tropospheric NO2 columns and vertical NO2 profiles obtained from aircraft and lidar measurements. The emission correction greatly improved the agreement between the simulated and observed NO2 fields; this implies that the data assimilation system efficiently derives NOx emissions from concentration observations. We also demonstrated that biases in the satellite retrieval and model settings used in the data assimilation largely affect the magnitude of estimated emissions. These dependences should be carefully considered for better understanding NOx sources from top-down approaches.
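The ensemble Kalman filter idea behind such top-down estimation can be reduced to a scalar toy: an ensemble of emission scaling factors, a linear stand-in for the transport and chemistry operator, and a perturbed-observation update (all values illustrative, not the CHASER/OMI configuration):

```python
import numpy as np

rng = np.random.default_rng(3)
true_scale = 1.4      # "unknown" emission scaling factor to recover
H = 2.0               # toy linear transport/chemistry operator: column = H * emission
obs_err = 0.05        # observation error standard deviation

ens = rng.normal(1.0, 0.3, size=50)    # prior ensemble around the a priori value 1.0
for _ in range(20):                    # assimilate 20 column observations
    obs = H * true_scale + rng.normal(0, obs_err)
    y_ens = H * ens                    # each member's predicted column
    cov_xy = np.cov(ens, y_ens)[0, 1]  # ensemble emission-column covariance
    gain = cov_xy / (np.var(y_ens, ddof=1) + obs_err ** 2)
    # perturbed-observation update keeps the ensemble spread consistent
    ens = ens + gain * (obs + rng.normal(0, obs_err, ens.size) - y_ens)

posterior_mean = ens.mean()
```

In the real system the ensemble covariance plays the role that `cov_xy` plays here: it maps column misfits back onto emissions, including the non-direct relationships introduced by transport and chemistry.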
APA, Harvard, Vancouver, ISO, and other styles
28

Marsigli, C., F. Boccanera, A. Montani, and T. Paccagnella. "The COSMO-LEPS mesoscale ensemble system: validation of the methodology and verification." Nonlinear Processes in Geophysics 12, no. 4 (May 20, 2005): 527–36. http://dx.doi.org/10.5194/npg-12-527-2005.

Full text
Abstract:
Abstract. The limited-area ensemble prediction system COSMO-LEPS has been running every day at ECMWF since November 2002. A number of runs of the non-hydrostatic limited-area model Lokal Modell (LM) are available every day, nested on members of the ECMWF global ensemble. The limited-area ensemble forecasts range up to 120 h, and LM-based probabilistic products are disseminated to several national and regional weather services. Some changes to the operational suite have recently been made on the basis of the results of a statistical analysis of the methodology. The analysis is presented in this paper, showing the benefit of increasing the number of ensemble members. The system has been designed to provide probabilistic support at the mesoscale, focusing attention on extreme precipitation events. In this paper, the performance of COSMO-LEPS in forecasting precipitation is presented. An objective verification in terms of probabilistic indices is made, using a dense network of observations covering a part of the COSMO domain. The system is compared with the ECMWF EPS, showing an improvement of the limited-area high-resolution system with respect to the global ensemble system in the forecast of high precipitation values. The impact of using different schemes for the parametrisation of convection in the limited-area model is also assessed, showing that this has a minor impact compared with running the model with different initial and boundary conditions.
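The probabilistic indices used in such verification include the Brier score; a minimal sketch for a precipitation-exceedance event (threshold and member values illustrative):

```python
import numpy as np

def brier_score(forecast_probs, outcomes):
    """Mean squared difference between probabilities and 0/1 outcomes."""
    p = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))

# ensemble probability = fraction of members exceeding the threshold
members_mm = np.array([[12.0, 3.0, 25.0, 8.0],   # day 1, four members
                       [1.0, 0.0, 2.0, 4.0]])    # day 2
probs = (members_mm > 10.0).mean(axis=1)         # P(precip > 10 mm)
bs = brier_score(probs, [1, 0])                  # event occurred on day 1 only
```

Lower scores are better; adding members refines the probability values the ensemble can express, which is one way a larger ensemble improves such indices.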
APA, Harvard, Vancouver, ISO, and other styles
29

Lauvaux, T., O. Pannekoucke, C. Sarrat, F. Chevallier, P. Ciais, J. Noilhan, and P. J. Rayner. "Structure of the transport uncertainty in mesoscale inversions of CO2 sources and sinks using ensemble model simulations." Biogeosciences Discussions 5, no. 6 (December 9, 2008): 4813–46. http://dx.doi.org/10.5194/bgd-5-4813-2008.

Full text
Abstract:
Abstract. We study the characteristics of a statistical ensemble of mesoscale simulations in order to estimate the model error in the simulation of CO2 concentrations. The ensemble consists of ten members and the reference simulation from the operational short-range forecast PEARP, perturbed by the singular vector (SV) technique. We then used this ensemble of simulations as the initial and boundary conditions for the mesoscale model simulations, here the atmospheric transport model Méso-NH, transporting CO2 fluxes from the ISBA-A-gs land surface model. The final ensemble represents the model dependence on the boundary conditions, conserving the physical properties of the dynamical schemes. First, the variance of our ensemble is estimated over the domain, with associated spatial and temporal correlations. Second, we extract the signal from noisy horizontal correlations, due to the limited ensemble size, using diffusion equation modelling. Finally, we compute the diagonal and non-diagonal terms of the observation error covariance matrix and introduce them into our CO2 flux matrix inversion over 18 days of the 2005 intensive campaign CERES over the southwest of France. On the horizontal plane, the variance of the ensemble follows the discontinuities of the mesoscale structures during the day, but remains locally driven during the night. On the vertical, surface-layer variance shows large correlations with the upper levels in the boundary layer (>0.6), down to 0.4 with the lower free troposphere. Large temporal correlations were found during the afternoon (>0.5 for several hours), reduced during the night. The diffusion equation model extracted relevant error covariance signals in horizontal space, and shows reduced correlations over mountain areas and during the night over the continent.
The posterior error reduction on the inverted CO2 fluxes accounting for the model error correlations illustrates finally the predominance of the temporal over the spatial correlations when using tower-based CO2 concentration observations.
APA, Harvard, Vancouver, ISO, and other styles
30

Han, Wen Hua, Hai Xia Ren, Xu Chen, and Xiao Juan Tao. "The Application of Improved HHT to Harmonic Analysis and Fault Detection of Power System." Advanced Materials Research 354-355 (October 2011): 1406–11. http://dx.doi.org/10.4028/www.scientific.net/amr.354-355.1406.

Full text
Abstract:
Hilbert-Huang transform (HHT) is a new time-frequency-domain analysis method suitable for non-stationary and nonlinear signals. In this paper, endpoint continuation and the ensemble empirical mode decomposition (EEMD) method are introduced to improve the HHT, addressing the end-effect and mode-mixing problems. The improved HHT (IHHT) is used for analyzing the harmonic signal and detecting the fault signal of a power system. Simulation results show that IHHT is feasible and effective for harmonic analysis and fault detection.
APA, Harvard, Vancouver, ISO, and other styles
31

Chu, P. C., and L. M. Ivanov. "Statistical characteristics of irreversible predictability time in regional ocean models." Nonlinear Processes in Geophysics 12, no. 1 (January 28, 2005): 129–38. http://dx.doi.org/10.5194/npg-12-129-2005.

Full text
Abstract:
Abstract. Probabilistic aspects of regional ocean model predictability are analyzed using the probability density function (PDF) of the irreversible predictability time (IPT) (called τ-PDF), computed from an unconstrained ensemble of stochastic perturbations in initial conditions, winds, and open boundary conditions. Two attractors (a chaotic attractor and a small-amplitude stable limit cycle) are found in the wind-driven circulation. The relationship between an attractor's residence time and the IPT determines the τ-PDF for short (up to several weeks) and intermediate (up to two months) predictions. The τ-PDF is usually non-Gaussian but not multi-modal for red-noise perturbations in initial conditions and perturbations in the wind and open boundary conditions. Bifurcation of the τ-PDF occurs as the tolerance level varies. Generally, extremely successful predictions (corresponding to the τ-PDF's tail toward the large-IPT domain) are not outliers and share the same statistics as the whole ensemble of predictions.
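The IPT construction can be illustrated on a toy chaotic system: record, for each perturbed ensemble member, the first time the forecast error exceeds a tolerance, then histogram those times to approximate the τ-PDF. The logistic map below stands in for the ocean model; all parameters are illustrative:

```python
import numpy as np

def logistic_traj(x0, n, r=3.9):
    """Trajectory of the chaotic logistic map, a stand-in 'model'."""
    xs = np.empty(n)
    xs[0] = x0
    for i in range(1, n):
        xs[i] = r * xs[i - 1] * (1 - xs[i - 1])
    return xs

rng = np.random.default_rng(4)
n_steps, tol = 200, 0.2
truth = logistic_traj(0.3, n_steps)

ipts = []
for _ in range(300):                     # ensemble of perturbed forecasts
    member = logistic_traj(0.3 + rng.normal(0, 1e-5), n_steps)
    exceed = np.nonzero(np.abs(member - truth) > tol)[0]
    ipts.append(exceed[0] if exceed.size else n_steps)
ipts = np.array(ipts)

# empirical tau-PDF: normalised histogram of the predictability times
pdf, edges = np.histogram(ipts, bins=20, density=True)
```

Varying `tol` in such an experiment is the analogue of the tolerance-level variation under which the paper observes bifurcation of the τ-PDF.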
APA, Harvard, Vancouver, ISO, and other styles
32

Lauvaux, T., O. Pannekoucke, C. Sarrat, F. Chevallier, P. Ciais, J. Noilhan, and P. J. Rayner. "Structure of the transport uncertainty in mesoscale inversions of CO2 sources and sinks using ensemble model simulations." Biogeosciences 6, no. 6 (June 19, 2009): 1089–102. http://dx.doi.org/10.5194/bg-6-1089-2009.

Full text
Abstract:
Abstract. We study the characteristics of a statistical ensemble of mesoscale simulations in order to estimate the model error in the simulation of CO2 concentrations. The ensemble consists of ten members and the reference simulation from the operational short-range forecast PEARP, perturbed using the singular vector technique. We then used this ensemble of simulations as the initial and boundary conditions for the mesoscale model (Méso-NH) simulations, which use CO2 fluxes from the ISBA-A-gs land surface model. The final ensemble represents the model dependence on the boundary conditions, conserving the physical properties of the dynamical schemes, but excluding the intrinsic error of the model. First, the variance of our ensemble is estimated over the domain, with associated spatial and temporal correlations. Second, we extract the signal from noisy horizontal correlations, due to the limited ensemble size, using diffusion equation modelling. The computational cost of such an ensemble limits the number of members (simulations), especially when running the carbon flux and atmospheric models online. In theory, 50 to 100 members would be required to explore the overall sensitivity of the ensemble. The present diffusion model allows us to extract a significant part of the noisy error, and makes this study feasible with a limited number of simulations. Finally, we compute the diagonal and non-diagonal terms of the observation error covariance matrix and introduce them into our CO2 flux matrix inversion for 18 days of the 2005 intensive campaign CERES over the southwest of France. Variances are based on model-data mismatch to ensure we treat model bias as well as ensemble dispersion, whereas spatial and temporal covariances are estimated with our method. The horizontal structure of the ensemble variance manifests the discontinuities of the mesoscale structures during the day, but remains locally driven during the night.
On the vertical, surface layer variance shows large correlations with the upper levels in the boundary layer (> 0.6), dropping to 0.4 with the lower levels of the free troposphere. Large temporal correlations were found during the afternoon (> 0.5 for several hours), reduced during the night. The diffusion equation model extracted relevant error covariance signals horizontally, with reduced correlations over mountain areas and during the night over the continent. The posterior error reduction on the inverted CO2 fluxes accounting for the model error correlations illustrates the predominance of the temporal over the spatial correlations when using tower-based CO2 concentration observations.
APA, Harvard, Vancouver, ISO, and other styles
33

Kwon, Soon, Ji Yu, Chan Park, Jiseop Lee, and Baik Seong. "Nucleic Acid-Dependent Structural Transition of the Intrinsically Disordered N-Terminal Appended Domain of Human Lysyl-tRNA Synthetase." International Journal of Molecular Sciences 19, no. 10 (October 3, 2018): 3016. http://dx.doi.org/10.3390/ijms19103016.

Full text
Abstract:
Eukaryotic lysyl-tRNA synthetases (LysRS) have an N-terminal appended tRNA-interaction domain (RID) that is absent in their prokaryotic counterparts. This domain is intrinsically disordered and lacks stable structures. The disorder-to-order transition is induced by tRNA binding and has implications on folding and subsequent assembly into multi-tRNA synthetase complexes. Here, we expressed and purified RID from human LysRS (hRID) in Escherichia coli and performed a detailed mutagenesis of the appended domain. hRID was co-purified with nucleic acids during Ni-affinity purification, and cumulative mutations on critical amino acid residues abolished RNA binding. Furthermore, we identified a structural ensemble between disordered and helical structures in non-RNA-binding mutants and an equilibrium shift for wild-type into the helical conformation upon RNA binding. Since mutations that disrupted RNA binding led to an increase in non-functional soluble aggregates, a stabilized RNA-mediated structural transition of the N-terminal appended domain may have implications on the functional organization of human LysRS and multi-tRNA synthetase complexes in vivo.
APA, Harvard, Vancouver, ISO, and other styles
34

Despret, Vinciane. "Anthropo-éthologie des non-humains politiques." Social Science Information 45, no. 2 (June 2006): 209–26. http://dx.doi.org/10.1177/0539018406063635.

Full text
Abstract:
English The temptation to seek in primates our own origin is still found in ethology. More broadly speaking, we see that the animal kingdom is often used as an anthropological operator of identity, using either similitude or inversion or contrast. The observation data most often reflect values, or even preferences, concerning modes of social organization. However, this observation should not lead to relativism. On the contrary, it invites us to envisage ethological knowledge as constructing humans and animals at the same time, together. This article sets out to explore the concrete conditions in which this kind of knowledge can be constructed. French La tentation d'interroger les primates en leur posant la question de notre origine reste présente dans le domaine de l'éthologie. Plus largement, on peut remarquer que l'animal se constitue souvent comme un opérateur anthropologique d'identité, soit par similitude, soit par inversion ou contraste. Or, les faits issus des observations traduisent le plus souvent des valeurs, voire reflètent des préférences quant aux modes d'organisation sociale. Ce constat ne doit pas nous conduire au relativisme. Il nous invite au contraire à envisager le savoir de l'éthologie comme un savoir qui construit simultanément l'identité de l'homme et de l'animal, ensemble. Cet article se propose d'explorer les conditions concrètes dans lesquelles ce type de savoir peut se constituer.
APA, Harvard, Vancouver, ISO, and other styles
35

Yarce Botero, Andrés, Santiago Lopez-Restrepo, Nicolás Pinel Peláez, Olga L. Quintero, Arjo Segers, and Arnold W. Heemink. "Estimating NOx LOTOS-EUROS CTM Emission Parameters over the Northwest of South America through 4DEnVar TROPOMI NO2 Assimilation." Atmosphere 12, no. 12 (December 7, 2021): 1633. http://dx.doi.org/10.3390/atmos12121633.

Full text
Abstract:
In this work, we present the development of a 4D-Ensemble-Variational (4DEnVar) data assimilation technique to estimate NOx top-down emissions using the regional chemical transport model LOTOS-EUROS with NO2 observations from the TROPOspheric Monitoring Instrument (TROPOMI). The assimilation was performed for a domain in the northwest of South America centered over Colombia and including regions of Panama, Venezuela, and Ecuador. In the 4DEnVar approach, the implementation of the linearized and adjoint models is avoided by generating an ensemble of model simulations and by using this ensemble to approximate the nonlinear model and observation operator. Emission correction parameters' locations were defined for positions where the model simulations showed significant discrepancies with the satellite observations. Using the 4DEnVar data assimilation method, optimal emission parameters for the LOTOS-EUROS model were estimated, allowing for corrections in areas where ground observations are unavailable and the region's emission inventories do not correctly reflect current emission activities. The analyzed 4DEnVar concentrations were compared with ground measurements from one local air quality monitoring network and the data retrieved by the satellite-borne Ozone Monitoring Instrument (OMI). The assimilation had a low impact on NO2 surface concentrations, reducing the mean fractional bias from 0.45 to 0.32 and primarily enhancing the spatial and temporal variations in the simulated NO2 fields.
APA, Harvard, Vancouver, ISO, and other styles
36

Fassin, Didier. "Les économies morales revisitées." Annales. Histoire, Sciences Sociales 64, no. 6 (December 2009): 1235–66. http://dx.doi.org/10.1017/s0395264900027499.

Full text
Abstract:
The concept of moral economies, proposed by E. P. Thompson forty years ago, has since enjoyed an undeniable yet ambiguous success. First, in the 1970s and 1980s, taken up by the political scientist James Scott, it fed an important body of work, mostly anthropological, on the forms of resistance and rebellion of Third World peasantries. Then, in the 1990s and 2000s, following the historian Lorraine Daston, it served to interpret the networks of values and affects embedded in scientific work and, beyond it, in various social worlds. After returning to the original analyses of the concept's inventor to show their tensions and paradoxes, I examine the continuities and ruptures in its multiple descendants, paying particular attention to the theoretical enrichments but also the theoretical abandonments of the recent period. I then advance a definition that is more open than the one initially given (by not limiting the concept to dominated groups and not restricting it to the economic domain) and more critical than the one subsequently adopted (by restoring the political dimension of moral economies), and I offer some illustrations drawn from my empirical work on immigration and violence in different historical contexts to show its heuristic potential.
APA, Harvard, Vancouver, ISO, and other styles
37

Farchi, Alban, and Marc Bocquet. "Review article: Comparison of local particle filters and new implementations." Nonlinear Processes in Geophysics 25, no. 4 (November 12, 2018): 765–807. http://dx.doi.org/10.5194/npg-25-765-2018.

Full text
Abstract:
Abstract. Particle filtering is a generic weighted ensemble data assimilation method based on sequential importance sampling, suited for nonlinear and non-Gaussian filtering problems. Unless the number of ensemble members scales exponentially with the problem size, particle filter (PF) algorithms experience weight degeneracy. This phenomenon is a manifestation of the curse of dimensionality that prevents the use of PF methods for high-dimensional data assimilation. The use of local analyses to counteract the curse of dimensionality was suggested early in the development of PF algorithms. However, implementing localisation in the PF is a challenge, because there is no simple and yet consistent way of gluing together locally updated particles across domains. In this article, we review the ideas related to localisation and the PF in the geosciences. We introduce a generic and theoretical classification of local particle filter (LPF) algorithms, with an emphasis on the advantages and drawbacks of each category. Alongside the classification, we suggest practical solutions to the difficulties of local particle filtering, which lead to new implementations and improvements in the design of LPF algorithms. The LPF algorithms are systematically tested and compared using twin experiments with the one-dimensional Lorenz 40-variables model and with a two-dimensional barotropic vorticity model. The results illustrate the advantages of using the optimal transport theory to design the local analysis. With reasonable ensemble sizes, the best LPF algorithms yield data assimilation scores comparable to those of typical ensemble Kalman filter algorithms, even for a mildly nonlinear system.
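The weight degeneracy discussed in this review can be made concrete with a minimal bootstrap-filter analysis step: Gaussian-likelihood weighting, an effective-sample-size diagnostic, and multinomial resampling. This is a generic sketch, not one of the paper's LPF algorithms; the function name, likelihood, and resampling threshold are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_analysis(particles, obs, obs_var):
    """Bootstrap particle-filter analysis step (illustrative sketch).

    particles: (N, n) state ensemble; obs: (n,) observation of the full state.
    """
    # Importance weights from a Gaussian observation likelihood
    log_w = -0.5 * np.sum((particles - obs) ** 2, axis=1) / obs_var
    w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
    w /= w.sum()
    n_eff = 1.0 / np.sum(w ** 2)      # effective sample size; ~1 when degenerate
    # Resample when too few particles carry almost all the weight
    if n_eff < 0.5 * len(w):
        idx = rng.choice(len(w), size=len(w), p=w)
        particles = particles[idx]
        w = np.full(len(w), 1.0 / len(w))
    return particles, w, n_eff
```

With one particle close to the observation and the rest far away, the effective sample size collapses towards one and resampling duplicates the fitting particle, which is exactly the degeneracy that localisation tries to mitigate in high dimensions.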
APA, Harvard, Vancouver, ISO, and other styles
38

STEBELETSKYI, Myroslav, Eduard MANZIUK, Tetyana SKRYPNYK, and Ruslan BAHRIY. "METHOD OF BUILDING ENSEMBLES OF MODELS FOR DATA CLASSIFICATION BASED ON DECISION CORRELATIONS." Herald of Khmelnytskyi National University. Technical sciences 315, no. 6(1) (December 29, 2022): 224–33. http://dx.doi.org/10.31891/2307-5732-2022-315-6-224-233.

Full text
Abstract:
The scientific work highlights the problem of increasing the accuracy of binary classification predictions using machine learning algorithms. Over the past few decades, systems that consist of many machine learning algorithms, also called ensemble models, have received increasing attention in the computational intelligence and machine learning community. This attention is well deserved, as ensemble systems have proven to be very effective and extremely versatile in a wide range of problem domains and real-world applications. No single algorithm makes a perfect prediction for every data set; machine learning algorithms have their limitations, so creating a model with high accuracy is a difficult task. Building several models and aggregating their results offers a chance to improve the overall accuracy; this is the problem that ensembling addresses. The basis of the binary classification information system is the ensemble model. This model, in turn, contains a set of unique combinations of basic classifiers, a kind of algorithmic primitive. An ensemble model can be considered a meta-algorithm consisting of unique sets of machine learning (ML) classification algorithms. The task of the ensemble model is to find the combination of basic classification algorithms that gives the highest performance, evaluated according to the main ML metrics for classification tasks. Another aspect of the work is the creation of an aggregation mechanism for combining the results of the basic classification algorithms. That is, each unique combination within the ensemble consists of a set of base models (predictors) whose results must be aggregated. In this work, a non-hierarchical clustering method is used to aggregate (average) the predictions of the base models. A feature of this study is finding the correlation coefficients of the base models in each combination. The magnitude of these correlations establishes the relationship between the prediction of a classifier (base model) and the true value, opening space for further research on improving the ensemble model (meta-algorithm).
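The search over unique combinations of base classifiers can be sketched as follows. For simplicity this illustration aggregates by majority vote rather than the paper's non-hierarchical clustering, and uses a plain Pearson coefficient for the model-truth correlations; all function names are assumptions:

```python
from itertools import combinations

def majority_vote(predictions):
    """Majority vote over per-model binary prediction lists (ties -> 0)."""
    return [int(2 * sum(votes) > len(votes)) for votes in zip(*predictions)]

def pearson(a, b):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def best_combination(base_preds, y_true, k):
    """Score every k-model combination by its majority-vote accuracy."""
    def accuracy(pred):
        return sum(p == t for p, t in zip(pred, y_true)) / len(y_true)
    scores = {
        names: accuracy(majority_vote([base_preds[n] for n in names]))
        for names in combinations(sorted(base_preds), k)
    }
    best = max(scores, key=scores.get)
    return best, scores[best]
```

The per-model `pearson` values against the true labels play the role of the correlation coefficients the abstract mentions: base models that correlate strongly with the truth (but weakly with each other) are the most useful ensemble members.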
APA, Harvard, Vancouver, ISO, and other styles
39

Eerola, Tuomas, Kelly Jakubowski, Nikki Moran, Peter E. Keller, and Martin Clayton. "Shared periodic performer movements coordinate interactions in duo improvisations." Royal Society Open Science 5, no. 2 (February 2018): 171520. http://dx.doi.org/10.1098/rsos.171520.

Full text
Abstract:
Human interaction involves the exchange of temporally coordinated, multimodal cues. Our work focused on interaction in the visual domain, using music performance as a case for analysis due to its temporally diverse and hierarchical structures. We made use of two improvising duo datasets—(i) performances of a jazz standard with a regular pulse and (ii) non-pulsed, free improvizations—to investigate whether human judgements of moments of interaction between co-performers are influenced by body movement coordination at multiple timescales. Bouts of interaction in the performances were manually annotated by experts and the performers’ movements were quantified using computer vision techniques. The annotated interaction bouts were then predicted using several quantitative movement and audio features. Over 80% of the interaction bouts were successfully predicted by a broadband measure of the energy of the cross-wavelet transform of the co-performers’ movements in non-pulsed duos. A more complex model, with multiple predictors that captured more specific, interacting features of the movements, was needed to explain a significant amount of variance in the pulsed duos. The methods developed here have key implications for future work on measuring visual coordination in musical ensemble performances, and can be easily adapted to other musical contexts, ensemble types and traditions.
APA, Harvard, Vancouver, ISO, and other styles
40

Abdallah, Zahraa S., and Mohamed Medhat Gaber. "Co-eye: a multi-resolution ensemble classifier for symbolically approximated time series." Machine Learning 109, no. 11 (August 26, 2020): 2029–61. http://dx.doi.org/10.1007/s10994-020-05887-3.

Full text
Abstract:
Abstract Time series classification (TSC) is a challenging task that attracted many researchers in the last few years. One main challenge in TSC is the diversity of domains where time series data come from. Thus, there is no “one model that fits all” in TSC. Some algorithms are very accurate in classifying a specific type of time series when the whole series is considered, while some only target the existence/non-existence of specific patterns/shapelets. Yet other techniques focus on the frequency of occurrences of discriminating patterns/features. This paper presents a new classification technique that addresses the inherent diversity problem in TSC using a nature-inspired method. The technique is stimulated by how flies look at the world through “compound eyes” that are made up of thousands of lenses, called ommatidia. Each ommatidium is an eye with its own lens, and thousands of them together create a broad field of vision. The developed technique similarly uses different lenses and representations to look at the time series, and then combines them for broader visibility. These lenses have been created through hyper-parameterisation of symbolic representations (Piecewise Aggregate and Fourier approximations). The algorithm builds a random forest for each lens, then performs soft dynamic voting for classifying new instances using the most confident eyes, i.e., forests. We evaluate the new technique, coined Co-eye, using the recently released extended version of UCR archive, containing more than 100 datasets across a wide range of domains. The results show the benefits of bringing together different perspectives reflecting on the accuracy and robustness of Co-eye in comparison to other state-of-the-art techniques.
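One of the symbolic representations that Co-eye hyper-parameterises is the Piecewise Aggregate Approximation (PAA), which compresses a series into segment means before symbolisation. A minimal sketch (handling of lengths that do not divide evenly, via integer division, is an assumption):

```python
def paa(series, n_segments):
    """Piecewise Aggregate Approximation: replace a series by the mean of
    each of n_segments (near-)equal-length consecutive segments."""
    n = len(series)
    means = []
    for i in range(n_segments):
        start, end = i * n // n_segments, (i + 1) * n // n_segments
        means.append(sum(series[start:end]) / (end - start))
    return means
```

Varying `n_segments` (and, for the Fourier variant, the number of coefficients) is what produces the different "lenses" through which each random forest in the ensemble views the time series.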
APA, Harvard, Vancouver, ISO, and other styles
41

Yatsenko, Dmitriy, and Sergey Tsybulya. "DIANNA (diffraction analysis of nanopowders) – a software for structural analysis of nanosized powders." Zeitschrift für Kristallographie - Crystalline Materials 233, no. 1 (January 26, 2018): 61–66. http://dx.doi.org/10.1515/zkri-2017-2056.

Full text
Abstract:
DIANNA is a free software developed to simulate atomic models of structures for an ensemble of nanoparticles and to calculate their whole X-ray powder diffraction patterns and the radial distribution function. The main objects of investigation are the particles whose coherent scattering domains do not exceed several nm. DIANNA is based on the ab initio method using the Debye scattering equation. This method makes it possible to obtain information on the atomic structure, shape and size of nanoparticles. It can also be applied to non-periodic materials or coherently ordered nanostructures. Basic program features, methods and some examples are demonstrated.
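The Debye scattering equation underlying DIANNA sums a sinc-weighted contribution over every atom pair, I(q) = Σᵢⱼ fᵢ fⱼ sin(q rᵢⱼ)/(q rᵢⱼ), which needs no periodicity assumption. A direct O(n²) sketch for identical atoms (the function name and the constant scattering factor are simplifying assumptions, not the program's code):

```python
import math

def debye_intensity(coords, q, f=1.0):
    """Debye scattering equation for an ensemble of identical atoms:
    I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij); the i == j terms give f^2."""
    total = 0.0
    for a in coords:
        for b in coords:
            r = math.dist(a, b)
            total += f * f * (math.sin(q * r) / (q * r) if r > 0.0 else 1.0)
    return total
```

Because the sum runs over interatomic distances rather than lattice vectors, the same expression applies to the sub-10 nm, possibly non-periodic particles the abstract describes.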
APA, Harvard, Vancouver, ISO, and other styles
42

Awan, Amna Waheed, Syed Muhammad Usman, Shehzad Khalid, Aamir Anwar, Roobaea Alroobaea, Saddam Hussain, Jasem Almotiri, Syed Sajid Ullah, and Muhammad Usman Akram. "An Ensemble Learning Method for Emotion Charting Using Multimodal Physiological Signals." Sensors 22, no. 23 (December 4, 2022): 9480. http://dx.doi.org/10.3390/s22239480.

Full text
Abstract:
Emotion charting using multimodal signals has gained great demand for stroke-affected patients, for psychiatrists while examining patients, and for neuromarketing applications. Multimodal signals for emotion charting include electrocardiogram (ECG) signals, electroencephalogram (EEG) signals, and galvanic skin response (GSR) signals. EEG, ECG, and GSR are also known as physiological signals, which can be used for identification of human emotions. Due to the unbiased nature of physiological signals, this field has become a great motivation in recent research as physiological signals are generated autonomously from human central nervous system. Researchers have developed multiple methods for the classification of these signals for emotion detection. However, due to the non-linear nature of these signals and the inclusion of noise, while recording, accurate classification of physiological signals is a challenge for emotion charting. Valence and arousal are two important states for emotion detection; therefore, this paper presents a novel ensemble learning method based on deep learning for the classification of four different emotional states including high valence and high arousal (HVHA), low valence and low arousal (LVLA), high valence and low arousal (HVLA) and low valence high arousal (LVHA). In the proposed method, multimodal signals (EEG, ECG, and GSR) are preprocessed using bandpass filtering and independent components analysis (ICA) for noise removal in EEG signals followed by discrete wavelet transform for time domain to frequency domain conversion. Discrete wavelet transform results in spectrograms of the physiological signal and then features are extracted using stacked autoencoders from those spectrograms. A feature vector is obtained from the bottleneck layer of the autoencoder and is fed to three classifiers SVM (support vector machine), RF (random forest), and LSTM (long short-term memory) followed by majority voting as ensemble classification. 
The proposed system is trained and tested on the AMIGOS dataset with k-fold cross-validation. The proposed system obtained the highest accuracy of 94.5% and shows improved results of the proposed method compared with other state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
43

Berg, Andrej, and Christine Peter. "Simulating and analysing configurational landscapes of protein–protein contact formation." Interface Focus 9, no. 3 (April 19, 2019): 20180062. http://dx.doi.org/10.1098/rsfs.2018.0062.

Full text
Abstract:
Interacting proteins can form aggregates and protein–protein interfaces with multiple patterns and different stabilities. Using molecular simulation one would like to understand the formation of these aggregates and which of the observed states are relevant for protein function and recognition. To characterize the complex configurational ensemble of protein aggregates, one needs a quantitative measure for the similarity of structures. We present well-suited descriptors that capture the essential features of non-covalent protein contact formation and domain motion. This set of collective variables is used with a nonlinear multi-dimensional scaling-based dimensionality reduction technique to obtain a low-dimensional representation of the configurational landscape of two ubiquitin proteins from coarse-grained simulations. We show that this two-dimensional representation is a powerful basis to identify meaningful states in the ensemble of aggregated structures and to calculate distributions and free energy landscapes for different sets of simulations. By using a measure to quantitatively compare free energy landscapes we can show how the introduction of a covalent bond between two ubiquitin proteins at different positions alters the configurational states of these dimers.
APA, Harvard, Vancouver, ISO, and other styles
44

Cluzet, Bertrand, Matthieu Lafaysse, Emmanuel Cosme, Clément Albergel, Louis-François Meunier, and Marie Dumont. "CrocO_v1.0: a particle filter to assimilate snowpack observations in a spatialised framework." Geoscientific Model Development 14, no. 3 (March 19, 2021): 1595–614. http://dx.doi.org/10.5194/gmd-14-1595-2021.

Full text
Abstract:
Abstract. Monitoring the evolution of snowpack properties in mountainous areas is crucial for avalanche hazard forecasting and water resources management. In situ and remotely sensed observations provide precious information on the state of the snowpack but usually offer limited spatio-temporal coverage of bulk or surface variables only. In particular, visible–near-infrared (Vis–NIR) reflectance observations can provide information about the snowpack surface properties but are limited by terrain shading and clouds. Snowpack modelling enables the estimation of any physical variable virtually anywhere, but it is affected by large errors and uncertainties. Data assimilation offers a way to combine both sources of information and to propagate information from observed areas to non-observed areas. Here, we present CrocO (Crocus-Observations), an ensemble data assimilation system able to ingest any snowpack observation (applied as a first step to the height of snow (HS) and Vis–NIR reflectances) in a spatialised geometry. CrocO uses an ensemble of snowpack simulations to represent modelling uncertainties and a particle filter (PF) to reduce them. The PF is prone to collapse when assimilating too many observations. Two variants of the PF were specifically implemented to ensure that observational information is propagated in space while tackling this issue. The global algorithm ingests all available observations with an iterative inflation of observation errors, while the klocal algorithm is a localised approach performing a selection of the observations to assimilate based on background correlation patterns. Feasibility testing experiments are carried out in an identical twin experiment setup, with synthetic observations of HS and Vis–NIR reflectances available in only one-sixth of the simulation domain. 
Results show that compared against runs without assimilation, analyses exhibit an average improvement of the snow water equivalent continuous rank probability score (CRPS) of 60 % when assimilating HS with a 40-member ensemble and an average 20 % CRPS improvement when assimilating reflectance with a 160-member ensemble. Significant improvements are also obtained outside the observation domain. These promising results open a possibility for the assimilation of real observations of reflectance or of any snowpack observations in a spatialised context.
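The continuous ranked probability score used to evaluate these analyses can be estimated for an ensemble forecast of a scalar quantity from member-observation and member-member distances. This is a standard estimator sketched for illustration, not the CrocO evaluation code:

```python
def ensemble_crps(members, obs):
    """CRPS of an ensemble forecast against a scalar observation:
    CRPS = E|X - y| - 0.5 E|X - X'|  (lower is better; 0 is perfect)."""
    n = len(members)
    dist_to_obs = sum(abs(x - obs) for x in members) / n
    ens_spread = sum(abs(a - b) for a in members for b in members) / (2 * n * n)
    return dist_to_obs - ens_spread
```

A degenerate ensemble sitting exactly on the observation scores zero; the relative reduction of this score between free and assimilated runs gives improvement percentages like the 60 % quoted above.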
APA, Harvard, Vancouver, ISO, and other styles
45

Schirmer, Tilman, Tjaart A. P. de Beer, Stefanie Tamegger, Alexander Harms, Nikolaus Dietz, David M. Dranow, Thomas E. Edwards, Peter J. Myler, Isabelle Phan, and Christoph Dehio. "Evolutionary Diversification of Host-Targeted Bartonella Effectors Proteins Derived from a Conserved FicTA Toxin-Antitoxin Module." Microorganisms 9, no. 8 (July 31, 2021): 1645. http://dx.doi.org/10.3390/microorganisms9081645.

Full text
Abstract:
Proteins containing a FIC domain catalyze AMPylation and other post-translational modifications (PTMs). In bacteria, they are typically part of FicTA toxin-antitoxin modules that control conserved biochemical processes such as topoisomerase activity, but they have also repeatedly diversified into host-targeted virulence factors. Among these, Bartonella effector proteins (Beps) comprise a particularly diverse ensemble of FIC domains that subvert various host cellular functions. However, no comprehensive comparative analysis has been performed to infer molecular mechanisms underlying the biochemical and functional diversification of FIC domains in the vast Bep family. Here, we used X-ray crystallography, structural modelling, and phylogenetic analyses to unravel the expansion and diversification of Bep repertoires that evolved in parallel in three Bartonella lineages from a single ancestral FicTA toxin-antitoxin module. Our analysis is based on 99 non-redundant Bep sequences and nine crystal structures. Inferred from the conservation of the FIC signature motif that comprises the catalytic histidine and residues involved in substrate binding, about half of them represent AMP transferases. A quarter of Beps show a glutamate in a strategic position in the putative substrate binding pocket that would interfere with triphosphate-nucleotide binding but may allow binding of an AMPylated target for deAMPylation or another substrate to catalyze a distinct PTM. The β-hairpin flap that registers the modifiable target segment to the active site exhibits remarkable structural variability. The corresponding sequences form few well-defined groups that may recognize distinct target proteins. The binding of Beps to promiscuous FicA antitoxins is well conserved, indicating a role of the antitoxin to inhibit enzymatic activity or to serve as a chaperone for the FIC domain before translocation of the Bep into host cells. 
Taken together, our analysis indicates a remarkable functional plasticity of Beps that is mostly brought about by structural changes in the substrate pocket and the target dock. These findings may guide future structure–function analyses of the highly versatile FIC domains.
APA, Harvard, Vancouver, ISO, and other styles
46

Marsh, Joseph A., Chris Neale, Fernando E. Jack, Wing-Yiu Choy, Anna Y. Lee, Karin A. Crowhurst, and Julie D. Forman-Kay. "Improved Structural Characterizations of the drkN SH3 Domain Unfolded State Suggest a Compact Ensemble with Native-like and Non-native Structure." Journal of Molecular Biology 367, no. 5 (April 2007): 1494–510. http://dx.doi.org/10.1016/j.jmb.2007.01.038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Singh, Jaskaran, Narpinder Singh, Mostafa M. Fouda, Luca Saba, and Jasjit S. Suri. "Attention-Enabled Ensemble Deep Learning Models and Their Validation for Depression Detection: A Domain Adoption Paradigm." Diagnostics 13, no. 12 (June 16, 2023): 2092. http://dx.doi.org/10.3390/diagnostics13122092.

Full text
Abstract:
Depression is increasingly prevalent, leading to higher suicide risk. Depression detection and sentimental analysis of text inputs in cross-domain frameworks are challenging. Solo deep learning (SDL) and ensemble deep learning (EDL) models are not robust enough. Recently, attention mechanisms have been introduced in SDL. We hypothesize that attention-enabled EDL (aeEDL) architectures are superior compared to attention-not-enabled SDL (aneSDL) or aeSDL models. We designed EDL-based architectures with attention blocks to build eleven kinds of SDL model and five kinds of EDL model on four domain-specific datasets. We scientifically validated our models by comparing “seen” and “unseen” paradigms (SUP). We benchmarked our results against the SemEval (2016) sentimental dataset and established reliability tests. The mean increase in accuracy for EDL over their corresponding SDL components was 4.49%. Regarding the effect of attention block, the increase in the mean accuracy (AUC) of aeSDL over aneSDL was 2.58% (1.73%), and the increase in the mean accuracy (AUC) of aeEDL over aneEDL was 2.76% (2.80%). When comparing EDL vs. SDL for non-attention and attention, the mean aneEDL was greater than aneSDL by 4.82% (3.71%), and the mean aeEDL was greater than aeSDL by 5.06% (4.81%). For the benchmarking dataset (SemEval), the best-performing aeEDL model (ALBERT+BERT-BiLSTM) was superior to the best aeSDL (BERT-BiLSTM) model by 3.86%. Our scientific validation and robust design showed a difference of only 2.7% in SUP, thereby meeting the regulatory constraints. We validated all our hypotheses and further demonstrated that aeEDL is a very effective and generalized method for detecting symptoms of depression in cross-domain settings.
APA, Harvard, Vancouver, ISO, and other styles
48

SONI, AMEET, and JUDE SHAVLIK. "PROBABILISTIC ENSEMBLES FOR IMPROVED INFERENCE IN PROTEIN-STRUCTURE DETERMINATION." Journal of Bioinformatics and Computational Biology 10, no. 01 (February 2012): 1240009. http://dx.doi.org/10.1142/s0219720012400094.

Full text
Abstract:
Protein X-ray crystallography — the most popular method for determining protein structures — remains a laborious process requiring a great deal of manual crystallographer effort to interpret low-quality protein images. Automating this process is critical in creating a high-throughput protein-structure determination pipeline. Previously, our group developed ACMI, a probabilistic framework for producing protein-structure models from electron-density maps produced via X-ray crystallography. ACMI uses a Markov Random Field to model the three-dimensional (3D) location of each non-hydrogen atom in a protein. Calculating the best structure in this model is intractable, so ACMI uses approximate inference methods to estimate the optimal structure. While previous results have shown ACMI to be the state-of-the-art method on this task, its approximate inference algorithm remains computationally expensive and susceptible to errors. In this work, we develop Probabilistic Ensembles in ACMI (PEA), a framework for leveraging multiple, independent runs of approximate inference to produce estimates of protein structures. Our results show statistically significant improvements in the accuracy of inference resulting in more complete and accurate protein structures. In addition, PEA provides a general framework for advanced approximate inference methods in complex problem domains.
APA, Harvard, Vancouver, ISO, and other styles
49

Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.

Full text
Abstract:
With the expansion of the internet, a major threat has emerged involving the spread of malicious domains intended by attackers to perform illegal activities aiming to target governments, violating privacy of organizations, and even manipulating everyday users. Therefore, detecting these harmful domains is necessary to combat the growing network attacks. Machine Learning (ML) models have shown significant outcomes towards the detection of malicious domains. However, the “black box” nature of the complex ML models obstructs their wide-ranging acceptance in some of the fields. The emergence of Explainable Artificial Intelligence (XAI) has successfully incorporated the interpretability and explicability in the complex models. Furthermore, the post hoc XAI model has enabled the interpretability without affecting the performance of the models. This study aimed to propose an Explainable Artificial Intelligence (XAI) model to detect malicious domains on a recent dataset containing 45,000 samples of malicious and non-malicious domains. In the current study, initially several interpretable ML models, such as Decision Tree (DT) and Naïve Bayes (NB), and black box ensemble models, such as Random Forest (RF), Extreme Gradient Boosting (XGB), AdaBoost (AB), and Cat Boost (CB) algorithms, were implemented and found that XGB outperformed the other classifiers. Furthermore, the post hoc XAI global surrogate model (Shapley additive explanations) and local surrogate LIME were used to generate the explanation of the XGB prediction. Two sets of experiments were performed; initially the model was executed using a preprocessed dataset and later with selected features using the Sequential Forward Feature selection algorithm. The results demonstrate that ML algorithms were able to distinguish benign and malicious domains with overall accuracy ranging from 0.8479 to 0.9856. 
The ensemble classifier XGB achieved the highest result, with an AUC and accuracy of 0.9991 and 0.9856, respectively, before the feature selection algorithm, while there was an AUC of 0.999 and accuracy of 0.9818 after the feature selection algorithm. The proposed model outperformed the benchmark study.
APA, Harvard, Vancouver, ISO, and other styles
50

Bobe, Christin, Daan Hanssens, Thomas Hermans, and Ellen Van De Vijver. "Efficient Probabilistic Joint Inversion of Direct Current Resistivity and Small-Loop Electromagnetic Data." Algorithms 13, no. 6 (June 18, 2020): 144. http://dx.doi.org/10.3390/a13060144.

Full text
Abstract:
Often, multiple geophysical measurements are sensitive to the same subsurface parameters. In this case, joint inversions are mostly preferred over two (or more) separate inversions of the geophysical data sets due to the expected reduction of the non-uniqueness in the joint inverse solution. This reduction can be quantified using Bayesian inversions. However, standard Markov chain Monte Carlo (MCMC) approaches are computationally expensive for most geophysical inverse problems. We present the Kalman ensemble generator (KEG) method as an efficient alternative to the standard MCMC inversion approaches. As proof of concept, we provide two synthetic studies of joint inversion of frequency domain electromagnetic (FDEM) and direct current (DC) resistivity data for a parameter model with vertical variation in electrical conductivity. For both studies, joint results show a considerable improvement for the joint framework over the separate inversions. This improvement consists of (1) an uncertainty reduction in the posterior probability density function and (2) an ensemble mean that is closer to the synthetic true electrical conductivities. Finally, we apply the KEG joint inversion to FDEM and DC resistivity field data. Joint field data inversions improve in the same way seen for the synthetic studies.
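The Kalman ensemble generator sidesteps MCMC by applying a stochastic ensemble-Kalman update in which the FDEM and DC data are simply stacked into one observation vector. A minimal sketch (the function name, diagonal observation errors, and the perturbed-observation variant are assumptions for illustration):

```python
import numpy as np

def keg_update(ens, sim_data, obs, obs_var, seed=1):
    """One stochastic ensemble-Kalman update (illustrative sketch).

    ens     : (N, p) parameter ensemble (e.g. layer conductivities)
    sim_data: (N, m) forward responses per member (FDEM and DC data stacked)
    obs     : (m,)   field data; obs_var: (m,) diagonal error variances
    """
    rng = np.random.default_rng(seed)
    N = len(ens)
    X = ens - ens.mean(axis=0)
    Y = sim_data - sim_data.mean(axis=0)
    C_xy = X.T @ Y / (N - 1)                             # cross-covariance (p, m)
    C_yy = Y.T @ Y / (N - 1)                             # data covariance (m, m)
    K = C_xy @ np.linalg.inv(C_yy + np.diag(obs_var))    # Kalman gain (p, m)
    # Perturbed observations keep the analysed ensemble spread consistent
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_var), size=(N, len(obs)))
    return ens + (obs_pert - sim_data) @ K.T
```

Joint inversion falls out of the stacking: because both data types enter the same gain computation, their combined sensitivity constrains the posterior ensemble more tightly than either data set alone.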
APA, Harvard, Vancouver, ISO, and other styles