
Journal articles on the topic "Variational bayes methods"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "Variational bayes methods".

Next to every work in the list there is an "Add to bibliography" option. Use it, and the bibliographic entry for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are present in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Park, Mijung, James Foulds, Kamalika Chaudhuri, and Max Welling. "Variational Bayes In Private Settings (VIPS)". Journal of Artificial Intelligence Research 68 (May 5, 2020): 109–57. http://dx.doi.org/10.1613/jair.1.11763.

Abstract:
Many applications of Bayesian data analysis involve sensitive information such as personal documents or medical records, motivating methods which ensure that privacy is protected. We introduce a general privacy-preserving framework for Variational Bayes (VB), a widely used optimization-based Bayesian inference method. Our framework respects differential privacy, the gold-standard privacy criterion, and encompasses a large class of probabilistic models, called the Conjugate Exponential (CE) family. We observe that we can straightforwardly privatise VB’s approximate posterior distributions for models in the CE family, by perturbing the expected sufficient statistics of the complete-data likelihood. For a broadly-used class of non-CE models, those with binomial likelihoods, we show how to bring such models into the CE family, such that inferences in the modified model resemble the private variational Bayes algorithm as closely as possible, using the Pólya-Gamma data augmentation scheme. The iterative nature of variational Bayes presents a further challenge since iterations increase the amount of noise needed. We overcome this by combining: (1) an improved composition method for differential privacy, called the moments accountant, which provides a tight bound on the privacy cost of multiple VB iterations and thus significantly decreases the amount of additive noise; and (2) the privacy amplification effect of subsampling mini-batches from large-scale data in stochastic learning. We empirically demonstrate the effectiveness of our method in CE and non-CE models including latent Dirichlet allocation, Bayesian logistic regression, and sigmoid belief networks, evaluated on real-world datasets.
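The basic building block the abstract describes — perturbing the expected sufficient statistics of the complete-data likelihood so that the approximate posterior satisfies differential privacy — can be sketched in a few lines. This is an illustrative sketch using the classical Gaussian-mechanism calibration, not the authors' VIPS implementation; the parameter names (`epsilon`, `delta`, `clip`) are assumptions:

```python
import numpy as np

def privatise_suff_stats(suff_stats, epsilon, delta, clip=1.0, rng=None):
    """Clip each per-example statistic to L2 norm <= clip, sum, and add Gaussian noise."""
    rng = np.random.default_rng(rng)
    norms = np.linalg.norm(suff_stats, axis=1, keepdims=True)
    clipped = suff_stats * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    total = clipped.sum(axis=0)
    # Classical Gaussian-mechanism noise scale for (epsilon, delta)-DP;
    # the L2 sensitivity of the clipped sum is exactly `clip`.
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return total + rng.normal(0.0, sigma, size=total.shape)

# Toy data: 1000 examples, 3-dimensional sufficient statistics.
stats = np.random.default_rng(0).normal(size=(1000, 3))
noisy = privatise_suff_stats(stats, epsilon=1.0, delta=1e-5)
print(noisy.shape)  # (3,)
```

As a sanity check, with a very large privacy budget the injected noise becomes negligible and the privatised statistics approach the clipped sums; the iteration-composition and subsampling refinements discussed in the abstract sit on top of this primitive.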
2

Kaji, Daisuke, and Sumio Watanabe. "Two design methods of hyperparameters in variational Bayes learning for Bernoulli mixtures". Neurocomputing 74, no. 11 (May 2011): 2002–7. http://dx.doi.org/10.1016/j.neucom.2010.06.027.

3

Nakajima, Shinichi, and Sumio Watanabe. "Variational Bayes Solution of Linear Neural Networks and Its Generalization Performance". Neural Computation 19, no. 4 (April 2007): 1112–53. http://dx.doi.org/10.1162/neco.2007.19.4.1112.

Abstract:
It is well known that in unidentifiable models, the Bayes estimation provides much better generalization performance than the maximum likelihood (ML) estimation. However, its accurate approximation by Markov chain Monte Carlo methods requires huge computational costs. As an alternative, a tractable approximation method, called the variational Bayes (VB) approach, has recently been proposed and has been attracting attention. Its advantage over the expectation maximization (EM) algorithm, often used for realizing the ML estimation, has been experimentally shown in many applications; nevertheless, it has not yet been theoretically shown. In this letter, through analysis of the simplest unidentifiable models, we theoretically show some properties of the VB approach. We first prove that in three-layer linear neural networks, the VB approach is asymptotically equivalent to a positive-part James-Stein type shrinkage estimation. Then we theoretically clarify its free energy, generalization error, and training error. Comparing them with those of the ML estimation and the Bayes estimation, we discuss the advantage of the VB approach. We also show that unlike in the Bayes estimation, the free energy and the generalization error are less simply related to each other and that in typical cases, the VB free energy well approximates the Bayes one, while the VB generalization error significantly differs from the Bayes one.
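The positive-part James-Stein shrinkage estimator that the letter relates VB to is simple to state. Here is a minimal generic sketch, assuming a known unit noise variance (not code from the paper):

```python
import numpy as np

def james_stein_positive_part(x, sigma2=1.0):
    """Positive-part James-Stein estimate of the mean of x ~ N(theta, sigma2*I), d >= 3."""
    d = x.shape[0]
    factor = 1.0 - (d - 2) * sigma2 / np.dot(x, x)
    return max(factor, 0.0) * x  # "positive part": truncate at zero rather than flip the sign

weak = james_stein_positive_part(np.array([0.5, -0.3, 0.2, 0.1]))   # shrunk all the way to zero
strong = james_stein_positive_part(np.array([5.0, 3.0, 2.0, 1.0]))  # only mildly shrunk
print(weak, strong)
```

The thresholding at zero mirrors the behaviour analysed in the letter: observations at the noise level are suppressed entirely, while strong signals are shrunk only slightly.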
4

Ma, Zhanyu, and Andrew E. Teschendorff. "A Variational Bayes Beta Mixture Model for Feature Selection in DNA Methylation Studies". Journal of Bioinformatics and Computational Biology 11, no. 04 (July 16, 2013): 1350005. http://dx.doi.org/10.1142/s0219720013500054.

Abstract:
An increasing number of studies are using beadarrays to measure DNA methylation on a genome-wide basis. The purpose is to identify novel biomarkers in a wide range of complex genetic diseases including cancer. A common difficulty encountered in these studies is distinguishing true biomarkers from false positives. While statistical methods aimed at improving the feature selection step have been developed for gene expression, relatively few methods have been adapted to DNA methylation data, which is naturally beta-distributed. Here we explore and propose an innovative application of a recently developed variational Bayesian beta-mixture model (VBBMM) to the feature selection problem in the context of DNA methylation data generated from a highly popular beadarray technology. We demonstrate that VBBMM offers significant improvements in inference and feature selection in this type of data compared to an Expectation-Maximization (EM) algorithm, at a significantly reduced computational cost. We further demonstrate the added value of VBBMM as a feature selection and prioritization step in the context of identifying prognostic markers in breast cancer. A variational Bayesian approach to feature selection of DNA methylation profiles should thus be of value to any study undergoing large-scale DNA methylation profiling in search of novel biomarkers.
5

Svensson, Valentine, Adam Gayoso, Nir Yosef, and Lior Pachter. "Interpretable factor models of single-cell RNA-seq via variational autoencoders". Bioinformatics 36, no. 11 (March 16, 2020): 3418–21. http://dx.doi.org/10.1093/bioinformatics/btaa169.

Abstract:
Motivation: Single-cell RNA-seq makes possible the investigation of variability in gene expression among cells, and dependence of variation on cell type. Statistical inference methods for such analyses must be scalable, and ideally interpretable.
Results: We present an approach based on a modification of a recently published highly scalable variational autoencoder framework that provides interpretability without sacrificing much accuracy. We demonstrate that our approach enables identification of gene programs in massive datasets. Our strategy, namely the learning of factor models with the auto-encoding variational Bayes framework, is not domain specific and may be useful for other applications.
Availability and implementation: The factor model is available in the scVI package hosted at https://github.com/YosefLab/scVI/.
Contact: v@nxn.se
Supplementary information: Supplementary data are available at Bioinformatics online.
6

Yuan, Ke, Mark Girolami, and Mahesan Niranjan. "Markov Chain Monte Carlo Methods for State-Space Models with Point Process Observations". Neural Computation 24, no. 6 (June 2012): 1462–86. http://dx.doi.org/10.1162/neco_a_00281.

Abstract:
This letter considers how a number of modern Markov chain Monte Carlo (MCMC) methods can be applied for parameter estimation and inference in state-space models with point process observations. We quantified the efficiencies of these MCMC methods on synthetic data, and our results suggest that the Riemannian manifold Hamiltonian Monte Carlo method offers the best performance. We further compared such a method with a previously tested variational Bayes method on two experimental data sets. Results indicate similar performance on the large data sets and superior performance on small ones. The work offers an extensive suite of MCMC algorithms evaluated on an important class of models for physiological signal analysis.
7

Zhao, Yuexuan, and Jing Huang. "Dirichlet Process Prior for Student's t Graph Variational Autoencoders". Future Internet 13, no. 3 (March 16, 2021): 75. http://dx.doi.org/10.3390/fi13030075.

Abstract:
Graph variational auto-encoder (GVAE) is a model that combines neural networks and Bayes methods, capable of exploring more deeply the latent features that influence graph reconstruction. However, several pieces of research based on GVAE employ a plain prior distribution for latent variables, for instance, the standard normal distribution (N(0,1)). Although this kind of simple distribution has the advantage of convenient calculation, it also makes the latent variables carry relatively little helpful information. The lack of adequate expression of nodes will inevitably affect the process of generating graphs, which will eventually lead to the discovery of only external relations and the neglect of some complex internal correlations. In this paper, we present a novel prior distribution for GVAE, called the Dirichlet process (DP) construction for Student's t (St) distribution. The DP allows the latent variables to adapt their complexity during learning and then cooperates with the heavy-tailed St distribution to approach sufficient node representation. Experimental results show that this method can achieve a relatively better performance against the baselines.
8

Shapovalova, Yuliya. "'Exact' and Approximate Methods for Bayesian Inference: Stochastic Volatility Case Study". Entropy 23, no. 4 (April 15, 2021): 466. http://dx.doi.org/10.3390/e23040466.

Abstract:
We conduct a case study in which we empirically illustrate the performance of different classes of Bayesian inference methods to estimate stochastic volatility models. In particular, we consider how different particle filtering methods affect the variance of the estimated likelihood. We review and compare particle Markov Chain Monte Carlo (MCMC), RMHMC, fixed-form variational Bayes, and integrated nested Laplace approximation to estimate the posterior distribution of the parameters. Additionally, we conduct the review from the point of view of whether these methods are (1) easily adaptable to different model specifications; (2) adaptable to higher dimensions of the model in a straightforward way; (3) feasible in the multivariate case. We show that when using the stochastic volatility model for methods comparison, various data-generating processes have to be considered to make a fair assessment of the methods. Finally, we present a challenging specification of the multivariate stochastic volatility model, which is rarely used to illustrate the methods but constitutes an important practical application.
9

Bresson, Georges, Anoop Chaturvedi, Mohammad Arshad Rahman, and Shalabh. "Seemingly unrelated regression with measurement error: estimation via Markov Chain Monte Carlo and mean field variational Bayes approximation". International Journal of Biostatistics 17, no. 1 (September 21, 2020): 75–97. http://dx.doi.org/10.1515/ijb-2019-0120.

Abstract:
Linear regression with measurement error in the covariates is a heavily studied topic; however, the statistics/econometrics literature is almost silent on estimating a multi-equation model with measurement error. This paper considers a seemingly unrelated regression model with measurement error in the covariates and introduces two novel estimation methods: a pure Bayesian algorithm (based on Markov chain Monte Carlo techniques) and its mean field variational Bayes (MFVB) approximation. The MFVB method has the added advantage of being computationally fast and can handle big data. An issue pertinent to measurement error models is parameter identification, and this is resolved by employing a prior distribution on the measurement error variance. The methods are shown to perform well in multiple simulation studies, where we analyze the impact on posterior estimates for different values of the reliability ratio or the variance of the true unobserved quantity used in the data generating process. The paper further implements the proposed algorithms in an application drawn from the health literature and shows that modeling measurement error in the data can improve model fitting.
10

Tichý, Ondřej, and Václav Šmídl. "Estimation of input function from dynamic PET brain data using Bayesian blind source separation". Computer Science and Information Systems 12, no. 4 (2015): 1273–87. http://dx.doi.org/10.2298/csis141201051t.

Abstract:
Selection of regions of interest in an image sequence is a typical prerequisite step for estimation of time-activity curves in dynamic positron emission tomography (PET). This procedure is done manually by a human operator and therefore suffers from subjective errors. Another such problem is estimating the input function: it can be measured from arterial blood, or a vascular structure can be sought in the images, which is difficult, unreliable, and often impossible. In this study, we focus on blind source separation methods that require no manual interaction. Recently, we developed the sparse blind source separation and deconvolution (S-BSS-vecDC) method for separation of original sources from dynamic medical data, based on probability modeling and the Variational Bayes approximation methodology. In this paper, we extend this method and apply it to dynamic brain PET data; we also compare the derived algorithms with those making similar assumptions. The S-BSS-vecDC algorithm is publicly available for download.
11

Takiyama, Ken, and Masato Okada. "Detection of Hidden Structures in Nonstationary Spike Trains". Neural Computation 23, no. 5 (May 2011): 1205–33. http://dx.doi.org/10.1162/neco_a_00109.

Abstract:
We propose an algorithm for simultaneously estimating state transitions among neural states and nonstationary firing rates using a switching state-space model (SSSM). This algorithm enables us to detect state transitions on the basis of not only discontinuous changes in mean firing rates but also discontinuous changes in the temporal profiles of firing rates (e.g., temporal correlation). We construct estimation and learning algorithms for a nongaussian SSSM, whose nongaussian property is caused by binary spike events. Local variational methods can transform the binary observation process into a quadratic form. The transformed observation process enables us to construct a variational Bayes algorithm that can determine the number of neural states based on automatic relevance determination. Additionally, our algorithm can estimate model parameters from single-trial data using a priori knowledge about state transitions and firing rates. Synthetic data analysis reveals that our algorithm has higher performance for estimating nonstationary firing rates than previous methods. The analysis also confirms that our algorithm can detect state transitions on the basis of discontinuous changes in temporal correlation, which are transitions that previous hidden Markov models could not detect. We also analyze neural data recorded from the medial temporal area. The statistically detected neural states probably coincide with transient and sustained states that have been detected heuristically. Estimated parameters suggest that our algorithm detects the state transitions on the basis of discontinuous changes in the temporal correlation of firing rates. These results suggest that our algorithm is advantageous in real-data analysis.
12

Deleforge, Antoine, Florence Forbes, and Radu Horaud. "Acoustic Space Learning for Sound-Source Separation and Localization on Binaural Manifolds". International Journal of Neural Systems 25, no. 01 (January 6, 2015): 1440003. http://dx.doi.org/10.1142/s0129065714400036.

Abstract:
In this paper, we address the problems of modeling the acoustic space generated by a full-spectrum sound source and using the learned model for the localization and separation of multiple sources that simultaneously emit sparse-spectrum sounds. We lay theoretical and methodological grounds in order to introduce the binaural manifold paradigm. We perform an in-depth study of the latent low-dimensional structure of the high-dimensional interaural spectral data, based on a corpus recorded with a human-like audiomotor robot head. A nonlinear dimensionality reduction technique is used to show that these data lie on a two-dimensional (2D) smooth manifold parameterized by the motor states of the listener, or equivalently, the sound-source directions. We propose a probabilistic piecewise affine mapping model (PPAM) specifically designed to deal with high-dimensional data exhibiting an intrinsic piecewise linear structure. We derive a closed-form expectation-maximization (EM) procedure for estimating the model parameters, followed by Bayes inversion for obtaining the full posterior density function of a sound-source direction. We extend this solution to deal with missing data and redundancy in real-world spectrograms, and hence for 2D localization of natural sound sources such as speech. We further generalize the model to the challenging case of multiple sound sources and we propose a variational EM framework. The associated algorithm, referred to as variational EM for source separation and localization (VESSL) yields a Bayesian estimation of the 2D locations and time-frequency masks of all the sources. Comparisons of the proposed approach with several existing methods reveal that the combination of acoustic-space learning with Bayesian inference enables our method to outperform state-of-the-art methods.
13

Milosevic, Sara, Philipp Frank, Reimar H. Leike, Ancla Müller, and Torsten A. Enßlin. "Bayesian decomposition of the Galactic multi-frequency sky using probabilistic autoencoders". Astronomy & Astrophysics 650 (June 2021): A100. http://dx.doi.org/10.1051/0004-6361/202039435.

Abstract:
Context: All-sky observations show both Galactic and non-Galactic diffuse emission, for example from interstellar matter or the cosmic microwave background (CMB). The decomposition of the emission into different underlying radiative components is an important signal reconstruction problem.
Aims: We aim to reconstruct radiative all-sky components using spectral data, without incorporating knowledge about physical or spatial correlations.
Methods: We built a self-instructing algorithm based on variational autoencoders following three steps: (1) we stated a forward model describing how the data set was generated from a smaller set of features, (2) we used Bayes' theorem to derive a posterior probability distribution, and (3) we used variational inference and statistical independence of the features to approximate the posterior. From this, we derived a loss function and optimized it with neural networks. The resulting algorithm contains a quadratic error norm with a self-adaptive variance estimate to minimize the number of hyperparameters. We trained our algorithm on independent pixel vectors, each vector representing the spectral information of the same pixel in 35 Galactic all-sky maps ranging from the radio to the γ-ray regime.
Results: The algorithm calculates a compressed representation of the input data. We find that the feature maps derived in the algorithm's latent space show spatial structures that can be associated with all-sky representations of known astrophysical components. Our resulting feature maps encode (1) the dense interstellar medium (ISM), (2) the hot and dilute regions of the ISM, and (3) the CMB, without being informed about these components a priori.
Conclusions: We conclude that Bayesian signal reconstruction with independent Gaussian latent space statistics is sufficient to reconstruct the dense and the dilute ISM, as well as the CMB, from spectral correlations only. The computational approximation of the posterior can be performed efficiently using variational inference and neural networks, making them a suitable approach to probabilistic data analysis.
14

Tichý, Ondřej, Václav Šmídl, Radek Hofman, and Andreas Stohl. "LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination". Geoscientific Model Development 9, no. 11 (November 25, 2016): 4297–311. http://dx.doi.org/10.5194/gmd-9-4297-2016.

Abstract:
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
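The idea of replacing hand-tuned uncertainties with estimates obtained by variational Bayes can be illustrated on a toy linear model y = Mx + e. The sketch below is a generic mean-field VB iteration with Gamma hyperpriors on the noise and prior precisions; it illustrates the principle only and is not the LS-APC algorithm, and the hyperparameter values `a0`, `b0` are assumed broad defaults:

```python
import numpy as np

def vb_linear_inverse(M, y, n_iter=50, a0=1e-6, b0=1e-6):
    """Mean-field VB for y = M x + e with estimated noise/prior precisions."""
    n, d = M.shape
    e_omega, e_alpha = 1.0, 1.0  # initial E[noise precision], E[prior precision]
    for _ in range(n_iter):
        # q(x) = N(mu, Sigma)
        Sigma = np.linalg.inv(e_omega * M.T @ M + e_alpha * np.eye(d))
        mu = e_omega * Sigma @ M.T @ y
        # q(omega): Gamma update from the expected squared residual
        resid = y - M @ mu
        e_omega = (a0 + n / 2) / (b0 + 0.5 * (resid @ resid + np.trace(M @ Sigma @ M.T)))
        # q(alpha): Gamma update from the expected squared norm of x
        e_alpha = (a0 + d / 2) / (b0 + 0.5 * (mu @ mu + np.trace(Sigma)))
    return mu, Sigma, e_omega

# Synthetic test problem: 200 observations, 5 unknown source terms.
rng = np.random.default_rng(1)
M = rng.normal(size=(200, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = M @ x_true + rng.normal(scale=0.1, size=200)
mu, _, e_omega = vb_linear_inverse(M, y)
print(np.round(mu, 2))
```

Note that nothing here is tuned by hand: the noise precision `e_omega` converges toward the true value (100 for noise of standard deviation 0.1), which is the "tuning-free" property the abstract emphasises.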
15

Yue, Zongsheng, Deyu Meng, Yongqing Sun, and Qian Zhao. "Hyperspectral Image Restoration under Complex Multi-Band Noises". Remote Sensing 10, no. 10 (October 14, 2018): 1631. http://dx.doi.org/10.3390/rs10101631.

Abstract:
Hyperspectral images (HSIs) are always corrupted by complicated forms of noise during the acquisition process, such as Gaussian noise, impulse noise, stripes, deadlines and so on. Specifically, different bands of the practical HSIs generally contain different noises of evidently distinct type and extent. While current HSI restoration methods give less consideration to such band-noise-distinctness issues, this study elaborately constructs a new HSI restoration technique, aimed at more faithfully and comprehensively taking such noise characteristics into account. Particularly, through a two-level hierarchical Dirichlet process (HDP) to model the HSI noise structure, the noise of each band is depicted by a Dirichlet process Gaussian mixture model (DP-GMM), in which its complexity can be flexibly adapted in an automatic manner. Besides, the DP-GMM of each band comes from a higher level DP-GMM that relates the noise of different bands. The variational Bayes algorithm is also designed to solve this model, and closed-form updating equations for all involved parameters are deduced. The experiment indicates that, in terms of the mean peak signal-to-noise ratio (MPSNR), the proposed method is on average 1 dB higher compared with the existing state-of-the-art methods, as well as performing better in terms of the mean structural similarity index (MSSIM) and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS).
16

Sun, Yang, Jungang Yang, Miao Li, and Wei An. "Infrared Small-Faint Target Detection Using Non-i.i.d. Mixture of Gaussians and Flux Density". Remote Sensing 11, no. 23 (November 28, 2019): 2831. http://dx.doi.org/10.3390/rs11232831.

Abstract:
The robustness of infrared small-faint target detection methods to noisy situations has been a challenging and meaningful research topic. The targets are usually spatially small due to the far observation distance. Since the noise-distribution assumption underlying the existing methods is impractical, a state-of-the-art method has been developed to dig out valuable information in the temporal domain and separate small-faint targets from background noise. However, there are still two drawbacks: (1) the mixture of Gaussians (MoG) model assumes that noise of different frames satisfies independent and identical distribution (i.i.d.); (2) the assumption of Markov random field (MRF) would fail in more complex noise scenarios. In real scenarios, the noise is actually more complicated than the MoG model. To address this problem, a method using the non-i.i.d. mixture of Gaussians (NMoG) with modified flux density (MFD) is proposed in this paper. We first construct a novel data structure containing spatial and temporal information from an infrared image sequence. Then, we use an NMoG model to describe the noise, which can be separated from the background via the variational Bayes algorithm. Finally, we can select the component containing true targets through the obvious difference between target and noise in an MFD map. Extensive experiments demonstrate that the proposed method performs better in complicated noisy scenarios than the competitive approaches.
17

Soleimani, Hossein, and David J. Miller. "Semisupervised, Multilabel, Multi-Instance Learning for Structured Data". Neural Computation 29, no. 4 (April 2017): 1053–102. http://dx.doi.org/10.1162/neco_a_00939.

Abstract:
Many classification tasks require both labeling objects and determining label associations for parts of each object. Example applications include labeling segments of images or determining relevant parts of a text document when the training labels are available only at the image or document level. This task is usually referred to as multi-instance (MI) learning, where the learner typically receives a collection of labeled (or sometimes unlabeled) bags, each containing several segments (instances). We propose a semisupervised MI learning method for multilabel classification. Most MI learning methods treat instances in each bag as independent and identically distributed samples. However, in many practical applications, instances are related to each other and should not be considered independent. Our model discovers a latent low-dimensional space that captures structure within each bag. Further, unlike many other MI learning methods, which are primarily developed for binary classification, we model multiple classes jointly, thus also capturing possible dependencies between different classes. We develop our model within a semisupervised framework, which leverages both labeled and, typically, a larger set of unlabeled bags for training. We develop several efficient inference methods for our model. We first introduce a Markov chain Monte Carlo method for inference, which can handle arbitrary relations between bag labels and instance labels, including the standard hard-max MI assumption. We also develop an extension of our model that uses stochastic variational Bayes methods for inference, and thus scales better to massive data sets. Experiments show that our approach outperforms several MI learning and standard classification methods on both bag-level and instance-level label prediction. All code for replicating our experiments is available from https://github.com/hsoleimani/MLTM.
18

Alhalaseh, Rania, and Suzan Alasasfeh. "Machine-Learning-Based Emotion Recognition System Using EEG Signals". Computers 9, no. 4 (November 30, 2020): 95. http://dx.doi.org/10.3390/computers9040095.

Abstract:
Many scientific studies have been concerned with building an automatic system to recognize emotions, and building such systems usually relies on brain signals. These studies have shown that brain signals can be used to classify many emotional states. This process is considered difficult, especially since the brain's signals are not stable. Human emotions are generated as a result of reactions to different emotional states, which affect brain signals. Thus, the performance of emotion recognition systems by brain signals depends on the efficiency of the algorithms used to extract features, the feature selection algorithm, and the classification process. Recently, the study of electroencephalography (EEG) signaling has received much attention due to the availability of several standard databases, especially since brain signal recording devices have become available in the market, including wireless ones, at reasonable prices. This work aims to present an automated model for identifying emotions based on EEG signals. The proposed model focuses on creating an effective method that combines the basic stages of EEG signal handling and feature extraction. Different from previous studies, the main contribution of this work lies in using empirical mode decomposition/intrinsic mode functions (EMD/IMF) and variational mode decomposition (VMD) for signal processing purposes. Despite the fact that EMD/IMFs and VMD methods are widely used in biomedical and disease-related studies, they are not commonly utilized in emotion recognition. In other words, the methods used in the signal processing stage in this work are different from the methods used in the literature. After the signal processing stage, namely in the feature extraction stage, two well-known technologies were used: entropy and Higuchi's fractal dimension (HFD).
Finally, in the classification stage, four classification methods were used—naïve Bayes, k-nearest neighbor (k-NN), convolutional neural network (CNN), and decision tree (DT)—for classifying emotional states. To evaluate the performance of our proposed model, experiments were applied to a common database called DEAP based on many evaluation models, including accuracy, specificity, and sensitivity. The experiments showed the efficiency of the proposed method; a 95.20% accuracy was achieved using the CNN-based method.
APA, Harvard, Vancouver, ISO and other citation styles
19

Holota, P. "Variational methods in geoid determination and function bases". Physics and Chemistry of the Earth, Part A: Solid Earth and Geodesy 24, No. 1 (January 1999): 3–14. http://dx.doi.org/10.1016/s1464-1895(98)00003-9.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
20

Pös, Ondrej, Jan Radvanszky, Jakub Styk, Zuzana Pös, Gergely Buglyó, Michal Kajsik, Jaroslav Budis, Bálint Nagy and Tomas Szemes. "Copy Number Variation: Methods and Clinical Applications". Applied Sciences 11, No. 2 (16.01.2021): 819. http://dx.doi.org/10.3390/app11020819.

Full text of the source
Annotation:
Gains and losses of large segments of genomic DNA, known as copy number variants (CNVs), have gained considerable interest in clinical diagnostics lately, as particular forms may lead to inherited genetic diseases. In recent decades, researchers have developed a wide variety of cytogenetic and molecular methods with different detection capabilities to detect clinically relevant CNVs. In this review, we summarize methodological progress from conventional approaches to current state-of-the-art techniques capable of detecting CNVs from a few bases up to several megabases. Although the recent rapid progress of sequencing methods has enabled precise detection of CNVs, determining their functional effect on cellular and whole-body physiology remains a challenge. Here, we provide a comprehensive list of databases and bioinformatics tools that may serve as useful assets for researchers, laboratory diagnosticians, and clinical geneticists facing the challenge of CNV detection and interpretation.
APA, Harvard, Vancouver, ISO and other citation styles
21

Cao, Xiao-Qun, Ya-Nan Guo, Shi-Cheng Hou, Cheng-Zhuo Zhang and Ke-Cheng Peng. "Variational Principles for Two Kinds of Coupled Nonlinear Equations in Shallow Water". Symmetry 12, No. 5 (22.05.2020): 850. http://dx.doi.org/10.3390/sym12050850.

Full text of the source
Annotation:
It is a very important but difficult task to seek explicit variational formulations for nonlinear and complex models, because variational principles are the theoretical bases for many methods to solve or analyze nonlinear problems. By skillfully designing the trial Lagrangian functional, different groups of variational principles are successfully constructed for two kinds of coupled nonlinear equations in shallow water, i.e., the Broer-Kaup equations and the (2+1)-dimensional dispersive long-wave equations, respectively. Both of them admit many kinds of soliton solutions, which are always symmetric or anti-symmetric in space. Subsequently, the obtained variational principles are proved correct by minimizing the functionals with the calculus of variations. The established variational principles are reported here for the first time; they can help in studying the symmetries and finding conserved quantities of the equations considered, and might find many applications in numerical simulation.
APA, Harvard, Vancouver, ISO and other citation styles
22

Aprizal, Yarza, Rabin Ibnu Zainal and Afriyudi Afriyudi. "Perbandingan Metode Backpropagation dan Learning Vector Quantization (LVQ) Dalam Menggali Potensi Mahasiswa Baru di STMIK PalComTech". MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer 18, No. 2 (30.05.2019): 294–301. http://dx.doi.org/10.30812/matrik.v18i2.387.

Full text of the source
Annotation:
This research aims to compare the backpropagation and Learning Vector Quantization (LVQ) methods in exploring the potential of new students at STMIK PalComTech. The comparison involves four input variables, consisting of four basic subjects of informatics engineering and information systems (mathematics, basic programming, computer networks, and management bases), with informatics engineering and information systems as the outputs. To obtain a high level of accuracy, the researchers used several variations of parameters, which eventually produced the best accuracy for each of the two methods. A total of 120 data points were tested using variations of test data and training data, which were then processed using variations of the learning rate and epoch parameters. From the test results, the pattern recognition accuracy of the backpropagation method is 99.17% with a learning rate of 0.1 and 100 epochs, while the learning vector quantization method has an accuracy of 96.67% with a learning rate of 1 and 20 epochs. From this comparison, the backpropagation method is superior in terms of accuracy, making it the more suitable method for exploring the potential of new students at STMIK PalComTech.
APA, Harvard, Vancouver, ISO and other citation styles
23

Levsen, N. D., D. J. Crawford, J. K. Archibald, A. Santos-Geurra and M. E. Mort. "Nei's to Bayes': comparing computational methods and genetic markers to estimate patterns of genetic variation in Tolpis (Asteraceae)". American Journal of Botany 95, No. 11 (08.10.2008): 1466–74. http://dx.doi.org/10.3732/ajb.0800091.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
24

Norberg, Ragnar. "Linear Estimation and Credibility in Continuous Time". ASTIN Bulletin 22, No. 2 (November 1992): 149–65. http://dx.doi.org/10.2143/ast.22.2.2005112.

Full text of the source
Annotation:
Abstract: The theory of linear filtering of stochastic processes provides continuous time analogues of finite-dimensional linear Bayes estimators known to actuaries as credibility methods. In the present paper a self-contained theory is built for processes of bounded variation, which are of particular relevance to insurance. Two methods for constructing the optimal estimator and its mean squared error are devised. Explicit solutions are obtained in a continuous time variation of Hachemeister's regression model and in a homogeneous doubly stochastic generalized Poisson process. The traditional discrete time set-up is compared to the continuous time one, and some merits of the latter are pointed out.
APA, Harvard, Vancouver, ISO and other citation styles
25

Joshi, Sarang C., Michael I. Miller and Ulf Grenander. "On the Geometry and Shape of Brain Sub-Manifolds". International Journal of Pattern Recognition and Artificial Intelligence 11, No. 08 (December 1997): 1317–43. http://dx.doi.org/10.1142/s0218001497000615.

Full text of the source
Annotation:
This paper develops mathematical representations for neuro-anatomically significant substructures of the brain and their variability in a population. The focus of the paper is on the neuro-anatomical variation of the geometry and the "shape" of two-dimensional surfaces in the brain. As examples, we focus on the cortical and hippocampal surfaces in an ensemble of Macaque monkeys and human MRI brains. The "shapes" of the substructures are quantified via the construction of templates; the variations are represented by defining probabilistic deformations of the template. Methods for empirically estimating probability measures on these deformations are developed by representing the deformations as Gaussian random vector fields on the embedded sub-manifolds. The Gaussian random vector fields are constructed as quadratic mean limits using complete orthonormal bases on the sub-manifolds. The complete orthonormal bases are generated using modes of vibrations of the geometries of the brain sub-manifolds. The covariances are empirically estimated from an ensemble of brain data. Principal component analysis is presented for characterizing the "eigen-shape" of the hippocampus in an ensemble of MRI-MPRAGE whole brain images. Clustering based on eigen-shape is presented for two sub-populations of normal and schizophrenic subjects.
APA, Harvard, Vancouver, ISO and other citation styles
26

Wang, Yaguang, Xiaoming Liu, Zhiqing Xie, Huimin Wang, Wei Zhang and Yang Xue. "Rapid Evaluation of the Pozzolanic Activity of Bayer Red Mud by a Polymerization Degree Method: Correlations with Alkali Dissolution of (Si+Al) and Strength". Materials 14, No. 19 (24.09.2021): 5546. http://dx.doi.org/10.3390/ma14195546.

Full text of the source
Annotation:
A large amount of Bayer process red mud is discharged in the process of alumina production, which has caused significant pollution in the environment. The pozzolanic activity of Bayer red mud as a supplementary cementitious material is a research hotspot. In this work, a new method for Fourier-transform infrared spectrometry is used to determine the polymerization degree of Bayer red mud in order to evaluate its pozzolanic activity. Based on the results of the dissolution concentration of (Si+Al), strength index and polymerization degree of Bayer red mud, the relationships between different evaluation methods were analyzed, and the relevant calculation formulas of pozzolanic activity were obtained. The results showed that different evaluation methods can reflect the variation law of pozzolanic activity in Bayer red mud. The polymerization degree of Bayer red mud had a good linear relationship with the pozzolanic activity index obtained by the strength index and dissolution concentration of (Si+Al), respectively. The polymerization degree was negatively correlated with pozzolanic activity index and dissolution concentration of (Si+Al), and the correlation coefficients were greater than 0.85. Therefore, this method was found to be effective and hence can be used as a rapid and simple test for pozzolanic activity evaluation of Bayer red mud.
APA, Harvard, Vancouver, ISO and other citation styles
27

Jakubus, Monika, Mirosław Krzyśko, Waldemar Wołyński and Małgorzata Graczyk. "The mineralization effect of wheat straw on soil properties described by MFPC analysis and other methods". Biometrical Letters 53, No. 2 (01.12.2016): 133–47. http://dx.doi.org/10.1515/bile-2016-0010.

Full text of the source
Annotation:
Abstract: Recycling of crop residues is essential to sustain soil fertility and crop production. Despite the positive effect of straw incorporation, the slow decomposition of this organic substance is a serious issue. The aim of the study was to assess the influence of winter wheat straws with different degrees of stem solidness on the rate of decomposition and on soil properties. An incubation experiment lasting 425 days was carried out under controlled conditions. To perform analyses, soil samples were collected after 7, 14, 21, 28, 35, 49, 63, 77, 91, 119, 147, 175, 203, 231, 259, 313, 341, 369, 397 and 425 days of incubation. The addition of two types of winter wheat straw with different degrees of stem solidness to the sandy soil differentiated the experimental treatments. The results demonstrate that straw mineralization was a relatively slow process and did not depend on the degree of filling of the stem by pith. Multivariate functional principal component analysis (MFPC) demonstrated significant variation between the control soil and the soil incubated with the straws. The first functional principal component describes 48.53%, and the second 18.55%, of the variability of soil properties. Organic carbon, mineral nitrogen and sum of bases influence the first functional principal component, whereas magnesium, sum of bases and total nitrogen influence the second.
APA, Harvard, Vancouver, ISO and other citation styles
28

Sherif, Fayroz F., Nourhan Zayed and Mahmoud Fakhr. "Discovering Alzheimer Genetic Biomarkers Using Bayesian Networks". Advances in Bioinformatics 2015 (23.08.2015): 1–8. http://dx.doi.org/10.1155/2015/639367.

Full text of the source
Annotation:
Single nucleotide polymorphisms (SNPs) contribute most of the genetic variation to the human genome. SNPs associate with many complex and common diseases like Alzheimer’s disease (AD). Discovering SNP biomarkers at different loci can improve early diagnosis and treatment of these diseases. Bayesian networks provide a comprehensible and modular framework for representing interactions between genes or single SNPs. Here, different Bayesian network structure learning algorithms have been applied to whole genome sequencing (WGS) data for detecting the causal AD SNPs and gene-SNP interactions. We focused on polymorphisms in the top ten genes associated with AD and identified by genome-wide association (GWA) studies. New SNP biomarkers were observed to be significantly associated with Alzheimer’s disease. These SNPs are rs7530069, rs113464261, rs114506298, rs73504429, rs7929589, rs76306710, and rs668134. The obtained results demonstrated the effectiveness of using Bayesian networks for identifying AD causal SNPs with acceptable accuracy. The results confirm that the SNP set detected by Markov blanket based methods has a strong association with AD and achieves better performance than both naïve Bayes and tree-augmented naïve Bayes. The minimal augmented Markov blanket reaches an accuracy of 66.13% and a sensitivity of 88.87%, versus 61.58% and 59.43% for naïve Bayes, respectively.
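For readers unfamiliar with the naïve Bayes baseline this entry compares against, a minimal Gaussian naïve Bayes classifier can be written in a few lines of NumPy. This is a generic sketch, not the cited study's code, and the toy data in the usage example are invented:

```python
import numpy as np

class GaussianNB:
    """Naive Bayes with one independent Gaussian per feature per class."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.prior = np.array([(y == c).mean() for c in self.classes])
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        return self

    def predict(self, X):
        # log p(c) + sum_j log N(x_j | mu_cj, var_cj): the "naive"
        # independence assumption makes the joint likelihood a product.
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None, :, :]
                     + (X[:, None, :] - self.mu[None, :, :]) ** 2
                     / self.var[None, :, :]).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior)[None, :], axis=1)]
```

The Markov blanket methods in the abstract improve on this baseline precisely by dropping the independence assumption and modeling SNP-SNP interactions.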
APA, Harvard, Vancouver, ISO and other citation styles
29

Ball, Vaughn, Luis Tenorio, Christian Schiøtt, Michelle Thomas and J. P. Blangy. "Three-term amplitude-variation-with-offset projections". GEOPHYSICS 83, No. 5 (01.09.2018): N51–N65. http://dx.doi.org/10.1190/geo2017-0763.1.

Full text of the source
Annotation:
A three-term (3T) amplitude-variation-with-offset projection is a weighted sum of three elastic reflectivities. Parameterization of the weighting coefficients requires two angle parameters, which we denote by the pair [Formula: see text]. Visualization of this pair is accomplished using a globe-like cartographic representation, in which longitude is [Formula: see text], and latitude is [Formula: see text]. Although the formal extension of existing two-term (2T) projection methods to 3T methods is trivial, practical implementation requires a more comprehensive inversion framework than is required in 2T projections. We distinguish between projections of true elastic reflectivities computed from well logs and reflectivities estimated from seismic data. When elastic reflectivities are computed from well logs, their projection relationships are straightforward, and they are given in a form that depends only on elastic properties. In contrast, projection relationships between reflectivities estimated from seismic may also depend on the maximum angle of incidence and the specific reflectivity inversion method used. Such complications related to projections of seismic-estimated elastic reflectivities are systematized in a 3T projection framework by choosing an unbiased reflectivity triplet as the projection basis. Other biased inversion estimates are then given exactly as 3T projections of the unbiased basis. The 3T projections of elastic reflectivities are connected to Bayesian inversion of other subsurface properties through the statistical notion of Bayesian sufficiency. The triplet of basis reflectivities is computed so that it is Bayes sufficient for all rock properties in the hierarchical seismic rock-physics model; that is, the projection basis contains all information about rock properties that is contained in the original seismic.
APA, Harvard, Vancouver, ISO and other citation styles
30

Vynnykov, Yuriy, Muhlis Hajiyev, Aleksej Aniskin and Irina Miroshnychenko. "Improvement of settlement calculations of building foundations by increasing the reliability of determining soil compressibility indices". ACADEMIC JOURNAL Series: Industrial Machine Building, Civil Engineering 1, No. 52 (05.07.2019): 115–23. http://dx.doi.org/10.26906/znp.2019.52.1684.

Full text of the source
Annotation:
Ways to improve the methods of calculating foundation base settlements by increasing the reliability of determining the soil compressibility indices are substantiated. A complex approach to refining the calculation of building base settlements by the layer summation method is investigated by accounting for: the variability of the soil deformation modulus over the full pressure range perceived by the base under loading; the soil strength coefficient βZ; soil deformation anisotropy via an elastic orthotropic model; and trends in the variation of the soil deformation modulus with depth under the foundations and within artificial bases built with soil compaction. The possibility of increasing the accuracy of the method for predicting building foundation settlement was also demonstrated, using the soil compression index and accounting for the effect of pressure on the soil deformation parameters with depth in the compressible strata.
APA, Harvard, Vancouver, ISO and other citation styles
31

Myers, Ransom A., Brian R. MacKenzie, Keith G. Bowen and Nicholas J. Barrowman. "What is the carrying capacity for fish in the ocean? A meta-analysis of population dynamics of North Atlantic cod". Canadian Journal of Fisheries and Aquatic Sciences 58, No. 7 (01.07.2001): 1464–76. http://dx.doi.org/10.1139/f01-082.

Full text of the source
Annotation:
Population and community data in one study are usually analyzed in isolation from other data. Here, we introduce statistical methods that allow many data sets to be analyzed simultaneously such that different studies may "borrow strength" from each other. In the simplest case, we simultaneously model 21 Atlantic cod (Gadus morhua) stocks in the North Atlantic, assuming that the maximum reproductive rate and the carrying capacity per unit area are random variables. This method uses a nonlinear mixed model and is a natural approach to investigate how carrying capacity varies among populations. We used empirical Bayes techniques to estimate the maximum reproductive rate and carrying capacity of each stock. In all cases, the empirical Bayes estimates were biologically reasonable, whereas a stock-by-stock analysis occasionally yielded nonsensical parameter estimates (e.g., infinite values). Our analysis showed that the carrying capacity per unit area varied by more than 20-fold among populations and that much of this variation was related to temperature. That is, the carrying capacity per square kilometre declines as temperature increases.
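The empirical Bayes idea described above, per-stock estimates "borrowing strength" from the ensemble, can be illustrated with a normal-normal shrinkage sketch. This is generic, not the authors' nonlinear mixed model, and the toy estimates and standard errors are invented:

```python
import numpy as np

def eb_shrink(estimates, se):
    """Normal-normal empirical Bayes: shrink noisy per-group estimates
    toward the grand mean, using a method-of-moments estimate of the
    between-group variance."""
    estimates = np.asarray(estimates, float)
    se2 = np.asarray(se, float) ** 2
    grand = estimates.mean()
    # between-group variance: observed spread minus average sampling noise
    tau2 = max(estimates.var(ddof=1) - se2.mean(), 0.0)
    weight = tau2 / (tau2 + se2)          # reliability of each raw estimate
    return weight * estimates + (1 - weight) * grand
```

A stock measured precisely (small standard error) keeps its own estimate, while a noisy one is pulled toward the ensemble mean, which is what prevents the "nonsensical parameter estimates" the abstract mentions.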
APA, Harvard, Vancouver, ISO and other citation styles
32

Wang, C. R., J. F. Gao, Q. C. Chang, F. C. Zou, Q. Zhao and X. Q. Zhu. "Sequence variability in four mitochondrial genes among Bunostomum trigonocephalum isolates from four provinces in China". Journal of Helminthology 87, No. 4 (15.10.2012): 416–21. http://dx.doi.org/10.1017/s0022149x12000570.

Full text of the source
Annotation:
Abstract: The present study examined sequence variability in four mitochondrial genes, namely cytochrome c oxidase subunit 1 (cox1), cytochrome b (cytb) and NADH dehydrogenase subunits 1 and 5 (nad1 and nad5), among Bunostomum trigonocephalum isolates from four different geographic regions in China. Ten B. trigonocephalum samples were collected from each of four provinces (Heilongjiang, Jilin, Shaanxi and Yunnan), China. Parts of the cox1 (pcox1), cytb (pcytb), nad1 and nad5 genes (pnad1 and pnad5) were amplified separately from individual hookworms by polymerase chain reaction (PCR) and subjected to direct sequencing in order to define sequence variations and their phylogenetic relationships. The intra-specific sequence variations within B. trigonocephalum were 0–1.9% for pcox1, 0–2.0% for pcytb, 0–1.6% for pnad1 and 0–1.7% for pnad5. The A+T contents of the sequences were 69.6–70.4% (pcox1), 71.9–72.7% (pcytb), 70.4–71.1% (pnad1) and 72.0–72.6% (pnad5). However, the inter-specific sequence differences among members of the family Ancylostomatidae were significantly higher, being 12.1–14.2% for pcox1, 13.7–16.0% for pcytb, 17.6–19.4% for pnad1 and 16.0–21.6% for pnad5. Phylogenetic analyses based on the combined partial sequences of cox1, cytb, nad1 and nad5 using three inference methods, namely Bayesian inference (Bayes), maximum likelihood (ML) and maximum parsimony (MP), revealed that all the B. trigonocephalum samples form monophyletic groups, but samples from the same geographical origin did not always cluster together, suggesting that there is no obvious geographical distinction within B. trigonocephalum based on sequences of the four mtDNA genes. These results demonstrate the existence of low-level intra-specific variation in mitochondrial DNA (mtDNA) sequences among B. trigonocephalum isolates from different geographic regions.
APA, Harvard, Vancouver, ISO and other citation styles
33

Cabauatan, Madeline D., et al. "Statistical Evaluation of Item Nonresponse Methods Using the World Bank’s 2015 Philippines Enterprise Survey". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, No. 3 (11.04.2021): 4077–88. http://dx.doi.org/10.17762/turcomat.v12i3.1698.

Full text of the source
Annotation:
The main objective of the study was to evaluate item nonresponse procedures through a simulation study of different nonresponse levels or missing rates. A simulation study was used to explore how each of the response rates performs under a variety of circumstances. It also investigated the performance of procedures suggested for item nonresponse under various conditions and variable trends. The imputation methods considered were cell mean imputation, random hot deck, nearest neighbor, and simple regression. The variables considered are some of the major indicators for measuring productive labor and decent work in the country. For the purpose of this study, the researcher evaluated methods for imputing missing data for the number of workers and the total cost of labor per establishment from the World Bank’s 2015 Enterprise Survey for the Philippines. The performance of the imputation techniques for item nonresponse was evaluated in terms of bias and coefficient of variation, for accuracy and precision respectively. Based on the results, cell-mean imputation was seen to be most appropriate for imputing missing values for the total number of workers and the total cost of labor per establishment. Since the study was limited to the variables cited, it is recommended to explore other labor indicators. Moreover, exploring other choices of clustering groups is highly recommended, as clustering groups have a great effect on the resulting imputation estimates. It is also recommended to explore other imputation techniques, such as multiple regression, and other parametric models for nonresponse, such as the Bayes estimation method. For regression-based imputation, since the study was limited to using the cluster groupings for estimation, it is highly recommended to use other variables that might be related to the variable of interest to verify the results of this study.
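Cell-mean imputation, which the study above found most appropriate, simply replaces each missing value with the mean of its imputation cell (e.g., an industry or cluster group). A generic sketch with the bias and coefficient-of-variation checks used in the paper's evaluation follows; the two-cell toy data are invented, not the World Bank survey:

```python
import numpy as np

def cell_mean_impute(values, cells):
    """Fill missing entries (NaN) with the mean of their cell."""
    values = np.asarray(values, float).copy()
    for c in np.unique(cells):
        mask = (cells == c)
        cell_mean = np.nanmean(values[mask])        # mean of observed values in the cell
        values[mask & np.isnan(values)] = cell_mean
    return values

# Invented example: two cells ("A", "B") with different typical sizes.
cells = np.array(["A", "A", "A", "B", "B", "B"])
obs = np.array([10.0, 12.0, np.nan, 100.0, np.nan, 110.0])
imputed = cell_mean_impute(obs, cells)

# Evaluation metrics from the study: bias and coefficient of variation.
bias = imputed.mean() - np.nanmean(obs)
cv = imputed.std(ddof=1) / imputed.mean()
```

Because each gap is filled with its own cell's mean rather than the overall mean, the method respects between-cell differences, which is why the choice of clustering groups matters so much in the study's conclusions.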
APA, Harvard, Vancouver, ISO and other citation styles
34

Lu, Zhengdong, Todd K. Leen and Jeffrey Kaye. "Kernels for Longitudinal Data with Variable Sequence Length and Sampling Intervals". Neural Computation 23, No. 9 (September 2011): 2390–420. http://dx.doi.org/10.1162/neco_a_00164.

Full text of the source
Annotation:
We develop several kernel methods for classification of longitudinal data and apply them to detect cognitive decline in the elderly. We first develop mixed-effects models, a type of hierarchical empirical Bayes generative model, for the time series. After demonstrating their utility in likelihood ratio classifiers (and the improvement over standard regression models for such classifiers), we develop novel Fisher kernels based on mixtures of mixed-effects models and use them in support vector machine classifiers. The hierarchical generative model allows us to handle variations in sequence length and sampling interval gracefully. We also give nonparametric kernels not based on generative models, but rather on the reproducing kernel Hilbert space. We apply the methods to detecting cognitive decline from longitudinal clinical data on motor and neuropsychological tests. The likelihood ratio classifiers based on the neuropsychological tests perform better than classifiers based on the motor behavior. Discriminant classifiers performed better than likelihood ratio classifiers for the motor behavior tests.
APA, Harvard, Vancouver, ISO and other citation styles
35

Liu, Guowei, Fengshan Ma, Gang Liu, Haijun Zhao, Jie Guo and Jiayuan Cao. "Application of Multivariate Statistical Analysis to Identify Water Sources in A Coastal Gold Mine, Shandong, China". Sustainability 11, No. 12 (17.06.2019): 3345. http://dx.doi.org/10.3390/su11123345.

Full text of the source
Annotation:
Submarine mine water inrush has become a problem that must be urgently solved in coastal gold mining operations in Shandong, China. Research on water in subway systems introduced classifications for the types of mine groundwater and then established the functions used to identify each type of water sample. We analyzed 31 water samples from −375 m underground using multivariate statistical analysis methods. Cluster analysis combined with principal component analysis and factor analysis divided the water samples into two types, with one type being near the F3 fault. Principal component analysis identified four principal components accounting for 91.79% of the total variation. These four principal components represented almost all the information about the water samples and were then used as clustering variables. A Bayes model created by discriminant analysis demonstrated that the water samples could also be divided into two types, consistent with the cluster analysis result. The type of a water sample could be determined by placing its Na+ and HCO3− concentrations into the Bayes functions. The results demonstrated that F3, which is a regional fault and runs across the whole Xishan gold mine, may be the potential channel for water inrush, providing valuable information for predicting the possibility of water inrush and thus reducing the costs of the mining operation.
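The principal component step described above, reducing correlated hydrochemical variables to a few components that capture most of the variance, can be sketched generically with NumPy. The data here are invented, not the 31 mine-water samples:

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the centered data matrix.

    Returns the component scores and the fraction of total
    variance explained by each retained component."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (X.shape[0] - 1)            # variance along each component
    explained = var / var.sum()
    scores = Xc @ Vt[:n_components].T          # samples projected onto components
    return scores, explained[:n_components]
```

In the study's workflow, scores like these (covering 91.79% of the variation) replace the raw chemistry measurements as inputs to the cluster analysis.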
APA, Harvard, Vancouver, ISO and other citation styles
36

Crysnanto, Danang, Alexander S. Leonard, Zih-Hua Fang and Hubert Pausch. "Novel functional sequences uncovered through a bovine multiassembly graph". Proceedings of the National Academy of Sciences 118, No. 20 (10.05.2021): e2101056118. http://dx.doi.org/10.1073/pnas.2101056118.

Full text of the source
Annotation:
Many genomic analyses start by aligning sequencing reads to a linear reference genome. However, linear reference genomes are imperfect: they lack millions of bases of unknown relevance and are unable to reflect the genetic diversity of populations. This makes reference-guided methods susceptible to reference-allele bias. To overcome such limitations, we build a pangenome from six reference-quality assemblies from taurine and indicine cattle as well as yak. The pangenome contains an additional 70,329,827 bases compared to the Bos taurus reference genome. Our multiassembly approach reveals 30 and 10.1 million bases private to yak and indicine cattle, respectively, and between 3.3 and 4.4 million bases unique to each taurine assembly. Utilizing transcriptomes from 56 cattle, we show that these nonreference sequences encode transcripts that hitherto remained undetected from the B. taurus reference genome. We uncover genes, primarily encoding proteins contributing to immune response and pathogen-mediated immunomodulation, differentially expressed between Mycobacterium bovis–infected and noninfected cattle that are also undetectable in the B. taurus reference genome. Using whole-genome sequencing data of cattle from five breeds, we show that reads which were previously misaligned against the Bos taurus reference genome now align accurately to the pangenome sequences. This enables us to discover 83,250 polymorphic sites that segregate within and between breeds of cattle and capture genetic differentiation across breeds. Our work makes a so-far unused source of variation amenable to genetic investigations and provides methods and a framework for establishing and exploiting a more diverse reference genome.
APA, Harvard, Vancouver, ISO and other citation styles
37

Travnik, Isadora de Castro, Daiana de Souza Machado, Luana da Silva Gonçalves, Maria Camila Ceballos and Aline Cristina Sant’Anna. "Temperament in Domestic Cats: A Review of Proximate Mechanisms, Methods of Assessment, Its Effects on Human—Cat Relationships, and One Welfare". Animals 10, No. 9 (27.08.2020): 1516. http://dx.doi.org/10.3390/ani10091516.

Full text of the source
Annotation:
Temperament can be defined as interindividual differences in behavior that are stable over time and in different contexts. The terms ‘personality’, ‘coping styles’, and ‘behavioral syndromes’ have also been used to describe these interindividual differences. In this review, the main aspects of cat temperament research are summarized and discussed, based on 43 original research papers published between 1986 and 2020. We aimed to present current advances in cat temperament research and identify potential gaps in knowledge, as well as opportunities for future research. Proximate mechanisms, such as genetic bases of temperament, ontogenesis and developmental factors, physiological mechanisms, and relationships with morphology, were reviewed. Methods traditionally used to assess the temperament of cats might be classified based on the duration of procedures (short- vs. long-term measures) and the nature of data recordings (coding vs. rating methods). The structure of cat temperament is frequently described using a set of behavioral dimensions, primarily based on interindividual variations in cats’ responses toward humans and conspecifics (e.g., friendliness, sociability, boldness, and aggressiveness). Finally, cats’ temperaments have implications for human–animal interactions and the one welfare concept. Temperament assessment can also contribute to practical aspects, for example, the adoption of shelter cats.
APA, Harvard, Vancouver, ISO and other citation styles
38

Zhan, Jun, Ronglin Wang, Lingzhi Yi, Yaguo Wang and Zhengjuan Xie. "Health Assessment Methods for Wind Turbines Based on Power Prediction and Mahalanobis Distance". International Journal of Pattern Recognition and Artificial Intelligence 33, No. 02 (24.10.2018): 1951001. http://dx.doi.org/10.1142/s0218001419510017.

Full text of the source
Annotation:
The output power of a wind turbine is closely related to its health state, and health status assessment for wind turbines influences the operational maintenance and economic benefit of a wind farm. Addressing the current problem that the health status of the whole machine in a wind farm is hard to obtain accurately, in this paper we propose a health status assessment method, based on power prediction and the Mahalanobis distance (MD), to assess and predict the health status of the whole wind turbine. Firstly, on the basis of Bates theory, a scientific analysis of historical data from the SCADA system of a wind farm explains the relation between wind power and the running states of wind turbines. Secondly, an active power prediction model is used to obtain the forecast power under the healthy status of wind turbines, and the difference between the forecast and actual values forms the standard residual set, which serves as the benchmark for health status assessment. In the assessment process, the test residual set is obtained from the network model; the MD between the test residual set and the normal residual set is calculated and then normalized as the health status assessment value of the wind turbine. This method innovatively constructs an evaluation index that reflects the electricity-generating performance of wind turbines rapidly and precisely, effectively avoiding the defect that existing methods are easily influenced by subjective judgment. Finally, SCADA system data from one wind farm in Fujian province were used to verify this method. The results indicate that the new method can effectively assess the trend in the health status of wind turbines and provides new means for fault warning of wind turbines.
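The core computation in the abstract above — a Mahalanobis distance between a test residual set and the healthy-baseline residuals, normalized into a health value — can be sketched as follows. This is a generic sketch; in particular, the `1/(1+md)` normalization is an assumption for illustration, not the paper's exact formula:

```python
import numpy as np

def health_index(test_residuals, normal_residuals):
    """Mahalanobis distance of the test-residual mean from the healthy
    baseline distribution, mapped to (0, 1]: near 1 ~ healthy,
    toward 0 ~ degraded."""
    mu = normal_residuals.mean(axis=0)
    cov = np.cov(normal_residuals, rowvar=False)      # baseline covariance
    diff = test_residuals.mean(axis=0) - mu
    md = float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
    return 1.0 / (1.0 + md)   # assumed normalization, not from the paper
```

Unlike a plain Euclidean distance, the covariance term accounts for correlations among residual channels, so a turbine is flagged only when its residuals drift outside the baseline's joint variability.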
APA, Harvard, Vancouver, ISO and other citation styles
39

Thomas, Laine E., and Phillip J. Schulte. "Separating variability in healthcare practice patterns from random error". Statistical Methods in Medical Research 28, No. 4 (31.01.2018): 1247–60. http://dx.doi.org/10.1177/0962280217754230.

Full text of the source
Annotation:
Improving the quality of care that patients receive is a major focus of clinical research, particularly in the setting of cardiovascular hospitalization. Quality improvement studies seek to estimate and visualize the degree of variability in dichotomous treatment patterns and outcomes across different providers, whereby naive techniques either over-estimate or under-estimate the actual degree of variation. Various statistical methods have been proposed for similar applications including (1) the Gaussian hierarchical model, (2) the semi-parametric Bayesian hierarchical model with a Dirichlet process prior and (3) the non-parametric empirical Bayes approach of smoothing by roughening. Alternatively, we propose that a recently developed method for density estimation in the presence of measurement error, moment-adjusted imputation, can be adapted for this problem. The methods are compared by an extensive simulation study. In the present context, we find that the Bayesian methods are sensitive to the choice of prior and tuning parameters, whereas moment-adjusted imputation performs well with modest sample size requirements. The alternative approaches are applied to identify disparities in the receipt of early physician follow-up after myocardial infarction across 225 hospitals in the CRUSADE registry.
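The core problem — observed provider-level variation overstates true variation because it includes sampling noise — can be illustrated with a standard method-of-moments empirical-Bayes shrinkage sketch. This is not the paper's moment-adjusted imputation method, and all data below are simulated:

```python
import random

random.seed(0)

# hypothetical setup: 50 providers with true treatment rates near 0.3;
# we observe binomial proportions from n = 40 patients each
true_rates = [random.gauss(0.3, 0.05) for _ in range(50)]
n = 40
obs = [sum(random.random() < p for _ in range(n)) / n for p in true_rates]

grand = sum(obs) / len(obs)
naive_var = sum((o - grand) ** 2 for o in obs) / (len(obs) - 1)

# method of moments: observed variance = true between-provider variance
# plus average binomial sampling variance
samp_var = sum(o * (1 - o) / n for o in obs) / len(obs)
between_var = max(naive_var - samp_var, 0.0)

# shrink each observed rate toward the grand mean by the reliability ratio
shrink = between_var / naive_var if naive_var > 0 else 0.0
adjusted = [grand + shrink * (o - grand) for o in obs]
adj_var = sum((a - grand) ** 2 for a in adjusted) / (len(adjusted) - 1)
```

The adjusted rates always vary less than the raw proportions, which is the sense in which naive estimates "over-estimate the actual degree of variation".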
APA, Harvard, Vancouver, ISO and other citation styles
40

DOMUȚA, Cristian G., Cornel DOMUȚA, Manuel A. GÎTEA, Ioana M. BORZA, Ana C. PEREȘ, Ionuț PANTIȘ, Nicolae CENUȘA and Radu P. BREJEA. "The Bases of Peach Tree Irrigation in the Fruit-Growing Basin from Oradea, Romania, and the Use of the Microsprinkler System". Notulae Botanicae Horti Agrobotanici Cluj-Napoca 46, No. 1 (01.01.2018): 213–22. http://dx.doi.org/10.15835/nbha46110771.

Full text of the source
Annotation:
The paper presents the results obtained on preluvo-soil in Sâniob, Oradea, during 2007-2015. In order to maintain the soil water content between easily available water content and field capacity, the irrigation rate used was between 60 mm/ha and 470 mm/ha. Irrigation determined an increase of the total water consumption by 53% (712 mm/ha vs. 466 mm/ha). For the non-irrigated variant (71%) and for the irrigated one (46%), the rainfalls registered between 15 March-1 October yearly represented the main source of supplying the total water consumption, while irrigation supplied 40% of the total water consumption (with a variation range 11%-61%). The microsprinkler irrigation system led to a 30.6% yield gain, with a variation range of 15.8%-58.7%. It also determined a higher size index in comparison with the non-irrigated variant and a smaller percentage of kernels. All differences were statistically very significant. Several correlations were quantified in the soil-water-plant-atmosphere system. The parameters of the system were: pedological drought, strong pedological drought and water consumption. All correlations were statistically very significant; the best mathematical expression was the polynomial function. Four methods (Penman Monteith, Pan, Piche and Thornthwaite evaporimeter methods) were studied to determine the reference evapotranspiration (ETo) in comparison with the optimal water consumption of the peach tree. As it was cheaper and easier to use, the Pan evaporation method was recommended in the irrigation scheduling, although the Penman Monteith method could have given more accurate results in assessing the optimal water consumption.
APA, Harvard, Vancouver, ISO and other citation styles
41

Martinez, Nerea, Ignacio Varela, Jose P. Vaque, Sophia Derdak, Sergi Beltran, Manuela Mollejo, Margarita Sanchez-Beato et al. "Mutational Status of Splenic Marginal Zone Lymphoma Revealed by Whole Exome Sequencing." Blood 120, No. 21 (16.11.2012): 2698. http://dx.doi.org/10.1182/blood.v120.21.2698.2698.

Full text of the source
Annotation:
Abstract 2698. Background: Splenic marginal zone lymphoma (SMZL) is a small B-cell neoplasm whose molecular pathogenesis is still unknown. It has a relatively indolent course, but a fraction of cases may show aggressive behavior. The lack of comprehensive molecular analysis for SMZL precludes the development of targeted therapy. Here we studied the mutational status of 6 SMZL samples using whole exome next-generation sequencing. Methods: Genomic DNA was extracted from splenic tumor or peripheral blood samples, with oral mucosa as the corresponding non-tumor control. Whole exome sequencing was performed at CNAG (Barcelona, Spain) following standard protocols for high-throughput paired-end sequencing on Illumina HiSeq2000 instruments (Illumina Inc., San Diego, CA). Variant calling was performed using in-house software that calls potential mutations showing a minimum of independent multi-aligner evidence. Results: We performed paired-end 76-bp whole exome sequencing on 6 SMZL samples and their corresponding normal counterparts. Three of the samples were CD19-isolated cells from peripheral blood, while the other three were freshly frozen spleen tissue. The mean coverage obtained was 104.07 (82.46–119.59), with a mean of 91.41% (90.41–93.73) of bases covered at least 15×. After filtering, 237 substitutions and 21 indels were obtained. No recurrent variation was found. Six of the variations found here were already described in other malignancies. Variations were classified into silent (75), missense (147), nonsense (8), and essential splice (5), according to their potential functional effect, and into tolerated (54) and deleterious (76) according to the "variant effect predictor" tool of the Ensembl Genome Browser. Whole exome sequencing allowed us to identify variations in several genes of the TLR/NFkB pathway (Myd88, Peli3), BCR signaling (Myd88, Arid3A) and signal transduction (ARHGAP32), pathways essential for B-cell differentiation. These variations, and others involving selected genes such as the Bcl6 repressor BCOR, were validated by capillary sequencing. The results were confirmed and expanded by exome sequencing of a second series of 10 new cases. Conclusions: SMZL samples contain somatic mutations involving genes regulating BCR signaling, TLR/NFKB pathways and chromatin remodeling. Disclosures: No relevant conflicts of interest to declare.
APA, Harvard, Vancouver, ISO and other citation styles
42

Tambunan, Suryani, and Teuku Nanda Saifullah Sulaiman. "Gel Formulation of Lemongrass Essential Oil with HPMC and Carbopol Bases". Majalah Farmaseutik 14, No. 2 (14.01.2019): 87. http://dx.doi.org/10.22146/farmaseutik.v14i2.42598.

Full text of the source
Annotation:
Lemongrass (Cymbopogon citratus) essential oil is proven to have antibacterial efficacy against pathogenic bacteria such as Staphylococcus aureus. The essential oil was formulated into gel dosage forms with a combined base of HPMC and carbopol; this combination is known to produce a gel with better physical properties than either polymer alone. This research aimed to investigate the effect of variations of HPMC and carbopol on the physical properties of the gel, the concentrations of HPMC and carbopol that produce the optimum formula, and the physical stability of lemongrass essential oil gel during storage. The gel was made with lemongrass essential oil at a concentration of 6% in an HPMC–carbopol base. Each formula was prepared and tested for physical properties including organoleptic characteristics, homogeneity, pH, viscosity, spreadability and adhesiveness. The HPMC and carbopol composition was determined through screening and optimization by the Simplex Lattice Design method using Design Expert 7.1.5 software. Experimental results and SLD predictions were verified with a one-sample t-test at a 95% confidence level. The optimum lemongrass essential oil gel formula consisted of 4.00% HPMC and 1.00% carbopol. Testing of its physical properties gave a homogeneous gel with a pH of 6.00 ± 0.00, viscosity of 280.00 ± 26.46 dPa.s, spreadability of 9.36 ± 0.47 cm2, and adhesiveness of 2.36 ± 0.10 seconds. The gel was stable over 3 test cycles with respect to organoleptic characteristics, homogeneity, syneresis, pH, adhesiveness and viscosity; its spreadability was not stable.
APA, Harvard, Vancouver, ISO and other citation styles
43

Гарафутдинов, Р. Р., Д. А. Чемерис, А. Р. Мавзютов, Л. У. Ахметзянова, Т. М. Давлеткулов, И. М. Губайдуллин and А. В. Чемерис. "LAMP amplification of nucleic acids. I. Two decades of development and improvement." Biomics 13, No. 2 (2021): 176–226. http://dx.doi.org/10.31301/2221-6197.bmcs.2021-14.

Full text of the source
Annotation:
Over the two decades since the development of loop-mediated isothermal amplification (LAMP), which detects specific nucleic acids under isothermal conditions, the method has undergone many improvements. This review presents the methodological bases of about a hundred variations of LAMP, classified according to the methods of detecting both target DNA products (lamplicons) and by-products (pyrophosphate and protons), taking into account the specificity of the processes, and according to the purpose and implementation of particular LAMP options, including microfluidics. Particular attention is paid to quantitative LAMP amplification and to promising digital LAMP. The prospects for the development of the method are presented.
APA, Harvard, Vancouver, ISO and other citation styles
44

Pütz, Martin, Justyna A. Robinson and Monika Reif. "The emergence of Cognitive Sociolinguistics". Review of Cognitive Linguistics 10, No. 2 (07.12.2012): 241–63. http://dx.doi.org/10.1075/rcl.10.2.01int.

Full text of the source
Annotation:
This paper explores the contexts of emergence and application of Cognitive Sociolinguistics. This novel field of scientific enquiry draws on the convergence of methods and theoretical frameworks typically associated with Cognitive Linguistics and Sociolinguistics. Here, we trace and systematize the key theoretical and epistemological bases for the emergence of Cognitive Sociolinguistics, by outlining main research strands and highlighting some challenges that face the development of this field. More specifically, we focus on the following terms and concepts which are foundational to the discussion of Cognitive Sociolinguistics: (i) usage-based linguistics and language-internal variation; (ii) rule-based vs. usage-based conceptions of language; (iii) meaning variation; (iv) categorization and prototypes; and (v) the interplay between language, culture, and ideology. Finally, we consider the benefits of taking a Cognitive Sociolinguistic perspective in research by looking at the actual studies that are presented in the current volume.
APA, Harvard, Vancouver, ISO and other citation styles
45

Jia, Songmin, Jinbuo Sheng, Daisuke Chugo and Kunikatsu Takase. "Human Recognition Using RFID Technology and Stereo Vision". Journal of Robotics and Mechatronics 21, No. 1 (20.02.2009): 28–35. http://dx.doi.org/10.20965/jrm.2009.p0028.

Full text of the source
Annotation:
In this paper, a method of human recognition in indoor environments for a mobile robot using RFID (Radio Frequency Identification) technology and stereo vision is proposed, as it is inexpensive, flexible and easy to use in practical environments. Because information about a person can be written to ID tags, the proposed method detects humans more easily and quickly than other methods. It first calculates the probability that a human with an ID tag is present using Bayes rule and determines the ROI for stereo camera processing in order to obtain an accurate position and orientation of the human. Hu moment invariants were introduced to recognize the human because they are insensitive to variations in position, size and orientation. The proposed method does not need to process the whole image and easily obtains information about obstacles such as size and color, thus reducing the processing computation. This paper introduces the architecture of the proposed method and presents experimental results.
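The Bayes-rule step can be illustrated with a minimal sketch. The 1-D grid of cells, the detection likelihoods, and the ROI threshold below are invented for illustration and are not the paper's actual model:

```python
# hypothetical 1-D grid of cells in front of the robot; an RFID tag
# detection is more likely when the tagged person is in a nearer cell
cells = range(5)
prior = [1.0 / 5] * 5                    # uniform prior over cells
likelihood = [0.9, 0.7, 0.4, 0.2, 0.1]   # assumed P(detection | person in cell)

# Bayes rule: posterior is likelihood * prior, normalized over all cells
unnorm = [l * p for l, p in zip(likelihood, prior)]
z = sum(unnorm)
posterior = [u / z for u in unnorm]

# restrict stereo processing to cells whose posterior exceeds a threshold
roi = [c for c in cells if posterior[c] > 0.2]
```

Only the ROI cells are then handed to the stereo pipeline, which is how the method avoids processing the whole image.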
APA, Harvard, Vancouver, ISO and other citation styles
46

Souza, João W. M. de, Shara S. A. Alves, Elizângela de S. Rebouças, Jefferson S. Almeida and Pedro P. Rebouças Filho. "A New Approach to Diagnose Parkinson's Disease Using a Structural Cooccurrence Matrix for a Similarity Analysis". Computational Intelligence and Neuroscience 2018 (2018): 1–8. http://dx.doi.org/10.1155/2018/7613282.

Full text of the source
Annotation:
Parkinson’s disease affects millions of people around the world, and consequently various approaches have emerged to help diagnose it, among which handwriting exams stand out. Extracting features from handwriting exams is an important contribution of the computational field to the diagnosis of this disease. In this paper, we propose an approach that measures the similarity between the exam template and the handwritten trace of the patient following that template. This similarity is measured using the Structural Cooccurrence Matrix to calculate how close the patient’s handwritten trace is to the exam template. The proposed approach was evaluated using various exam templates and the patients’ handwritten traces, and each of these variations was used together with the Naïve Bayes, OPF, and SVM classifiers. In conclusion, the proposed approach proved better than existing methods found in the literature and is therefore a promising tool for the diagnosis of Parkinson’s disease.
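One of the classifiers used, Naïve Bayes, can be sketched on a single similarity feature. The feature values, class names, and the Gaussian likelihood assumption below are illustrative, not the paper's data:

```python
import math

# toy training data: one template-similarity score per exam
# (values and classes are invented for illustration)
train = {
    "control":   [0.91, 0.88, 0.95, 0.90],   # traces close to the template
    "parkinson": [0.55, 0.62, 0.48, 0.58],   # traces far from the template
}

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(train):
    # per-class mean, variance, and sample count
    params = {}
    for label, xs in train.items():
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)
        params[label] = (mu, var, len(xs))
    return params

def predict(params, x):
    # posterior score: class prior times Gaussian likelihood
    total = sum(n for _, _, n in params.values())
    scores = {label: (n / total) * gaussian_pdf(x, mu, var)
              for label, (mu, var, n) in params.items()}
    return max(scores, key=scores.get)

params = fit(train)
```

A new exam with a high similarity score is assigned to the control class, a low score to the Parkinson class.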
APA, Harvard, Vancouver, ISO and other citation styles
47

Gbenga*, Fadare Oluwaseun, Prof Adetunmbi Adebayo Olusola, Dr (Mrs) Oyinloye Oghenerukevwe Eloho and Dr Mogaji Stephen Alaba. "Towards Optimization of Malware Detection using Chi-square Feature Selection on Ensemble Classifiers". International Journal of Engineering and Advanced Technology 10, No. 4 (30.04.2021): 254–62. http://dx.doi.org/10.35940/ijeat.d2359.0410421.

Full text of the source
Annotation:
The multiplication of malware variants is probably the greatest problem in PC security, and the protection of information in the form of source code against unauthorized access is a central issue in computer security. In recent times, machine learning has been extensively researched for malware detection, and ensemble techniques have been established to be highly effective in terms of detection accuracy. This paper proposes a framework that combines Chi-square feature selection with eight ensemble learning classifiers built on five base learners: K-Nearest Neighbors, Naïve Bayes, Support Vector Machine, Decision Trees, and Logistic Regression. K-Nearest Neighbors returns the highest base-learner accuracy, 95.37% with chi-square feature selection and 87.89% without feature selection. The Extreme Gradient Boosting classifier attains the highest ensemble accuracy, 97.407% with chi-square feature selection and 91.72% without feature selection. Extreme Gradient Boosting and Random Forest lead in the seven evaluation measures with chi-square feature selection and without feature selection, respectively. The study results show that the tree-based ensemble model is compelling for malware classification.
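The chi-square feature-selection step can be sketched by hand for binary features: score each feature by the chi-square statistic of its 2x2 contingency table against the class label and keep the top scorers. The toy features and labels below are invented for illustration:

```python
def chi2_score(feature, labels):
    # 2x2 contingency table between a binary feature and a binary label
    table = [[0, 0], [0, 0]]
    for f, y in zip(feature, labels):
        table[f][y] += 1
    n = len(labels)
    row = [sum(table[0]), sum(table[1])]
    col = [table[0][0] + table[1][0], table[0][1] + table[1][1]]
    score = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            if expected:
                score += (table[i][j] - expected) ** 2 / expected
    return score

# hypothetical binary malware features: feature0 tracks the label, feature1 is noise
labels   = [1, 1, 1, 1, 0, 0, 0, 0]
feature0 = [1, 1, 1, 0, 0, 0, 0, 1]
feature1 = [1, 0, 1, 0, 1, 0, 1, 0]
scores = [chi2_score(f, labels) for f in (feature0, feature1)]
top = max(range(2), key=lambda i: scores[i])
```

The label-correlated feature receives a nonzero score while the independent one scores zero, so selection keeps the informative features for the ensemble classifiers.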
APA, Harvard, Vancouver, ISO and other citation styles
48

Toma, Adina, Zheleva Ivanka, Cristian Puşcaşu, Alexandru Paraschiv and Mihaela Grigorescu. "Technical project for a new water purification solution". MATEC Web of Conferences 145 (2018): 03013. http://dx.doi.org/10.1051/matecconf/201814503013.

Full text of the source
Annotation:
This research is part of the RO-BG Cross-Border Cooperation Program project “CLEANDANUBE”, MIS-ETC 653, which concluded by providing a common strategy to prevent technological risks of polluting the Danube with oil and oil products. This paper presents a new sustainable water purification solution. A short introduction is offered, along with an overview of the research and of new methods for greening waste. The theoretical aspects of the centrifugal separation phenomenon are studied and the preliminary bases of the project are established. The paper conveys the possible constructive variants and their technological implications. Finally, the technical project for a new water purification solution is presented, together with conclusions on the critical points encountered during the design phase.
APA, Harvard, Vancouver, ISO and other citation styles
49

Rezgui, Yasmine, Ling Pei, Xin Chen, Fei Wen and Chen Han. "An Efficient Normalized Rank Based SVM for Room Level Indoor WiFi Localization with Diverse Devices". Mobile Information Systems 2017 (2017): 1–19. http://dx.doi.org/10.1155/2017/6268797.

Full text of the source
Annotation:
This paper proposes an efficient and effective WiFi fingerprinting-based indoor localization algorithm, which uses the Received Signal Strength Indicator (RSSI) of WiFi signals. In practical harsh indoor environments, RSSI variation and hardware variance can significantly degrade the performance of fingerprinting-based localization methods. To address the problem of hardware variance and signal fluctuation in WiFi fingerprinting-based localization, we propose a novel normalized rank based Support Vector Machine classifier (NR-SVM). Moving from RSSI value based analysis to the normalized rank transformation based analysis, the principal features are prioritized and the dimensionalities of signature vectors are taken into account. The proposed method has been tested using sixteen different devices in a shopping mall with 88 shops. The experimental results demonstrate its robustness with no less than 98.75% correct estimation in 93.75% of the tested cases and 100% correct rate in 56.25% of cases. In the experiments, the new method shows better performance over the KNN, Naïve Bayes, Random Forest, and Neural Network algorithms. Furthermore, we have compared the proposed approach with three popular calibration-free transformation based methods, including difference method (DIFF), Signal Strength Difference (SSD), and the Hyperbolic Location Fingerprinting (HLF) based SVM. The results show that the NR-SVM outperforms these popular methods.
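The normalized rank transformation behind NR-SVM can be sketched as follows. The RSSI vectors are invented; the point of the sketch is that a constant per-device offset leaves the transformed fingerprint unchanged, which is how the method sidesteps hardware variance:

```python
def normalized_rank(rssi):
    # replace each AP's RSSI by its rank among the visible APs (0 = strongest),
    # then normalize ranks to [0, 1]; a device-wide RSSI offset cancels out
    order = sorted(range(len(rssi)), key=lambda i: rssi[i], reverse=True)
    ranks = [0] * len(rssi)
    for r, i in enumerate(order):
        ranks[i] = r
    m = len(rssi) - 1
    return [r / m for r in ranks]

# two hypothetical devices at the same spot: same AP ordering,
# but device_b reports everything 10 dBm weaker (hardware variance)
device_a = [-40, -55, -70, -60]
device_b = [-50, -65, -80, -70]
```

The two devices produce identical feature vectors after the transform, so an SVM trained on one device's fingerprints can classify readings from the other.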
APA, Harvard, Vancouver, ISO and other citation styles
50

Yang, Kaixu, and Tapabrata Maiti. "Statistical Aspects of High-Dimensional Sparse Artificial Neural Network Models". Machine Learning and Knowledge Extraction 2, No. 1 (02.01.2020): 1–19. http://dx.doi.org/10.3390/make2010001.

Full text of the source
Annotation:
An artificial neural network (ANN) is an automatic way of capturing linear and nonlinear correlations, spatial and other structural dependence among features. This machine performs well in many application areas such as classification and prediction from magnetic resonance imaging, spatial data and computer vision tasks. Most commonly used ANNs assume the availability of large training data compared to the dimension of feature vector. However, in modern applications, as mentioned above, the training sample sizes are often low, and may be even lower than the dimension of feature vector. In this paper, we consider a single layer ANN classification model that is suitable for analyzing high-dimensional low sample-size (HDLSS) data. We investigate the theoretical properties of the sparse group lasso regularized neural network and show that under mild conditions, the classification risk converges to the optimal Bayes classifier’s risk (universal consistency). Moreover, we proposed a variation on the regularization term. A few examples in popular research fields are also provided to illustrate the theory and methods.
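The sparse group lasso penalty combines an L1 term, which zeroes individual weights, with a per-group L2 term, which zeroes whole groups. The sketch below computes the penalty for a hypothetical weight vector; the grouping (e.g., all input weights of one hidden unit forming a group) and the blending parameter alpha are chosen for illustration:

```python
import math

def sparse_group_lasso(weights, groups, lam, alpha):
    # alpha blends the two terms: alpha = 1 gives plain lasso,
    # alpha = 0 gives pure group lasso
    l1 = sum(abs(w) for w in weights)
    l2 = sum(math.sqrt(sum(weights[i] ** 2 for i in g)) for g in groups)
    return lam * (alpha * l1 + (1 - alpha) * l2)

# hypothetical weights: the first group is entirely zero (pruned),
# the second group is active
w = [0.0, 0.0, 0.0, 3.0, 4.0, 0.0]
groups = [[0, 1, 2], [3, 4, 5]]
```

Because the group term is a sum of unsquared group norms, it pushes entire groups to zero during training, which is what makes the regularized single-layer ANN suitable for HDLSS data.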
APA, Harvard, Vancouver, ISO and other citation styles