Academic literature on the topic 'Latent block models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Latent block models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Latent block models"

1

Wyse, Jason, and Nial Friel. "Block clustering with collapsed latent block models." Statistics and Computing 22, no. 2 (May 5, 2011): 415–28. http://dx.doi.org/10.1007/s11222-011-9233-4.

2

Bartolucci, Francesco, Silvia Pandolfi, and Fulvia Pennoni. "Discrete Latent Variable Models." Annual Review of Statistics and Its Application 9, no. 1 (March 7, 2022): 425–52. http://dx.doi.org/10.1146/annurev-statistics-040220-091910.

Abstract:
We review the discrete latent variable approach, which is very popular in statistics and related fields. It allows us to formulate interpretable and flexible models that can be used to analyze complex datasets in the presence of articulated dependence structures among variables. Specific models including discrete latent variables are illustrated, such as finite mixture, latent class, hidden Markov, and stochastic block models. Algorithms for maximum likelihood and Bayesian estimation of these models are reviewed, focusing, in particular, on the expectation–maximization algorithm and the Markov chain Monte Carlo method with data augmentation. Model selection, particularly concerning the number of support points of the latent distribution, is discussed. The approach is illustrated by summarizing applications available in the literature; a brief review of the main software packages to handle discrete latent variable models is also provided. Finally, some possible developments in this literature are suggested.
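Since this review highlights the expectation–maximization algorithm for fitting discrete latent variable models, a minimal, self-contained sketch of EM for a two-component univariate Gaussian mixture may help (our own illustration on synthetic data, not code from the cited review):

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=100):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    # Crude initialization from the data quantiles.
    mu = np.quantile(x, [0.25, 0.75])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Synthetic data: equal mixture of N(-2, 1) and N(3, 1).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
pi, mu, sigma = em_gaussian_mixture(x)
```

On this well-separated example the estimated means land near the true values -2 and 3.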
3

Watanabe, Chihiro, and Taiji Suzuki. "Goodness-of-fit test for latent block models." Computational Statistics & Data Analysis 154 (February 2021): 107090. http://dx.doi.org/10.1016/j.csda.2020.107090.

4

Norget, Julia, and Axel Mayer. "Block-Wise Model Fit for Structural Equation Models With Experience Sampling Data." Zeitschrift für Psychologie 230, no. 1 (January 2022): 47–59. http://dx.doi.org/10.1027/2151-2604/a000482.

Abstract:
Common model fit indices behave poorly in structural equation models for experience sampling data, which typically contain many manifest variables. In this article, we propose a block-wise fit assessment for large models as an alternative. The entire model is estimated jointly, and block-wise versions of common fit indices are then determined from smaller blocks of the variance-covariance matrix using simulated degrees of freedom. In a first simulation study, we show that block-wise fit indices, contrary to global fit indices, correctly identify correctly specified latent state-trait models with 49 occasions and N = 200. In a second simulation, we find that block-wise fit indices cannot identify misspecification purely between days but correctly reject other misspecified models. In some cases, the block-wise fit is superior in judging the strength of the misspecification. Lastly, we discuss the practical use of block-wise fit evaluation and its limitations.
5

Moron-Lopez, Sara, Sushama Telwatte, Indra Sarabia, Emilie Battivelli, Mauricio Montano, Amanda B. Macedo, Dvir Aran, et al. "Human splice factors contribute to latent HIV infection in primary cell models and blood CD4+ T cells from ART-treated individuals." PLOS Pathogens 16, no. 11 (November 30, 2020): e1009060. http://dx.doi.org/10.1371/journal.ppat.1009060.

Abstract:
It is unclear what mechanisms govern latent HIV infection in vivo or in primary cell models. To investigate these questions, we compared the HIV and cellular transcription profile in three primary cell models and peripheral CD4+ T cells from HIV-infected ART-suppressed individuals using RT-ddPCR and RNA-seq. All primary cell models recapitulated the block to HIV multiple splicing seen in cells from ART-suppressed individuals, suggesting that this may be a key feature of HIV latency in primary CD4+ T cells. Blocks to HIV transcriptional initiation and elongation were observed more variably among models. A common set of 234 cellular genes, including members of the minor spliceosome pathway, was differentially expressed between unstimulated and activated cells from primary cell models and ART-suppressed individuals, suggesting these genes may play a role in the blocks to HIV transcription and splicing underlying latent infection. These genes may represent new targets for therapies designed to reactivate or silence latently-infected cells.
6

Mariadassou, Mahendra, and Catherine Matias. "Convergence of the groups posterior distribution in latent or stochastic block models." Bernoulli 21, no. 1 (February 2015): 537–73. http://dx.doi.org/10.3150/13-bej579.

7

Santos, Naiara Caroline Aparecido dos, and Jorge Luiz Bazán. "Residual Analysis in Rasch Poisson Counts Models." Revista Brasileira de Biometria 39, no. 1 (March 31, 2021): 206–20. http://dx.doi.org/10.28951/rbb.v39i1.531.

Abstract:
A Rasch Poisson counts (RPC) model is described to identify individual latent traits and facilities of the items of tests that model the error (or success) count in several tasks over time, instead of modeling the correct responses to items in a test as in the dichotomous item response theory (IRT) model. These types of tests can be more informative than traditional tests. To estimate the model parameters, we consider a Bayesian approach using the integrated nested Laplace approximation (INLA). We develop residual analysis to assess model fit by introducing randomized quantile residuals for items. The data used to illustrate the method come from 228 people who took a selective attention test. The test has 20 blocks (items), with a time limit of 15 seconds for each block. The results of the residual analysis of the RPC were promising and indicated that the studied attention data are not well fitted by the RPC model.
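The count-generating process of an RPC model, in which a person's latent rate multiplies an item's facility to give a Poisson mean, can be sketched as follows (a toy simulation with hypothetical parameter values, not the paper's selective-attention data):

```python
import numpy as np

# Hypothetical parameter values chosen for illustration only.
rng = np.random.default_rng(3)
n_persons, n_items = 300, 20
theta = rng.lognormal(mean=0.0, sigma=0.5, size=n_persons)  # latent person rates
easiness = rng.uniform(0.5, 2.0, size=n_items)              # item facility parameters
rates = theta[:, None] * easiness[None, :]                  # Poisson mean per person-item pair
counts = rng.poisson(rates)                                 # observed counts, shape (300, 20)
```

Fitting such a model in a fully Bayesian way (as the paper does with INLA) is a separate step; the sketch only shows the assumed data-generating mechanism.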
8

Kihal-Talantikite, Wahida, Pauline Le Nouveau, Pierre Legendre, Denis Zmirou Navier, Arlette Danzon, Marion Carayol, and Séverine Deguen. "Adverse Birth Outcomes as Indicators of Poor Fetal Growth Conditions in a French Newborn Population—A Stratified Analysis by Neighborhood Deprivation Level." International Journal of Environmental Research and Public Health 16, no. 21 (October 23, 2019): 4069. http://dx.doi.org/10.3390/ijerph16214069.

Abstract:
Background: Adverse birth outcomes are related to unfavorable fetal growth conditions. A latent variable, named Favorable Fetal Growth Condition (FFGC), was defined by Bollen et al. in 2013, who showed that this FFGC latent variable mediates the effects of maternal characteristics on several birth outcomes. Objectives: The objectives of the present study were to replicate Bollen's approach in a population of newborns in Paris and to investigate the potential differential effect of the FFGC latent variable according to the neighborhood socioeconomic level. Methods: Newborn health data were available from the first birth certificate registered by the Maternal and Child Care department of the City of Paris. All newborns (2008–2011) were geocoded at the mother's residential census block. Each census block was assigned a socioeconomic deprivation level. Several mothers' characteristics were collected from the birth certificates: age, parity, education and occupational status, and the occupational status of the father. Three birth outcomes were considered: birth weight (BW), birth length (BL) and gestational age (GA). Results: Using a series of structural equation models, we confirm that the undirected model (that includes the FFGC latent variable) provided a better fit for the data compared with the model where parental characteristics directly affected BW, BL, and/or GA. However, the strength, the direction and the statistical significance of the associations between the exogenous variables and the FFGC differed according to the neighborhood deprivation level. Conclusion: Future research should be designed to assess how robust the FFGC latent variable is across populations and should take into account neighborhood characteristics to identify the most vulnerable groups and design better prevention policies.
9

Xie, Fangzheng, and Yanxun Xu. "Optimal Bayesian estimation for random dot product graphs." Biometrika 107, no. 4 (July 6, 2020): 875–89. http://dx.doi.org/10.1093/biomet/asaa031.

Abstract:
We propose and prove the optimality of a Bayesian approach for estimating the latent positions in random dot product graphs, which we call posterior spectral embedding. Unlike classical spectral-based adjacency or Laplacian spectral embedding, posterior spectral embedding is a fully likelihood-based graph estimation method that takes advantage of the Bernoulli likelihood information of the observed adjacency matrix. We develop a minimax lower bound for estimating the latent positions, and show that posterior spectral embedding achieves this lower bound in the following two senses: it both results in a minimax-optimal posterior contraction rate and yields a point estimator achieving the minimax risk asymptotically. The convergence results are subsequently applied to clustering in stochastic block models with positive semidefinite block probability matrices, strengthening an existing result concerning the number of misclustered vertices. We also study a spectral-based Gaussian spectral embedding as a natural Bayesian analogue of adjacency spectral embedding, but the resulting posterior contraction rate is suboptimal by an extra logarithmic factor. The practical performance of the proposed methodology is illustrated through extensive synthetic examples and the analysis of Wikipedia graph data.
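Classical adjacency spectral embedding, which this abstract contrasts with the proposed posterior spectral embedding, can be sketched in a few lines (our own toy example on a simulated two-block graph, not the paper's code):

```python
import numpy as np

def adjacency_spectral_embedding(adj, d):
    """Classical ASE: keep the d eigenpairs of largest |eigenvalue| and
    scale the eigenvectors by the square roots of the eigenvalue magnitudes."""
    vals, vecs = np.linalg.eigh(adj)
    top = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))

# Toy two-block SBM graph: dense within blocks (0.6), sparse between (0.1).
rng = np.random.default_rng(0)
n = 200
z = np.repeat([0, 1], n // 2)                      # true block labels
P = np.where(z[:, None] == z[None, :], 0.6, 0.1)   # per-pair edge probabilities
A = np.triu(rng.random((n, n)) < P, k=1)
A = (A + A.T).astype(float)                        # symmetric, hollow adjacency
Xhat = adjacency_spectral_embedding(A, 2)
# The two blocks separate cleanly in the embedding space.
```

The two block means in `Xhat` end up well apart, which is why spectral embeddings are a standard preprocessing step for clustering in stochastic block models.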
10

Gong, Shiqi, Peiyan Hu, Qi Meng, Yue Wang, Rongchan Zhu, Bingguang Chen, Zhiming Ma, Hao Ni, and Tie-Yan Liu. "Deep Latent Regularity Network for Modeling Stochastic Partial Differential Equations." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7740–47. http://dx.doi.org/10.1609/aaai.v37i6.25938.

Abstract:
Stochastic partial differential equations (SPDEs) are crucial for modelling dynamics with randomness in many areas including economics, physics, and atmospheric sciences. Recently, using deep learning approaches to learn the PDE solution for accelerating PDE simulation has become increasingly popular. However, SPDEs have two unique properties that require new model designs. First, the model approximating the solution of an SPDE should generalize over both initial conditions and the randomly sampled forcing term. Second, the random forcing terms usually have poor regularity, and their statistics may diverge (e.g., space-time white noise). To deal with these problems, in this work we design a deep neural network called Deep Latent Regularity Net (DLR-Net). DLR-Net includes a regularity feature block as its main component, which maps the initial condition and the random forcing term to a set of regularity features. The processing of regularity features is inspired by regularity structure theory, and the features provably compose a set of basis functions to represent the SPDE solution. The regularity features are then fed into a small backbone neural operator to get the output. We conduct experiments on various SPDEs, including the dynamic Φ^4_1 model and the stochastic 2D Navier-Stokes equation, to predict their solutions, and the results demonstrate that the proposed DLR-Net can achieve state-of-the-art accuracy compared with the baselines. Moreover, the inference time is over 20 times faster than the traditional numerical solver and is comparable with the baseline deep learning models.

Dissertations / Theses on the topic "Latent block models"

1

Corneli, Marco. "Dynamic stochastic block models, clustering and segmentation in dynamic graphs." Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E012/document.

Abstract:
This thesis focuses on the statistical analysis of dynamic graphs, defined in either discrete or continuous time. We introduce a new extension of the stochastic block model (SBM) for dynamic graphs. The proposed approach, called dSBM, adopts non-homogeneous Poisson processes to model the interaction times between pairs of nodes in dynamic graphs, in either discrete or continuous time. The intensity functions of the processes only depend on the node clusters, in a block modelling perspective. Moreover, all the intensity functions share some regularity properties on hidden time intervals that need to be estimated. A recent estimation algorithm for SBM, based on the greedy maximization of an exact criterion (exact ICL), is adopted for inference and model selection in dSBM. Moreover, an exact algorithm for change point detection in time series, the "pruned exact linear time" (PELT) method, is extended to deal with dynamic graph data modelled via dSBM. The approach we propose can be used for change point analysis in graph data. Finally, a further extension of dSBM is developed to analyse dynamic networks with textual edges (like social networks, for instance). In this context, the graph edges are associated with documents exchanged between the corresponding vertices. The textual content of the documents can provide additional information about the dynamic graph topological structure. The new model we propose is called the "dynamic stochastic topic block model" (dSTBM).
Graphs are mathematical structures very suitable for modelling interactions between objects or actors of interest. Several real networks, such as communication networks, financial transaction networks, mobile telephone networks and social networks (Facebook, LinkedIn, etc.), can be modelled via graphs. When observing a network, the time variable comes into play in two different ways: we can study the time dates at which the interactions occur and/or the interaction time spans. This thesis only focuses on the first time dimension, and each interaction is assumed to be instantaneous, for simplicity. Hence, the network evolution is given by the interaction time dates only. In this framework, graphs can be used in two different ways to model networks. Discrete time […] Continuous time […]. In this thesis both these perspectives are adopted, alternatively. We consider new unsupervised methods to cluster the vertices of a graph into groups of homogeneous connection profiles. In this manuscript, the node groups are assumed to be time invariant to avoid possible identifiability issues. Moreover, the approaches that we propose aim to detect structural changes in the way the node clusters interact with each other. The building block of this thesis is the stochastic block model (SBM), a probabilistic approach initially used in social sciences. The standard SBM assumes that the nodes of a graph belong to hidden (disjoint) clusters and that the probability of observing an edge between two nodes only depends on their clusters. Since no further assumption is made on the connection probabilities, SBM is a very flexible model able to detect different network topologies (hubs, stars, communities, etc.).
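The generative mechanism of the standard SBM summarized in this abstract can be sketched in a few lines (a toy simulation of ours with made-up parameters, not code from the thesis):

```python
import numpy as np

def simulate_sbm(n, proportions, connectivity, rng):
    """Draw an undirected SBM adjacency matrix.

    proportions  : cluster membership probabilities (length K)
    connectivity : K x K matrix of edge probabilities between clusters
    """
    z = rng.choice(len(proportions), size=n, p=proportions)   # hidden node clusters
    probs = connectivity[z][:, z]                             # per-pair edge probabilities
    upper = rng.random((n, n)) < probs                        # Bernoulli draws
    adj = np.triu(upper, k=1)                                 # keep the upper triangle
    return (adj + adj.T).astype(int), z

rng = np.random.default_rng(42)
# Two communities: dense within (0.8), sparse between (0.05).
conn = np.array([[0.8, 0.05], [0.05, 0.8]])
adj, z = simulate_sbm(200, [0.5, 0.5], conn, rng)
within = adj[np.ix_(z == 0, z == 0)].mean()
between = adj[np.ix_(z == 0, z == 1)].mean()
# Edge density is markedly higher within clusters than between them.
```

The dSBM of the thesis replaces the single Bernoulli parameter per cluster pair with a non-homogeneous Poisson intensity over time; this sketch only shows the static building block.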
2

Febrissy, Mickaël. "Nonnegative Matrix Factorization and Probabilistic Models : A unified framework for text data." Electronic Thesis or Diss., Paris, CNAM, 2021. http://www.theses.fr/2021CNAM1291.

Abstract:
Since the exponential growth of available data (Big Data), dimension reduction techniques have become essential for the exploration and analysis of high-dimensional data arising from many scientific areas. By creating a low-dimensional space intrinsic to the original data space, these techniques offer better understanding across many data science applications. In the context of text analysis, where the data gathered are mainly nonnegative, recognized techniques producing transformations in the space of real numbers (e.g. principal component analysis, latent semantic analysis) became less intuitive, as they could not provide a straightforward interpretation. Such applications show the need for dimension reduction techniques like nonnegative matrix factorization (NMF), useful to embed, for instance, documents or words in a space of reduced dimension. By definition, NMF aims at approximating a nonnegative matrix by the product of two lower-dimensional nonnegative matrices, which results in solving a nonlinear optimization problem. Note, however, that this objective can be harnessed for document/word clustering, even if it is not the objective of NMF. Relying on NMF, this thesis focuses on improving the clustering of large text data arising in the form of highly sparse document-term matrices. This objective is first achieved by proposing several types of regularizations of the original NMF objective function. Setting this objective in a probabilistic context, a new NMF model is introduced, bringing theoretical foundations for establishing the connection between NMF and finite mixture models of exponential families and thereby offering interesting regularizations. This allows NMF to be set in a real clustering spirit. Finally, a Bayesian Poisson latent block model is proposed to improve document and word clustering simultaneously by capturing noisy term features. This can be connected to NMTF (Nonnegative Matrix Tri-Factorization), devoted to co-clustering. Experiments on real datasets have been carried out to support the proposals of the thesis.
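The NMF objective described above, approximating a nonnegative matrix by a product of two lower-dimensional nonnegative matrices, can be illustrated with the classical Lee–Seung multiplicative updates for the Frobenius loss (a generic sketch, not the regularized or probabilistic models proposed in the thesis):

```python
import numpy as np

def nmf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||X - W @ H||_F
    with elementwise-nonnegative factors W (n x rank) and H (rank x m)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # multiplicative step keeps H >= 0
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # multiplicative step keeps W >= 0
    return W, H

# An exactly rank-2 nonnegative matrix is recovered to high accuracy.
rng = np.random.default_rng(1)
X = rng.random((30, 2)) @ rng.random((2, 40))
W, H = nmf(X, rank=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Because the updates are multiplicative, nonnegativity of the random initialization is preserved at every step, which is what makes the factors directly interpretable for document/word clustering.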
3

Galindo-Prieto, Beatriz. "Novel variable influence on projection (VIP) methods in OPLS, O2PLS, and OnPLS models for single- and multi-block variable selection : VIPOPLS, VIPO2PLS, and MB-VIOP methods." Doctoral thesis, Umeå universitet, Kemiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-130579.

Abstract:
Multivariate and multiblock data analysis involves useful methodologies for analyzing large data sets in chemistry, biology, psychology, economics, sensory science, and industrial processes; among these methodologies, partial least squares (PLS) and orthogonal projections to latent structures (OPLS®) have become popular. Due to increasingly computerized instrumentation, a data set can consist of thousands of input variables which contain latent information valuable for research and industrial purposes. When analyzing a large number of data sets (blocks) simultaneously, the number of variables and the underlying connections between them grow considerably; at this point, reducing the number of variables while keeping high interpretability becomes a much-needed strategy. The main direction of research in this thesis is the development of a variable selection method, based on variable influence on projection (VIP), in order to improve the model interpretability of OnPLS models in multiblock data analysis. This new method is called multiblock variable influence on orthogonal projections (MB-VIOP), and its novelty lies in the fact that it is the first multiblock variable selection method for OnPLS models. Several milestones needed to be reached in order to successfully create MB-VIOP. The first milestone was the development of a single-block variable selection method able to handle orthogonal latent variables in OPLS models, i.e. VIP for OPLS (denoted as VIPOPLS or OPLS-VIP in Paper I), which proved to increase the interpretability of PLS and OPLS models and was afterwards successfully extended to multivariate time series analysis (MTSA) aiming at process control (Paper II). The second milestone was to develop the first multiblock VIP approach for the enhancement of O2PLS® models, i.e. VIPO2PLS for two-block multivariate data analysis (Paper III). Finally, the third milestone and main goal of this thesis was the development of the MB-VIOP algorithm for the improvement of OnPLS model interpretability when analyzing a large number of data sets simultaneously (Paper IV). The results of this thesis and its enclosed papers showed that the VIPOPLS, VIPO2PLS, and MB-VIOP methods successfully assess the most relevant variables for model interpretation in PLS, OPLS, O2PLS, and OnPLS models. In addition, predictability, robustness, dimensionality reduction, and other variable selection purposes can potentially be improved or achieved by using these methods.
4

Robert, Valérie. "Classification croisée pour l'analyse de bases de données de grandes dimensions de pharmacovigilance." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS111/document.

Abstract:
This thesis gathers methodological contributions to the statistical analysis of large datasets in pharmacovigilance. Pharmacovigilance datasets produce sparse and large matrices, and these two characteristics are the main statistical challenges for modelling them. The first part of the thesis is dedicated to the coclustering of the pharmacovigilance contingency table thanks to the normalized Poisson latent block model. The objective is, on the one hand, to provide pharmacologists with interesting and reduced areas to explore more precisely and, on the other hand, to provide useful background information for dealing with the individual database. Within this framework, a parameter estimation procedure for this model is detailed, and objective model selection criteria are developed to choose the best-fitting model. The datasets are so large that we propose a procedure to explore the model space in coclustering in a non-exhaustive but relevant way. Additionally, to assess the performance of the methods, a convenient coclustering index is developed to compare partitions with high numbers of clusters. The development of these statistical tools is not specific to pharmacovigilance, and they can be used for any coclustering issue. The second part of the thesis is devoted to the statistical analysis of the individual data, which are more numerous but also provide even more valuable information. The aim is to produce clusters of individuals according to their drug profiles, and subgroups of drugs and adverse effects with possible links, which overcomes the coprescription and masking phenomena, common contingency-table issues in pharmacovigilance. Moreover, the interaction between several adverse effects is taken into account. For this purpose, we propose a new model, the multiple latent block model, which enables the coclustering of two binary tables by imposing the same row clustering on both. Assumptions inherent to the model are discussed and sufficient identifiability conditions for the model are presented. Then a parameter estimation algorithm is studied and objective model selection criteria are developed. Moreover, a numerical simulation model of the individual data is proposed to compare existing methods and study their limits. Finally, the proposed methodology for dealing with individual pharmacovigilance data is presented and applied to a sample of the French pharmacovigilance database between 2002 and 2010.
5

Brault, Vincent. "Estimation et sélection de modèle pour le modèle des blocs latents." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112238/document.

Abstract:
Le but de la classification est de partager des ensembles de données en sous-ensembles les plus homogènes possibles, c'est-à-dire que les membres d'une classe doivent plus se ressembler entre eux qu'aux membres des autres classes. Le problème se complique lorsque le statisticien souhaite définir des groupes à la fois sur les individus et sur les variables. Le modèle des blocs latents définit une loi pour chaque croisement de classe d'objets et de classe de variables, et les observations sont supposées indépendantes conditionnellement au choix de ces classes. Toutefois, il est impossible de factoriser la loi jointe des labels empêchant le calcul de la logvraisemblance et l'utilisation de l'algorithme EM. Plusieurs méthodes et critères existent pour retrouver ces partitions, certains fréquentistes, d'autres bayésiens, certains stochastiques, d'autres non. Dans cette thèse, nous avons d'abord proposé des conditions suffisantes pour obtenir l'identifiabilité. Dans un second temps, nous avons étudié deux algorithmes proposés pour contourner le problème de l'algorithme EM : VEM de Govaert et Nadif (2008) et SEM-Gibbs de Keribin, Celeux et Govaert (2010). En particulier, nous avons analysé la combinaison des deux et mis en évidence des raisons pour lesquelles les algorithmes dégénèrent (terme utilisé pour dire qu'ils renvoient des classes vides). En choisissant des lois a priori judicieuses, nous avons ensuite proposé une adaptation bayésienne permettant de limiter ce phénomène. Nous avons notamment utilisé un échantillonneur de Gibbs dont nous proposons un critère d'arrêt basé sur la statistique de Brooks-Gelman (1998). Nous avons également proposé une adaptation de l'algorithme Largest Gaps (Channarond et al. (2012)). En reprenant leurs démonstrations, nous avons démontré que les estimateurs des labels et des paramètres obtenus sont consistants lorsque le nombre de lignes et de colonnes tendent vers l'infini. 
De plus, nous avons proposé une méthode pour sélectionner les nombres de classes en ligne et en colonne, dont l'estimation est également consistante à condition que les nombres de lignes et de colonnes soient très grands. Pour estimer le nombre de classes, nous avons étudié le critère ICL (Integrated Completed Likelihood) dont nous avons proposé une forme exacte. Après avoir étudié l'approximation asymptotique, nous avons proposé un critère BIC (Bayesian Information Criterion), puis nous conjecturons que les deux critères sélectionnent les mêmes résultats et que ces estimations seraient consistantes ; conjecture appuyée par des résultats théoriques et empiriques. Enfin, nous avons comparé les différentes combinaisons et proposé une méthodologie pour faire une analyse croisée de données.
Classification aims at partitioning data sets into homogeneous subsets: the observations within a class are more similar to one another than to the observations of other classes. The problem is compounded when the statistician wants to obtain a cross classification of the individuals and the variables. The latent block model posits a distribution for each crossing of an object class and a variable class, and the observations are assumed to be independent conditionally on the choice of these classes. However, the joint distribution of the labels cannot be factorized, which prevents the computation of the log-likelihood and the use of the EM algorithm. Several methods and criteria exist to recover these partitions: some frequentist, some Bayesian, some stochastic, some not. In this thesis, we first proposed sufficient conditions for the identifiability of the model. In a second step, we studied two algorithms proposed to circumvent the problem of the EM algorithm: the VEM algorithm (Govaert and Nadif (2008)) and the SEM-Gibbs algorithm (Keribin, Celeux and Govaert (2010)). In particular, we analyzed the combination of both and highlighted why the algorithms degenerate (a term used to say that they return empty classes). By choosing judicious priors, we then proposed a Bayesian adaptation that limits this phenomenon. In particular, we used a Gibbs sampler, for which we proposed a stopping criterion based on the Brooks-Gelman (1998) statistic. We also proposed an adaptation of the Largest Gaps algorithm (Channarond et al. (2012)). Adapting their proofs, we showed that the resulting label and parameter estimators are consistent as the numbers of rows and columns tend to infinity. Furthermore, we proposed a method to select the numbers of row and column classes, whose estimates are also consistent when the numbers of rows and columns are very large. To estimate the numbers of classes, we studied the ICL criterion (Integrated Completed Likelihood), for which we proposed an exact form.
After studying its asymptotic approximation, we proposed a BIC criterion (Bayesian Information Criterion); we conjecture that the two criteria select the same results and that these estimates are consistent, a conjecture supported by theoretical and empirical results. Finally, we compared the different combinations and proposed a methodology for performing a cross-analysis of data by co-clustering.
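As a concrete illustration of the generative model studied in this thesis, the following sketch simulates a binary data matrix from a latent block model: each row and each column receives a latent class, and each cell is Bernoulli with a parameter that depends only on its (row class, column class) block. Function names and parameter values here are illustrative, not taken from the thesis:

```python
import numpy as np

def simulate_lbm(n, d, row_props, col_props, alpha, rng):
    """Draw a binary n x d matrix from a latent block model: each row
    (resp. column) gets a latent class, and each cell is Bernoulli with
    the parameter of its (row class, column class) block."""
    z = rng.choice(len(row_props), size=n, p=row_props)  # latent row labels
    w = rng.choice(len(col_props), size=d, p=col_props)  # latent column labels
    probs = alpha[z][:, w]                 # per-cell Bernoulli parameters
    x = (rng.random((n, d)) < probs).astype(int)
    return x, z, w

rng = np.random.default_rng(0)
alpha = np.array([[0.9, 0.1],              # block-specific Bernoulli parameters
                  [0.2, 0.8]])
x, z, w = simulate_lbm(60, 40, [0.5, 0.5], [0.5, 0.5], alpha, rng)
print(x.shape)  # (60, 40)
```

Conditionally on `z` and `w` the cells are independent, which is exactly the conditional-independence assumption of the model; the joint law of the labels themselves, however, does not factorize once the data are observed.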
APA, Harvard, Vancouver, ISO, and other styles
6

Tami, Myriam. "Approche EM pour modèles multi-blocs à facteurs à une équation structurelle." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT303/document.

Full text
Abstract:
Les modèles d'équations structurelles à variables latentes permettent de modéliser des relations entre des variables observables et non observables. Les deux paradigmes actuels d'estimation de ces modèles sont les méthodes de moindres carrés partiels sur composantes et l'analyse de la structure de covariance. Dans ce travail, après avoir décrit les deux principales méthodes d'estimation que sont PLS et LISREL, nous proposons une approche d'estimation fondée sur la maximisation par algorithme EM de la vraisemblance globale d'un modèle à facteurs latents et à une équation structurelle. Nous en étudions les performances sur des données simulées et nous montrons, via une application sur des données réelles environnementales, comment construire pratiquement un modèle et en évaluer la qualité. Enfin, nous appliquons l'approche développée dans le contexte d'un essai clinique en cancérologie pour l'étude de données longitudinales de qualité de vie. Nous montrons que par la réduction efficace de la dimension des données, l'approche EM simplifie l'analyse longitudinale de la qualité de vie en évitant les tests multiples. Ainsi, elle contribue à faciliter l'évaluation du bénéfice clinique d'un traitement
Structural equation models enable the modeling of interactions between observed variables and latent ones. The two leading estimation paradigms are partial least squares on components and covariance-structure analysis. In this work, we first describe the PLS and LISREL methods, and then we propose an estimation method that uses the EM algorithm to maximize the likelihood of a structural equation model with latent factors. Through a simulation study, we investigate how fast and accurate the method is, and through an application to real environmental data, we show how one can construct a model in practice and evaluate its quality. Finally, in the context of oncology, we apply the EM approach to health-related quality-of-life data. We show that, by efficiently reducing the dimension of the data, it simplifies the longitudinal analysis of quality of life by avoiding multiple testing, and thus helps evaluate the clinical benefit of a treatment
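The estimation principle, maximizing the likelihood of a latent-factor model by EM, can be sketched on the simplest possible case: a single latent factor with isotropic Gaussian noise. This toy model and its closed-form E and M steps are textbook material; they are a simplified stand-in for, not the thesis's full structural-equation model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate data from a one-factor model: x_i = w_true * f_i + noise,
# with latent factor f_i ~ N(0, 1) and isotropic Gaussian noise.
n, d = 500, 5
w_true = np.array([2.0, -1.0, 0.5, 1.5, -0.5])
f = rng.normal(size=n)
x = f[:, None] * w_true + rng.normal(scale=0.3, size=(n, d))

# EM iterations: w are the loadings, s2 the noise variance.
w = rng.normal(size=d)
s2 = 1.0
for _ in range(100):
    # E-step: posterior moments of each latent factor given x_i.
    m = w @ w + s2
    ef = x @ w / m              # E[f_i | x_i]
    ef2 = s2 / m + ef ** 2      # E[f_i^2 | x_i]
    # M-step: maximize the expected complete-data log-likelihood.
    w = x.T @ ef / ef2.sum()
    s2 = (np.sum(x ** 2) - 2 * np.sum(ef * (x @ w))
          + ef2.sum() * (w @ w)) / (n * d)

# The estimated loadings should align with w_true up to sign.
cos = abs(w @ w_true) / (np.linalg.norm(w) * np.linalg.norm(w_true))
print(round(float(cos), 3))
```

Each iteration increases the observed-data likelihood, which is the property the thesis's EM approach exploits in the richer multi-block setting.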
APA, Harvard, Vancouver, ISO, and other styles
7

Laclau, Charlotte. "Hard and fuzzy block clustering algorithms for high dimensional data." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB014.

Full text
Abstract:
Notre capacité grandissante à collecter et stocker des données a fait de l'apprentissage non supervisé un outil indispensable qui permet la découverte de structures et de modèles sous-jacents aux données, sans avoir à étiqueter les individus manuellement. Parmi les différentes approches proposées pour aborder ce type de problème, le clustering est très certainement le plus répandu. Le clustering suppose que chaque groupe, également appelé cluster, est distribué autour d'un centre défini en fonction des valeurs qu'il prend pour l'ensemble des variables. Cependant, dans certaines applications du monde réel, et notamment dans le cas de données de dimension importante, cette hypothèse peut être invalidée. Aussi, les algorithmes de co-clustering ont-ils été proposés : ils décrivent les groupes d'individus par un ou plusieurs sous-ensembles de variables au regard de leur pertinence. La structure des données finalement obtenue est composée de blocs communément appelés co-clusters. Dans les deux premiers chapitres de cette thèse, nous présentons deux approches de co-clustering permettant de différencier les variables pertinentes du bruit en fonction de leur capacité à révéler la structure latente des données, dans un cadre probabiliste d'une part et basée sur la notion de métrique, d'autre part. L'approche probabiliste utilise le principe des modèles de mélanges, et suppose que les variables non pertinentes sont distribuées selon une loi de probabilité dont les paramètres sont indépendants de la partition des données en clusters. L'approche métrique est fondée sur l'utilisation d'une distance adaptative permettant d'affecter à chaque variable un poids définissant sa contribution au co-clustering. D'un point de vue théorique, nous démontrons la convergence des algorithmes proposés en nous appuyant sur le théorème de convergence de Zangwill.
Dans les deux chapitres suivants, nous considérons un cas particulier de structure en co-clustering, qui suppose que chaque sous-ensemble d'individus est décrit par un unique sous-ensemble de variables. La réorganisation de la matrice originale selon les partitions obtenues sous cette hypothèse révèle alors une structure de blocs homogènes diagonaux. Comme pour les deux contributions précédentes, nous nous plaçons dans les cadres probabiliste et métrique. L'idée principale des méthodes proposées est d'imposer deux types de contraintes : (1) nous fixons le même nombre de clusters pour les individus et les variables ; (2) nous cherchons une structure de la matrice de données d'origine qui possède les valeurs maximales sur sa diagonale (par exemple pour le cas des données binaires, on cherche des blocs diagonaux majoritairement composés de valeurs 1, et de 0 à l'extérieur de la diagonale). Les approches proposées bénéficient des garanties de convergence issues des résultats des chapitres précédents. Enfin, pour chaque chapitre, nous dérivons des algorithmes permettant d'obtenir des partitions dures et floues. Nous évaluons nos contributions sur un large éventail de données simulées et liées à des applications réelles telles que le text mining, dont les données peuvent être binaires ou continues. Ces expérimentations nous permettent également de mettre en avant les avantages et les inconvénients des différentes approches proposées. Pour conclure, nous pensons que cette thèse couvre explicitement une grande majorité des scénarios possibles découlant du co-clustering flou et dur, et peut être vue comme une généralisation de certaines approches de biclustering populaires.
With the increasing amount of data available, unsupervised learning has become an important tool for discovering underlying patterns without the need to label instances manually. Among the different approaches proposed to tackle this problem, clustering is arguably the most popular one. Clustering usually assumes that each group, also called a cluster, is distributed around a center defined in terms of all the features, while in some real-world applications dealing with high-dimensional data this assumption may be false. To address this, co-clustering algorithms were proposed to describe clusters by the subsets of features that are most relevant to them. The obtained latent structure of the data is composed of blocks usually called co-clusters. In the first two chapters, we describe two co-clustering methods that differentiate the relevance of features according to their capability to reveal the latent structure of the data, in a probabilistic and in a distance-based framework. The probabilistic approach uses the mixture-model framework, where the irrelevant features are assumed to follow a probability distribution that is independent of the co-clustering structure. The distance-based (also called metric-based) approach relies on an adaptive metric in which each variable is assigned a weight defining its contribution to the resulting co-clustering. From the theoretical point of view, we show the global convergence of the proposed algorithms using Zangwill's convergence theorem. In the last two chapters, we consider a special case of co-clustering where, contrary to the original setting, each subset of instances is described by a unique subset of features, resulting in a diagonal structure of the initial data matrix. As for the first two contributions, we consider both probabilistic and metric-based approaches.
The main idea of the proposed contributions is to impose two kinds of constraints: (1) we fix the number of row clusters to the number of column clusters; (2) we seek a structure of the original data matrix that has maximum values on its diagonal (for instance, for binary data, we look for diagonal blocks composed mostly of ones, with zeros outside the main diagonal). The proposed approaches enjoy the convergence guarantees derived from the results of the previous chapters. Finally, we present both hard and fuzzy versions of the proposed algorithms. We evaluate our contributions on a wide variety of synthetic and real-world benchmark binary and continuous data sets related to text-mining applications, and analyze the advantages and drawbacks of each approach. To conclude, we believe that this thesis explicitly covers a vast majority of the scenarios arising in hard and fuzzy co-clustering and can be seen as a generalization of some popular biclustering approaches
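A minimal sketch of the diagonal co-clustering setting described above, under the assumption of binary data and an equal number of row and column clusters: row and column labels are updated alternately so that ones concentrate in the diagonal blocks. This is an illustrative alternating scheme, not the thesis's probabilistic or metric-based algorithms:

```python
import numpy as np

rng = np.random.default_rng(2)

# Planted diagonal co-clustering: ones inside the diagonal blocks, zeros
# outside, plus 5% flip noise, as in the binary setting described above.
k, n, d = 3, 90, 60
z_true = np.repeat(np.arange(k), n // k)
w_true = np.repeat(np.arange(k), d // k)
x = (z_true[:, None] == w_true[None, :]).astype(int)
noise = rng.random((n, d)) < 0.05
x = np.where(noise, 1 - x, x)

# Alternating updates: each row (resp. column) picks the cluster that
# maximizes agreement with the "ones on the diagonal blocks" pattern.
z = rng.integers(k, size=n)
w = rng.integers(k, size=d)
for _ in range(20):
    col_mask = np.stack([(w == c).astype(int) for c in range(k)])  # k x d
    z = np.argmax(x @ col_mask.T + (1 - x) @ (1 - col_mask).T, axis=1)
    row_mask = np.stack([(z == c).astype(int) for c in range(k)])  # k x n
    w = np.argmax(x.T @ row_mask.T + (1 - x.T) @ (1 - row_mask).T, axis=1)

# Fraction of cells matching the reconstructed diagonal pattern.
acc = np.mean(x == (z[:, None] == w[None, :]))
print(round(float(acc), 3))
```

Each update can only increase the agreement score, so the scheme converges to a local optimum; like the hard algorithms in the thesis, it offers no guarantee of reaching the global one from a random initialization.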
APA, Harvard, Vancouver, ISO, and other styles
8

Laclau, Charlotte. "Hard and fuzzy block clustering algorithms for high dimensional data." Electronic Thesis or Diss., Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Schmutz, Amandine. "Contributions à l'analyse de données fonctionnelles multivariées, application à l'étude de la locomotion du cheval de sport." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1241.

Full text
Abstract:
Avec l'essor des objets connectés pour fournir un suivi systématique, objectif et fiable aux sportifs et à leur entraineur, de plus en plus de paramètres sont collectés pour un même individu. Une alternative aux méthodes d'évaluation en laboratoire est l'utilisation de capteurs inertiels qui permettent de suivre la performance sans l'entraver, sans limite d'espace et sans procédure d'initialisation fastidieuse. Les données collectées par ces capteurs peuvent être vues comme des données fonctionnelles multivariées : ce sont des entités quantitatives évoluant au cours du temps de façon simultanée pour un même individu statistique. Cette thèse a pour objectif de chercher des paramètres d'analyse de la locomotion du cheval athlète à l'aide d'un capteur positionné dans la selle. Cet objet connecté (centrale inertielle, IMU) pour le secteur équestre permet de collecter l'accélération et la vitesse angulaire au cours du temps, dans les trois directions de l'espace et selon une fréquence d'échantillonnage de 100 Hz. Une base de données a ainsi été constituée rassemblant 3221 foulées de galop, collectées en ligne droite et en courbe et issues de 58 chevaux de saut d'obstacles de niveaux et d'âges variés. Nous avons restreint notre travail à la prédiction de trois paramètres : la vitesse par foulée, la longueur de foulée et la qualité de saut. Pour répondre aux deux premiers objectifs nous avons développé une méthode de clustering fonctionnel multivarié permettant de diviser notre base de données en sous-groupes plus homogènes du point de vue des signaux collectés. Cette méthode permet de caractériser chaque groupe par son profil moyen, facilitant leur compréhension et leur interprétation. Mais, contre toute attente, ce modèle de clustering n'a pas permis d'améliorer les résultats de prédiction de vitesse, les SVM restant le modèle ayant le pourcentage d'erreur inférieur à 0.6 m/s le plus faible.
Il en est de même pour la longueur de foulée où une précision de 20 cm est atteinte grâce aux Support Vector Machine (SVM). Ces résultats peuvent s'expliquer par le fait que notre base de données est composée uniquement de 58 chevaux, ce qui est un nombre d'individus très faible pour du clustering. Nous avons ensuite étendu cette méthode au co-clustering de courbes fonctionnelles multivariées afin de faciliter la fouille des données collectées pour un même cheval au cours du temps. Cette méthode pourrait permettre de détecter et prévenir d'éventuels troubles locomoteurs, principale source d'arrêt du cheval de saut d'obstacle. Pour finir, nous avons investigué les liens entre qualité du saut et les signaux collectés par l'IMU. Nos premiers résultats montrent que les signaux collectés par la selle seuls ne suffisent pas à différencier finement la qualité du saut d'obstacle. Un apport d'information supplémentaire sera nécessaire, à l'aide d'autres capteurs complémentaires par exemple ou encore en étoffant la base de données de façon à avoir un panel de chevaux et de profils de sauts plus variés
With the growth of the smart-device market, which provides athletes and trainers with a systematic, objective and reliable follow-up, more and more parameters are monitored for the same individual. An alternative to laboratory evaluation methods is the use of inertial sensors, which allow following the performance without hindering it, without space limits and without tedious initialization procedures. Data collected by these sensors can be seen as multivariate functional data: quantitative entities evolving along time, collected simultaneously for the same statistical individual. The aim of this thesis is to find parameters for analysing the locomotion of the athlete horse thanks to a sensor placed in the saddle. This connected device (inertial measurement unit, IMU) for equestrian sports collects acceleration and angular velocity along time, in the three spatial directions and with a sampling frequency of 100 Hz. The database used for model development is made of 3221 canter strides from 58 ridden jumping horses of different ages and competition levels. Two different protocols were used to collect data: one for straight paths and one for curved paths. We restricted our work to the prediction of three parameters: the speed per stride, the stride length and the jump quality. To meet the first two objectives, we developed a multivariate functional clustering method that allows the division of the database into smaller, more homogeneous sub-groups from the point of view of the collected signals. This method characterizes each group by its average profile, which eases data understanding and interpretation. But surprisingly, this clustering model did not improve the speed-prediction results; the Support Vector Machine (SVM) remains the model with the lowest percentage of errors above 0.6 m/s. The same applies to the stride length, where an accuracy of 20 cm is reached with the SVM model.
Those results can be explained by the fact that our database is built from only 58 horses, which is quite a low number of individuals for a clustering method. We then extended this method to the co-clustering of multivariate functional data in order to ease the data mining of horses' follow-up databases. This method might allow the detection and prevention of locomotor disturbances, the main cause of retirement for show-jumping horses. Lastly, we looked for correlations between jumping quality and the signals collected by the IMU. First results show that the signals collected by the saddle alone are not sufficient to finely differentiate jumping quality. Additional information will be needed, for example from complementary sensors or by expanding the database to cover a more diverse range of horses and jump profiles
APA, Harvard, Vancouver, ISO, and other styles
10

Ben, slimen Yosra. "Knowledge extraction from huge volume of heterogeneous data for an automated radio network management." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE2046.

Full text
Abstract:
En vue d'aider les opérateurs mobiles dans la gestion de leurs réseaux d'accès radio, trois modèles sont proposés. Le premier modèle est une approche supervisée pour la prévention des anomalies. Son objectif est de détecter les dysfonctionnements futurs d'un ensemble de cellules en observant les indicateurs clés de performance considérés comme des données fonctionnelles. Par conséquent, en alertant les ingénieurs et les réseaux auto-organisés, les opérateurs mobiles peuvent être préservés d'une dégradation de performance de leurs réseaux. Le modèle a prouvé son efficacité avec une application sur données réelles qui vise à détecter la dégradation de capacité, les problèmes d'accessibilité et les coupures d'appel dans des réseaux LTE. À cause de la diversité des technologies mobiles, le volume de données qui doivent être quotidiennement observées par les opérateurs mobiles devient énorme. Ce grand volume est devenu un obstacle pour la gestion des réseaux mobiles. Le second modèle vise à fournir une représentation simplifiée des indicateurs clés de performance pour une analyse plus facile. Ainsi, un modèle de classification croisée pour données fonctionnelles est proposé. L'algorithme est basé sur un modèle de blocs latents dont chaque courbe est identifiée par ses composantes principales fonctionnelles. Ces dernières sont modélisées par une distribution gaussienne dont les paramètres sont spécifiques à chaque bloc. Les paramètres sont estimés par un algorithme EM stochastique avec un échantillonnage de Gibbs. Ce modèle est le premier modèle de classification croisée pour données fonctionnelles et il a prouvé son efficacité sur des données simulées et aussi sur une application réelle qui vise à aider dans l'optimisation de la topologie des réseaux mobiles 4G. Le troisième modèle vise à résumer l'information issue des indicateurs clés de performance et aussi des alarmes réseaux.
Un modèle de classification croisée des données mixtes, fonctionnelles et binaires, est alors proposé. L'approche est basée sur un modèle de blocs latents et trois algorithmes sont comparés pour son inférence : EM stochastique avec un échantillonneur de Gibbs, EM de classification et EM variationnel. Le modèle proposé est le premier algorithme de classification croisée pour données fonctionnelles et binaires. Il a prouvé son efficacité sur des données simulées et sur des données réelles extraites de plusieurs réseaux mobiles 4G.
In order to help the mobile operators with the management of their radio access networks, three models are proposed. The first model is a supervised approach for the prevention of mobile-network anomalies. Its objective is to detect future malfunctions of a set of cells by observing only key performance indicators (KPIs), which are considered as functional data. Thus, by alerting the engineers as well as the self-organizing networks, mobile operators can be saved from a certain performance degradation. The model has proven its efficiency in an application on real data that aims to detect capacity-degradation, accessibility, and call-drop anomalies in LTE networks. Due to the diversity of mobile-network technologies, the volume of data that mobile operators have to observe on a daily basis has become enormous. This huge volume has become an obstacle to mobile-network management. The second model aims to provide a simplified representation of KPIs for easier analysis. Hence, a model-based co-clustering algorithm for functional data is proposed. The algorithm relies on the latent block model, in which each curve is identified by its functional principal components; these are modeled by a multivariate Gaussian distribution whose parameters are block-specific and are estimated by a stochastic EM algorithm embedding a Gibbs sampler. This model is the first co-clustering approach for functional data, and it has proven its efficiency on simulated data and in a real-data application that helps optimize the topology of 4G mobile networks. The third model aims to summarize the information of data issued from KPIs and also from alarms. A model-based co-clustering algorithm for mixed data, functional and binary, is therefore proposed. The approach relies on the latent block model, and three algorithms are compared for its inference: stochastic EM within Gibbs sampling, classification EM, and variational EM.
The proposed model is the first co-clustering algorithm for mixed data that deals with functional and binary features. It has proven its efficiency on simulated data and on real data extracted from live 4G mobile networks
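The curve representation underlying the second model, each functional observation reduced to a few functional principal-component scores, can be sketched with plain numpy on curves discretized on a common grid. The smoothing step and the block-specific Gaussian modeling of the thesis are omitted, and the data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic discretized curves on a common grid: two groups of noisy
# sine/cosine signals, standing in for KPI time series.
t = np.linspace(0.0, 1.0, 50)
curves = np.vstack([
    np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(30, 50)),
    np.cos(2 * np.pi * t) + 0.1 * rng.normal(size=(30, 50)),
])

# Functional PCA on the discretized curves via SVD of the centered matrix:
# each curve is then identified by its leading principal-component scores.
centered = curves - curves.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T   # two leading FPC scores per curve

# The low-dimensional scores already separate the two groups of curves.
gap = abs(scores[:30, 0].mean() - scores[30:, 0].mean())
print(scores.shape)  # (60, 2)
```

Replacing each 50-point curve by two scores is the dimension reduction that makes a block-specific Gaussian model on the scores tractable.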
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Latent block models"

1

Fedorov, Viktor, and Mihail San'kov. Management: theory and practice. ru: INFRA-M Academic Publishing LLC., 2023. http://dx.doi.org/10.12737/1859086.

Full text
Abstract:
The textbook presents the most important aspects of the theory and practice of modern management in a concise and accessible form. The section "Management Theory" is accompanied by questions and tasks for self-assessment, topics for essays and reports, and a list of additional literature for independent study. The section "Management Practice" contains test methods, practical tasks for individual and collective student work, and business cases for analysis, discussion and managerial decision-making. The manual additionally includes a block of self-test tasks and a glossary that can be used to monitor progress through the course. It meets the requirements of the latest generation of federal state educational standards for secondary vocational education. It is intended for students in economics and management programs, to build basic knowledge in the field of management.
APA, Harvard, Vancouver, ISO, and other styles
2

Efremov, German. Modeling of chemical and technological processes. ru: INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1090526.

Full text
Abstract:
In an accessible form, the textbook presents the theoretical foundations of physical and mathematical modeling; considers the modeling of mass, heat and momentum transfer processes and the relationships and analogies between them; and studies similarity theory, its application in modeling, and models of flow structure in apparatuses. Experimental-statistical and experimental-analytical modeling methods are also described, including "black box" methods, the design of passive and of active full and fractional factorial experiments, and the adjustment of models based on experimental results. In addition, the modeling of chemical reactors and methods for optimizing chemical-technological processes, together with their selection, comparison and application, are considered. Examples of modeling and optimizing processes in the chemical, petrochemical and biotechnology industries are given on a computer in the Excel and MathCAD environments. The appendices cover the basics of working in the MathCAD environment and elements of matrix algebra. It meets the requirements of the latest generation of federal state educational standards for higher education. It is intended for bachelor's students preparing for the chemical, petrochemical, food, textile and light industries, and can be useful for specialists and master's students, as well as for scientists, engineers and postgraduates dealing with the problem under consideration.
APA, Harvard, Vancouver, ISO, and other styles
3

Kuz'mina, Natal'ya. Criminology and crime prevention. ru: INFRA-M Academic Publishing LLC., 2023. http://dx.doi.org/10.12737/1900600.

Full text
Abstract:
The textbook presents modern material on all sections of the discipline "Criminology and crime prevention". The content of the teachings on crime and its causes, the identity of the criminal, the mechanism of committing a specific crime is revealed. The characteristics of the current state of certain types of crime using qualitative and quantitative criminological indicators are given. The problems of carrying out criminological research in the modern period are analyzed using a broad empirical base (including data from criminal law statistics). The section "Crime prevention System" has a practical orientation, which includes the legal foundations and areas of law enforcement agencies' activities in the implementation of crime prevention and prevention in Russia. At the end of each chapter of the textbook, a block of control questions and tasks is offered, with the help of which students can test their knowledge and consolidate the studied material. Meets the requirements of the federal state educational standard of secondary vocational education of the latest generation. For students of secondary vocational education institutions studying in the specialty 40.02.02 "Law enforcement", as well as teachers.
APA, Harvard, Vancouver, ISO, and other styles
4

From the Norman Conquest to the Black Death: An anthology of writings from England. Oxford: Oxford University Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

The Black campus movement: Black students and the racial reconstitution of higher education, 1965-1972. New York: Palgrave Macmillan, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Schumacher, Ulrich (1941-), Rouven Lotz, and Emil Schumacher Museum, eds. Karel Appel: Der abstrakte Blick. Hagen: Emil Schumacher Museum Hagen, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rankine, Patrice D. Ulysses in Black: Ralph Ellison, classicism, and African American literature. Madison, WI: University of Wisconsin Press, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Succi, Sauro. Lattice Boltzmann Models without Underlying Boolean Microdynamics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199592357.003.0013.

Full text
Abstract:
Chapter 12 showed how to circumvent two major stumbling blocks of the LGCA approach: statistical noise and exponential complexity of the collision rule. Yet, the ensuing LB still remains connected to low Reynolds flows, due to the low collisionality of the underlying LGCA rules. The high-viscosity barrier was broken just a few months later, when it was realized how to devise LB models top-down, i.e., based on the macroscopic hydrodynamic target, rather than bottom-up, from underlying microdynamics. Most importantly, besides breaking the low-Reynolds barrier, the top-down approach has proven very influential for many subsequent developments of the LB method to this day.
APA, Harvard, Vancouver, ISO, and other styles
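The "top-down" construction the abstract describes starts from a discrete equilibrium matched to the target hydrodynamics and relaxes the lattice populations toward it, rather than deriving them from Boolean microdynamics. A minimal, self-contained D2Q9 BGK sketch (our own illustration, not code from the book) conveys the idea:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their quadrature weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Top-down equilibrium distribution matched to the hydrodynamic target."""
    cu = u @ C.T                              # (nx, ny, 9) projections c_i . u
    usq = (u ** 2).sum(-1)[..., None]
    return rho[..., None] * W * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def lbm_step(f, tau=0.8):
    """One BGK collide-and-stream step on a periodic grid; f has shape (nx, ny, 9)."""
    rho = f.sum(-1)
    u = (f @ C) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau   # collision: relax toward equilibrium
    for i, (cx, cy) in enumerate(C):          # streaming: shift along each velocity
        f[..., i] = np.roll(np.roll(f[..., i], cx, axis=0), cy, axis=1)
    return f
```

Mass and momentum are conserved exactly because the equilibrium shares the local density and velocity of `f`; the relaxation time `tau` directly sets the viscosity, which is how the high-viscosity barrier of LGCA-derived models was lifted.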
9

Palomäki, Outi, and Petri Volmanen. Alternative neural blocks for labour analgesia. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198713333.003.0018.

Full text
Abstract:
Although neuraxial analgesia is available to the majority of parturients in developed countries, alternative neural blocks for labour analgesia are needed for medical, individual, and institutional reasons. Paracervical and pudendal blocks are usually administered transvaginally by an obstetrician. An injection of 0.25% bupivacaine using a superficial technique into the lateral fornixes gives rapid pain relief and has been found to have no negative effect on either fetal oxygenation, or maternal and neonatal outcomes. Low rates of post-analgesic bradycardia and high rates of spontaneous vaginal delivery have been described in low-risk populations. The analgesic effect of a paracervical block is moderate and is limited to the first stage of labour. A pudendal block, administered transvaginally, can be used for pain relief in the late first stage, the second stage, in cases of vacuum extraction, or for episiotomy repair. In clinical use, 1% lidocaine gives rapid pain relief but the success rate is variable. The complications of pudendal block are rare and localized. The sympathetic and paravertebral blocks are currently mainly of historic interest. However, they may benefit parturients in exceptional conditions if the anaesthesiologist is experienced in the techniques. Lumbar sympathetic block provides fast pain relief during the first stage of labour when a combination of 0.5% bupivacaine with fentanyl and epinephrine is employed. With the currently available data, no conclusion on the analgesic effects of thoracic paravertebral block can be drawn when it is used for labour pain relief. Potential maternal risks limit the use of these methods in modern obstetrics.
APA, Harvard, Vancouver, ISO, and other styles
10

Hoffnung-Garskof, Jesse E. Racial Migrations. Princeton University Press, 2019. http://dx.doi.org/10.23943/princeton/9780691183534.001.0001.

Full text
Abstract:
In the late nineteenth century, a small group of Cubans and Puerto Ricans of African descent settled in the segregated tenements of New York City. At an immigrant educational society in Greenwich Village, these early Afro-Latino New Yorkers taught themselves to be poets, journalists, and revolutionaries. At the same time, these individuals built a political network and articulated an ideal of revolutionary nationalism centered on the projects of racial and social justice. These efforts were critical to the poet and diplomat José Martí's writings about race and his bid for leadership among Cuban exiles, and to the later struggle to create space for black political participation in the Cuban Republic. This book presents a vivid portrait of these largely forgotten migrant revolutionaries, weaving together their experiences of migrating while black, their relationships with African American civil rights leaders, and their evolving participation in nationalist political movements. By placing Afro-Latino New Yorkers at the center of the story, the book offers a new interpretation of the revolutionary politics of the Spanish Caribbean, including the idea that Cuba could become a nation without racial divisions. A model of transnational and comparative research, the book reveals the complexities of race-making within migrant communities and the power of small groups of immigrants to transform their home societies.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Latent block models"

1

Boutalbi, Rafika, Lazhar Labiod, and Mohamed Nadif. "Latent Block Regression Model." In Studies in Classification, Data Analysis, and Knowledge Organization, 73–81. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09034-9_9.

Full text
Abstract:
When dealing with high-dimensional sparse data, such as in recommender systems, co-clustering turns out to be more beneficial than one-sided clustering, even if one is interested in clustering along one dimension only. Thereby, co-clusterwise is a natural extension of clusterwise. Unfortunately, none of the existing approaches considers covariates on both dimensions of a data matrix. In this paper, we propose a Latent Block Regression Model (LBRM) overcoming this limit. For inference, we propose an algorithm performing simultaneously co-clustering and regression, where a linear regression model characterizes each block. Placing the estimate of the model parameters under the maximum likelihood approach, we derive a Variational Expectation-Maximization (VEM) algorithm for estimating the model's parameters. The effectiveness of the proposed VEM-LBRM is illustrated through simulated datasets.
APA, Harvard, Vancouver, ISO, and other styles
2

Khoufache, Reda, Anisse Belhadj, Hanene Azzag, and Mustapha Lebbah. "Distributed MCMC Inference for Bayesian Non-parametric Latent Block Model." In Advances in Knowledge Discovery and Data Mining, 271–83. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-2242-6_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lücke, Jörg, Zhenwen Dai, and Georgios Exarchakis. "Truncated Variational Sampling for ‘Black Box’ Optimization of Generative Models." In Latent Variable Analysis and Signal Separation, 467–78. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93764-9_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Guarino, Stefano, Enrico Mastrostefano, and Davide Torre. "The Hyperbolic Geometric Block Model and Networks with Latent and Explicit Geometries." In Complex Networks and Their Applications XI, 109–21. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-21131-7_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Salinas Ruíz, Josafhat, Osval Antonio Montesinos López, Gabriela Hernández Ramírez, and Jose Crossa Hiriart. "Generalized Linear Mixed Models for Repeated Measurements." In Generalized Linear Mixed Models with Applications in Agriculture and Biology, 377–423. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-32800-8_9.

Full text
Abstract:
Repeated measures data, also known as longitudinal data, are those derived from experiments in which observations are made on the same experimental units at various planned times. These experiments can be of the regression or analysis of variance (ANOVA) type, can contain two or more treatments, and are set up using familiar designs, such as a completely randomized design (CRD), a randomized complete block design (RCBD), or randomized incomplete blocks if blocking is appropriate, or using row-and-column designs such as Latin squares when appropriate. Repeated measures designs are widely used in the biological sciences and are fairly well understood for normally distributed data, but less so for binary, ordinal, count data, and so on. Nevertheless, recent developments in statistical computing methodology and software have greatly increased the number of tools available for analyzing categorical data.
APA, Harvard, Vancouver, ISO, and other styles
6

Osborne, Martin J., and Ariel Rubinstein. "Choice." In Models in Microeconomic Theory, 17–30. 2nd ed. Cambridge, UK: Open Book Publishers, 2023. http://dx.doi.org/10.11647/obp.0362.02.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Osborne, Martin J., and Ariel Rubinstein. "Choice." In Models in Microeconomic Theory, 17–30. 2nd ed. Cambridge, UK: Open Book Publishers, 2023. http://dx.doi.org/10.11647/obp.0361.02.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Banaś, Monika. "Women's “Black Protest” in Poland." In Protest in Late Modern Societies, 117–31. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003270065-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Marchello, Giulia, Marco Corneli, and Charles Bouveyron. "A Deep Dynamic Latent Block Model for the Co-Clustering of Zero-Inflated Data Matrices." In Machine Learning and Knowledge Discovery in Databases: Research Track, 695–710. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43412-9_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lampridis, Orestis, Riccardo Guidotti, and Salvatore Ruggieri. "Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars." In Discovery Science, 357–73. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61527-7_24.

Full text
Abstract:
We present xspells, a model-agnostic local approach for explaining the decisions of a black box model for sentiment classification of short texts. The explanations provided consist of a set of exemplar sentences and a set of counter-exemplar sentences. The former are examples classified by the black box with the same label as the text to explain. The latter are examples classified with a different label (a form of counter-factuals). Both are close in meaning to the text to explain, and both are meaningful sentences, albeit synthetically generated. xspells generates neighbors of the text to explain in a latent space, using Variational Autoencoders for encoding text and decoding latent instances. A decision tree is learned from randomly generated neighbors and used to drive the selection of the exemplars and counter-exemplars. We report experiments on two datasets showing that xspells outperforms the well-known lime method in terms of quality of explanations, fidelity, and usefulness, and that it is comparable to it in terms of stability.
APA, Harvard, Vancouver, ISO, and other styles
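The exemplar/counter-exemplar idea can be sketched independently of text: sample neighbours of an instance in a latent space, query the black box, and keep the closest same-label and different-label neighbours. The toy numeric stand-in below is our own construction (the real xspells works on VAE-encoded sentences and additionally fits a surrogate decision tree); `explain_with_exemplars` and its parameters are hypothetical names.

```python
import numpy as np

def explain_with_exemplars(black_box, z0, n_neighbors=200, scale=0.5, k=3, seed=0):
    """Local explanation sketch: sample latent neighbours of z0, query the
    black box, and return the k closest exemplars (same label as z0) and
    the k closest counter-exemplars (different label)."""
    rng = np.random.default_rng(seed)
    Z = z0 + scale * rng.standard_normal((n_neighbors, z0.size))  # latent neighbours
    y = black_box(Z)
    y0 = black_box(z0[None, :])[0]
    dist = np.linalg.norm(Z - z0, axis=1)
    same = np.where(y == y0)[0]
    diff = np.where(y != y0)[0]
    exemplars = Z[same[np.argsort(dist[same])[:k]]]
    counter = Z[diff[np.argsort(dist[diff])[:k]]]
    return exemplars, counter
```

With a linear black box such as `lambda Z: (Z[:, 0] > 0).astype(int)`, the counter-exemplars cluster just across the decision boundary, which is exactly the "close in meaning, different label" behaviour the abstract describes.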

Conference papers on the topic "Latent block models"

1

Li, Changsheng, Handong Ma, Zhao Kang, Ye Yuan, Xiao-Yu Zhang, and Guoren Wang. "On Deep Unsupervised Active Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/364.

Full text
Abstract:
Unsupervised active learning has attracted increasing attention in recent years; its goal is to select representative samples in an unsupervised setting for human annotation. Most existing works are based on shallow linear models, assuming that each sample can be well approximated by the span (i.e., the set of all linear combinations) of certain selected samples, which are then taken as representative ones to label. In practice, however, the data do not necessarily conform to linear models, and how to model the nonlinearity of data often becomes the key to success. In this paper, we present a novel Deep neural network framework for Unsupervised Active Learning, called DUAL. DUAL explicitly learns a nonlinear embedding that maps each input into a latent space through an encoder-decoder architecture, and introduces a selection block to select representative samples in the learnt latent space. In the selection block, DUAL simultaneously preserves the whole input patterns as well as the cluster structure of the data. Extensive experiments are performed on six publicly available datasets, and the experimental results clearly demonstrate the efficacy of our method compared with the state of the art.
APA, Harvard, Vancouver, ISO, and other styles
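The "selection block" above picks representative samples in a learnt latent space. As a simplified, purely illustrative stand-in (ignoring the encoder-decoder and DUAL's cluster-preservation objective), greedy farthest-point selection over an embedding already conveys the idea of covering the latent space with a few representatives; `select_representatives` is our own hypothetical helper.

```python
import numpy as np

def select_representatives(Z, k):
    """Greedy farthest-point selection in a latent embedding Z (n x d).

    Starts near the centroid, then repeatedly adds the point farthest
    from everything chosen so far, so the k picks spread over the data."""
    chosen = [int(np.argmin(np.linalg.norm(Z - Z.mean(0), axis=1)))]
    d = np.linalg.norm(Z - Z[chosen[0]], axis=1)   # distance to nearest chosen point
    while len(chosen) < k:
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(Z - Z[nxt], axis=1))
    return chosen
```

On well-separated latent clusters, this heuristic picks one representative per cluster, which is the behaviour an annotation budget of k samples should exploit.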
2

Zhu, Feida, Junwei Zhu, Wenqing Chu, Ying Tai, Zhifeng Xie, Xiaoming Huang, and Chengjie Wang. "HifiHead: One-Shot High Fidelity Neural Head Synthesis with 3D Control." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/244.

Full text
Abstract:
We propose HifiHead, a high fidelity neural talking head synthesis method, which can well preserve the source image's appearance and control the motion (e.g., pose, expression, gaze) flexibly with 3D morphable face models (3DMMs) parameters derived from a driving image or indicated by users. Existing head synthesis works mainly focus on low-resolution inputs. Instead, we exploit the powerful generative prior embedded in StyleGAN to achieve high-quality head synthesis and editing. Specifically, we first extract the source image's appearance and driving image's motion to construct 3D face descriptors, which are employed as latent style codes for the generator. Meanwhile, hierarchical representations are extracted from the source and rendered 3D images respectively to provide faithful appearance and shape guidance. Considering the appearance representations need high-resolution flow fields for spatial transform, we propose a coarse-to-fine style-based generator consisting of a series of feature alignment and refinement (FAR) blocks. Each FAR block updates the dense flow fields and refines RGB outputs simultaneously for efficiency. Extensive experiments show that our method blends source appearance and target motion more accurately along with more photo-realistic results than previous state-of-the-art approaches.
APA, Harvard, Vancouver, ISO, and other styles
3

Rywik, Marcin, Axel Zimmermann, Alexander J. Eder, Edoardo Scoletta, and Wolfgang Polifke. "Spatially Resolved Modeling of the Nonlinear Dynamics of a Laminar Premixed Flame With a Multilayer Perceptron - Convolution Autoencoder Network." In ASME Turbo Expo 2023: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/gt2023-102543.

Full text
Abstract:
This work presents a multilayer perceptron-convolutional autoencoder (MLP-CAE) neural network model, which accurately predicts the two-dimensional flame field dynamics of an acoustically excited premixed laminar flame. The architecture maps an acoustic perturbation time series to a spatially distributed heat release rate field, capturing flame lengths and shapes. This extends previous neural network models, which predicted only the field-integrated value of the heat release rate. The MLP-CAE comprises two sub-models: a fully connected MLP and a CAE. The key idea behind the CAE network is to find a lower-dimensional latent space representation of the heat release rate field. The MLP is responsible for modeling the flame dynamics by transforming the acoustic forcing signal into this latent space, enabling the decoder to produce the flow field distributions. To train the MLP-CAE, computational fluid dynamics (CFD) flame simulations with broadband acoustic forcing were used. Its normalized amplitude was set to 0.5 and 1.0, resulting in a nonlinear flame response. The network was found to accurately predict the perturbed flame shapes, both under broadband and harmonic forcing. Additionally, it conserved the correct frequency response characteristics, as verified by the global and local flame describing functions. The MLP-CAE provides a building block towards a potential shift away from a purely '0D' analysis with the assumption of acoustic compactness of the flame. When combined with an acoustic network, the generated flame fields could provide more physical insight into the thermoacoustic dynamics of combustion chambers. These capabilities do not come at a significant additional computational cost, as even the previous non-spatial flame models had to train on the CFD data, which readily included field distributions.
APA, Harvard, Vancouver, ISO, and other styles
4

Cao, Bingyi, Kenneth A. Ross, Martha A. Kim, and Stephen A. Edwards. "Implementing latency-insensitive dataflow blocks." In 2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE). IEEE, 2015. http://dx.doi.org/10.1109/memcod.2015.7340485.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ailem, Melissa, Francois Role, and Mohamed Nadif. "Sparse Poisson Latent Block Model for Document Clustering (Extended Abstract)." In 2018 IEEE 34th International Conference on Data Engineering (ICDE). IEEE, 2018. http://dx.doi.org/10.1109/icde.2018.00229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lomet, Aurore, Gerard Govaert, and Yves Grandvalet. "An Approximation of the Integrated Classification Likelihood for the Latent Block Model." In 2012 IEEE 12th International Conference on Data Mining Workshops. IEEE, 2012. http://dx.doi.org/10.1109/icdmw.2012.32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shahiri, Mohammad, and Mahdi Eskandari. "Exact Recovery of Two-Latent Variable Stochastic Block Model with Side Information." In 2021 7th International Conference on Contemporary Information Technology and Mathematics (ICCITM). IEEE, 2021. http://dx.doi.org/10.1109/iccitm53167.2021.9677645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Schindler, Miguel Horacio. "Phase Envelopes From Black-Oil Models." In Latin American & Caribbean Petroleum Engineering Conference. Society of Petroleum Engineers, 2007. http://dx.doi.org/10.2118/106855-ms.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bishop, M., X. Moonan, W. Lalla, and L. Anderson. "Another Look at Bovallius, Onshore Southern Basin Trinidad." In SPE Latin American and Caribbean Petroleum Engineering Conference. SPE, 2023. http://dx.doi.org/10.2118/213174-ms.

Full text
Abstract:
The Bovallius field, located in the southeastern part of Trinidad, is bordered to the west by the Balata East block and to the south by the Ortoire block. The field spans approximately 942 acres, with six wells drilled in the block to date, the first in 1954. Regionally, the Bovallius field occurs on trend with a WSW-plunging, SE-verging thrusted anticlinal system. The major tectonic features within the Bovallius field consist of the eastern extension of the Balata anticline, which is truncated by several NE-SW trending thrust faults and several NW-SE trending normal faults, and bordered to the east by the NE-S/SW Cedar Grove fault. Previous evaluations of the Bovallius field indicated little prospectivity for the block. Recent discovery wells within the Ortoire block created renewed interest in the Bovallius field. It is known from analogous fields that three structural limbs occur in the Herrera formation: Overthrust, Intermediate and Sub-thrust. Field-wide mapping indicated that most of the wells within the Bovallius field penetrated three thrust sheets within the Overthrust limb only. As such, the undrilled Intermediate and Sub-thrust limbs present an exploration opportunity within the field. Sand maps constructed on the producing reservoirs show that these sands trend in a NE-SW direction, with the thickest accumulations occurring sub-parallel to the axis of the anticline. Similarly, as seen in the Penal/Barrackpore and Cascadura fields, within the Bovallius field the anticlinal system plunges to the SW, with NW-SE trending normal faults dissecting the anticlinal feature. The most prospective areas tend to be in the eastern, crestal, structurally up-dip parts of these fault blocks. This updated model resulted in three outstep prospects. Field development in the southern part of the Bovallius block was heavily influenced by the Balata East wells BE-6 and BE-19. These wells are located on the southern flank of the Balata anticline and encountered the Herrera sands wet. At the time it was believed that, as the majority of the Bovallius block lies downdip of these wet wells, the Herrera sands would also be wet over this acreage. However, on examination of some of the surrounding wells in the area, this assumption has been revisited. There is a thrust fault running in a NE-SW direction, south of the wet Balata wells, which allows for hydrocarbon accumulation within the OL-4/Royston area. This makes the southern part of the Bovallius block prospective for the Herrera sands.
APA, Harvard, Vancouver, ISO, and other styles
10

Doersch, Stefan, Maria Starnberg, and Haike Brick. "Acoustic Certification of New Composite Brake Blocks." In EuroBrake 2021. FISITA, 2021. http://dx.doi.org/10.46720/1766833eb2021-stp-022.

Full text
Abstract:
In the latest amendment to the TSI Noise, Commission Implementing Regulation (EU) 2019/774 from 2019 (TSI NOI EU 2019/774, 2019), the term "quieter brake blocks" was introduced. The purpose was to distinguish between brake blocks that cause a high rolling noise level by roughening the surface of the wheels and quieter brake blocks with acoustic properties that better correspond to the pass-by noise limit for freight wagons. However, it has remained an open point which methods and procedures should be used for the assessment of the acoustic properties of new brake blocks. This open point shall be closed in the new revision of the TSI Noise, which will become effective in 2022, and it requires a new acoustic certification procedure for brake blocks to be developed. A new procedure for the acoustic certification of new brake blocks should be reliable, easy to use and less expensive in terms of time and costs than full-scale pass-by noise measurements in the field. These conditions could be fulfilled by a certification procedure based on the wheel roughness level caused by the specific brake block. The relationship to the TSI noise limit value can be established by defining reference values for the rail roughness and the transfer function according to the well-established rolling noise model. Besides the certification procedure, a practical method should be defined for generating and assessing the wheel roughness that is characteristic of a specific brake block product. This project is financed by the German Centre for Rail Traffic Research in cooperation with the Federal Railway Authority and executed by DB Systemtechnik GmbH. The objective of the presentation is to introduce the research project "Acoustic Certification of New Composite Brake Blocks". The presentation summarizes the project work so far and gives explanations and background knowledge on the development of the methods as well as on railway noise. A calculation example is given to demonstrate the proposed procedure comprehensibly. At the time of the EuroBrake conference the project is still ongoing, and the final results cannot yet be presented. The focus of the discussions is on the practicability of the methods and the needs of the user regarding, for instance, documentation, required effort, or material and qualification.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Latent block models"

1

Chronopoulos, Ilias, Katerina Chrysikou, George Kapetanios, James Mitchell, and Aristeidis Raftapostolos. Deep Neural Network Estimation in Panel Data Models. Federal Reserve Bank of Cleveland, July 2023. http://dx.doi.org/10.26509/frbc-wp-202315.

Full text
Abstract:
In this paper we study neural networks and their approximating power in panel data models. We provide asymptotic guarantees on deep feed-forward neural network estimation of the conditional mean, building on the work of Farrell et al. (2021), and explore latent patterns in the cross-section. We use the proposed estimators to forecast the progression of new COVID-19 cases across the G7 countries during the pandemic. We find significant forecasting gains over both linear panel and nonlinear time-series models. Containment or lockdown policies, as instigated at the national level by governments, are found to have out-of-sample predictive power for new COVID-19 cases. We illustrate how the use of partial derivatives can help open the "black box" of neural networks and facilitate semi-structural analysis: school and workplace closures are found to have been effective policies at restricting the progression of the pandemic across the G7 countries. But our methods illustrate significant heterogeneity and time variation in the effectiveness of specific containment policies.
APA, Harvard, Vancouver, ISO, and other styles
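The abstract's use of partial derivatives to open the neural network "black box" can be sketched with a tiny numpy network: estimate ∂f/∂x_j by central differences and read it as the marginal effect of feature j on the prediction. The one-hidden-layer architecture and the names `mlp` and `partial_effect` are our own illustration, not the paper's estimator.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    """Tiny feed-forward network: one tanh hidden layer, scalar output."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def partial_effect(f, x, j, h=1e-5):
    """Central finite-difference estimate of df/dx_j at x, i.e. how a small
    change in feature j moves the network's prediction."""
    e = np.zeros_like(x)
    e[j] = h
    return (f(x + e) - f(x - e)) / (2 * h)
```

For this architecture the finite-difference estimate can be checked against the analytic derivative `W1 @ ((1 - tanh(x @ W1 + b1)**2) * W2)`; in an applied panel setting the same per-feature derivative, evaluated across units and time, is what reveals heterogeneity in a policy's effect.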
2

Zagorevski, A., and C. R. van Staal. Cordilleran magmatism in Yukon and northern British Columbia: characteristics, temporal variations, and significance for the tectonic evolution of the northern Cordillera. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/326063.

Full text
Abstract:
Geochemical and temporal characterization of magmatic rocks is an effective way to test terrane definitions and to evaluate tectonic models. In the northern Cordillera, magmatic episodes are mostly interpreted as products of continental arc and back-arc settings. The re-evaluation of Paleozoic and Late Mesozoic magmatic episodes presented herein highlights fundamental gaps in the understanding of the tectonic framework of the northern Cordillera. In many cases, the character of magmatism and the temporal relationships between various magma types do not support existing tectonic models. The present re-evaluation indicates that some of the magmatic episodes are best explained by lithospheric extension rather than arc magmatism. In addition, comparison to modern analogues suggests that many presently defined terranes are not fundamental tectonic building blocks, but rather combine distinctly different tectonic elements that may not be related to each other. Grouping these distinctly different tectonic elements into single terranes hinders the understanding of Cordilleran evolution and its mineral deposits.
APA, Harvard, Vancouver, ISO, and other styles
3

Sentcov, Valentin, Andrei Reutov, and Vyacheslav Kuzmin. Electronic training manual "Acute poisoning with alcohols and alcohol-containing liquids". SIB-Expertise, January 2024. http://dx.doi.org/10.12731/er0778.29012024.

Full text
Abstract:
In the structure of acute poisonings, ethanol poisoning currently accounts, according to various sources, for 10 to 20%. The mortality rate in poison control centers for ethanol poisoning is 1-2%, but the overall mortality is much higher because of those who die before medical care is provided. The widespread use of methanol and ethylene glycol in various industries, and the high mortality when poisoning with these alcohols is recognized late, make a detailed study of the clinical picture, diagnosis and treatment of these poisonings highly relevant for doctors of various specialties: in particular, toxicologists in health care institutions, anesthesiologists and resuscitators, doctors of specialized emergency medical teams, and disaster medicine doctors. Competent and timely diagnosis, hospitalization in a specialized hospital and early treatment greatly increase the patient's chances of survival and subsequent quality of life. This electronic educational resource consists of six theoretical educational modules: general issues of clinical toxicology, acute poisoning with veratrine, acute poisoning with ethanol, poisoning with methanol, poisoning with ethylene glycol, and acute poisoning with other alcohols. The theoretical block of each module is presented through slide presentations and lecture texts with illustrations. A test-based assessment accompanies each theoretical module, and after studying all the modules the student passes a final test. Mastering this electronic educational resource will ensure a high level of readiness to provide specialized toxicological care by doctors of various specialties.
APA, Harvard, Vancouver, ISO, and other styles
4

Tao, Yang, Amos Mizrach, Victor Alchanatis, Nachshon Shamir, and Tom Porter. Automated imaging broiler chick sexing for gender-specific and efficient production. United States Department of Agriculture, December 2014. http://dx.doi.org/10.32747/2014.7594391.bard.

Full text
Abstract:
Extending the previous two years of research results (Mizarch, et al, 2012, Tao, 2011, 2012), the third year’s efforts in both Maryland and Israel were directed towards the engineering of the system. The activities included the robust chick handling and its conveyor system development, optical system improvement, online dynamic motion imaging of chicks, multi-image sequence optimal feather extraction and detection, and pattern recognition. Mechanical System Engineering The third model of the mechanical chick handling system with high-speed imaging system was built as shown in Fig. 1. This system has the improved chick holding cups and motion mechanisms that enable chicks to open wings through the view section. The mechanical system has achieved the speed of 4 chicks per second which exceeds the design specs of 3 chicks per second. In the center of the conveyor, a high-speed camera with UV sensitive optical system, shown in Fig.2, was installed that captures chick images at multiple frames (45 images and system selectable) when the chick passing through the view area. Through intensive discussions and efforts, the PIs of Maryland and ARO have created the protocol of joint hardware and software that uses sequential images of chick in its fall motion to capture opening wings and extract the optimal opening positions. This approached enables the reliable feather feature extraction in dynamic motion and pattern recognition. Improving of Chick Wing Deployment The mechanical system for chick conveying and especially the section that cause chicks to deploy their wings wide open under the fast video camera and the UV light was investigated along the third study year. As a natural behavior, chicks tend to deploy their wings as a mean of balancing their body when a sudden change in the vertical movement was applied. In the latest two years, this was achieved by causing the chicks to move in a free fall, in the earth gravity (g) along short vertical distance. 
The chicks always tended to deploy their wings, but not always in a wide, horizontal open position. Such a position is required in order to get a successful image under the video camera. In addition, the cells with chicks bumped suddenly at the end of the free-fall path, which caused the chicks' legs to collapse inside the cells and the image of the wings to become blurred. To improve the movement and prevent the chicks' legs from collapsing, a slowing-down mechanism was designed and tested. This was done by installing a plastic block, printed with a predesigned variable slope (Fig. 3), at the end of the path of the falling cells (Fig. 4). The cells move down at a variable velocity according to the block slope and reach zero velocity at the end of the path. The slope was designed so that the deceleration becomes 0.8g, instead of the free-fall gravity (g) acting without the block. The tests showed better deployment and wider opening of the chicks' wings, as well as better balance along the movement. The design of additional block slope sizes is under investigation. Slopes that create decelerations of 0.7g and 0.9g, as well as variable decelerations, are being designed to improve the movement path and images.
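The free fall and the 0.8g slowing ramp can be checked with elementary kinematics. The short Python sketch below computes the impact velocity after a drop and the ramp length needed to stop a cell at a constant 0.8g; the 0.30 m drop height is an assumed illustrative value, since the report does not state the actual fall distance.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def free_fall(drop_height):
    """Impact velocity (m/s) and duration (s) of a free fall from rest."""
    v = math.sqrt(2 * G * drop_height)
    return v, v / G

def ramp_length(v, decel_g=0.8):
    """Distance (m) needed to stop velocity v at a constant decel_g * g."""
    return v ** 2 / (2 * decel_g * G)

# Hypothetical 0.30 m drop -- the report does not give the actual fall distance.
v, t = free_fall(0.30)
print(f"impact velocity {v:.2f} m/s after {t:.3f} s; "
      f"ramp length {ramp_length(v):.3f} m")
```

Because the impact velocity satisfies v² = 2gh, the ramp length for a 0.8g stop is always h/0.8 for a drop of height h (0.375 m for a 0.30 m drop); the gentler 0.7g slope mentioned in the report would lengthen this to h/0.7.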
APA, Harvard, Vancouver, ISO, and other styles
5

Chejanovsky, Nor, and Suzanne M. Thiem. Isolation of Baculoviruses with Expanded Spectrum of Action against Lepidopteran Pests. United States Department of Agriculture, December 2002. http://dx.doi.org/10.32747/2002.7586457.bard.

Full text
Abstract:
Our long-term goal is to learn to control (expand and restrict) the host range of baculoviruses. In this project our aim was to expand the host range of the prototype baculovirus Autographa californica nuclear polyhedrosis virus (AcMNPV) towards American and Israeli pests. To achieve this objective we studied AcMNPV infection in the non-permissive hosts L. dispar and S. littoralis (Ld652Y and SL2 cells, respectively) as a model system, together with the major barriers to viral replication. We isolated recombinant baculoviruses with expanded infectivity towards L. dispar and S. littoralis and tested their infectivity towards other lepidopteran pests. The restricted host range displayed by baculoviruses constitutes an obstacle to their further implementation in the control of diverse lepidopteran pests, increasing development costs. Our work points out that cellular defenses are major blocks to AcMNPV replication in non- and semi-permissive hosts. Therefore, a major determinant of baculovirus host range is the ability of the virus to effectively counter the cellular defenses of host cells. This is exemplified by our findings showing that expressing the viral gene hrf-1 overcomes global translation arrest in AcMNPV-infected Ld652Y cells. Our data suggest that Ld652Y cells have two anti-viral defense pathways, because they are subject to global translation arrest when infected with AcMNPV carrying a baculovirus apoptotic suppressor (e.g., wild-type AcMNPV carrying p35, or recombinant AcMNPV carrying Op-iap, Cp-iap, or p49 genes) but apoptose when infected with AcMNPV lacking a functional apoptotic suppressor. We have yet to elucidate how hrf-1 precludes the translation arrest mechanism(s) in AcMNPV-infected Ld652Y cells. Ribosomal profiles of AcMNPV-infected Ld652Y cells suggested that translation initiation is a major control point, but we were unable to rule out a contribution from a block in translation elongation.
Phosphorylation of eIF-2α did not appear to play a role in AcMNPV-induced translation arrest. Mutagenesis studies of hrf-1 suggest that a highly acidic domain plays a role in precluding translation arrest. Our findings indicate that translation arrest may be linked to apoptosis, either through common sensors of virus infection or as a consequence of late events in the virus life cycle that occur only if apoptosis is suppressed. AcMNPV replicates poorly in SL2 cells and induces apoptosis. Our studies in AcMNPV-infected SL2 cells led us to conclude that the steady-state levels of IE1 (the product of the ie1 gene, the major AcMNPV transactivator and a multifunctional protein) relative to those of the immediate-early viral protein IE0 play a critical role in regulating the viral infection. By increasing the IE1/IE0 ratio we achieved AcMNPV replication in S. littoralis, and we were able to isolate recombinant AcMNPVs that replicated efficiently in S. littoralis cells and larvae. Our data indicated that AcMNPV infection may be regulated by an interaction between IE1 and IE0 (of previously unknown function). Indeed, we showed that IE1 associates with IE0 by using protein pull-down and immunoprecipitation approaches. High steady-state levels of 'functional' IE1 resulted in increased expression of the apoptosis suppressor p35, facilitating AcMNPV replication in SL2 cells. Finally, we determined that IE0 accelerates the viral infection in AcMNPV-permissive cells. Our results show that expressing viral genes able to overcome the insect pest's defense system makes it possible to expand the baculovirus host range. Scientifically, this project highlights the need to further study the anti-viral defenses of invertebrates, not only to maximize the possibilities for manipulating baculovirus genomes, but to better understand the evolutionary underpinnings of vertebrate immune responses towards virus infection.
APA, Harvard, Vancouver, ISO, and other styles
6

Karacic, Almir, and Anneli Adler. Fertilization of poplar plantations with dried sludge : a demonstration trial in Hillebola - central Sweden. Department of Crop Production Ecology, Swedish University of Agricultural Sciences, 2023. http://dx.doi.org/10.54612/a.2q9iahfphk.

Full text
Abstract:
Wastewater sludge contains essential nutrients for plant growth and is frequently used as fertilizer in European agriculture. However, sludge also contains elevated concentrations of heavy metals, microplastics, and other substances that may pose potential risks to human health and the environment. Nevertheless, dried pelletized sludge emerges as a viable product for fertilizing short-rotation poplar plantations within a circular model, enabling nutrient recycling and converting waste into a valuable resource to enhance biomass production for different markets. In Hillebola, central Sweden, we demonstrated the application of dried pelletized sludge to pilot plantations with climate-adapted Populus trichocarpa clones. The trial was established in four blocks with four treatments three years after the poplar trees were planted. The treatments were: mineral NPK fertilizer + soil cultivation between poplar rows, dried pelletized sludge + soil cultivation, no fertilization + soil cultivation only, and control (no treatments). The effect of fertilization on poplar growth was evaluated two years later, after the fifth growing season. The results showed a significantly improved basal area increment in the NPK and sludge treatments compared to the control. The ground vegetation inventory revealed substantial differences in weed biomass between control and cultivated plots. Control plots contained double the amount of aboveground grass and herbaceous biomass (8.6 t ha⁻¹) compared to cultivated and cultivated + fertilized plots. The low-intensity Nordic-Baltic poplar establishment practices allow a substantial amount of ground vegetation to develop until canopy closure, potentially contributing to the soil carbon pool more than is usually recognized when modeling carbon balances in short-rotation poplar plantations, which is the theme of our next report.
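The trial layout described above is a randomized complete block design (four blocks, four treatments). As an illustration of how a treatment effect on basal area increment is tested in such a design, the sketch below computes the RCBD F statistic; the plot-level values and the function name are hypothetical, not figures from the report.

```python
import numpy as np

# Hypothetical basal-area-increment data: rows = 4 blocks, columns = treatments
# [NPK + cultivation, sludge + cultivation, cultivation only, control].
# Values are illustrative only -- the report gives no plot-level numbers.
y = np.array([
    [42.0, 40.5, 33.0, 30.0],
    [45.0, 43.0, 35.5, 31.5],
    [41.0, 39.0, 32.0, 29.0],
    [44.0, 42.5, 34.0, 30.5],
])

def rcbd_f_statistic(y):
    """F statistic for the treatment effect in a randomized complete block design."""
    b, t = y.shape
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_block = t * ((y.mean(axis=1) - grand) ** 2).sum()  # block sum of squares
    ss_treat = b * ((y.mean(axis=0) - grand) ** 2).sum()  # treatment sum of squares
    ss_error = ss_total - ss_block - ss_treat
    ms_treat = ss_treat / (t - 1)
    ms_error = ss_error / ((b - 1) * (t - 1))
    return ms_treat / ms_error

print(f"treatment F = {rcbd_f_statistic(y):.1f} on 3 and 9 df")
```

Blocking removes block-to-block variation (e.g., soil differences across the site) from the error term, which is why the same comparison is more sensitive than a completely randomized design on heterogeneous ground.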
APA, Harvard, Vancouver, ISO, and other styles
7

Meir, Shimon, Michael Reid, Cai-Zhong Jiang, Amnon Lers, and Sonia Philosoph-Hadas. Molecular Studies of Postharvest Leaf and Flower Abscission. United States Department of Agriculture, 2005. http://dx.doi.org/10.32747/2005.7696523.bard.

Full text
Abstract:
Original objectives: Understanding the regulation of abscission competence by exploring the nature and function of auxin-related gene expression changes in the leaf and pedicel AZs of tomato (as a model system) was the main goal of the previously submitted proposal. We proposed to achieve this goal by using microarray GeneChip analysis to identify potential target genes for functional analysis by virus-induced gene silencing (VIGS). To increase the potential of accomplishing the objectives of the previously submitted proposal, we were asked by BARD to show feasibility for the use of these two modern techniques in our abscission system. Thus, the following new objectives were outlined for the one-year feasibility study: 1. to demonstrate the feasibility of the VIGS system in tomato for performing functional analysis of known abscission-related genes; 2. to demonstrate that by using microarray analysis we can identify target genes for further VIGS functional analysis. Background to the topic: It is a generally accepted model that auxin flux through the abscission zone (AZ) prevents organ abscission by rendering the AZ insensitive to ethylene. However, the molecular mechanisms responsible for the acquisition of abscission competence, and the way in which the auxin gradient modulates it, are still unknown. Understanding this basic stage of the abscission process may provide us with future tools to control abscission for agricultural applications. Based on our previous study, performed to investigate the molecular changes occurring in the leaf and stem AZs of Mirabilis jalapa L., we have expanded our research to tomato, using genomic approaches that include modern techniques for gene discovery and functional gene characterization.
In our one-year feasibility study, the US team established a useful system for VIGS in tomato, using vectors based on the tobacco rattle virus (TRV), an Lc reporter gene for silencing (involved in regulation of anthocyanin biosynthesis), and the gene of interest. In parallel, the Israeli team used the newly released Affymetrix Tomato GeneChip to measure gene expression in AZ and non-AZ tissues at various time points after flower removal, when increased sensitivity to ethylene is acquired prior to abscission (at 0-8 h), and during pedicel abscission (at 14 h). In addition, gene expression was measured in the pedicel AZ pretreated with the ethylene action inhibitor 1-methylcyclopropene (1-MCP) before flower removal, to block any direct effects of ethylene. Major conclusions, solutions and achievements: 1) The feasibility study unequivocally established that VIGS is an ideal tool for testing the function of genes with putative roles in abscission; 2) the newly released Affymetrix Tomato GeneChip was found to be an excellent tool for identifying AZ genes possibly involved in the regulation and execution of abscission. The VIGS-based study allowed us to show that TAPG, a polygalacturonase specifically associated with the tomato AZ, is a key enzyme in the abscission process. Using the newly released Affymetrix Tomato GeneChip, we identified potential abscission regulatory genes as well as new AZ-specific genes whose expression was modified after flower removal. These include: members of the Aux/IAA gene family, ethylene signal transduction-related genes, early and late expressed transcription factors, genes encoding post-translational regulators whose expression was modified specifically in the AZ, and many additional novel AZ-specific genes not previously associated with abscission. This microarray analysis allowed us to select an initial set of target genes for further functional analysis by VIGS.
Implications: Our success in achieving the two objectives of this feasibility study provides us with a solid basis for further research outlined in the original proposal. This will significantly increase the probability of success of a full 3-year project. Additionally, our feasibility study yielded highly innovative results, as they represent the first direct demonstration of the functional involvement of a TAPG in abscission, and the first microarray analysis of the abscission process. Using these approaches we could identify a large number of genes involved in abscission regulation, initiation and execution, and in auxin-ethylene cross-talk, which are of great importance, and could enable their potential functional analysis by VIGS.
APA, Harvard, Vancouver, ISO, and other styles
8

Harris, L. B., P. Adiban, and E. Gloaguen. The role of enigmatic deep crustal and upper mantle structures on Au and magmatic Ni-Cu-PGE-Cr mineralization in the Superior Province. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/328984.

Full text
Abstract:
Aeromagnetic and ground gravity data for the Canadian Superior Province, filtered to extract long-wavelength components and converted to pseudo-gravity, highlight deep, N-S trending, regional-scale, rectilinear faults and margins to discrete, competent mafic or felsic granulite blocks (i.e., at high angles to most regionally mapped structures and sub-province boundaries) with little to no surface expression that are spatially associated with lode ('orogenic') Au and Ni-Cu-PGE-Cr occurrences. Statistical and machine-learning analysis of the Red Lake-Stormy Lake region in the W Superior Province confirms visual inspection of a greater correlation between Au deposits and these deep N-S structures than with mapped surface to upper-crustal, generally E-W trending, faults and shear zones. Porphyry Au, Ni, Mo and U-Th showings are also located above these deep transverse faults. Several well-defined concentric circular to elliptical structures identified in the Oxford Stull and Island Lake domains along the S boundary of the N Superior proto-craton, intersected by N- to NNW-striking extensional fractures and/or faults that transect the W Superior Province, again with little to no direct surface or upper-crustal expression, are spatially associated with magmatic Ni-Cu-PGE-Cr and related mineralization and Au occurrences. The McFaulds Lake greenstone belt, a.k.a. the 'Ring of Fire', constitutes only a small, crescent-shaped belt within one of these concentric features, above which 2736-2733 Ma mafic-ultramafic intrusive bodies were emplaced. The Big Trout Lake igneous complex that hosts Cr-Pt-Pd-Rh mineralization west of the Ring of Fire lies within a smaller concentrically ringed feature at depth and, near the Ontario-Manitoba border, the Lingman Lake Au deposit, numerous Au occurrences and minor Ni showings are similarly located on concentric structures.
Preliminary magnetotelluric (MT) interpretations suggest that these concentric structures also have an expression in the subcontinental lithospheric mantle (SCLM) and that lithospheric mantle resistivity features trend N-S as well as E-W. With diameters between ca. 90 km and 185 km, the elliptical structures are similar in size and internal geometry to coronae on Venus, which geomorphological, radar, and gravity interpretations suggest formed above mantle upwellings. Emplacement of mafic-ultramafic bodies hosting Ni-Cr-PGE mineralization along these ring-like structures at their intersection with coeval deep transverse, ca. N-S faults (viz. phi structures), along with their location along the margin of the N Superior proto-craton, is consistent with secondary mantle upwellings portrayed in numerical models of a mantle plume beneath a craton with a deep lithospheric keel within a regional N-S compressional regime. Early, regional ca. N-S faults in the W Superior were reactivated as dilatational antithetic (secondary Riedel/R') sinistral shears during dextral transpression and as extensional fractures and/or normal faults during N-S shortening. The Kapuskasing structural zone or uplift likely represents Proterozoic reactivation of a similar deep transverse structure. Preservation of discrete faults in the deep crust beneath zones of distributed Neoarchean dextral transcurrent to transpressional shear zones in the present-day upper crust suggests a 'millefeuille' lithospheric strength profile, with competent SCLM, mid- to deep-, and upper-crustal layers. Mechanically strong deep-crustal felsic and mafic granulite layers are attributed to dehydration and melt extraction. Intra-crustal decoupling along a ductile décollement in the W Superior led to the preservation of early-formed deep structures that acted as conduits for magma transport into the overlying crust and focussed hydrothermal fluid flow during regional deformation.
An increase in the thickness of semi-brittle layers in the lower crust during regional metamorphism would result in increased fracturing and faulting in the lower crust, facilitating hydrothermal and carbonic fluid flow in pathways linking the SCLM to the upper crust, a factor explaining the late timing of most orogenic Au. The results provide an important new dataset for regional prospectivity mapping, especially with machine learning, and for exploration targeting for Au and Ni-Cr-Cu-PGE mineralization. The results also furnish evidence for parautochthonous development of the S Superior Province during plume-related rifting and cannot be explained by conventional subduction and arc-accretion models.
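The long-wavelength filtering applied to the aeromagnetic and gravity grids (extracting the regional component) can be sketched as a radial low-pass in the Fourier domain. The grid size, spacing, and 100 km wavelength cutoff below are illustrative assumptions, not the parameters used in the study, and the synthetic "regional" anomaly stands in for real potential-field data.

```python
import numpy as np

def long_wavelength_filter(grid, dx, cutoff_km):
    """Keep only spectral components with wavelength > cutoff_km.

    grid: 2-D array of field values on a regular grid with spacing dx (km).
    """
    F = np.fft.fft2(grid)
    ky = np.fft.fftfreq(grid.shape[0], d=dx)  # cycles per km, rows
    kx = np.fft.fftfreq(grid.shape[1], d=dx)  # cycles per km, columns
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)                      # radial wavenumber
    mask = k <= 1.0 / cutoff_km               # wavelength = 1 / k
    return np.real(np.fft.ifft2(F * mask))

# Synthetic grid: a broad regional anomaly plus short-wavelength noise.
rng = np.random.default_rng(0)
x = np.arange(128) * 3.125                    # 400 km periodic grid, 3.125 km spacing
X, Y = np.meshgrid(x, x)
regional = np.sin(2 * np.pi * X / 200)        # 200 km wavelength anomaly
noisy = regional + 0.5 * rng.standard_normal(X.shape)
filtered = long_wavelength_filter(noisy, dx=3.125, cutoff_km=100)
```

In practice, potential-field workflows often prefer upward continuation or matched filters to a sharp spectral cutoff, which can ring near strong gradients; the sketch only illustrates the wavelength-separation idea behind the regional maps described above.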
APA, Harvard, Vancouver, ISO, and other styles
9

Community involvement in reproductive health: Findings from research in Karnataka, India. Population Council, 2004. http://dx.doi.org/10.31899/rh17.1007.

Full text
Abstract:
In 1996, the government of India decided to provide a package of reproductive and child health services through the existing family welfare program, adopting a community needs assessment approach (CNAA). To implement this approach, the government abolished its practice of setting contraceptive targets centrally and introduced a decentralized planning strategy whereby health workers assessed the reproductive health needs of women in their respective areas and prepared local plans to meet those needs. They also involved community leaders to promote community participation in the reproductive and child health program. Since 1998, several evaluation studies have assessed the impact of CNAA on the program’s performance and community participation. These studies showed that the performance of the maternal health-care program improved, whereas the functioning of the family planning program initially declined but later recovered. The approach achieved little in boosting community involvement. This project tested a new model of health committee to help stimulate community participation in reproductive and child health activities at the village level. The experiment, described in this report, was conducted in the Hunsur block of the Mysore District in Karnataka for two years. Researchers evaluated the impact in terms of community involvement and utilization of reproductive and child health services.
APA, Harvard, Vancouver, ISO, and other styles
