Selected scientific literature on the topic "Latent Blocks Models"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Latent Blocks Models".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Latent Blocks Models"

1

Moron-Lopez, Sara, Sushama Telwatte, Indra Sarabia, Emilie Battivelli, Mauricio Montano, Amanda B. Macedo, Dvir Aran et al. "Human splice factors contribute to latent HIV infection in primary cell models and blood CD4+ T cells from ART-treated individuals". PLOS Pathogens 16, no. 11 (November 30, 2020): e1009060. http://dx.doi.org/10.1371/journal.ppat.1009060.

Abstract:
It is unclear what mechanisms govern latent HIV infection in vivo or in primary cell models. To investigate these questions, we compared the HIV and cellular transcription profile in three primary cell models and peripheral CD4+ T cells from HIV-infected ART-suppressed individuals using RT-ddPCR and RNA-seq. All primary cell models recapitulated the block to HIV multiple splicing seen in cells from ART-suppressed individuals, suggesting that this may be a key feature of HIV latency in primary CD4+ T cells. Blocks to HIV transcriptional initiation and elongation were observed more variably among models. A common set of 234 cellular genes, including members of the minor spliceosome pathway, was differentially expressed between unstimulated and activated cells from primary cell models and ART-suppressed individuals, suggesting these genes may play a role in the blocks to HIV transcription and splicing underlying latent infection. These genes may represent new targets for therapies designed to reactivate or silence latently-infected cells.
2

SANTOS, Naiara Caroline Aparecido dos, and Jorge Luiz BAZÁN. "RESIDUAL ANALYSIS IN RASCH POISSON COUNTS MODELS". REVISTA BRASILEIRA DE BIOMETRIA 39, no. 1 (March 31, 2021): 206–20. http://dx.doi.org/10.28951/rbb.v39i1.531.

Abstract:
A Rasch Poisson counts (RPC) model is described to identify individual latent traits and the facilities of the items of tests that model the error (or success) count in several tasks over time, instead of modeling the correct responses to items in a test as in the dichotomous item response theory (IRT) model. These types of tests can be more informative than traditional tests. To estimate the model parameters, we consider a Bayesian approach using the integrated nested Laplace approximation (INLA). We develop residual analysis to assess model fit by introducing randomized quantile residuals for items. The data used to illustrate the method come from 228 people who took a selective attention test. The test has 20 blocks (items), with a time limit of 15 seconds for each block. The results of the residual analysis of the RPC were promising and indicated that the studied attention data are not well fitted by the RPC model.
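For orientation, the RPC model has a standard two-parameter Poisson form; the notation below is the generic one from the Rasch literature, not necessarily the paper's own:

```latex
X_{ij} \mid \theta_i, \varepsilon_j \;\sim\; \mathrm{Poisson}(\lambda_{ij}),
\qquad
\lambda_{ij} = \theta_i \, \varepsilon_j
\quad\Longleftrightarrow\quad
\log \lambda_{ij} = \log \theta_i + \log \varepsilon_j ,
```

where X_{ij} is the count of person i on block (item) j, θ_i is the individual latent trait and ε_j the item facility; the paper estimates these parameters with INLA and assesses fit via randomized quantile residuals.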
3

Norget, Julia, and Axel Mayer. "Block-Wise Model Fit for Structural Equation Models With Experience Sampling Data". Zeitschrift für Psychologie 230, no. 1 (January 2022): 47–59. http://dx.doi.org/10.1027/2151-2604/a000482.

Abstract:
Common model fit indices behave poorly in structural equation models for experience sampling data, which typically contain many manifest variables. In this article, we propose a block-wise fit assessment for large models as an alternative. The entire model is estimated jointly, and block-wise versions of common fit indices are then determined from smaller blocks of the variance-covariance matrix using simulated degrees of freedom. In a first simulation study, we show that block-wise fit indices, contrary to global fit indices, correctly identify correctly specified latent state-trait models with 49 occasions and N = 200. In a second simulation, we find that block-wise fit indices cannot identify misspecification purely between days but correctly reject other misspecified models. In some cases, the block-wise fit is superior in judging the strength of the misspecification. Lastly, we discuss the practical use of block-wise fit evaluation and its limitations.
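As a concrete illustration of the block-wise idea, the sketch below computes one common index (SRMR) on a sub-block of the sample and model-implied covariance matrices; the function name is ours, and the authors' simulated degrees-of-freedom correction is not reproduced.

```python
import numpy as np

def blockwise_srmr(sample_cov, implied_cov, idx):
    """SRMR restricted to the variables in `idx` (one block).

    Sketch of block-wise fit: standardize both covariance sub-blocks
    by the sample standard deviations, then take the RMS of the unique
    residuals. Other indices (CFI, RMSEA) additionally need block-wise
    chi-square values and degrees of freedom, omitted here.
    """
    S = sample_cov[np.ix_(idx, idx)]
    M = implied_cov[np.ix_(idx, idx)]
    d = np.sqrt(np.diag(S))
    R_s = S / np.outer(d, d)           # standardized sample block
    R_m = M / np.outer(d, d)           # standardized implied block
    iu = np.triu_indices(len(idx))     # unique elements incl. diagonal
    return float(np.sqrt(np.mean((R_s[iu] - R_m[iu]) ** 2)))
```

In an experience-sampling model, one would call this once per day or per block of manifest variables instead of once on the full matrix.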
4

Vidal, E., A. Moreno, E. Bertolini and M. Cambra. "Estimation of the accuracy of two diagnostic methods for the detection of Plum pox virus in nursery blocks by latent class models". Plant Pathology 61, no. 2 (July 13, 2011): 413–22. http://dx.doi.org/10.1111/j.1365-3059.2011.02505.x.

5

Messick, Troy E., Garry R. Smith, Samantha S. Soldan, Mark E. McDonnell, Julianna S. Deakyne, Kimberly A. Malecka, Lois Tolvinski et al. "Structure-based design of small-molecule inhibitors of EBNA1 DNA binding blocks Epstein-Barr virus latent infection and tumor growth". Science Translational Medicine 11, no. 482 (March 6, 2019): eaau5612. http://dx.doi.org/10.1126/scitranslmed.aau5612.

Abstract:
Epstein-Barr virus (EBV) is a DNA tumor virus responsible for 1 to 2% of human cancers including subtypes of Burkitt’s lymphoma, Hodgkin’s lymphoma, gastric carcinoma, and nasopharyngeal carcinoma (NPC). Persistent latent infection drives EBV-associated tumorigenesis. Epstein-Barr nuclear antigen 1 (EBNA1) is the only viral protein consistently expressed in all EBV-associated tumors and is therefore an attractive target for therapeutic intervention. It is a multifunctional DNA binding protein critical for viral replication, genome maintenance, viral gene expression, and host cell survival. Using a fragment-based approach and x-ray crystallography, we identify a 2,3-disubstituted benzoic acid series that selectively inhibits the DNA binding activity of EBNA1. We characterize these inhibitors biochemically and in cell-based assays, including chromatin immunoprecipitation and DNA replication assays. In addition, we demonstrate the potency of EBNA1 inhibitors to suppress tumor growth in several EBV-dependent xenograft models, including patient-derived xenografts for NPC. These inhibitors selectively block EBV gene transcription and alter the cellular transforming growth factor–β (TGF-β) signaling pathway in NPC tumor xenografts. These EBNA1-specific inhibitors show favorable pharmacological properties and have the potential to be further developed for the treatment of EBV-associated malignancies.
6

Cisneros, William J., Shimaa H. A. Soliman, Miriam Walter, Lacy M. Simons, Daphne Cornish, Simone De Fabritiis, Ariel W. Halle et al. "Release of P-TEFb from the Super Elongation Complex promotes HIV-1 latency reversal". PLOS Pathogens 20, no. 9 (September 11, 2024): e1012083. http://dx.doi.org/10.1371/journal.ppat.1012083.

Abstract:
The persistence of HIV-1 in long-lived latent reservoirs during suppressive antiretroviral therapy (ART) remains one of the principal barriers to a functional cure. Blocks to transcriptional elongation play a central role in maintaining the latent state, and several latency reversal strategies focus on the release of positive transcription elongation factor b (P-TEFb) from sequestration by negative regulatory complexes, such as the 7SK complex and BRD4. Another major cellular reservoir of P-TEFb is in Super Elongation Complexes (SECs), which play broad regulatory roles in host gene expression. Still, it is unknown if the release of P-TEFb from SECs is a viable latency reversal strategy. Here, we demonstrate that the SEC is not required for HIV-1 replication in primary CD4+ T cells and that a small molecular inhibitor of the P-TEFb/SEC interaction (termed KL-2) increases viral transcription. KL-2 acts synergistically with other latency reversing agents (LRAs) to reactivate viral transcription in several cell line models of latency in a manner that is, at least in part, dependent on the viral Tat protein. Finally, we demonstrate that KL-2 enhances viral reactivation in peripheral blood mononuclear cells (PBMCs) from people living with HIV (PLWH) on suppressive ART, most notably in combination with inhibitor of apoptosis protein antagonists (IAPi). Taken together, these results suggest that the release of P-TEFb from cellular SECs may be a novel route for HIV-1 latency reactivation.
7

Ma, Suqiang, Chun Liu, Zheng Li and Wei Yang. "Integrating Adversarial Generative Network with Variational Autoencoders towards Cross-Modal Alignment for Zero-Shot Remote Sensing Image Scene Classification". Remote Sensing 14, no. 18 (September 11, 2022): 4533. http://dx.doi.org/10.3390/rs14184533.

Abstract:
Remote sensing image scene classification takes image blocks as classification units and predicts their semantic descriptors. Because it is difficult to obtain enough labeled samples for all classes of remote sensing image scenes, zero-shot classification methods which can recognize image scenes that are not seen in the training stage are of great significance. By projecting the image visual features and the class semantic features into the latent space and ensuring their alignment, the variational autoencoder (VAE) generative model has been applied to address remote-sensing image scene classification under a zero-shot setting. However, the VAE model takes the element-wise square error as the reconstruction loss, which may not be suitable for measuring the reconstruction quality of the visual and semantic features. Therefore, this paper proposes to augment the VAE models with the generative adversarial network (GAN) to make use of the GAN's discriminator in order to learn a suitable reconstruction quality metric for VAE. To promote feature alignment in the latent space, we have also proposed a cross-modal feature-matching loss to make sure that the visual features of one class are aligned with the semantic features of that class and not those of other classes. Based on a public dataset, our experiments have shown the effects of the proposed improvements. Moreover, the impact of different visual feature extractors has been investigated by testing ResNet18, which extracts 512-dimensional visual features, as well as ResNet50 and ResNet101, which both extract 2048-dimensional visual features. The experimental results show that better performance is achieved by ResNet18. This indicates that more layers in the extractor and larger dimensions of the extracted features may not contribute to image scene classification under a zero-shot setting.
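In schematic form (an illustrative objective; the weights and exact terms are assumptions, not taken from the paper), the training loss couples one VAE per modality with the adversarial and cross-modal matching terms described above:

```latex
\mathcal{L} \;=\; \sum_{m \in \{\text{vis},\, \text{sem}\}}
\Big[ -\,\mathbb{E}_{q_m(z \mid x_m)} \log p_m(x_m \mid z)
\;+\; \mathrm{KL}\big(q_m(z \mid x_m)\,\|\,p(z)\big) \Big]
\;+\; \lambda_{\mathrm{adv}}\, \mathcal{L}_{\mathrm{GAN}}
\;+\; \lambda_{\mathrm{cm}}\, \mathcal{L}_{\mathrm{match}},
```

where the GAN discriminator term stands in for the element-wise squared reconstruction error as a learned reconstruction-quality metric, and \mathcal{L}_{\mathrm{match}} aligns the visual latent of each class with the semantic latent of the same class only.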
8

Demir, Rezan, Lewis B. Haberly and Meyer B. Jackson. "Characteristics of Plateau Activity During the Latent Period Prior to Epileptiform Discharges in Slices From Rat Piriform Cortex". Journal of Neurophysiology 83, no. 2 (February 1, 2000): 1088–98. http://dx.doi.org/10.1152/jn.2000.83.2.1088.

Abstract:
The deep piriform region has an unusually high seizure susceptibility. Voltage imaging previously located the sites of epileptiform discharge onset in slices of rat piriform cortex and revealed the spatiotemporal pattern of development of two types of electrical activity during the latent period prior to discharge onset. A ramplike depolarization (onset activity) appears at the site of discharge onset. Onset activity is preceded by a sustained low-amplitude depolarization (plateau activity) at another site, which shows little if any overlap with the site of onset. Because synaptic blockade at either of these two sites blocks discharges, it was proposed that both forms of latent period activity are necessary for the generation of epileptiform discharges and that the onset and plateau sites work together in the amplification of electrical activity. The capacity for amplification was examined here by studying subthreshold responses in slices of piriform cortex using two different in vitro models of epilepsy. Under some conditions electrically evoked responses showed a nonlinear dependence on stimulus current, suggesting amplification by strong polysynaptic excitatory responses. The sites of plateau and onset activity were mapped for different in vitro models of epilepsy and different sites of stimulation. These experiments showed that the site of plateau activity expanded into deep layers of neighboring neocortex in parallel with expansions of the onset site into neocortex. These results provide further evidence that interactions between the sites of onset and plateau activity play an important role in the initiation of epileptiform discharges. The site of plateau activity showed little variation with different stimulation sites in the piriform cortex, but when stimulation was applied in the endopiriform nucleus (in the sites of onset of plateau activity), plateau activity had a lower amplitude and became distributed over a much wider area. These results indicate that in the initiation of epileptiform discharges, the location of the circuit that generates plateau activity is not rigidly defined but can exhibit flexibility.
9

Bessac, Julie, Pierre Ailliot, Julien Cattiaux and Valerie Monbet. "Comparison of hidden and observed regime-switching autoregressive models for (u, v)-components of wind fields in the northeastern Atlantic". Advances in Statistical Climatology, Meteorology and Oceanography 2, no. 1 (February 29, 2016): 1–16. http://dx.doi.org/10.5194/ascmo-2-1-2016.

Abstract:
Several multi-site stochastic generators of the zonal and meridional components of wind are proposed in this paper. A regime-switching framework is introduced to account for the alternation of intensity and variability observed in wind conditions due to the existence of different weather types. This modeling blocks the time series into periods within which the series is described by a single model. The regime-switching is modeled by a discrete variable that can be introduced either as a latent (or hidden) variable or as an observed variable. In the latter case, a clustering algorithm is used before fitting the model to extract the regimes. Conditionally on the regimes, the observed wind conditions are assumed to evolve as a linear Gaussian vector autoregressive (VAR) model. Various questions are explored, such as the modeling of the regime in a multi-site context, the extraction of relevant clusterings from extra variables or from the local wind data, and the link between weather types extracted from wind data and large-scale weather regimes derived from a descriptor of the atmospheric circulation. We also discuss the relative advantages of hidden and observed regime-switching models. For artificial stochastic generation of wind sequences, we show that the proposed models reproduce the average space-time motions of wind conditions, and we highlight the advantage of regime-switching models in reproducing the alternation of intensity and variability in wind conditions.
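Conditionally on the regime, such generators take a standard Markov-switching (or observed-regime) vector-autoregressive form; in generic notation:

```latex
Y_t \;=\; \mu_{S_t} \;+\; A_{S_t}\,\big(Y_{t-1} - \mu_{S_t}\big) \;+\; \varepsilon_t,
\qquad \varepsilon_t \sim \mathcal{N}\big(0,\, \Sigma_{S_t}\big),
```

where Y_t stacks the (u, v) wind components at all sites and the regime S_t is either a hidden Markov chain or an observed label obtained by prior clustering.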
10

Lai, Zhi-Fei, Gang Zhang, Xiao-Bo Zhang and Hong-Tao Liu. "High-Resolution Histopathological Image Classification Model Based on Fused Heterogeneous Networks with Self-Supervised Feature Representation". BioMed Research International 2022 (August 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/8007713.

Abstract:
Applying machine learning technology to automatic image analysis and auxiliary diagnosis of whole slide image (WSI) may help to improve the efficiency, objectivity, and consistency of pathological diagnosis. Due to its extremely high resolution, it is still a great challenge to directly process WSI through deep neural networks. In this paper, we propose a novel model for the task of classification of WSIs. The model is composed of two parts. The first part is a self-supervised encoding network with a UNet-like architecture. Each patch from a WSI is encoded as a compressed latent representation. These features are placed according to their corresponding patch’s original location in WSI, forming a feature cube. The second part is a classification network fused by 4 famous network blocks with heterogeneous architectures, with feature cube as input. Our model effectively expresses the feature and preserves location information of each patch. The fused network integrates heterogeneous features generated by different networks which yields robust classification results. The model is evaluated on two public datasets with comparison to baseline models. The evaluation results show the effectiveness of the proposed model.
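The feature-cube construction described above is easy to sketch; the version below is a minimal, assumption-laden one (names and zero-initialization are ours, not the authors' code):

```python
import numpy as np

def build_feature_cube(patch_features, coords, grid_shape):
    """Assemble patch encodings into a spatial feature cube.

    patch_features: (n_patches, dim) latent vectors from the
    self-supervised UNet-like encoder; coords: (n_patches, 2) integer
    (row, col) positions of each patch in the WSI grid. Positions with
    no tissue patch stay zero.
    """
    dim = patch_features.shape[1]
    cube = np.zeros((*grid_shape, dim), dtype=np.float32)
    for feat, (r, c) in zip(patch_features, coords):
        cube[r, c] = feat          # preserve each patch's WSI location
    return cube  # fed to the fused heterogeneous classification network
```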

Theses / dissertations on the topic "Latent Blocks Models"

1

Robert, Valérie. "Classification croisée pour l'analyse de bases de données de grandes dimensions de pharmacovigilance". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS111/document.

Abstract:
This thesis gathers methodological contributions to the statistical analysis of large pharmacovigilance datasets. These datasets produce sparse, high-dimensional matrices, and these two characteristics are the main statistical challenges in modelling them. The first part of the thesis is dedicated to the coclustering of the pharmacovigilance contingency table by means of the normalized Poisson latent block model. The objective is, on the one hand, to provide pharmacologists with reduced areas of interest to explore more precisely, and on the other hand, to supply useful prior information for the analysis of the individual pharmacovigilance data. Within this framework, a parameter estimation procedure for this model is detailed and objective model selection criteria are developed to choose the best-fitting model. Because the datasets are so large, we also propose a procedure to explore the coclustering model space in a non-exhaustive but relevant way. Additionally, to assess the performance of the methods, a convenient coclustering index is developed to compare partitions with high numbers of clusters. The development of these statistical tools is not specific to pharmacovigilance, and they can be used for any coclustering problem. The second part of the thesis is devoted to the statistical analysis of the individual data, which are more numerous and carry even more valuable information. The aim is to produce clusters of individuals according to their drug profiles, together with subgroups of drugs and adverse effects with possible links, thereby overcoming the coprescription and masking phenomena that affect contingency-table methods in pharmacovigilance. Moreover, the interaction between several adverse effects is taken into account. For this purpose, we propose a new model, the multiple latent block model, which coclusters two binary tables by imposing the same row partition on both. Assumptions inherent to this new model are discussed and sufficient identifiability conditions are presented. A parameter estimation algorithm is then studied and objective model selection criteria are developed. Moreover, a numerical simulation model of individual pharmacovigilance data is proposed, making it possible to compare existing methods and study their limits. Finally, the proposed methodology for individual pharmacovigilance data is presented and applied to a sample of the French pharmacovigilance database between 2002 and 2010.
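As background for this and several of the following theses, the (normalized) Poisson latent block model has a compact generative form; the sketch below uses common notation from the latent block model literature, not necessarily the thesis's own:

```latex
z_i \sim \mathcal{M}(\pi_1, \dots, \pi_K), \qquad
w_j \sim \mathcal{M}(\rho_1, \dots, \rho_L), \qquad
x_{ij} \mid z_i = k,\, w_j = l \;\sim\; \mathrm{Poisson}\!\left(\mu_i\, \nu_j\, \gamma_{kl}\right),
```

where μ_i and ν_j are row and column normalization effects (e.g., margins of the contingency table), γ_kl is the block interaction parameter, and all entries are independent conditionally on the row and column labels.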
2

Anakok, Emre. "Prise en compte des effets d'échantillonnage pour la détection de structure des réseaux écologiques". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM049.

Abstract:
In this thesis, we focus on the biases that sampling can introduce into the estimation of statistical models and metrics describing ecological interaction networks. First, we propose to combine an observation model that accounts for sampling effort with a stochastic block model representing the structure of possible interactions. The identifiability of the model is demonstrated and an algorithm is proposed to estimate its parameters. Its relevance and practical interest are demonstrated on a large dataset of plant-pollinator networks, where we observe structural changes in most of the networks. We then examine a large dataset sampled by a citizen science program. Using recent advances in artificial intelligence, we propose a method to reconstruct the ecological network free from the sampling effects caused by the varying levels of experience among observers. Finally, we present methods to highlight variables of ecological interest that influence the network's connectance, and we show that accounting for sampling effects partially alters the estimation of these effects. Our methods, implemented in either R or Python, are freely accessible.
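A generic way to write such a combination of an observation layer with an SBM (an illustrative formulation, not necessarily the thesis's exact model) is:

```latex
A_{ij} \mid z_i = k,\, z_j = l \;\sim\; \mathrm{Bernoulli}(\alpha_{kl}),
\qquad
Y_{ij} \mid A_{ij} \;\sim\; \mathrm{Bernoulli}\!\left(p_{ij}\, A_{ij}\right),
```

where A is the latent true interaction network generated by the SBM, Y is the observed network, and each true interaction is detected with a probability p_{ij} driven by sampling effort.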
3

Brault, Vincent. "Estimation et sélection de modèle pour le modèle des blocs latents". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112238/document.

Abstract:
Classification aims at partitioning data sets into subsets that are as homogeneous as possible: the observations in a class should be more similar to one another than to the observations of other classes. The problem is compounded when the statistician wants to obtain a cross-classification of the individuals and the variables simultaneously. The latent block model defines a distribution for each crossing of an object class and a variable class, and the observations are assumed to be independent conditionally on the choice of these classes. However, factorizing the joint distribution of the labels is impossible, which obstructs the calculation of the log-likelihood and the use of the EM algorithm. Several methods and criteria exist to find these partitions, some frequentist, some Bayesian, some stochastic, some not. In this thesis, we first propose sufficient conditions to obtain the identifiability of the model. In a second step, we study two algorithms proposed to circumvent the problem of the EM algorithm: the VEM algorithm (Govaert and Nadif (2008)) and the SEM-Gibbs algorithm (Keribin, Celeux and Govaert (2010)). In particular, we analyze the combination of both and highlight why the algorithms degenerate (i.e., return empty classes). By choosing judicious prior distributions, we then propose a Bayesian adaptation to limit this phenomenon. In particular, we use a Gibbs sampler for which we propose a stopping criterion based on the Brooks-Gelman statistic (1998). We also propose an adaptation of the Largest Gaps algorithm (Channarond et al. (2012)). By adapting their proofs, we show that the resulting label and parameter estimators are consistent when the numbers of rows and columns tend to infinity. Furthermore, we propose a method to select the numbers of row and column classes, whose estimate is also consistent provided the numbers of rows and columns are very large. To estimate the number of classes, we study the ICL criterion (Integrated Completed Likelihood), for which we propose an exact form. After studying its asymptotic approximation, we propose a BIC criterion (Bayesian Information Criterion), and we conjecture that the two criteria select the same results and that these estimates are consistent, a conjecture supported by theoretical and empirical results. Finally, we compare the different combinations and propose a methodology for co-clustering analysis of data.
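To make the SEM-Gibbs step concrete, here is a minimal sketch for a Bernoulli latent block model; names are illustrative, and the Bayesian priors and degeneracy safeguards studied in the thesis are deliberately omitted.

```python
import numpy as np

def sem_gibbs_step(X, z, w, K, L, rng, eps=1e-10):
    """One sweep of SEM-Gibbs for a Bernoulli latent block model (sketch).

    X: binary (n, d) data matrix; z, w: current row and column labels.
    """
    n, d = X.shape
    # M-step: proportions and block probabilities from current labels
    pi = np.bincount(z, minlength=K) / n
    rho = np.bincount(w, minlength=L) / d
    alpha = np.full((K, L), 0.5)
    for k in range(K):
        for l in range(L):
            block = X[np.ix_(z == k, w == l)]
            if block.size:
                alpha[k, l] = block.mean()
    alpha = np.clip(alpha, eps, 1 - eps)

    def sample(logpost):
        # row-wise softmax, then one categorical draw per row/column
        p = np.exp(logpost - logpost.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        return np.array([rng.choice(len(row), p=row) for row in p])

    # SE-step: sample row labels given columns, then columns given rows
    logp_z = X @ np.log(alpha[:, w]).T + (1 - X) @ np.log1p(-alpha[:, w]).T \
             + np.log(pi + eps)                      # shape (n, K)
    z = sample(logp_z)
    logp_w = X.T @ np.log(alpha[z, :]) + (1 - X).T @ np.log1p(-alpha[z, :]) \
             + np.log(rho + eps)                     # shape (d, L)
    w = sample(logp_w)
    return z, w, pi, rho, alpha
```

Iterating this sweep and averaging the draws after burn-in yields the estimates; runs that empty a class are exactly the degeneracy that the thesis's Bayesian adaptation (with its Brooks-Gelman stopping rule) is designed to curb.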
4

Tami, Myriam. "Approche EM pour modèles multi-blocs à facteurs à une équation structurelle". Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT303/document.

Abstract:
Structural equation models enable the modeling of interactions between observed and latent variables. The two leading estimation paradigms are partial least squares on components and covariance-structure analysis. In this work, we first describe the PLS and LISREL methods, and then we propose an estimation approach that uses the EM algorithm to maximize the overall likelihood of a model with latent factors and one structural equation. Through a simulation study, we investigate how fast and accurate the method is, and through an application to real environmental data, we show how one can construct a model in practice and evaluate its quality. Finally, in the context of an oncology clinical trial, we apply the EM approach to longitudinal health-related quality-of-life data. We show that, by efficiently reducing the dimension of the data, it simplifies the longitudinal analysis of quality of life by avoiding multiple testing, and thereby helps evaluate the clinical benefit of a treatment.
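In generic LISREL-style notation (an illustrative formulation with a single structural equation, not the thesis's exact specification), such a model couples measurement equations with a structural one:

```latex
\mathbf{x} = \Lambda_x\, \boldsymbol{\xi} + \boldsymbol{\delta},
\qquad
\mathbf{y} = \Lambda_y\, \eta + \boldsymbol{\varepsilon},
\qquad
\eta = \boldsymbol{\gamma}^{\top} \boldsymbol{\xi} + \zeta,
```

where ξ and η are the latent factors, Λ_x and Λ_y are loading matrices, and the EM algorithm treats the factors as missing data when maximizing the likelihood.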
5

Febrissy, Mickaël. "Nonnegative Matrix Factorization and Probabilistic Models : A unified framework for text data". Electronic Thesis or Diss., Paris, CNAM, 2021. http://www.theses.fr/2021CNAM1291.

Abstract:
Since the exponential growth of available data (Big Data), dimensionality reduction techniques have become essential for the exploration and analysis of high-dimensional data arising in many scientific areas. By creating a low-dimensional space intrinsic to the original data space, these techniques offer better understanding across many data science applications. In the context of text analysis, where the gathered data are mainly nonnegative, established techniques producing transformations in the space of real numbers (e.g., principal component analysis, latent semantic analysis) are less intuitive, as they cannot provide a straightforward interpretation. Such applications show the need for dimensionality reduction techniques like Nonnegative Matrix Factorization (NMF), which is useful for embedding, for instance, documents or words in a space of reduced dimension. By definition, NMF aims at approximating a nonnegative matrix by the product of two lower-dimensional nonnegative matrices, which results in solving a nonlinear optimization problem. Note, however, that this objective can be harnessed for document/word clustering, even though clustering is not the stated objective of NMF. Relying on NMF, this thesis focuses on improving the clustering of large text data arising in the form of highly sparse document-term matrices. This objective is first achieved by proposing several types of regularization of the original NMF objective function. Setting this objective in a probabilistic context, a new NMF model is then introduced, bringing theoretical foundations for establishing the connection between NMF and finite mixture models of exponential families and thereby offering interesting regularizations. This places NMF, among other things, in a true clustering spirit. Finally, a Bayesian Poisson latent block model is proposed to improve document and word clustering simultaneously by capturing noisy term features. This model can be connected to NMTF (Nonnegative Matrix Tri-Factorization), which is devoted to co-clustering. Experiments on real datasets have been carried out to support the proposals of the thesis.
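As a point of reference for the NMF-based proposals, a minimal sketch of the classical Lee-Seung multiplicative updates for the KL divergence (the plain baseline, without the thesis's regularizations) looks like this:

```python
import numpy as np

def nmf_kl(V, r, n_iter=200, seed=0, eps=1e-10):
    """NMF with Lee-Seung multiplicative updates for the KL divergence.

    V: nonnegative (n_docs, n_terms) matrix; r: rank of the factorization.
    Returns W (n_docs, r) and H (r, n_terms), both nonnegative.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1) + eps)
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
    return W, H
```

A row-wise argmax of W then gives document clusters and a column-wise argmax of H gives term clusters, which is the clustering reading of NMF the thesis builds on.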
6

Corneli, Marco. "Dynamic stochastic block models, clustering and segmentation in dynamic graphs". Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E012/document.

Abstract:
This thesis focuses on the statistical analysis of dynamic graphs, defined either in discrete or in continuous time. We introduce a new extension of the stochastic block model (SBM) for dynamic graphs. The proposed approach, called dSBM, adopts non-homogeneous Poisson processes to model the interaction times between pairs of nodes in dynamic graphs, either in discrete or continuous time. The intensity functions of the processes depend only on the node clusters, in a block-modelling perspective. Moreover, all the intensity functions share regularity properties on hidden time intervals that need to be estimated. A recent estimation algorithm for SBM, based on the greedy maximization of an exact criterion (exact ICL), is adopted for inference and model selection in dSBM. Moreover, an exact algorithm for change-point detection in time series, the "pruned exact linear time" (PELT) method, is extended to deal with dynamic graph data modelled via dSBM. The approach we propose can thus be used for change-point analysis in graph data. Finally, a further extension of dSBM is developed to analyse dynamic networks with textual edges (like social networks, for instance). In this context, the graph edges are associated with documents exchanged between the corresponding vertices, and the textual content of the documents can provide additional information about the dynamic graph's topological structure. The new model we propose is called the "dynamic stochastic topic block model" (dSTBM). Graphs are mathematical structures well suited to model interactions between objects or actors of interest. Several real networks, such as communication networks, financial transaction networks, mobile telephone networks and social networks (Facebook, Linkedin, etc.), can be modelled via graphs. When observing a network, the time variable comes into play in two different ways: we can study the dates at which the interactions occur and/or the interaction time spans. This thesis focuses only on the first time dimension, and each interaction is assumed to be instantaneous, for simplicity. Hence, the network evolution is given by the interaction dates only. In this framework, graphs can be used in two different ways to model networks. Discrete time […] Continuous time […]. In this thesis both of these perspectives are adopted, alternatively. We consider new unsupervised methods to cluster the vertices of a graph into groups of homogeneous connection profiles. In this manuscript, the node groups are assumed to be time-invariant to avoid possible identifiability issues. Moreover, the approaches we propose aim to detect structural changes in the way the node clusters interact with each other. The building block of this thesis is the stochastic block model (SBM), a probabilistic approach initially used in the social sciences. The standard SBM assumes that the nodes of a graph belong to hidden (disjoint) clusters and that the probability of observing an edge between two nodes depends only on their clusters. Since no further assumption is made on the connection probabilities, SBM is a very flexible model able to detect different network topologies (hubs, stars, communities, etc.).
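In compact notation following the abstract (symbols are generic), dSBM attaches to each node pair a conditional non-homogeneous Poisson process whose intensity depends only on the cluster pair and is piecewise constant on hidden intervals:

```latex
N_{ij}(t) \mid z_i = k,\ z_j = l \;\sim\; \mathrm{NHPP}\big(\lambda_{kl}(\cdot)\big),
\qquad
\lambda_{kl}(t) = \lambda_{kl}^{(u)} \quad \text{for } t \in [\eta_{u-1},\, \eta_u),
```

where N_{ij}(t) counts the interactions between nodes i and j up to time t, and the change points η_u are shared across blocks and estimated, e.g., via the PELT extension mentioned in the abstract.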
7

Schmutz, Amandine. "Contributions à l'analyse de données fonctionnelles multivariées, application à l'étude de la locomotion du cheval de sport". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1241.

Abstract:
With the growth of the smart-device market aimed at providing athletes and trainers with systematic, objective and reliable follow-up, more and more parameters are monitored for the same individual. An alternative to laboratory evaluation methods is the use of inertial sensors, which make it possible to follow performance without hindering it, without space limits and without tedious initialization procedures. Data collected by those sensors can be classified as multivariate functional data: quantitative entities evolving along time, collected simultaneously for the same statistical individual. The aim of this thesis is to find parameters for analysing the locomotion of athlete horses using a sensor placed in the saddle. This connected device (inertial measurement unit, IMU) for equestrian sports collects acceleration and angular velocity along time in the three spatial directions, with a sampling frequency of 100 Hz. The database used for model development consists of 3221 canter strides from 58 ridden jumping horses of varying ages and competition levels. Two different protocols were used to collect data: one for straight paths and one for curved paths. We restricted our work to the prediction of three parameters: the speed per stride, the stride length and the jump quality. To meet the first two objectives, we developed a multivariate functional clustering method that divides the database into smaller, more homogeneous sub-groups from the point of view of the collected signals. This method characterizes each group by its average profile, which eases the understanding and interpretation of the data. Surprisingly, however, this clustering model did not improve the speed prediction results; Support Vector Machines (SVM) remained the model with the lowest percentage of errors above 0.6 m/s. The same applies to stride length, where an accuracy of 20 cm is reached with the SVM model. These results can be explained by the fact that our database is built from only 58 horses, which is a rather small number of individuals for clustering. We then extended this method to the co-clustering of multivariate functional data in order to ease the mining of databases collected for the same horse over time. This method could allow the detection and prevention of locomotor disorders, the main cause of retirement of show-jumping horses. Finally, we investigated the links between jump quality and the signals collected by the IMU. Our first results show that the signals collected by the saddle alone are not sufficient to finely differentiate jump quality. Additional information will be needed, for example from complementary sensors or by expanding the database to cover a more varied range of horses and jump profiles.
8

Laclau, Charlotte. "Hard and fuzzy block clustering algorithms for high dimensional data". Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB014.

Abstract:
Our growing capacity to collect and store data has made unsupervised learning an essential tool for discovering underlying patterns without the need to label instances manually. Among the different approaches proposed to tackle this problem, clustering is arguably the most popular one. Clustering is usually based on the assumption that each group, also called a cluster, is distributed around a center defined in terms of all the features, while in some real-world applications dealing with high-dimensional data this assumption may be false. To this end, co-clustering algorithms were proposed: they describe clusters of instances by the subsets of features that are the most relevant to them. The latent structure of the data obtained in this way is composed of blocks, usually called co-clusters. In the first two chapters, we describe two co-clustering methods that differentiate relevant features from noise according to their capability of revealing the latent structure of the data, in a probabilistic and in a distance-based framework, respectively. The probabilistic approach uses the mixture-model framework, where the irrelevant features are assumed to follow a probability distribution that is independent of the co-clustering structure. The distance-based (also called metric-based) approach relies on an adaptive metric in which each variable is assigned a weight defining its contribution to the resulting co-clustering. From the theoretical point of view, we show the global convergence of the proposed algorithms using Zangwill's convergence theorem. In the last two chapters, we consider a special case of co-clustering where, contrary to the original setting, each subset of instances is described by a unique subset of features, resulting in a diagonal structure of the initial data matrix. As with the first two contributions, we consider both probabilistic and metric-based approaches. The main idea of the proposed contributions is to impose two kinds of constraints: (1) we set the number of row clusters equal to the number of column clusters; (2) we seek a structure of the original data matrix that has the maximum values on its diagonal (for instance, for binary data, we look for diagonal blocks composed mainly of ones, with zeros outside the main diagonal). The proposed approaches enjoy the convergence guarantees derived from the results of the previous chapters. Finally, we present both hard and fuzzy versions of the proposed algorithms. We evaluate our contributions on a wide variety of synthetic and real-world benchmark binary and continuous data sets related to text-mining applications and analyze the advantages and drawbacks of each approach. To conclude, we believe that this thesis explicitly covers a vast majority of the possible scenarios arising in hard and fuzzy co-clustering and can be seen as a generalization of some popular biclustering approaches.
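The diagonal constraint can be stated compactly; the following is a toy surrogate objective for the binary case that conveys the idea (it is not the thesis's criterion, and practical algorithms add model-based or balance terms to rule out trivial solutions):

```latex
\max_{z,\, w} \;\sum_{k=1}^{K} \Bigg(
\sum_{\substack{i:\, z_i = k \\ j:\, w_j = k}} x_{ij}
\;-\;
\sum_{\substack{i:\, z_i = k \\ j:\, w_j \neq k}} x_{ij}
\Bigg),
\qquad K_{\text{row}} = K_{\text{col}} = K,
```

i.e., the row and column partitions share the same number of clusters, and the ones of the matrix are pushed into the diagonal blocks while off-diagonal ones are penalized.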
9

Laclau, Charlotte. "Hard and fuzzy block clustering algorithms for high dimensional data". Electronic Thesis or Diss., Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB014.

Texto completo da fonte
Resumo:
With the increasing amount of data available, unsupervised learning has become an important tool used to discover underlying patterns without the need to label instances manually. Among the different approaches proposed to tackle this problem, clustering is arguably the most popular one. Clustering is usually based on the assumption that each group, also called cluster, is distributed around a center defined in terms of all the features, while in some real-world applications dealing with high-dimensional data, this assumption may be false. To this end, co-clustering algorithms were proposed to describe clusters by subsets of features that are the most relevant to them. The obtained latent structure of the data is composed of blocks usually called co-clusters. In the first two chapters, we describe two co-clustering methods that proceed by differentiating the relevance of features with respect to their capability of revealing the latent structure of the data, in both a probabilistic and a distance-based framework. The probabilistic approach uses the mixture model framework, where the irrelevant features are assumed to have a probability distribution that is independent of the co-clustering structure. The distance-based (also called metric-based) approach relies on an adaptive metric where each variable is assigned a weight defining its contribution to the resulting co-clustering. From the theoretical point of view, we show the global convergence of the proposed algorithms using Zangwill's convergence theorem. In the last two chapters, we consider a special case of co-clustering where, contrary to the original setting, each subset of instances is described by a unique subset of features, resulting in a diagonal structure of the initial data matrix. As for the first two contributions, we consider both probabilistic and metric-based approaches. The main idea of the proposed contributions is to impose two kinds of constraints: (1) we fix the number of row clusters to be equal to the number of column clusters; (2) we seek a structure of the original data matrix that has the maximum values on its diagonal (for instance, for binary data, we look for diagonal blocks composed of ones, with zeros outside the main diagonal). The proposed approaches enjoy convergence guarantees derived from the results of the previous chapters. Finally, we present both hard and fuzzy versions of the proposed algorithms. We evaluate our contributions on a wide variety of synthetic and real-world benchmark binary and continuous data sets related to text mining applications, and analyze the advantages and drawbacks of each approach. To conclude, we believe that this thesis explicitly covers a vast majority of the possible scenarios arising in hard and fuzzy co-clustering and can be seen as a generalization of some popular biclustering approaches.
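As a rough illustration of the alternating optimization that both the probabilistic and metric-based chapters build on, the sketch below implements a hard, unweighted co-clustering step in Python; the function name, the squared-error criterion, and the toy data are illustrative assumptions, not the thesis's actual algorithms, which additionally learn feature weights or noise distributions and come with fuzzy variants.

```python
import numpy as np

def hard_cocluster(X, g, n_iter=30, seed=0):
    """Alternate row and column reassignment to minimize squared error
    against block means (a hard, unweighted caricature of the approach)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    z = rng.integers(g, size=n)          # row cluster labels
    w = rng.integers(g, size=d)          # column cluster labels
    for _ in range(n_iter):
        mu = np.zeros((g, g))            # block means
        for k in range(g):
            for l in range(g):
                block = X[np.ix_(z == k, w == l)]
                mu[k, l] = block.mean() if block.size else 0.0
        # reassign each row to the row cluster whose block means fit it best
        z = ((X[:, None, :] - mu[:, w][None, :, :]) ** 2).sum(2).argmin(1)
        # reassign columns symmetrically
        w = ((X[:, :, None] - mu[z][:, None, :]) ** 2).sum(0).argmin(1)
    return z, w, mu

# Toy check: a planted 2x2 block structure is recovered
X = np.block([[np.ones((5, 4)), np.zeros((5, 6))],
              [np.zeros((7, 4)), np.ones((7, 6))]])
print(hard_cocluster(X, 2)[:2])
```

Replacing the plain squared error with a per-feature weighted distance, and the hard argmin with soft memberships, would move this sketch toward the fuzzy, adaptive-metric variants the thesis studies.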
Estilos ABNT, Harvard, Vancouver, APA, etc.
10

Galindo-Prieto, Beatriz. "Novel variable influence on projection (VIP) methods in OPLS, O2PLS, and OnPLS models for single- and multi-block variable selection : VIPOPLS, VIPO2PLS, and MB-VIOP methods". Doctoral thesis, Umeå universitet, Kemiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-130579.

Texto completo da fonte
Resumo:
Multivariate and multiblock data analysis involves useful methodologies for analyzing large data sets in chemistry, biology, psychology, economics, sensory science, and industrial processes; among these methodologies, partial least squares (PLS) and orthogonal projections to latent structures (OPLS®) have become popular. Due to increasingly computerized instrumentation, a data set can consist of thousands of input variables that contain latent information valuable for research and industrial purposes. When analyzing a large number of data sets (blocks) simultaneously, the number of variables and the underlying connections between them grow considerably; at this point, reducing the number of variables while keeping high interpretability becomes a much-needed strategy. The main direction of research in this thesis is the development of a variable selection method, based on variable influence on projection (VIP), in order to improve the model interpretability of OnPLS models in multiblock data analysis. This new method is called multiblock variable influence on orthogonal projections (MB-VIOP), and its novelty lies in the fact that it is the first multiblock variable selection method for OnPLS models. Several milestones needed to be reached in order to successfully create MB-VIOP. The first milestone was the development of a single-block variable selection method able to handle orthogonal latent variables in OPLS models, i.e. VIP for OPLS (denoted as VIPOPLS or OPLS-VIP in Paper I), which proved to increase the interpretability of PLS and OPLS models, and afterwards was successfully extended to multivariate time series analysis (MTSA) aiming at process control (Paper II). The second milestone was to develop the first multiblock VIP approach for the enhancement of O2PLS® models, i.e. VIPO2PLS for two-block multivariate data analysis (Paper III). And finally, the third milestone and main goal of this thesis was the development of the MB-VIOP algorithm for the improvement of OnPLS model interpretability when analyzing a large number of data sets simultaneously (Paper IV). The results of this thesis and its enclosed papers showed that the VIPOPLS, VIPO2PLS, and MB-VIOP methods successfully assess the most relevant variables for model interpretation in PLS, OPLS, O2PLS, and OnPLS models. In addition, predictability, robustness, dimensionality reduction, and other variable selection purposes can potentially be improved or achieved by using these methods.
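For readers unfamiliar with VIP, the following minimal sketch computes classical single-block PLS VIP scores with scikit-learn. It is not the VIPOPLS, VIPO2PLS, or MB-VIOP methods developed in the thesis (which extend VIP to orthogonal components and multiple blocks); the toy data are an assumption for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Classical VIP for a fitted PLSRegression model with a single response."""
    T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = W.shape[0]
    ss = (Q.ravel() ** 2) * (T ** 2).sum(axis=0)        # explained SS per component
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)   # column-normalized weights
    return np.sqrt(p * (Wn ** 2 @ ss) / ss.sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = X[:, 0] + 0.1 * rng.normal(size=100)
pls = PLSRegression(n_components=2).fit(X, y)
print(vip_scores(pls).round(2))   # variable 0 should score highest (> 1)
```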
Estilos ABNT, Harvard, Vancouver, APA, etc.

Livros sobre o assunto "Latent Blocks Models"

1

Fedorov, Viktor, e Mihail San'kov. Management: theory and practice. ru: INFRA-M Academic Publishing LLC., 2023. http://dx.doi.org/10.12737/1859086.

Texto completo da fonte
Resumo:
The textbook presents the most important aspects of the theory and practice of modern management in a concise and accessible form. The section "Management Theory" is accompanied by questions and tasks for self-control, topics for abstracts and reports, as well as a list of additional literature for self-study. The section "Management Practice" contains test methods, practical tasks for individual and collective student work, and business situations for analysis, discussion, and managerial decision-making. The manual additionally includes a block of self-test tasks and a glossary that can be used to monitor progress through the course. Meets the requirements of the federal state educational standards of secondary vocational education of the latest generation. It is intended for students in economic and managerial specialties who need to build basic knowledge in the field of management.
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

Kuz'mina, Natal'ya. Criminology and crime prevention. ru: INFRA-M Academic Publishing LLC., 2023. http://dx.doi.org/10.12737/1900600.

Texto completo da fonte
Resumo:
The textbook presents modern material on all sections of the discipline "Criminology and crime prevention". It reveals the content of the teachings on crime and its causes, the personality of the offender, and the mechanism of committing a specific crime. It characterizes the current state of certain types of crime using qualitative and quantitative criminological indicators. The problems of carrying out criminological research in the modern period are analyzed using a broad empirical base (including data from criminal law statistics). The section "Crime Prevention System" has a practical orientation and covers the legal foundations and areas of law enforcement activity in the implementation of crime prevention in Russia. At the end of each chapter of the textbook, a block of control questions and tasks is offered, with the help of which students can test their knowledge and consolidate the studied material. Meets the requirements of the federal state educational standard of secondary vocational education of the latest generation. For students of secondary vocational education institutions studying in the specialty 40.02.02 "Law enforcement", as well as teachers.
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

Skrypnik, Oleg. Radio navigation and landing systems. ru: INFRA-M Academic Publishing LLC., 2024. http://dx.doi.org/10.12737/1874250.

Texto completo da fonte
Resumo:
The textbook discusses the issues of the general theory of navigation necessary for mastering the material that reveals the principles of construction and operation of on-board devices and radio navigation systems. The general characteristics of radio engineering and non-radio engineering navigation aids are given. The classification is given and the main characteristics of radio navigation systems are considered. Navigation concepts and terms are explained. The coordinate systems used in aviation navigation, methods for determining navigation parameters, as well as factors affecting the accuracy of radio navigation definitions, and ways to improve it are considered. The technique of constructing and analyzing the working zones of radio navigation systems is shown. The modern requirements of aviation consumers for navigation accuracy are analyzed. The main attention is paid to the consideration of theoretical aspects and principles of operation of on-board radio navigation devices and systems of modern civil aviation aircraft. Typical block diagrams, basic mathematical relations characterizing the operation of radio altimeters and radio altimeter systems, Doppler speed and drift angle meters, on-board and ground equipment of short-range navigation systems (automatic radio compasses and drive radios, VOR/DME angle-measuring system), radio landing systems and satellite navigation systems are presented. Structural diagrams are given as examples of practical implementation, the main characteristics of Russian-made radio navigation systems and equipment installed on the A-320 aircraft are considered, and the features of their design are shown. The presented text material is accompanied by a sufficient number of illustrations. Meets the requirements of the federal state educational standards of higher education of the latest generation. The textbook is intended for students studying in the fields of training and specialties of the radio engineering profile. It can also be useful for pilots and aviation specialists operating ground-based and airborne radio navigation flight support facilities.
Estilos ABNT, Harvard, Vancouver, APA, etc.
4

Palomäki, Outi, e Petri Volmanen. Alternative neural blocks for labour analgesia. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198713333.003.0018.

Texto completo da fonte
Resumo:
Although neuraxial analgesia is available to the majority of parturients in developed countries, alternative neural blocks for labour analgesia are needed for medical, individual, and institutional reasons. Paracervical and pudendal blocks are usually administered transvaginally by an obstetrician. An injection of 0.25% bupivacaine using a superficial technique into the lateral fornixes gives rapid pain relief and has been found to have no negative effect on either fetal oxygenation, or maternal and neonatal outcomes. Low rates of post-analgesic bradycardia and high rates of spontaneous vaginal delivery have been described in low-risk populations. The analgesic effect of a paracervical block is moderate and is limited to the first stage of labour. A pudendal block, administered transvaginally, can be used for pain relief in the late first stage, the second stage, in cases of vacuum extraction, or for episiotomy repair. In clinical use, 1% lidocaine gives rapid pain relief but the success rate is variable. The complications of pudendal block are rare and localized. The sympathetic and paravertebral blocks are currently mainly of historic interest. However, they may benefit parturients in exceptional conditions if the anaesthesiologist is experienced in the techniques. Lumbar sympathetic block provides fast pain relief during the first stage of labour when a combination of 0.5% bupivacaine with fentanyl and epinephrine is employed. With the currently available data, no conclusion on the analgesic effects of thoracic paravertebral block can be drawn when it is used for labour pain relief. Potential maternal risks limit the use of these methods in modern obstetrics.
Estilos ABNT, Harvard, Vancouver, APA, etc.
5

Succi, Sauro. Lattice Boltzmann Models without Underlying Boolean Microdynamics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199592357.003.0013.

Texto completo da fonte
Resumo:
Chapter 12 showed how to circumvent two major stumbling blocks of the LGCA approach: statistical noise and the exponential complexity of the collision rule. Yet the ensuing LB remained confined to low-Reynolds flows, due to the low collisionality of the underlying LGCA rules. The high-viscosity barrier was broken just a few months later, when it was realized how to devise LB models top-down, i.e., based on the macroscopic hydrodynamic target, rather than bottom-up, from the underlying microdynamics. Most importantly, besides breaking the low-Reynolds barrier, the top-down approach has proven very influential for many subsequent developments of the LB method to this day.
Estilos ABNT, Harvard, Vancouver, APA, etc.
6

Tibaldi, Stefano, e Franco Molteni. Atmospheric Blocking in Observation and Models. Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190228620.013.611.

Texto completo da fonte
Resumo:
The atmospheric circulation in the mid-latitudes of both hemispheres is usually dominated by westerly winds and by planetary-scale and shorter-scale synoptic waves, moving mostly from west to east. A remarkable and frequent exception to this "usual" behavior is atmospheric blocking. Blocking occurs when the usual zonal flow is hindered by the establishment of a large-amplitude, quasi-stationary, high-pressure meridional circulation structure which "blocks" the flow of the westerlies and the progression of the atmospheric waves and disturbances embedded in them. Such blocking structures can have lifetimes varying from a few days to several weeks in the most extreme cases. Their presence can strongly affect the weather of large portions of the mid-latitudes, leading to the establishment of anomalous meteorological conditions. These can take the form of strong precipitation episodes or persistent anticyclonic regimes, leading in turn to floods, extreme cold spells, heat waves, or short-lived droughts. Even air quality can be strongly influenced by the establishment of atmospheric blocking, with episodes of high concentrations of low-level ozone in summer and of particulate matter and other air pollutants in winter, particularly in highly populated urban areas.
Atmospheric blocking has the tendency to occur more often in winter and in certain longitudinal quadrants, notably the Euro-Atlantic and the Pacific sectors of the Northern Hemisphere. In the Southern Hemisphere, blocking episodes are generally less frequent, and the longitudinal localization is less pronounced than in the Northern Hemisphere.
Blocking has aroused the interest of atmospheric scientists since the middle of the last century, with the pioneering observational works of Berggren, Bolin, Rossby, and Rex, and has become the subject of innumerable observational and theoretical studies. The purpose of such studies was originally to find a commonly accepted structural and phenomenological definition of atmospheric blocking. The investigations went on to study blocking climatology in terms of the geographical distribution of its frequency of occurrence and the associated seasonal and inter-annual variability. Well into the second half of the 20th century, a large number of theoretical dynamic works on blocking formation and maintenance started appearing in the literature. Such theoretical studies explored a wide range of possible dynamic mechanisms, including large-amplitude planetary-scale wave dynamics and Rossby wave breaking, multiple-equilibria circulation regimes, large-scale forcing of anticyclones by synoptic-scale eddies, finite-amplitude non-linear instability theory, and the influence of sea surface temperature anomalies, to name but a few. However, to date no unique theoretical model of atmospheric blocking has been formulated that can account for all of its observational characteristics.
When numerical global short- and medium-range weather predictions started being produced operationally, and with the establishment, in the late 1970s and early 1980s, of the European Centre for Medium-Range Weather Forecasts, it quickly became relevant to assess the capability of numerical models to predict blocking with the correct space-time characteristics (e.g., location, time of onset, life span, and decay). Early studies showed that models had difficulties in correctly representing blocking, partly in connection with their large systematic (mean) errors. Despite enormous improvements in the ability of numerical models to represent atmospheric dynamics, blocking remains a challenge for global weather prediction and climate simulation models. Such modeling deficiencies have negative consequences not only for our ability to represent the observed climate but also for the possibility of producing high-quality seasonal-to-decadal predictions. For such predictions, representing the correct space-time statistics of blocking occurrence is, especially for certain geographical areas, extremely important.
Estilos ABNT, Harvard, Vancouver, APA, etc.
7

Pierce, Helen. Graphic Satire and the Printed Image in Shakespeare’s London. Editado por Malcolm Smuts. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199660841.013.40.

Texto completo da fonte
Resumo:
How was the multiplied, printed image encountered in Shakespeare’s London? This chapter examines a range of genres and themes for single sheet, illustrated broadsides in an emerging, specialist print market. It discusses how such images were used to persuade and to entertain a potentially broad cross-section of society along moral, political and religious lines, and according to both topical and commercial interests. The mimetic nature of the English print in both engraved and woodcut form is highlighted, with its frequent adaptation of continental models to suit more local concerns. Consideration is also given to the survival of certain images in later seventeenth-century impressions, indicative of popularity and the common commercial practice of reprinting stock from aging plates and blocks, and the sporadic nature of censorship upon the illustrated broadside.
Estilos ABNT, Harvard, Vancouver, APA, etc.
8

Anderson, Elisabeth. Agents of Reform. Princeton University Press, 2021. http://dx.doi.org/10.23943/princeton/9780691220895.001.0001.

Texto completo da fonte
Resumo:
The beginnings of the modern welfare state are often traced to the late nineteenth-century labor movement and to policymakers' efforts to appeal to working-class voters. But this book shows that the regulatory welfare state began a half century earlier, in the 1830s, with the passage of the first child labor laws. The book tells the story of how middle-class and elite reformers in Europe and the United States defined child labor as a threat to social order, and took the lead in bringing regulatory welfare into being. They built alliances to maneuver around powerful political blocs and instituted pathbreaking new employment protections. Later in the century, now with the help of organized labor, they created factory inspectorates to strengthen and routinize the state's capacity to intervene in industrial working conditions. The book compares seven in-depth case studies of key policy episodes in Germany, France, Belgium, Massachusetts, and Illinois. Foregrounding the agency of individual reformers, the book challenges existing explanations of welfare state development and advances a new pragmatist field theory of institutional change. In doing so, it moves beyond standard narratives of interests and institutions toward an integrated understanding of how these interact with political actors' ideas and coalition-building strategies.
Estilos ABNT, Harvard, Vancouver, APA, etc.
9

Veech, Richard L., e M. Todd King. Alzheimer’s Disease. Editado por Detlev Boison. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780190497996.003.0026.

Texto completo da fonte
Resumo:
Deficits in cerebral glucose utilization in Alzheimer’s disease (AD) arise decades before cognitive impairment and accumulation of amyloid plaques and neurofibrillary tangles in brain. Addressing this metabolic deficit has greater potential in treating AD than targeting later disease processes – an approach that has failed consistently in the clinic. Cerebral glucose utilization requires numerous enzymes, many of which have been shown to decline in AD. Perhaps the most important is pyruvate dehydrogenase (PDH), which links glycolysis with the Krebs cycle and aerobic metabolism, and whose activity is greatly suppressed in AD. The unique metabolism of ketone bodies allows them to bypass the block at pyruvate dehydrogenase and restore brain metabolism. Recent studies in mouse genetic models of AD and in a human Alzheimer’s patient showed the potential of ketones in maintaining brain energetics and function. Oral ketone bodies might be a promising avenue for treatment of Alzheimer’s disease.
Estilos ABNT, Harvard, Vancouver, APA, etc.
10

Miller, Kenneth P. Texas vs. California. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190077365.001.0001.

Texto completo da fonte
Resumo:
Texas and California are the leaders of red and blue America. As the nation has polarized, its most populous and economically powerful states have taken charge of the opposing camps. These states now advance sharply contrasting political and policy agendas and view themselves as competitors for control of the nation’s future. This book provides a detailed account of the rivalry’s emergence, present state, and possible future. First, it explores why, despite their many similarities, the two states have become so deeply divided. The explanations focus on critical differences in the state’s origins as well as in their later demographic, economic, cultural, and political development. Second, the book analyzes how the two states have translated their competing visions into policy. It describes how Texas and California have constructed opposing, comprehensive policy models—one conservative, the other progressive. It describes how these models operate and how they have produced widely different outputs in a range of domestic policy areas. In separate chapters, the book highlights the states’ contrasting policies in five areas: tax, labor, energy and environment, poverty, and social issues. It also shows how Texas and California have led the red and blue state blocs in seeking to influence federal policy in these and other areas. Finally, the book assesses the two models’ strengths, vulnerabilities, and potential futures, providing a balanced analysis of their competing visions.
Estilos ABNT, Harvard, Vancouver, APA, etc.

Capítulos de livros sobre o assunto "Latent Blocks Models"

1

Boutalbi, Rafika, Lazhar Labiod e Mohamed Nadif. "Latent Block Regression Model". In Studies in Classification, Data Analysis, and Knowledge Organization, 73–81. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09034-9_9.

Texto completo da fonte
Resumo:
When dealing with high-dimensional sparse data, such as in recommender systems, co-clustering turns out to be more beneficial than one-sided clustering, even if one is interested in clustering along one dimension only. Co-clusterwise is thereby a natural extension of clusterwise. Unfortunately, none of the existing approaches considers covariates on both dimensions of a data matrix. In this paper, we propose a Latent Block Regression Model (LBRM) overcoming this limit. For inference, we propose an algorithm performing simultaneously co-clustering and regression, where a linear regression model characterizes each block. Estimating the model parameters under the maximum likelihood approach, we derive a Variational Expectation-Maximization (VEM) algorithm. The effectiveness of the proposed VEM-LBRM is illustrated on simulated datasets.
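A minimal sketch of the idea that each co-cluster carries its own linear regression, assuming hard (rather than variational) assignments; the function name and toy data are hypothetical, and the actual VEM-LBRM alternates an M-step-like fit of this kind with soft E-step updates of the row and column memberships.

```python
import numpy as np

def fit_block_regressions(Y, X, z, w, g):
    """With row labels z and column labels w held fixed (hard assignments),
    fit one ordinary least-squares model per co-cluster: Y[i, j] ~ X[i, j, :].
    This mimics the regression update of VEM-LBRM in hard-assignment form."""
    betas = {}
    for k in range(g):
        for l in range(g):
            sel = np.ix_(z == k, w == l)
            Xb = X[sel].reshape(-1, X.shape[-1])
            Yb = Y[sel].ravel()
            if Yb.size >= X.shape[-1]:
                betas[k, l], *_ = np.linalg.lstsq(Xb, Yb, rcond=None)
    return betas

# Toy data with a planted per-block regression, recovered up to noise
rng = np.random.default_rng(0)
n, d, p, g = 30, 20, 3, 2
X = rng.normal(size=(n, d, p))
z, w = rng.integers(g, size=n), rng.integers(g, size=d)
beta = rng.normal(size=(g, g, p))
Y = np.einsum("ijp,ijp->ij", X, beta[z][:, w]) + 0.01 * rng.normal(size=(n, d))
print(np.allclose(fit_block_regressions(Y, X, z, w, g)[0, 0], beta[0, 0], atol=0.05))
```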
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

Kuckertz, Andreas, Thomas Leicht, Maximilian Scheu, Indra da Silva Wagner e Bernd Ebersberger. "Architecture of the Venture: Understanding Business Modeling". In Mastering Your Entrepreneurial Journey, 63–73. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-71064-3_6.

Texto completo da fonte
Resumo:
As a prospective founder, there is no getting around the question of the business model of your venture. There are various components to consider which, in their interaction, realize the inherent value of your product. In this chapter, you will learn how to shape the individual building blocks of the business model by using various tools and insights from external players and adapting these blocks to the latest findings. You will also learn which factors you should consider when selecting the revenue model to ensure the profitability of your business model. The challenge is to stay dynamic in your design while not getting distracted by the myriad of ways to build a business model. We provide order in the cluttered universe of business models.
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

Salinas Ruíz, Josafhat, Osval Antonio Montesinos López, Gabriela Hernández Ramírez e Jose Crossa Hiriart. "Generalized Linear Mixed Models for Repeated Measurements". In Generalized Linear Mixed Models with Applications in Agriculture and Biology, 377–423. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-32800-8_9.

Texto completo da fonte
Resumo:
Repeated measures data, also known as longitudinal data, are those derived from experiments in which observations are made on the same experimental units at various planned times. These experiments can be of the regression or analysis of variance (ANOVA) type, can contain two or more treatments, and are set up using familiar designs, such as a completely randomized design (CRD), a randomized complete block design (RCBD), or randomized incomplete blocks if blocking is appropriate, or using row and column designs such as Latin squares when appropriate. Repeated measures designs are widely used in the biological sciences and are fairly well understood for normally distributed data but less so for binary, ordinal, count data, and so on. Nevertheless, recent developments in statistical computing methodology and software have greatly increased the number of tools available for analyzing categorical data.
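As a small illustration of the normal-response special case, the sketch below fits a linear mixed model with a subject-level random intercept to simulated repeated measures using statsmodels; the simulated design and parameter values are assumptions, and the chapter's GLMMs for binary, ordinal, or count outcomes require generalized (non-Gaussian) variants of this model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated repeated measures: 40 subjects, 4 visits, random intercept per subject
rng = np.random.default_rng(1)
subjects = np.repeat(np.arange(40), 4)
time = np.tile(np.arange(4), 40)
u = rng.normal(0, 1.0, 40)                       # subject-level random effect
y = 2.0 + 0.5 * time + u[subjects] + rng.normal(0, 0.5, subjects.size)
df = pd.DataFrame({"y": y, "time": time, "subject": subjects})

# Linear mixed model with a random intercept for each subject
model = smf.mixedlm("y ~ time", df, groups=df["subject"]).fit()
print(model.summary())
```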
Estilos ABNT, Harvard, Vancouver, APA, etc.
4

Osborne, Martin J., e Ariel Rubinstein. "Choice". In Models in Microeconomic Theory, 17–30. 2a ed. Cambridge, UK: Open Book Publishers, 2023. http://dx.doi.org/10.11647/obp.0362.02.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
5

Osborne, Martin J., e Ariel Rubinstein. "Choice". In Models in Microeconomic Theory, 17–30. 2a ed. Cambridge, UK: Open Book Publishers, 2023. http://dx.doi.org/10.11647/obp.0361.02.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
6

Khoufache, Reda, Anisse Belhadj, Hanene Azzag e Mustapha Lebbah. "Distributed MCMC Inference for Bayesian Non-parametric Latent Block Model". In Advances in Knowledge Discovery and Data Mining, 271–83. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-2242-6_22.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
7

Guarino, Stefano, Enrico Mastrostefano e Davide Torre. "The Hyperbolic Geometric Block Model and Networks with Latent and Explicit Geometries". In Complex Networks and Their Applications XI, 109–21. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-21131-7_9.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
8

Marchello, Giulia, Marco Corneli e Charles Bouveyron. "A Deep Dynamic Latent Block Model for the Co-Clustering of Zero-Inflated Data Matrices". In Machine Learning and Knowledge Discovery in Databases: Research Track, 695–710. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43412-9_41.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
9

Han, Sapphire Yu, e Cees H. Elzinga. "Modeling the Genesis of Life Courses". In Social Background and the Demographic Life Course: Cross-National Comparisons, 125–40. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67345-1_7.

Texto completo da fonte
Resumo:
Life course research has been dominated by methods and models that focus on the description of life course patterns and on the causal patterns between agency- and structure-related variables on the one hand and, on the other hand, outcomes in later life. Little attention has been paid to modelling the driving force, the mechanism, that generates the chain of successive events and stages of the life course: the sequences of individual decisions pertaining to all facets of the life course. This paper presents the minimal requirements that models should satisfy in order to be considered as life course generating models. The paper then proposes Hidden Markov Models as one of the main building blocks of life course generating models and discusses a few applications of these models in the domains of family formation, school-to-work transition and their interaction.
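To make the hidden Markov building block concrete, here is a minimal scaled forward-algorithm sketch in numpy that scores an observed sequence under a discrete HMM; the two-state "life-course" parameters and symbol codes are toy assumptions, not estimates from the chapter.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a discrete HMM
    via the scaled forward algorithm.
    pi: (K,) initial state probs; A: (K, K) transition matrix;
    B: (K, M) emission probs; obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s                      # rescale to avoid underflow
    return loglik

# Toy hidden life-course states emitting observed activity codes
pi = np.array([0.8, 0.2])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(forward_loglik(pi, A, B, [0, 0, 1, 2, 2]))
```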
Estilos ABNT, Harvard, Vancouver, APA, etc.
10

Kirilenko, Andrei, e Svetlana Stepchenkova. "Automated Topic Analysis with Large Language Models". In Information and Communication Technologies in Tourism 2024, 29–34. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_3.

Texto completo da fonte
Resumo:
Topic modeling is a popular method in tourism data analysis. Many authors have applied various approaches to summarize the main themes of travel blogs, reviews, video diaries, and similar media. One common shortcoming of these methods is their severe limitation in working with short documents, such as blog readers' feedback (reactions). In the past few years, a new crop of large language models (LLMs), such as ChatGPT, has become available for researchers. We investigate LLM capability in extracting the main themes of viewers' reactions to popular videos of a rural China destination that explores the cultural, technological, and natural heritage of the countryside. We compare the extracted topics and model accuracy with the results of the traditional Latent Dirichlet Allocation approach. Overall, LLM results are more accurate, specific, and better at separating discussion topics.
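For reference, the traditional Latent Dirichlet Allocation baseline that the paper compares against can be run in a few lines with scikit-learn; the toy viewer comments and the choice of two topics are assumptions for illustration only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "beautiful scenery and traditional crafts",
    "the food preparation looks amazing",
    "peaceful village life, so relaxing",
    "how is this dish cooked? looks delicious",
]
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(comments)      # bag-of-words counts
vocab = vec.get_feature_names_out()

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-3:][::-1]      # three highest-weight words per topic
    print(f"topic {k}:", [vocab[i] for i in top])
```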
Estilos ABNT, Harvard, Vancouver, APA, etc.

Trabalhos de conferências sobre o assunto "Latent Blocks Models"

1

Zhuang, Junling, Guanhong Li, Hang Xu, Jintu Xu e Runjia Tian. "Text-to-City: Controllable 3D Urban Block Generation With Latent Diffusion Model". In CAADRIA 2024: Accelerated Design, 169–78. CAADRIA, 2024. http://dx.doi.org/10.52842/conf.caadria.2024.2.169.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

Wang, Liwei, Suraj Yerramilli, Akshay Iyer, Daniel Apley, Ping Zhu e Wei Chen. "Data-Driven Design via Scalable Gaussian Processes for Multi-Response Big Data With Qualitative Factors". In ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/detc2021-71570.

Texto completo da fonte
Resumo:
Scientific and engineering problems often require an inexpensive surrogate model to aid understanding and the search for promising designs. While Gaussian processes (GP) stand out as easy-to-use and interpretable learners in surrogate modeling, they have difficulties in accommodating big datasets, qualitative inputs, and multi-type responses obtained from different simulators, which has become a common challenge for a growing number of data-driven design applications. In this paper, we propose a GP model that utilizes latent variables and functions obtained through variational inference to address the aforementioned challenges simultaneously. The method is built upon the latent variable Gaussian process (LVGP) model where qualitative factors are mapped into a continuous latent space to enable GP modeling of mixed-variable datasets. By extending variational inference to LVGP models, the large training dataset is replaced by a small set of inducing points to address the scalability issue. Output response vectors are represented by a linear combination of independent latent functions, forming a flexible kernel structure to handle multi-type responses. Comparative studies demonstrate that the proposed method scales well for large datasets with over 10^4 data points, while outperforming state-of-the-art machine learning methods without requiring much hyperparameter tuning. In addition, an interpretable latent space is obtained to draw insights into the effect of qualitative factors, such as those associated with "building blocks" of architectures and element choices in metamaterial and materials design. Our approach is demonstrated for machine learning of ternary oxide materials and topology optimization of a multiscale compliant mechanism with aperiodic microstructures and multiple materials.
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

Fogat, Mrigya, Samiran Roy, Viviane Ferreira e Satyan Singh. "A Comparative Analysis of Convolutional Neural Networks for Seismic Noise Attenuation". In SPE EuropEC - Europe Energy Conference featured at the 84th EAGE Annual Conference & Exhibition. SPE, 2023. http://dx.doi.org/10.2118/214392-ms.

Texto completo da fonte
Resumo:
Seismic data is an essential source of information that is often contaminated with disturbing coherent and random noise. Seismic random noise has a degenerative impact on subsequent seismic processing and data interpretation; thus, seismic noise attenuation is a key step in seismic processing. Convolutional neural networks (CNNs) have proven successful for various image processing tasks in multidisciplinary fields, and this paper studies the impact of three CNN architectures (autoencoders, denoising CNNs (DnCNN), and residual dense networks (RDN)) on improving the signal-to-noise ratio of seismic data. The work consists of three steps: data preparation, model training, and model testing. In this study we used real seismic data to prepare the training dataset, to which we manually added noise. Most studies on seismic noise attenuation consider only a single kind of noise; this paper instead exposes the models to many kinds and levels of noise, such as Gaussian noise, Poisson noise, salt-and-pepper noise, and speckle noise. We analyze the performance of three models. Autoencoders: this architecture consists of two parts, the encoder and the decoder. The encoder applies convolutions to the input image to extract the key information and map it to a latent space, discarding unnecessary data (noise), while the decoder reconstructs from the latent space a seismic image with a high signal-to-noise ratio. DnCNNs: this architecture combines residual learning and batch normalization and mainly consists of three kinds of blocks. The model is trained to predict the residual image, that is, the difference between the noisy observation and the latent clean image. RDNs: this architecture comprises a shallow feature extraction net, residual dense blocks (RDBs), dense feature fusion, and finally an up-sampling net. The data prepared as described above were used to train all three CNN models across different noise levels, and the performance of the models was compared. The models were finally tested on a batch of unseen noisy seismic sections, and performance was measured by an l2 loss, namely the mean squared error, and by the improvement in signal-to-noise ratio. The resulting images from all three architectures across different noise levels show a drastically improved signal-to-noise ratio, and thus the application of CNNs as denoisers for seismic images proves successful. It is important to note that when comparing the difference plots (noisy image minus denoised image) we found minimal signal leakage. While the application of CNNs for image pre-processing has seen great success in other fields, mathematical denoising techniques such as the F-K filter and the tau-p filter are still used in the oil and gas industry, particularly in seismic denoising. After a thorough review, this paper studies some of the most successful denoising CNN architectures and their success in seismic denoising.
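A minimal sketch of the residual-learning idea behind the DnCNN variant discussed above, written in PyTorch; the depth, width, and synthetic training data are assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """Minimal DnCNN-style denoiser: the network predicts the noise
    residual, which is subtracted from the noisy input."""
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)   # residual learning: clean = noisy - predicted noise

# One training step on a synthetic noisy "seismic section"
model = DnCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.randn(4, 1, 64, 64)
noisy = clean + 0.1 * torch.randn_like(clean)
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()
opt.step()
print(float(loss))
```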
Estilos ABNT, Harvard, Vancouver, APA, etc.
4

Zhu, Feida, Junwei Zhu, Wenqing Chu, Ying Tai, Zhifeng Xie, Xiaoming Huang e Chengjie Wang. "HifiHead: One-Shot High Fidelity Neural Head Synthesis with 3D Control". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/244.

Texto completo da fonte
Resumo:
We propose HifiHead, a high-fidelity neural talking head synthesis method, which preserves the source image's appearance well and controls the motion (e.g., pose, expression, gaze) flexibly with 3D morphable face model (3DMM) parameters derived from a driving image or indicated by users. Existing head synthesis works mainly focus on low-resolution inputs. Instead, we exploit the powerful generative prior embedded in StyleGAN to achieve high-quality head synthesis and editing. Specifically, we first extract the source image's appearance and the driving image's motion to construct 3D face descriptors, which are employed as latent style codes for the generator. Meanwhile, hierarchical representations are extracted from the source and rendered 3D images respectively to provide faithful appearance and shape guidance. Considering that the appearance representations need high-resolution flow fields for spatial transform, we propose a coarse-to-fine style-based generator consisting of a series of feature alignment and refinement (FAR) blocks. Each FAR block updates the dense flow fields and refines RGB outputs simultaneously for efficiency. Extensive experiments show that our method blends source appearance and target motion more accurately, along with more photo-realistic results, than previous state-of-the-art approaches.
Estilos ABNT, Harvard, Vancouver, APA, etc.
5

Cao, Bingyi, Kenneth A. Ross, Martha A. Kim e Stephen A. Edwards. "Implementing latency-insensitive dataflow blocks". In 2015 ACM/IEEE International Conference on Formal Methods and Models for Codesign (MEMOCODE). IEEE, 2015. http://dx.doi.org/10.1109/memcod.2015.7340485.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
6

da Rosa, Augusto Seben, Frederico Santos de Oliveira, Anderson da Silva Soares e Arnaldo Candido Junior. "Yin Yang Convolutional Nets: Image Manifold Extraction by the Analysis of Opposites". In Congresso Latino-Americano de Software Livre e Tecnologias Abertas, 341–47. Sociedade Brasileira de Computação - SBC, 2024. https://doi.org/10.5753/latinoware.2024.245312.

Texto completo da fonte
Resumo:
Computer vision in general has presented several advances, such as training optimizations and new architectures (pure attention, efficient blocks, vision language models, generative models, among others). These have improved performance in several tasks, such as classification. However, the majority of these models focus on modifications that take distance from realistic neuroscientific approaches related to the brain. In this work, we adopt a more bio-inspired approach and present the Yin Yang Convolutional Network, an architecture that extracts the visual manifold; its blocks are intended to separate the analysis of colors and forms at its initial layers, simulating the occipital lobe's operations. Our results show that our architecture provides state-of-the-art efficiency among low-parameter architectures on the CIFAR-10 dataset. Our first model reached 93.32% test accuracy, 0.8% more than the previous SOTA in this category, while having 150k fewer parameters (726k in total). Our second model uses 52k parameters, losing only 3.86% test accuracy. We also performed an analysis on ImageNet, where we reached 66.49% validation accuracy with 1.6M parameters. We make the code publicly available at: https://github.com/NoSavedDATA/YinYang CNN.
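Purely as a speculative sketch of the color/form separation described above (not the authors' actual blocks), a two-path stem in PyTorch might look like this: a 1x1-convolution path on the RGB input for color and a larger-kernel path on a luminance projection for form.

```python
import torch
import torch.nn as nn

class YinYangStem(nn.Module):
    """Speculative two-path stem: a 'color' path using 1x1 convolutions on
    the RGB input and a 'form' path using larger kernels on a grayscale
    projection, fused by channel concatenation."""
    def __init__(self, out_ch=32):
        super().__init__()
        self.color = nn.Sequential(nn.Conv2d(3, out_ch, 1), nn.ReLU())
        self.form = nn.Sequential(nn.Conv2d(1, out_ch, 5, padding=2), nn.ReLU())

    def forward(self, x):
        gray = x.mean(dim=1, keepdim=True)   # crude luminance proxy for "form"
        return torch.cat([self.color(x), self.form(gray)], dim=1)

print(YinYangStem()(torch.randn(1, 3, 32, 32)).shape)  # -> (1, 64, 32, 32)
```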
Estilos ABNT, Harvard, Vancouver, APA, etc.
7

Ailem, Melissa, Francois Role e Mohamed Nadif. "Sparse Poisson Latent Block Model for Document Clustering (Extended Abstract)". In 2018 IEEE 34th International Conference on Data Engineering (ICDE). IEEE, 2018. http://dx.doi.org/10.1109/icde.2018.00229.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
8

Doersch, Stefan, Maria Starnberg e Haike Brick. "Acoustic Certification of New Composite Brake Blocks". In EuroBrake 2021. FISITA, 2021. http://dx.doi.org/10.46720/1766833eb2021-stp-022.

Texto completo da fonte
Resumo:
In the latest amendment to the TSI Noise, the Commission Implementing Regulation (EU) 2019/774 of 2019 (TSI NOI EU 2019/774, 2019), the term "quieter brake blocks" was introduced. The purpose was to distinguish between brake blocks that cause a high rolling noise level by roughening the surface of the wheels and quieter brake blocks with acoustic properties that better correspond to the pass-by noise limit for freight wagons. However, it has remained an open point which methods and procedures should be used for the assessment of the acoustic properties of new brake blocks. This open point shall be closed in the new revision of the TSI Noise, which will become effective in 2022. It requires a new acoustic certification procedure for brake blocks to be developed. A new procedure for the acoustic certification of new brake blocks should be reliable, easy to use, and less expensive in terms of time and cost than full-scale pass-by noise measurements in the field. These conditions could be fulfilled by a certification procedure based on the wheel roughness level caused by the specific brake block. The relationship to the TSI noise limit value can be established by defining reference values for the rail roughness and the transfer function according to the well-established rolling noise model. Besides the certification procedure, a practical method should be defined for generating and assessing the wheel roughness that is characteristic of a specific brake block product. This project is financed by the German Centre for Rail Traffic Research in cooperation with the Federal Railway Authority and executed by DB Systemtechnik GmbH. The objective of the presentation is to introduce the research project "Acoustic Certification of New Composite Brake Blocks". The presentation summarizes the project work so far and gives explanations and background on the development of the methods as well as on railway noise. A calculation example is given to demonstrate the proposed procedure comprehensibly. At the time of the EuroBrake conference the project is still ongoing, and the final results cannot yet be presented. The focus of the discussions is on the practicability of the methods and the needs of the user regarding, for instance, documentation, required effort, materials, and qualification.
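To illustrate the kind of calculation such a roughness-based procedure implies, the sketch below energy-sums a wheel roughness level with a reference rail roughness level and applies a single transfer value; all numbers are hypothetical, and a real assessment would work per one-third-octave wavelength band with the reference spectra and transfer functions defined in the regulation.

```python
import numpy as np

def db_energy_sum(*levels_db):
    """Energy (incoherent) sum of levels given in dB."""
    return 10 * np.log10(sum(10 ** (np.asarray(l) / 10) for l in levels_db))

# Hypothetical values for a single wavelength band (dB re 1 um):
L_wheel = 12.0        # wheel roughness caused by the brake block under test
L_rail_ref = 8.0      # defined reference rail roughness
transfer_db = 70.0    # hypothetical reference roughness-to-noise transfer value

L_total = db_energy_sum(L_wheel, L_rail_ref)   # combined effective roughness
L_pass_by = L_total + transfer_db              # indicative pass-by noise level
print(round(float(L_total), 1), round(float(L_pass_by), 1))
```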
Estilos ABNT, Harvard, Vancouver, APA, etc.
9

Wang, Yujin, DeZhong Wang, Junlian Yin e Yaoyu Hu. "The Use of Experimental Design for the Shrink-Fit Assembly of Multi-Ring Flywheel". In 2014 22nd International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/icone22-30359.

Texto completo da fonte
Resumo:
The flywheel of the latest coolant pump provides high inertia to ensure a slow decrease in coolant flow and prevent fuel damage after a loss of power. The flywheel comprises a hub, twelve tungsten alloy blocks, and a retainer ring shrink-fit assembled on the outer surface of the blocks. In the structural integrity analysis, the shrinkage load due to the shrink fit and the centrifugal load due to rotation are considered, so the wall thickness of the retainer ring and the magnitude of the shrink fit are key variables. In particular, these variables change the flywheel's running state. To account for their influence, we employ a Latin hypercube design to obtain a response surface model and analyze the effect of these variables. Finally, we obtain the wall thickness of the retainer ring and the range of the shrink fit.
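A minimal sketch of how such a Latin hypercube design can be generated with scipy; the number of samples and the variable bounds are hypothetical placeholders, not the values used in the study.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample over the two design variables in the study:
# retainer-ring wall thickness and shrink-fit magnitude (bounds are illustrative)
sampler = qmc.LatinHypercube(d=2, seed=42)
unit = sampler.random(n=20)                   # 20 points in [0, 1)^2
lo, hi = [60.0, 0.10], [120.0, 0.40]          # hypothetical bounds, mm
designs = qmc.scale(unit, lo, hi)
print(designs[:3])
# Each row is one (wall thickness, shrink-fit) combination to evaluate,
# whose responses would then be fit with a response surface model.
```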
Estilos ABNT, Harvard, Vancouver, APA, etc.
10

Lomet, Aurore, Gerard Govaert e Yves Grandvalet. "An Approximation of the Integrated Classification Likelihood for the Latent Block Model". In 2012 IEEE 12th International Conference on Data Mining Workshops. IEEE, 2012. http://dx.doi.org/10.1109/icdmw.2012.32.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.

Relatórios de organizações sobre o assunto "Latent Blocks Models"

1

Zagorevski, A., e C. R. van Staal. Cordilleran magmatism in Yukon and northern British Columbia: characteristics, temporal variations, and significance for the tectonic evolution of the northern Cordillera. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/326063.

Texto completo da fonte
Resumo:
Geochemical and temporal characterization of magmatic rocks is an effective way to test terrane definitions and to evaluate tectonic models. In the northern Cordillera, magmatic episodes are mostly interpreted as products of continental arc and back-arc settings. Re-evaluation of Paleozoic and Late Mesozoic magmatic episodes presented herein highlights fundamental gaps in the understanding of the tectonic framework of the northern Cordillera. In many cases, the character of magmatism and temporal relationships between various magma types do not support existing tectonic models. The present re-evaluation indicates that some of the magmatic episodes are best explained by lithospheric extension rather than arc magmatism. In addition, comparison to modern analogues suggests that many presently defined terranes are not the fundamental tectonic building blocks, but rather combine distinctly different tectonic elements that may not be related each other. Grouping of these distinctly different tectonic elements into single terranes hinders the understanding of Cordilleran evolution and its mineral deposits.
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

Sentcоv, Valentin, Andrei Reutov e Vyacheslav Kuzmin. Electronic training manual "Acute poisoning with alcohols and alcohol-containing liquids". SIB-Expertise, janeiro de 2024. http://dx.doi.org/10.12731/er0778.29012024.

Texto completo da fonte
Resumo:
In the structure of acute poisonings, ethanol poisoning currently accounts, according to various sources, for 10 to 20%. The mortality rate in poison control centers for ethanol poisoning is 1-2%, but the overall mortality is much higher owing to those who die before medical care is provided. The widespread use of methanol and ethylene glycol in various industries, and the high mortality rate when poisoning with these alcohols is recognized late, make a detailed study of the clinical picture, diagnosis, and treatment of these poisonings highly relevant for doctors of various specialties: in particular, toxicologists in health care institutions, anesthesiologists and resuscitators, doctors of specialized emergency medical services teams, and disaster medicine doctors. Competent and timely diagnosis, hospitalization in a specialized hospital, and early initiation of treatment greatly increase the patient's chances of survival and subsequent quality of life. This electronic educational resource consists of six theoretical educational modules: general issues of clinical toxicology, acute poisoning with veratrine, acute poisoning with ethanol, poisoning with methanol, poisoning with ethylene glycol, and acute poisoning with other alcohols. The theoretical block of modules is presented through presentations and lecture texts with illustrations. A control lesson in the form of a test accompanies each theoretical module. After studying all modules, the student passes a final test. Mastering the electronic educational resource will ensure a high level of readiness to provide specialized toxicological care by doctors of various specialties.
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

Knudsen, Tyler R. Interim Geologic Map of the Parowan Quadrangle, Iron County, Utah. Utah Geological Survey, junho de 2024. http://dx.doi.org/10.34191/ofr-764.

Texto completo da fonte
Resumo:
The Parowan 7.5' quadrangle is centered around the City of Parowan at the eastern margin of the Basin and Range Province in Iron County, southwestern Utah. The quadrangle covers part of the northwestern flank of the Markagunt Plateau and part of the adjacent Parowan Valley. Interstate 15 crosses the northwestern corner of the map area. Parowan Creek and its tributaries have carved deep canyons into the Markagunt Plateau, exposing a succession of sedimentary and volcanic rocks ranging in age from Late Cretaceous to Middle Pleistocene. The modern landscape is dominated by northeast-southwest-trending high-angle normal faults that form a series of horsts and grabens. The largest graben, Parowan Valley, is bounded by the Parowan fault on the southeast and is part of the transitional boundary between the Colorado Plateau to the east and the Basin and Range Province to the west. Large down-to-the-west displacements on the Parowan and the subparallel Paragonah faults have formed the precipitous Hurricane Cliffs. Along the base of the Hurricane Cliffs, Cretaceous through Eocene strata dip moderately to steeply northwest as part of the Cedar City-Parowan monocline, indicating that the eastward progression of Sevier deformation in this area extended into the Eocene. Extensive mass-wasting deposits consisting largely of Oligocene and Miocene volcanic rocks are preserved within four major northeast-trending grabens that traverse the Markagunt Plateau and are absent on upthrown blocks. Mass-wasting deposits range from Miocene regional-scale gravity-slide deposits to modern localized landsliding and slumping of weak, oversteepened units. The Parowan fault and nearby intrabasin faults in Parowan Valley have locally displaced Late Pleistocene to Holocene alluvial-fan deposits, indicating that the faults should be considered hazardous.
Estilos ABNT, Harvard, Vancouver, APA, etc.
4

Tao, Yang, Amos Mizrach, Victor Alchanatis, Nachshon Shamir e Tom Porter. Automated imaging broiler chicksexing for gender-specific and efficient production. United States Department of Agriculture, dezembro de 2014. http://dx.doi.org/10.32747/2014.7594391.bard.

Texto completo da fonte
Resumo:
Extending the previous two years of research results (Mizrach et al., 2012; Tao, 2011, 2012), the third year's efforts in both Maryland and Israel were directed towards the engineering of the system. The activities included robust chick handling and conveyor system development, optical system improvement, online dynamic motion imaging of chicks, multi-image-sequence optimal feather extraction and detection, and pattern recognition.
Mechanical system engineering. The third model of the mechanical chick handling system with a high-speed imaging system was built as shown in Fig. 1. This system has improved chick holding cups and motion mechanisms that enable chicks to open their wings through the view section. The mechanical system has achieved a speed of 4 chicks per second, which exceeds the design spec of 3 chicks per second. In the center of the conveyor, a high-speed camera with a UV-sensitive optical system, shown in Fig. 2, was installed; it captures chick images at multiple frames (45 images, system selectable) as the chick passes through the view area. Through intensive discussions and efforts, the PIs of Maryland and ARO have created a protocol of joint hardware and software that uses sequential images of the chick in its fall to capture the opening wings and extract the optimal opening positions. This approach enables reliable feather feature extraction in dynamic motion and pattern recognition.
Improving chick wing deployment. The mechanical system for chick conveying, and especially the section that causes chicks to deploy their wings wide open under the fast video camera and the UV light, was investigated during the third study year. As a natural behavior, chicks tend to deploy their wings as a means of balancing their body when a sudden change in vertical movement is applied. In the previous two years, this was achieved by causing the chicks to move in free fall, under earth gravity (g), along a short vertical distance. The chicks always tended to deploy their wings, but not always in a wide, horizontally open position; such a position is required in order to get a successful image under the video camera. Besides, the cells with chicks bumped suddenly at the end of the free-fall path, which caused the chicks' legs to collapse inside the cells and the image of the wings to become blurred. To improve the movement and prevent the chicks' legs from collapsing, a slowing-down mechanism was designed and tested. This was done by installing a plastic block, printed with a pre-designed variable slope (Fig. 3), at the end of the path of the falling cells (Fig. 4). The cells move down at a variable velocity according to the block slope and reach zero velocity at the end of the path. The slope was designed so that the deceleration becomes 0.8g instead of the free-fall gravity (g) experienced without the block. The tests showed better wing deployment and wider opening, as well as better balance along the movement. The design of additional block slopes is under investigation: slopes that create decelerations of 0.7g and 0.9g, as well as variable decelerations, are being designed to improve the movement path and the images.
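As a worked example of what the 0.8g figure implies, the short calculation below converts a hypothetical free-fall distance into the ramp length and time needed to stop the cell at a constant 0.8g deceleration; the drop height is an assumption, not a value reported by the project.

```python
import math

g = 9.81          # m/s^2
h = 0.05          # hypothetical free-fall distance of the cell, m
a = 0.8 * g       # target deceleration reported for the printed slope block

v = math.sqrt(2 * g * h)   # speed at the end of the free fall
d = v ** 2 / (2 * a)       # ramp length needed to stop at constant 0.8g (= h / 0.8)
t = v / a                  # time spent decelerating
print(f"impact speed {v:.2f} m/s, stopping distance {d*100:.1f} cm, {t*1000:.0f} ms")
```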
Estilos ABNT, Harvard, Vancouver, APA, etc.
5

Karacic, Almir, e Anneli Adler. Fertilization of poplar plantations with dried sludge : a demonstration trial in Hillebola - central Sweden. Department of Crop Production Ecology, Swedish University of Agricultural Sciences, 2023. http://dx.doi.org/10.54612/a.2q9iahfphk.

Texto completo da fonte
Resumo:
Wastewater sludge contains essential nutrients for plant growth and is frequently used as fertilizer in European agriculture. Sludge contains elevated concentrations of heavy metals, microplastics, and other substances that may pose potential risks to human health and the environment. Nevertheless, dried pelletized sludge emerges as a viable product for fertilizing short-rotation poplar plantations within a circular model, enabling nutrient recycling and converting waste into a valuable resource to enhance biomass production for different markets. In Hillebola, central Sweden, we demonstrated the application of dried pelletized sludge to pilot plantations with climate-adapted Populus trichocarpa clones. The trial was established in four blocks with four treatments three years after the poplar trees were planted. The treatments were: mineral NPK fertilizer + soil cultivation between poplar rows, dried pelletized sludge + soil cultivation, no fertilization + soil cultivation only, and control (no treatments). The effect of fertilization on poplar growth was evaluated two years later, after the fifth growing season. The results showed a significantly improved basal area increment in the NPK and sludge treatments compared to the control. The ground vegetation inventory revealed substantial differences in weed biomass between control and cultivated plots. Control plots contained double the amount of aboveground grass and herbaceous biomass (8.6 ton ha^-1) compared to cultivated and cultivated + fertilized plots. The low-intensity Nordic-Baltic poplar establishment practices allow a substantial amount of ground vegetation to develop until canopy closure, potentially contributing to the soil carbon pool more than is usually recognized when modeling carbon balances in short-rotation poplar plantations, which is the theme of our next report.
6

Chejanovsky, Nor, and Suzanne M. Thiem. Isolation of Baculoviruses with Expanded Spectrum of Action against Lepidopteran Pests. United States Department of Agriculture, December 2002. http://dx.doi.org/10.32747/2002.7586457.bard.

Full text of the source
Abstract:
Our long-term goal is to learn to control (expand and restrict) the host range of baculoviruses. In this project our aim was to expand the host range of the prototype baculovirus Autographa californica nuclear polyhedrosis virus (AcMNPV) towards American and Israeli pests. To achieve this objective we studied AcMNPV infection in the non-permissive hosts L. dispar and S. littoralis (Ld652Y and SL2 cells, respectively) as a model system, together with the major barriers to viral replication. We isolated recombinant baculoviruses with expanded infectivity towards L. dispar and S. littoralis and tested their infectivity towards other Lepidopteran pests. The restricted host range displayed by baculoviruses constitutes an obstacle to their further implementation in the control of diverse Lepidopteran pests, increasing development costs. Our work points out that cellular defenses are major blocks to AcMNPV replication in non- and semi-permissive hosts. Therefore, a major determinant of baculovirus host range is the ability of the virus to effectively counter the cellular defenses of host cells. This is exemplified by our findings showing that expressing the viral gene Ldhrf-1 overcomes global translation arrest in AcMNPV-infected Ld652Y cells. Our data suggest that Ld652Y cells have two anti-viral defense pathways, because they are subject to global translation arrest when infected with AcMNPV carrying a baculovirus apoptotic suppressor (e.g., wild-type AcMNPV carrying p35, or recombinant AcMNPV carrying Op-iap, Cp-iap, or p49 genes) but apoptose when infected with AcMNPV lacking a functional apoptotic suppressor. We have yet to elucidate how hrf-1 precludes the translation arrest mechanism(s) in AcMNPV-infected Ld652Y cells. Ribosomal profiles of AcMNPV-infected Ld652Y cells suggested that translation initiation is a major control point, but we were unable to rule out a contribution from a block in translation elongation. Phosphorylation of eIF-2a did not appear to play a role in AcMNPV-induced translation arrest. Mutagenesis studies of hrf-1 suggest that a highly acidic domain plays a role in precluding translation arrest. Our findings indicate that translation arrest may be linked to apoptosis either through common sensors of virus infection or as a consequence of late events in the virus life cycle that occur only if apoptosis is suppressed. AcMNPV replicates poorly in SL2 cells and induces apoptosis. Our studies in AcMNPV-infected SL2 cells led us to conclude that the steady-state levels of IE1 (product of the ie1 gene, the major AcMNPV transactivator and a multifunctional protein) relative to those of the immediate early viral protein IE0 play a critical role in regulating the viral infection. By increasing the IE1/IE0 ratio we achieved AcMNPV replication in S. littoralis, and we were able to isolate recombinant AcMNPVs that replicated efficiently in S. littoralis cells and larvae. Our data indicated that AcMNPV infection may be regulated by an interaction between IE1 and IE0 (of previously unknown function). Indeed, we showed that IE1 associates with IE0 using protein "pull-down" and immunoprecipitation approaches. High steady-state levels of "functional" IE1 resulted in increased expression of the apoptosis suppressor p35, facilitating AcMNPV replication in SL2 cells. Finally, we determined that IE0 accelerates the viral infection in AcMNPV-permissive cells. Our results show that expressing viral genes able to overcome the insect pest's defense system enables expansion of the baculovirus host range. Scientifically, this project highlights the need to further study the anti-viral defenses of invertebrates, not only to maximize the possibilities for manipulating baculovirus genomes, but also to better understand the evolutionary underpinnings of the immune systems of vertebrates towards virus infection.
7

Johnson, Derek, and Nigel Clark. PR-746-22204-R01 Review of Technologies to Enable In-situ Valve Service to Reduce Methane Emissions. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), March 2024. http://dx.doi.org/10.55274/r0000058.

Full text of the source
Abstract:
Leaking gas industry valve stem seals are recognized as a substantial source of atmospheric methane, which is a greenhouse gas. Newly proposed regulations include methane alongside volatile organic compound emissions, with leak detection and repair requirements. If a leak is identified, a first attempt at repair must occur no later than five calendar days after identification, unless the delay can be justified. The objective of this report is to review valve technologies and methods for in-situ valve stem leak repair that offer an economical solution with reduced service disruption. A wide variety of valves are employed in natural gas facilities, with valve stems that rotate or translate, and with seals ranging from packings to O-rings. Low-emissions valve designs are available, but turnover of legacy valves is slow. Precise causes of failures are not well documented, although stem misalignment, intrusive dirt, and corrosion may exacerbate seal damage. Injection of lubricants and sealants into the valve packing or seal area offers the simplest remedy for leakage control. However, more work is required to identify optimal sealants for each application and to predict the durability of repairs made by injection. Safety must be assured where seals must be replaced, but practices vary in addressing isolation of the seal area from high-pressure gas. Where double block (or isolation) and bleed are required, knowledge of the valve's main seal design is essential. Blowdown of line sections may be required, but the resulting methane release is of concern; methods exist to capture or else oxidize the methane. Opinions on the protection offered by backseating of gate valves vary. Improved understanding and practice will require comprehensive record keeping on the history of each valve, permitting analysis and quality improvement using the resulting operations database. This is key to the recommendations in a future roadmap that includes study of failure modes and optimized use of sealants. Monitoring the success of repairs would be better served by measuring leak rate rather than concentration. Record keeping and a better understanding of failures and of the success of repair approaches also support decisions on immediate versus deferred repair, on the use of sealants, and on whether a valve should be replaced or repaired. Hardware and practice innovations are anticipated in response to leak detection and repair requirements.
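The report's call for comprehensive per-valve record keeping suggests a simple operations-database schema. The sketch below is hypothetical: the field names and the durability helper are illustrative choices, not a PRCI specification; only the preference for leak rate over concentration and the five-day repair window come from the report.

```python
# A hypothetical minimal schema for the per-valve operations database the
# report recommends; field names and the durability helper are illustrative.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ServiceEvent:
    when: date
    action: str                        # e.g. "sealant injection", "seal replacement"
    leak_rate_before: Optional[float]  # measured leak rate (e.g. g CH4/h);
    leak_rate_after: Optional[float]   # rate is preferred over concentration
    deferred: bool = False             # repair delayed past the 5-day window?

@dataclass
class ValveRecord:
    valve_id: str
    valve_type: str                    # gate, ball, plug, ...
    stem_motion: str                   # "rotating" or "translating"
    seal_design: str                   # packing, O-ring, low-emission design, ...
    history: List[ServiceEvent] = field(default_factory=list)

    def last_repair_interval_days(self) -> Optional[int]:
        """Days between the last two service events: a crude proxy for how
        long the previous repair (e.g. a sealant injection) held."""
        if len(self.history) < 2:
            return None
        return (self.history[-1].when - self.history[-2].when).days
```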
8

Meir, Shimon, Michael Reid, Cai-Zhong Jiang, Amnon Lers, and Sonia Philosoph-Hadas. Molecular Studies of Postharvest Leaf and Flower Abscission. United States Department of Agriculture, 2005. http://dx.doi.org/10.32747/2005.7696523.bard.

Full text of the source
Abstract:
Original objectives: Understanding the regulation of abscission competence, by exploring the nature and function of auxin-related gene expression changes in the leaf and pedicel AZs of tomato (as a model system), was the main goal of the previously submitted proposal. We proposed to achieve this goal by using microarray GeneChip analysis to identify potential target genes for functional analysis by virus-induced gene silencing (VIGS). To increase the potential of accomplishing the objectives of the previously submitted proposal, we were asked by BARD to show feasibility for the use of these two modern techniques in our abscission system. Thus, the following new objectives were outlined for the one-year feasibility study: 1. to demonstrate the feasibility of the VIGS system in tomato to perform functional analysis of known abscission-related genes; 2. to demonstrate that by using microarray analysis we can identify target genes for further VIGS functional analysis. Background to the topic: It is a generally accepted model that auxin flux through the abscission zone (AZ) prevents organ abscission by rendering the AZ insensitive to ethylene. However, the molecular mechanisms responsible for acquisition of abscission competence and the way in which the auxin gradient modulates it are still unknown. Understanding this basic stage of the abscission process may provide us with future tools to control abscission for agricultural applications. Based on our previous study, performed to investigate the molecular changes occurring in leaf and stem AZs of Mirabilis jalapa L., we have expanded our research to tomato, using genomic approaches that include modern techniques for gene discovery and functional gene characterization. In our one-year feasibility study, the US team established a useful system for VIGS in tomato, using vectors based on the tobacco rattle virus (TRV), an Lc reporter gene for silencing (involved in regulation of anthocyanin biosynthesis), and the gene of interest. In parallel, the Israeli team used the newly released Affymetrix Tomato GeneChip to measure gene expression in AZ and non-AZ tissues at various time points after flower removal, when increased sensitivity to ethylene is acquired prior to abscission (at 0-8 h), and during pedicel abscission (at 14 h). In addition, gene expression was measured in the pedicel AZ pretreated with the ethylene action inhibitor 1-methylcyclopropene (1-MCP) before flower removal, to block any direct effects of ethylene. Major conclusions, solutions and achievements: 1) The feasibility study unequivocally established that VIGS is an ideal tool for testing the function of genes with putative roles in abscission; 2) The newly released Affymetrix Tomato GeneChip was found to be an excellent tool to identify AZ genes possibly involved in the regulation and execution of abscission. The VIGS-based study allowed us to show that TAPG, a polygalacturonase specifically associated with the tomato AZ, is a key enzyme in the abscission process. Using the newly released Affymetrix Tomato GeneChip we identified potential abscission regulatory genes as well as new AZ-specific genes whose expression was modified after flower removal. These include: members of the Aux/IAA gene family, ethylene signal transduction-related genes, early- and late-expressed transcription factors, genes encoding post-translational regulators whose expression was modified specifically in the AZ, and many additional novel AZ-specific genes not previously associated with abscission. This microarray analysis allowed us to select an initial set of target genes for further functional analysis by VIGS. Implications: Our success in achieving the two objectives of this feasibility study provides us with a solid basis for the further research outlined in the original proposal. This will significantly increase the probability of success of a full 3-year project. Additionally, our feasibility study yielded highly innovative results, as they represent the first direct demonstration of the functional involvement of a TAPG in abscission, and the first microarray analysis of the abscission process. Using these approaches we could identify a large number of genes involved in abscission regulation, initiation and execution, and in auxin-ethylene cross-talk, which are of great importance and are candidates for functional analysis by VIGS.
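As an illustration of the kind of contrast the GeneChip analysis makes, the sketch below flags simulated genes whose expression changes in the AZ between two time points after flower removal but stays flat in non-AZ tissue. Thresholds, time points, and values are invented for the example.

```python
# Simulated illustration of an AZ-specific differential expression filter:
# flag genes whose expression changes in the abscission zone (AZ) between
# 0 h and 8 h after flower removal but not in non-AZ tissue.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
genes = [f"gene_{i}" for i in range(1000)]
expr = pd.DataFrame({
    "az_0h":     rng.normal(8.0, 0.2, 1000),   # log2 expression values
    "az_8h":     rng.normal(8.0, 0.2, 1000),
    "non_az_0h": rng.normal(8.0, 0.2, 1000),
    "non_az_8h": rng.normal(8.0, 0.2, 1000),
}, index=genes)
expr.loc["gene_0", "az_8h"] += 3.0              # spike in one known AZ response

lfc_az = expr["az_8h"] - expr["az_0h"]           # log2 fold change in the AZ
lfc_non = expr["non_az_8h"] - expr["non_az_0h"]  # and in non-AZ tissue
az_specific = expr.index[(lfc_az.abs() > 2.0) & (lfc_non.abs() < 1.0)]
print(list(az_specific))                         # expect ['gene_0']
```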
9

Harris, L. B., P. Adiban, and E. Gloaguen. The role of enigmatic deep crustal and upper mantle structures on Au and magmatic Ni-Cu-PGE-Cr mineralization in the Superior Province. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/328984.

Full text of the source
Abstract:
Aeromagnetic and ground gravity data for the Canadian Superior Province, filtered to extract long-wavelength components and converted to pseudo-gravity, highlight deep, N-S trending, regional-scale, rectilinear faults and margins to discrete, competent mafic or felsic granulite blocks (i.e. at high angles to most regionally mapped structures and sub-province boundaries) with little to no surface expression that are spatially associated with lode ('orogenic') Au and Ni-Cu-PGE-Cr occurrences. Statistical and machine learning analysis of the Red Lake-Stormy Lake region in the W Superior Province confirms the visual observation of a greater correlation between Au deposits and these deep N-S structures than with mapped surface to upper crustal, generally E-W trending, faults and shear zones. Porphyry Au, Ni, Mo and U-Th showings are also located above these deep transverse faults. Several well-defined concentric circular to elliptical structures identified in the Oxford Stull and Island Lake domains along the S boundary of the N Superior proto-craton, intersected by N- to NNW-striking extensional fractures and/or faults that transect the W Superior Province, again with little to no direct surface or upper crustal expression, are spatially associated with magmatic Ni-Cu-PGE-Cr and related mineralization and Au occurrences. The McFaulds Lake greenstone belt, a.k.a. the 'Ring of Fire', constitutes only a small, crescent-shaped belt within one of these concentric features, above which 2736-2733 Ma mafic-ultramafic intrusive bodies were emplaced. The Big Trout Lake igneous complex that hosts Cr-Pt-Pd-Rh mineralization west of the Ring of Fire lies within a smaller concentrically ringed feature at depth and, near the Ontario-Manitoba border, the Lingman Lake Au deposit, numerous Au occurrences and minor Ni showings are similarly located on concentric structures. Preliminary magnetotelluric (MT) interpretations suggest that these concentric structures also have an expression in the subcontinental lithospheric mantle (SCLM) and that lithospheric mantle resistivity features trend N-S as well as E-W. With diameters between ca. 90 km and 185 km, the elliptical structures are similar in size and internal geometry to coronae on Venus, which geomorphological, radar, and gravity interpretations suggest formed above mantle upwellings. Emplacement of mafic-ultramafic bodies hosting Ni-Cr-PGE mineralization along these ring-like structures at their intersection with coeval deep transverse, ca. N-S faults (viz. phi structures), along with their location along the margin of the N Superior proto-craton, is consistent with secondary mantle upwellings portrayed in numerical models of a mantle plume beneath a craton with a deep lithospheric keel within a regional N-S compressional regime. Early, regional ca. N-S faults in the W Superior were reactivated as dilatational antithetic (secondary Riedel/R') sinistral shears during dextral transpression and as extensional fractures and/or normal faults during N-S shortening. The Kapuskasing structural zone or uplift likely represents Proterozoic reactivation of a similar deep transverse structure. Preservation of discrete faults in the deep crust beneath zones of distributed Neoarchean dextral transcurrent to transpressional shear zones in the present-day upper crust suggests a 'millefeuille' lithospheric strength profile, with competent SCLM, mid- to deep-, and upper-crustal layers. Mechanically strong deep crustal felsic and mafic granulite layers are attributed to dehydration and melt extraction. Intra-crustal decoupling along a ductile décollement in the W Superior led to the preservation of early-formed deep structures that acted as conduits for magma transport into the overlying crust and focussed hydrothermal fluid flow during regional deformation. An increase in the thickness of semi-brittle layers in the lower crust during regional metamorphism would result in increased fracturing and faulting in the lower crust, facilitating hydrothermal and carbonic fluid flow in pathways linking the SCLM to the upper crust, a factor explaining the late timing of most orogenic Au. The results provide an important new dataset for regional prospectivity mapping, especially with machine learning, and for exploration targeting for Au and Ni-Cr-Cu-PGE mineralization. The results also furnish evidence for parautochthonous development of the S Superior Province during plume-related rifting and cannot be explained by conventional subduction and arc-accretion models.
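The long-wavelength extraction step described above can be illustrated with a simple FFT low-pass filter over a regular grid. This is a minimal sketch under assumed grid parameters (spacing and cutoff are invented), and the pseudo-gravity transform itself is omitted.

```python
# A minimal sketch, under assumed grid parameters, of low-pass filtering a
# potential-field grid to keep only its long-wavelength components.
import numpy as np

def lowpass_grid(grid: np.ndarray, dx_km: float, cutoff_km: float) -> np.ndarray:
    """Keep only wavelengths longer than cutoff_km in a regular 2D grid."""
    ny, nx = grid.shape
    kx = np.fft.fftfreq(nx, d=dx_km)   # spatial frequencies, cycles/km
    ky = np.fft.fftfreq(ny, d=dx_km)
    kxx, kyy = np.meshgrid(kx, ky)     # shape (ny, nx)
    kr = np.hypot(kxx, kyy)            # radial wavenumber
    mask = kr < (1.0 / cutoff_km)      # pass wavelengths longer than cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * mask))

# Usage on a synthetic 1 km grid: keep features broader than ~100 km,
# i.e. roughly the deep-crust/SCLM scale discussed in the report.
field = np.random.default_rng(2).normal(size=(256, 256))
deep_component = lowpass_grid(field, dx_km=1.0, cutoff_km=100.0)
```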
10

Perl-Treves, Rafael, Rebecca Grumet, Nurit Katzir, and Jack E. Staub. Ethylene Mediated Regulation of Sex Expression in Cucumis. United States Department of Agriculture, January 2005. http://dx.doi.org/10.32747/2005.7586536.bard.

Full text of the source
Abstract:
Monoecious species such as melon and cucumber develop separate male and female (or bisexual) flowers on the same individual plant. They display complex genetic and hormonal regulation of sex patterns along the plant. Ethylene is known to play an important role in promoting femaleness and inhibiting male development, but many questions regarding critical sites of ethylene production versus perception, the relationship between ethylene and the sex-determining loci, and the possible differences between melon and cucumber in this respect are still open. The general goal of the project was to elucidate the role of ethylene in determining flower sex in Cucumis species, melon and cucumber. The specific objectives were: 1. Clone and characterize expression patterns of cucumber genes involved in ethylene biosynthesis and perception. 2. Genetically map cloned genes and markers with respect to sex loci in melon and cucumber. 3. Produce and analyze transgenic melons altered in ethylene production or perception. In the course of the project, some modifications/adjustments were made: under Objective 2 (genetic mapping), a set of new mapping populations had to be developed to allow better detection of polymorphism. Under Objective 3, cucumber transformation systems became available to us, and we included this second model species in our plan. The main findings of our study support the pivotal role of ethylene in cucumber and melon sex determination and in later stages of reproductive development. Modifying ethylene production resulted in profound alteration of sex patterns in melon: femaleness increased, and flower maturation and fruit set were also enhanced, resulting in earlier, more concentrated fruit yield in the field. Such an effect was previously unknown and could have agronomic value. Our results also demonstrate the great importance of ethylene sensitivity in sex expression. Ethylene perception genes are expressed in sex-related patterns; e.g., gynoecious lines express higher levels of receptor transcripts, and copper treatments that activate the receptor can increase femaleness. Transgenic cucumbers with increased expression of an ethylene receptor showed enhanced femaleness. Melons that expressed a defective receptor produced fewer hermaphrodite flowers and were insensitive to exogenous ethylene. When expression of the defective receptor was restricted to specific floral whorls, we saw that pistils were not inhibited by the blocked perception in the fourth whorl. Such unexpected findings suggest an indirect effect of ethylene on the affected whorl; they also point to interesting differences between melon and cucumber regarding the mode of action of ethylene. Such effects will require further study. Finally, our project also generated and tested a set of novel genetic tools for finer identification of sex-determining genes in the two species and for efficient breeding for these characters. Populations that allow easier linkage analysis of candidate genes with each sex locus were developed. Moreover, the effects of modifier genes on the major femaleness trait were resolved. QTL analysis of femaleness and related developmental traits was conducted, and a comprehensive set of Near Isogenic Lines that differ in specific QTLs was prepared and made available for private and public research. Marker-assisted selection (MAS) of femaleness and fruit yield components was directly compared with phenotypic selection in field trials, and the relative efficiency of MAS was demonstrated. Such a level of genetic resolution and such advanced tools had not been used before to study these traits, which act as primary yield components determining the economic yield of cucurbits. In addition, this project resulted in the establishment of workable transformation procedures in our laboratories, and these can be further utilized to study the function of sex-related genes in detail.