Theses on the topic "Modèles des blocs latents"
Consult the 36 best theses for your research on the topic "Modèles des blocs latents".
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Brault, Vincent. "Estimation et sélection de modèle pour le modèle des blocs latents". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112238/document.
Classification aims at partitioning data sets into homogeneous subsets: the observations in a class are more similar to one another than to the observations of other classes. The problem is compounded when the statistician wants to obtain a cross-classification of the individuals and the variables. The latent block model uses a distribution for each crossing of a row class and a column class, and observations are assumed to be independent conditionally on the choice of these classes. However, factorizing the joint distribution of the labels is impossible, obstructing the calculation of the log-likelihood and the use of the EM algorithm. Several methods and criteria exist to find these partitions, some frequentist, some Bayesian, some stochastic... In this thesis, we first proposed sufficient conditions for the identifiability of the model. In a second step, we studied two algorithms proposed to circumvent the problem of the EM algorithm: the VEM algorithm (Govaert and Nadif (2008)) and the SEM-Gibbs algorithm (Keribin, Celeux and Govaert (2010)). In particular, we analyzed the combination of both and highlighted why the algorithms degenerate (that is, return empty classes). By choosing the priors wisely, we then proposed a Bayesian adaptation to limit this phenomenon. In particular, we used a Gibbs sampler and proposed a stopping criterion based on the Brooks-Gelman statistic (1998). We also proposed an adaptation of the Largest Gaps algorithm (Channarond et al. (2012)). Adapting their proofs, we showed that the label and parameter estimators obtained are consistent when the numbers of rows and columns tend to infinity. Furthermore, we proposed a method to select the numbers of row and column classes; the resulting estimate is also consistent when the numbers of rows and columns are very large. To estimate the number of classes, we studied the ICL criterion (Integrated Completed Likelihood), for which we derived an exact expression. After studying its asymptotic approximation, we proposed a BIC criterion (Bayesian Information Criterion) and conjectured that the two criteria select the same results and that these estimates are consistent, a conjecture supported by theoretical and empirical results. Finally, we compared the different combinations and proposed a methodology for co-clustering.
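To make the model concrete, here is a minimal generative sketch of a Bernoulli latent block model in Python (an illustration of the general setup, not code from the thesis; the dimensions, class proportions and block parameters are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 200, 100          # rows (individuals) and columns (variables)
g, m = 3, 2              # numbers of row and column classes (assumed)
rho = np.array([0.5, 0.3, 0.2])      # row class proportions (illustrative)
delta = np.array([0.6, 0.4])         # column class proportions (illustrative)
alpha = rng.uniform(0.05, 0.95, size=(g, m))  # Bernoulli parameter per block

z = rng.choice(g, size=n, p=rho)     # latent row labels
w = rng.choice(m, size=d, p=delta)   # latent column labels

# Conditionally on (z, w), all entries are independent Bernoulli draws
X = rng.binomial(1, alpha[z[:, None], w[None, :]])
print(X.shape, X.mean())
```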
Tami, Myriam. "Approche EM pour modèles multi-blocs à facteurs à une équation structurelle". Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT303/document.
Structural equation models enable the modeling of interactions between observed variables and latent ones. The two leading estimation methods are partial least squares on components and covariance-structure analysis. In this work, we first describe the PLS and LISREL methods and then propose an estimation method using the EM algorithm in order to maximize the likelihood of a structural equation model with latent factors. Through a simulation study, we investigate how fast and accurate the method is, and through an application to real environmental data, we show how one can construct a model in practice and evaluate its quality. Finally, in the context of oncology, we apply the EM approach to health-related quality-of-life data. We show that it simplifies the longitudinal analysis of quality of life and helps evaluate the clinical benefit of a treatment.
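As a toy illustration of the EM principle used here, the following sketch runs EM on a single-latent-factor Gaussian model (an assumption-laden simplification of a structural equation model, not the thesis implementation; dimensions, loadings and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 4
lam_true = np.array([1.0, 0.8, -0.5, 0.3])       # true loadings (toy)
f = rng.normal(size=n)                           # latent factor
X = np.outer(f, lam_true) + rng.normal(scale=0.4, size=(n, p))

lam = rng.normal(size=p)            # loading estimates
psi = np.ones(p)                    # noise variance estimates
for _ in range(100):
    # E-step: the posterior of f given x is Gaussian
    prec = 1.0 + (lam**2 / psi).sum()
    m = (X @ (lam / psi)) / prec    # E[f | x_i]
    s2 = 1.0 / prec                 # Var[f | x_i]
    # M-step: regress each variable on the expected factor
    ef2 = (m**2).sum() + n * s2     # sum of E[f_i^2]
    lam = (X * m[:, None]).sum(axis=0) / ef2
    psi = (X**2).mean(axis=0) - 2 * lam * (X * m[:, None]).mean(axis=0) \
          + lam**2 * ef2 / n
print(np.round(lam, 2))             # recovered up to a sign flip
```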
Robert, Valérie. "Classification croisée pour l'analyse de bases de données de grandes dimensions de pharmacovigilance". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS111/document.
This thesis gathers methodological contributions to the statistical analysis of large datasets in pharmacovigilance. Pharmacovigilance datasets produce sparse and large matrices, and these two characteristics are the main statistical challenges for modelling them. The first part of the thesis is dedicated to the coclustering of the pharmacovigilance contingency table by means of the normalized Poisson latent block model. The objective is, on the one hand, to provide pharmacologists with interesting, reduced areas to explore more precisely; on the other hand, this coclustering provides useful background information for dealing with the individual database. Within this framework, a parameter estimation procedure for this model is detailed and objective model selection criteria are developed to choose the best-fitting model. Because the datasets are so large, we propose a procedure to explore the coclustering model space in a non-exhaustive but relevant way. Additionally, to assess the performance of the methods, a convenient coclustering index is developed to compare partitions with high numbers of clusters. These statistical tools are not specific to pharmacovigilance and can be used for any coclustering issue. The second part of the thesis is devoted to the statistical analysis of the large individual data, which are more numerous but also provide even more valuable information. The aim is to produce clusters of individuals according to their drug profiles, and subgroups of drugs and adverse effects with possible links, which overcomes the coprescription and masking phenomena, common contingency-table issues in pharmacovigilance. Moreover, the interaction between several adverse effects is taken into account. For this purpose, we propose a new model, the multiple latent block model, which makes it possible to cocluster two binary tables by imposing the same row ranking. Assumptions inherent to the model are discussed and sufficient identifiability conditions for the model are presented. Then a parameter estimation algorithm is studied and objective model selection criteria are developed. Moreover, a numerical simulation model of the individual data is proposed to compare existing methods and study their limits. Finally, the proposed methodology for dealing with individual pharmacovigilance data is presented and applied to a sample of the French pharmacovigilance database between 2002 and 2010.
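For intuition, a minimal generative sketch of a Poisson latent block model for a contingency table (illustrative sizes and intensities; the margin rescaling below is a stand-in assumption for the normalization described in the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, g, m = 300, 150, 4, 3
z = rng.choice(g, size=n, p=[0.4, 0.3, 0.2, 0.1])   # drug clusters (rows)
w = rng.choice(m, size=d)                           # adverse-effect clusters (columns)
lam = rng.gamma(shape=2.0, scale=1.0, size=(g, m))  # block intensities

# Row/column margins mimic exposures; the normalized model of the thesis
# conditions on these margins -- here we simply rescale (an assumption).
mu = rng.uniform(0.5, 2.0, size=n)[:, None] * rng.uniform(0.5, 2.0, size=d)[None, :]
X = rng.poisson(mu * lam[z[:, None], w[None, :]])   # sparse count matrix
print(X.shape, X.sum())
```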
Febrissy, Mickaël. "Nonnegative Matrix Factorization and Probabilistic Models : A unified framework for text data". Electronic Thesis or Diss., Paris, CNAM, 2021. http://www.theses.fr/2021CNAM1291.
Since the exponential growth of available data (big data), dimension reduction techniques have become essential for the exploration and analysis of high-dimensional data arising from many scientific areas. By creating a low-dimensional space intrinsic to the original data space, these techniques offer a better understanding across many data science applications. In the context of text analysis, where the data gathered are mainly nonnegative, well-established techniques producing transformations in the space of real numbers (e.g., principal component analysis, latent semantic analysis) become less intuitive as they cannot provide a straightforward interpretation. Such applications show the need for dimension reduction techniques like nonnegative matrix factorization (NMF), useful to embed, for instance, documents or words in a space of reduced dimension. By definition, NMF aims at approximating a nonnegative matrix by the product of two lower-dimensional nonnegative matrices, which results in a nonlinear optimization problem. Note, however, that this factorization can be harnessed for document/word clustering even though clustering is not the stated objective of NMF. Relying on NMF, this thesis focuses on improving the clustering of large text data arising in the form of highly sparse document-term matrices. This objective is first achieved by proposing several types of regularization of the original NMF objective function. Setting this objective in a probabilistic context, a new NMF model is introduced, bringing theoretical foundations for the connection between NMF and finite mixture models of exponential families and thereby offering interesting regularizations. This places NMF in a genuine clustering framework. Finally, a Bayesian Poisson latent block model is proposed to improve document and word clustering simultaneously by capturing noisy term features. This can be connected to nonnegative matrix tri-factorization (NMTF), devoted to co-clustering. Experiments on real datasets have been carried out to support the proposals of the thesis.
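For reference, a minimal sketch of NMF under the generalized Kullback-Leibler cost, using the classical Lee-Seung multiplicative updates that correspond to a Poisson likelihood (toy data; not the regularized or Bayesian models proposed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.poisson(2.0, size=(100, 60)).astype(float)  # toy document-term counts
k = 5                                               # rank / number of topics
W = rng.random((100, k)) + 1e-3
H = rng.random((k, 60)) + 1e-3

for _ in range(200):
    # multiplicative updates for the generalized KL divergence D(X || WH)
    WH = W @ H
    W *= (X / WH) @ H.T / H.sum(axis=1)
    WH = W @ H
    H *= W.T @ (X / WH) / W.sum(axis=0)[:, None]

kl = (X * np.log(X / (W @ H) + 1e-12) - X + W @ H).sum()
print(f"final KL divergence: {kl:.1f}")
```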
Lomet, Aurore. "Sélection de modèle pour la classification croisée de données continues". Compiègne, 2012. http://www.theses.fr/2012COMP2041.
Schmutz, Amandine. "Contributions à l'analyse de données fonctionnelles multivariées, application à l'étude de la locomotion du cheval de sport". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1241.
With the growth of the smart-device market aiming to provide athletes and trainers with a systematic, objective and reliable follow-up, more and more parameters are monitored for the same individual. An alternative to laboratory evaluation methods is the use of inertial sensors, which allow the performance to be followed without hindering it, without space limits and without tedious initialization procedures. Data collected by those sensors can be classified as multivariate functional data: quantitative entities evolving over time and collected simultaneously for the same individual. The aim of this thesis is to find parameters for analysing the locomotion of the athlete horse thanks to a sensor placed in the saddle. This connected device (inertial sensor, IMU) for equestrian sports allows the collection of acceleration and angular velocity over time in the three spatial directions at a sampling frequency of 100 Hz. The database used for model development is made of 3221 canter strides from 58 ridden jumping horses of different ages and levels of competition. Two different protocols are used to collect data: one for straight paths and one for curved paths. We restricted our work to the prediction of three parameters: the speed per stride, the stride length and the jump quality. To meet the first two objectives, we developed a multivariate functional clustering method that allows the division of the database into smaller, more homogeneous sub-groups from the point of view of the collected signals. This method allows the characterization of each group by its average profile, which eases data understanding and interpretation. Surprisingly, however, this clustering model did not improve the results of speed prediction; the Support Vector Machine (SVM) was the model with the lowest percentage of errors above 0.6 m/s. The same applied to the stride length, where an accuracy of 20 cm is reached thanks to the SVM model. Those results can be explained by the fact that our database is built from only 58 horses, which is quite a low number of individuals for a clustering method. We then extended this method to the co-clustering of multivariate functional data in order to ease the data mining of horses' follow-up databases. This method might allow the detection and prevention of locomotor disturbances, the main source of interruption for jumping horses. Lastly, we looked for correlations between jumping quality and the signals collected by the IMU. First results show that the signals collected by the saddle alone are not sufficient to finely differentiate jumping quality. Additional information will be needed, for example using complementary sensors or by expanding the database to have a more diverse range of horses and jump profiles.
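A minimal sketch of one standard route to functional clustering, basis expansion followed by clustering of the coefficients (a generic illustration with an invented polynomial basis and toy curves, not the multivariate method developed in the thesis):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 100)                 # common time grid for all strides
curves = []                                # toy stride signals, two regimes
for i in range(60):
    phase = 0.0 if i < 30 else 0.7
    curves.append(np.sin(2 * np.pi * (t + phase)) + 0.1 * rng.normal(size=t.size))
X = np.array(curves)

# basis expansion: least-squares fit of each curve on a polynomial basis
B = np.vander(t, 8)                              # (100, 8) design matrix
coef, *_ = np.linalg.lstsq(B, X.T, rcond=None)   # (8, 60) coefficients

# cluster the curves through their basis coefficients
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coef.T)
print(labels)
```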
Mero, Gulten. "Modèles à facteurs latents et rentabilités des actifs financiers". Rennes 1, 2010. http://www.theses.fr/2010REN1G011.
This thesis aims at using latent factor models and recent econometric developments in order to obtain a better understanding of underlying asset risk. Firstly, we describe the various latent factor models currently applied in finance as well as the main estimation methodologies. We also present how financial and econometric theories allow us to link statistical factors to economic and financial variables, hence facilitating their interpretation. Secondly, we use a cross-sectional approach in order to explain and interpret the risk profile of hedge fund and stock returns. Our methodology is consistent with statistical properties inherent to large samples as well as the dynamic properties of systematic risk. Thirdly, we model a market where prices and volumes are influenced by intra-day liquidity shocks. We propose a mixture-of-distributions model with two latent factors allowing us to capture the respective impacts of both information shocks and liquidity frictions. This model enables us to build a static stock-specific liquidity measure using daily data. Moreover, we extend our structural model in order to take into account the dynamic properties of liquidity risk. In particular, we distinguish two liquidity issues: intra-day liquidity frictions and illiquidity events deteriorating market quality in a persistent manner. Finally, we use signal extraction modeling in order to build dynamic liquidity measures.
Frichot, Eric. "Modèles à facteurs latents pour les études d'association écologique en génétique des populations". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENS018/document.
We introduce a set of latent factor models dedicated to landscape genomics and ecological association tests. It includes statistical methods for correcting principal component maps for effects of spatial autocorrelation (spFA); methods for estimating ancestry coefficients from large genotypic matrices and evaluating the number of ancestral populations (sNMF); and methods for identifying genetic polymorphisms that exhibit high correlation with some environmental gradient or with the variables used as proxies for ecological pressures (LFMM). We also developed a set of open-source software packages associated with the methods, based on optimized C programs that can scale with the dimension of very large data sets, to run analyses of population structure and genome scans for local adaptation.
Galéra, Cyril. "Construction de blocs géographiques cohérents dans GOCAD". Vandoeuvre-les-Nancy, INPL, 2002. http://www.theses.fr/2002INPLA05N.
The definition of a 3D geological model is a complicated task, done from sparse data, leaving a large space to the geologist's interpretation. Restoration methods help here: they consist in unfolding the model in order to see whether it seems credible before deformation. Nevertheless, the efficiency of this method in 3D is open to question. Another approach, presented in this thesis and implemented in the GOCAD geomodeller, consists in directly building new horizons based on the most reliable data and the supposed deformation style, simple shear or flexural slip. Thanks to these tools, the geologist is thus able to complete and check the model's coherency. The developability, or unfoldability, of the horizons has also been studied in this work in order to implement new unfolding methods and artefact corrections.
Delhomme, Fabien. "Etude du comportement sous impact d'une structure pare-blocs en béton armé". Chambéry, 2005. http://www.theses.fr/2005CHAMS004.
This thesis studies the behaviour of a new concept for a protection gallery against rock fall, called Structurally Dissipating Rock-shed (SDR). The main innovation, compared to conventional solutions, is to dissipate the impact energy directly into the reinforced concrete slab or into fuse supports, and no longer in a cushion layer. The dynamic phenomena taking place during the impact of a block onto the slab are analyzed by means of experiments on a 1/3-scale SDR structure. The percussion loads applied to the slab during the contact phase with the block are assessed, as well as the various energy transfers and dissipations. The results allowed the operating and repair principles of the SDR to be validated and revealed that the slab is damaged by three main mechanisms: punching, bending and breaking down at the surface of the impacted zone. The principal experimental values are recovered by numerical simulations of the tests with a finite element tool. A simplified "mass-spring-damper" mechanical model is also developed with the aim of providing design methods for engineering offices. The prospects for this work are to establish design and construction recommendations for structurally dissipating rock-sheds.
Poivet, Sylwia. "Adhésion instantanée de deux systèmes modèles : liquides simples et copolymères à blocs". Bordeaux 1, 2003. http://www.theses.fr/2003BOR12763.
Within an experimental approach whose scope is fundamental as well as applied, we study the separation mechanisms encountered when a material confined between two parallel plates is put under traction (probe-tack test). The study is conducted on two model systems: simple liquids and block copolymers. The simplicity of the first system allows us to provide a detailed interpretation of our observations. Indeed, two competing regimes are identified: a fingering regime and a cavitation regime. The shape of the measured force curve, specific to each regime, along with the conditions required for cavitation, is modelled using models for Newtonian fluids. This allows us to build a phase diagram capable of predicting the different regimes. The excellent fit of the experimental data with our theoretical model demonstrates that the cavitation mechanism, commonly thought to be characteristic of viscoelastic materials like adhesives, can also be encountered in viscous liquids. The second system, a candidate for use as an adhesive material even in wet environments, is essentially composed of amphiphilic diblock copolymers. We characterize the structure of these materials, their ability to absorb water and their adhesive properties on dry and wet substrates. Our study demonstrates the critical role of the polymer structure in the adhesion properties. Moreover, we show that, owing to their amphiphilic behaviour, these materials are particularly promising for the well-known problem of adhesion on wet surfaces.
Amoualian, Hesam. "Modélisation et apprentissage de dépendances á l’aide de copules dans les modéles probabilistes latents". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM078/document.
This thesis focuses on scaling latent topic models for big data collections, especially for document streams. Although the main goal of probabilistic modeling is to find word topics, an equally interesting objective is to examine topic evolutions and transitions. To accomplish this task, we propose, in Chapter 3, three new models for modeling topic and word-topic dependencies between consecutive documents in document streams. The first model is a direct extension of the Latent Dirichlet Allocation (LDA) model and makes use of a Dirichlet distribution to balance the influence of the LDA prior parameters against the topic and word-topic distributions of the previous document. The second extension makes use of copulas, which constitute a generic tool to model dependencies between random variables. We rely here on Archimedean copulas, and more precisely on the Frank copula, as they are symmetric and associative and are thus appropriate for exchangeable random variables. Lastly, the third model is a non-parametric extension of the second one through the integration of copulas in the stick-breaking construction of Hierarchical Dirichlet Processes (HDP). Our experiments, conducted on five standard collections that have been used in several studies on topic modeling, show that our proposals outperform previous ones, such as dynamic topic models, temporal LDA and the Evolving Hierarchical Processes, both in terms of perplexity and for tracking similar topics in document streams. Compared to previous proposals, our models have extra flexibility and can adapt to situations where there are no dependencies between the documents. On the other hand, the "exchangeability" assumption in topic models like LDA often results in inferring inconsistent topics for the words of text spans like noun phrases, which are usually expected to be topically coherent. In Chapter 4, we propose copulaLDA (copLDA), which extends LDA by integrating part of the text structure into the model and relaxes the conditional independence assumption between the word-specific latent topics given the per-document topic distributions. To this end, we assume that the words of text spans like noun phrases are topically bound, and we model this dependence with copulas. We demonstrate empirically the effectiveness of copLDA on both intrinsic and extrinsic evaluation tasks on several publicly available corpora. To complete the previous model (copLDA), Chapter 5 presents an LDA-based model that generates topically coherent segments within documents by jointly segmenting documents and assigning topics to their words. The coherence between topics is ensured through a copula, binding the topics associated with the words of a segment. In addition, this model relies on both document- and segment-specific topic distributions so as to capture fine-grained differences in topic assignments. We show that the proposed model naturally encompasses other state-of-the-art LDA-based models designed for similar tasks. Furthermore, our experiments, conducted on six different publicly available datasets, show the effectiveness of our model in terms of perplexity, Normalized Pointwise Mutual Information, which captures the coherence between the generated topics, and the Micro F1 measure for text classification.
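To illustrate the copula tool mentioned above, here is a minimal sketch of sampling from a bivariate Frank copula by conditional inversion, a standard textbook construction (the dependence parameter theta is an arbitrary assumption; this is not code from the thesis):

```python
import numpy as np

def sample_frank(n, theta, rng):
    """Draw n pairs (u, v) from a bivariate Frank copula by conditional inversion."""
    u = rng.random(n)
    t = rng.random(n)  # uniform draw used as the conditional quantile level
    # invert the conditional distribution C(v | u) = t (standard closed form)
    v = -np.log1p(t * np.expm1(-theta)
                  / (np.exp(-theta * u) - t * np.expm1(-theta * u))) / theta
    return u, v

rng = np.random.default_rng(5)
u, v = sample_frank(10_000, theta=5.0, rng=rng)
# positive theta induces positive dependence between the two uniforms
print(f"empirical correlation: {np.corrcoef(u, v)[0, 1]:.2f}")
```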
Tadde, Oladédji bachirou. "Modélisation dynamique des sphères anatomique, cognitive et fonctionnelle dans la maladie d’Alzheimer : une approche par processus latents". Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0306/document.
In public health, the study of the progression of a chronic disease and of its mechanisms may require the joint modeling of several longitudinal markers and their dependence structure. Modeling approaches exist in the literature to partially address these objectives, but they rapidly become numerically expensive and difficult to use in complex diseases involving latent, dynamic and multidimensional aspects, such as Alzheimer's disease. The aim of this thesis was to propose an innovative methodology for modeling the dynamics of several latent processes and their temporal influences, for the purpose of causal interpretation, from repeated observations of continuous Gaussian and non-Gaussian markers. The proposed latent process approach defines a structural model in discrete time for the latent process trajectories and an observation model to relate longitudinal markers to the process they measure. In the structural model, the initial level and the rate of change of individual-specific processes are modeled by mixed-effect linear models. The rate-of-change model has a first-order autoregressive component that can model the effect of one process on another by explicitly accounting for time. The structural model thus defined benefits from the same causal interpretations as ordinary differential equation (ODE) models of the mechanistic approach to causality, while avoiding major numerical problems. The observation model uses parameterized link functions to handle possibly non-Gaussian continuous markers. The consistency of the ML estimators and the accuracy of the inference of the influence structures between the latent processes have been validated by simulation studies. This approach, applied to Alzheimer's disease, allowed us to jointly describe the dynamics of hippocampus atrophy, the decline of episodic memory, the decline of verbal fluency and the loss of autonomy, as well as the temporal influences between these dimensions in several stages of Alzheimer's dementia, from the data of the ADNI initiative.
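A minimal sketch of the kind of discrete-time structural model described here: two latent processes whose rates of change include a cross-influence of one process on the other (the influence matrix, step size and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
T, dt = 40, 0.5                 # number of steps and discretization step
A = np.array([[-0.10, 0.00],    # influence matrix: process 1 -> process 2 only
              [-0.30, -0.05]])  # negative cross-effect (illustrative)
x = np.zeros((T, 2))
x[0] = [1.0, 0.8]               # initial levels (random effects per subject in the model)

for k in range(T - 1):
    rate = A @ x[k] + rng.normal(scale=0.02, size=2)  # rate of change at step k
    x[k + 1] = x[k] + dt * rate                       # first-order update

# markers would then be noisy transformations of x through link functions
print(x[-1])
```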
Balikas, Georgios. "Explorer et apprendre à partir de collections de textes multilingues à l'aide des modèles probabilistes latents et des réseaux profonds". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM054/document.
Text is one of the most pervasive and persistent sources of information. Content analysis of text, in its broad sense, refers to methods for studying and retrieving information from documents. Nowadays, with ever-increasing amounts of text becoming available online in several languages and different styles, content analysis of text is of tremendous importance as it enables a variety of applications. To this end, unsupervised representation learning methods such as topic models and word embeddings constitute prominent tools. The goal of this dissertation is to study and address challenging problems in this area, focusing both on the design of novel text mining algorithms and tools, and on studying how these tools can be applied to text collections written in a single or several languages. In the first part of the thesis we focus on topic models and more precisely on how to incorporate prior information about text structure into such models. Topic models are built on the premise of bag-of-words, and therefore words are exchangeable. While this assumption benefits the calculation of the conditional probabilities, it results in a loss of information. To overcome this limitation, we propose two mechanisms that extend topic models by integrating knowledge of text structure into them. We assume that the documents are partitioned into thematically coherent text segments. The first mechanism assigns the same topic to the words of a segment. The second capitalizes on the properties of copulas, a tool mainly used in the fields of economics and risk management to model the joint probability density of random variables while having access only to their marginals. The second part of the thesis explores bilingual topic models for comparable corpora with explicit document alignments. Typically, a document collection for such models is in the form of comparable document pairs. The documents of a pair are written in different languages and are thematically similar. Unless they are translations, the documents of a pair are similar only to some extent. Meanwhile, representative topic models assume that the documents have identical topic distributions, which is a strong and limiting assumption. To overcome it, we propose novel bilingual topic models that incorporate the notion of cross-lingual similarity of the documents that constitute the pairs into their generative and inference processes. Calculating this cross-lingual document similarity is a task in itself, which we propose to address using cross-lingual word embeddings. The last part of the thesis concerns the use of word embeddings and neural networks for three text mining applications. First, we discuss polylingual document classification, where we argue that translations of a document can be used to enrich its representation. Using an auto-encoder to obtain these robust document representations, we demonstrate improvements in the task of multi-class document classification. Second, we explore multi-task sentiment classification of tweets, arguing that jointly training classification systems on correlated tasks can improve performance. To this end, we show how one can achieve state-of-the-art performance on a sentiment classification task using recurrent neural networks. The third application we explore is cross-lingual information retrieval. Given a document written in one language, the task consists in retrieving the most similar documents from a pool of documents written in another language. In this line of research, we show that by adapting the transportation problem to the task of estimating document distances, one can achieve significant improvements.
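Since the last point relies on the transportation problem for document distances, here is a minimal entropic-regularized optimal transport (Sinkhorn) sketch between two toy bags of word embeddings (illustrative data and regularization; not the thesis implementation):

```python
import numpy as np

rng = np.random.default_rng(7)
# toy "documents": bags of word vectors with uniform weights (illustrative)
X1 = rng.normal(size=(5, 16))            # 5 word embeddings, document 1
X2 = rng.normal(size=(7, 16))            # 7 word embeddings, document 2
a = np.full(5, 1 / 5)                    # word weights (e.g. normalized tf)
b = np.full(7, 1 / 7)

# pairwise Euclidean cost between word vectors
C = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)

# Sinkhorn iterations for entropic-regularized optimal transport
eps = 0.1
K = np.exp(-C / eps)
u = np.ones(5)
for _ in range(500):
    v = b / (K.T @ u)
    u = a / (K @ v)
P = u[:, None] * K * v[None, :]          # transport plan with marginals a, b
print(f"regularized transport cost: {(P * C).sum():.3f}")
```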
Corneli, Marco. "Dynamic stochastic block models, clustering and segmentation in dynamic graphs". Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E012/document.
This thesis focuses on the statistical analysis of dynamic graphs, defined in either discrete or continuous time. We introduce a new extension of the stochastic block model (SBM) for dynamic graphs. The proposed approach, called dSBM, adopts non-homogeneous Poisson processes to model the interaction times between pairs of nodes in dynamic graphs, either in discrete or continuous time. The intensity functions of the processes depend only on the node clusters, in a block-modelling perspective. Moreover, all the intensity functions share some regularity properties on hidden time intervals that need to be estimated. A recent estimation algorithm for SBM, based on the greedy maximization of an exact criterion (exact ICL), is adopted for inference and model selection in dSBM. Moreover, an exact algorithm for change-point detection in time series, the "pruned exact linear time" (PELT) method, is extended to deal with dynamic graph data modelled via dSBM. The approach we propose can thus be used for change-point analysis in graph data. Finally, a further extension of dSBM is developed to analyse dynamic networks with textual edges (like social networks, for instance). In this context, the graph edges are associated with documents exchanged between the corresponding vertices. The textual content of the documents can provide additional information about the dynamic graph's topological structure. The new model we propose is called the "dynamic stochastic topic block model" (dSTBM). Graphs are mathematical structures very suitable for modelling interactions between objects or actors of interest. Several real networks, such as communication networks, financial transaction networks, mobile telephone networks and social networks (Facebook, LinkedIn, etc.), can be modelled via graphs. When observing a network, the time variable comes into play in two different ways: we can study the time dates at which the interactions occur and/or the interaction time spans. This thesis focuses only on the first time dimension, and each interaction is assumed to be instantaneous, for simplicity. Hence, the network evolution is given by the interaction time dates only. In this framework, graphs can be used in two different ways to model networks. Discrete time […] Continuous time […]. In this thesis both these perspectives are adopted, alternatively. We consider new unsupervised methods to cluster the vertices of a graph into groups of homogeneous connection profiles. In this manuscript, the node groups are assumed to be time-invariant to avoid possible identifiability issues. Moreover, the approaches that we propose aim to detect structural changes in the way the node clusters interact with each other. The building block of this thesis is the stochastic block model (SBM), a probabilistic approach initially used in social sciences. The standard SBM assumes that the nodes of a graph belong to hidden (disjoint) clusters and that the probability of observing an edge between two nodes depends only on their clusters. Since no further assumption is made on the connection probabilities, SBM is a very flexible model able to detect different network topologies (hubs, stars, communities, etc.).
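A minimal generative sketch of the dSBM idea in discrete time: interaction counts drawn from Poisson distributions whose intensities depend only on the endpoint clusters and change across hidden time intervals (here the intervals and intensities are assumed known, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
n, Q = 40, 2                               # nodes and clusters
z = rng.choice(Q, size=n)                  # static cluster labels
breaks = [0.0, 0.5, 1.0]                   # hidden time intervals (assumed known here)
# one intensity per (cluster pair, interval); the pattern changes after t = 0.5
lam = np.array([[[4.0, 0.5], [0.5, 2.0]],  # interval 1
                [[1.0, 3.0], [3.0, 1.0]]]) # interval 2 (illustrative change point)

counts = np.zeros((n, n), dtype=int)
for k in range(len(breaks) - 1):
    dt = breaks[k + 1] - breaks[k]
    # interactions per pair ~ Poisson(intensity * interval length)
    counts += rng.poisson(lam[k][z[:, None], z[None, :]] * dt)
np.fill_diagonal(counts, 0)                # no self-loops
print(counts.sum(), "interactions simulated")
```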
Empereur-Mot, Luc. "La fragmentation naturelle des massifs rocheux : modèles de blocs et bases de données tridimensionnelles ; réalisation, exploration géométrique et applications". Chambéry, 2000. http://www.theses.fr/2000CHAMS012.
Laclau, Charlotte. "Hard and fuzzy block clustering algorithms for high dimensional data". Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB014.
With the increasing amount of data available, unsupervised learning has become an important tool used to discover underlying patterns without the need to label instances manually. Among the different approaches proposed to tackle this problem, clustering is arguably the most popular one. Clustering is usually based on the assumption that each group, also called a cluster, is distributed around a center defined in terms of all the features, while in some real-world applications dealing with high-dimensional data this assumption may be false. To this end, co-clustering algorithms were proposed to describe clusters by the subsets of features that are most relevant to them. The obtained latent structure of the data is composed of blocks usually called co-clusters. In the first two chapters, we describe two co-clustering methods that proceed by differentiating the relevance of features, calculated with respect to their capability of revealing the latent structure of the data, in both a probabilistic and a distance-based framework. The probabilistic approach uses the mixture-model framework, where the irrelevant features are assumed to have a probability distribution that is independent of the co-clustering structure. The distance-based (also called metric-based) approach relies on an adaptive metric where each variable is assigned a weight that defines its contribution to the resulting co-clustering. From the theoretical point of view, we show the global convergence of the proposed algorithms using Zangwill's convergence theorem. In the last two chapters, we consider a special case of co-clustering where, contrary to the original setting, each subset of instances is described by a unique subset of features, resulting in a diagonal structure of the initial data matrix. As for the first two contributions, we consider both probabilistic and metric-based approaches. The main idea of the proposed contributions is to impose two different kinds of constraints: (1) we set the number of row clusters equal to the number of column clusters; (2) we seek a structure of the original data matrix that has the maximum values on its diagonal (for instance, for binary data, we look for diagonal blocks composed of ones with zeros outside the main diagonal). The proposed approaches enjoy the convergence guarantees derived from the results of the previous chapters. Finally, we present both hard and fuzzy versions of the proposed algorithms. We evaluate our contributions on a wide variety of synthetic and real-world benchmark binary and continuous data sets related to text mining applications, and we analyze the advantages and drawbacks of each approach. To conclude, we believe that this thesis explicitly covers a vast majority of the possible scenarios arising in hard and fuzzy co-clustering and can be seen as a generalization of some popular biclustering approaches.
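A minimal sketch of distance-based co-clustering by alternating row and column assignments around block means (the plain unweighted variant; the feature weighting and fuzzy versions of the thesis are omitted, and empty clusters are not handled):

```python
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(size=(60, 40))
X[:30, :20] += 2.0                      # plant one block structure
g, m = 2, 2
z = rng.choice(g, size=60)              # row labels
w = rng.choice(m, size=40)              # column labels

for _ in range(20):
    # block means given the current assignments
    mu = np.array([[X[z == k][:, w == l].mean() for l in range(m)]
                   for k in range(g)])
    # reassign each row to the cluster minimizing its squared error
    row_cost = np.array([((X - mu[k][w]) ** 2).sum(axis=1) for k in range(g)])
    z = row_cost.argmin(axis=0)
    # reassign each column symmetrically
    col_cost = np.array([((X - mu[z, l][:, None]) ** 2).sum(axis=0) for l in range(m)])
    w = col_cost.argmin(axis=0)
print(np.bincount(z), np.bincount(w))
```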
Ruiz, Daniel. "Résolution de grands systèmes linéaires creux non symétriques par une méthode itérative par blocs dans un environnement multiprocesseur". Toulouse, INPT, 1992. http://www.theses.fr/1992INPT010H.
Boulaud, Romain. "Etudes et modélisations du comportement d’un écran de filet pare-blocs à différentes échelles". Thesis, Paris Est, 2020. http://www.theses.fr/2020PESC2017.
Rockfall barriers are flexible structures that mitigate the risk of rockfall and thus protect people living in risk areas, as well as their property. These structures, placed on the trajectories of the moving blocks, are made of a steel net held on the natural ground by rigid posts. When they are impacted, they undergo large deformations, which requires modelling their behaviour by taking into account both geometric and material non-linearities. Their components are therefore represented in this work with discrete elements, and the mechanical problem is solved with a calculation tool adapted to large-deformation problems. This algorithm is also used to assess the influence of different net modelling strategies from the scientific literature on the overall behaviour of a rockfall barrier. The conclusions of this study, as well as experimental observations, pave the way to new discrete modelling strategies in which the net is represented by a limited number of degrees of freedom. The family of simplified models developed in this work makes it possible to simulate the behaviour of a structure at low computational cost, thus offering the opportunity to implement complex parametric studies or probabilistic design methods.
Demazeau, Maxime. "Relations structure-effet de nanovecteurs à base de copolymères à blocs pour la thérapie photodynamique : utilisation de modèles de membranes". Electronic Thesis or Diss., Toulouse 3, 2019. http://www.theses.fr/2019TOU30113.
Photodynamic therapy (PDT), a therapy based on the irradiation of photosensitizing molecules to generate oxidative stress, is already used as a treatment for some pathologies. The photosensitizers used are often highly hydrophobic molecules that aggregate in aqueous media. Therefore, used by themselves, they need to be injected at high concentrations, leading to a risk of global photosensitization. To reduce this side effect and increase the effectiveness of the treatment, it is possible to encapsulate those molecules. Previous work in the IMRCP laboratory led to the development of block copolymer-based carriers to encapsulate a photosensitizer, pheophorbide-a. This work showed the superior efficiency of some types of carriers compared to others under PDT conditions on cell cultures. The aim of this project was to develop tools to better understand the mechanisms at play when using block copolymer-based nanocarriers encapsulating pheophorbide-a and during the irradiation of the photosensitizer. The nanocarriers studied were block copolymer micelles made of PEO-PCL, PEO-PLA and PEO-PS. To simplify the system studied, we chose to use liposomes as membrane models to simulate the biological target. Using the fluorescence properties of pheophorbide-a, we were able to obtain the affinity constants of the photosensitizer for the micelles and the lipid vesicles, and then evaluate the transfer of pheophorbide-a from the micelles to the vesicles. Following that, we investigated the phenomena occurring during the irradiation of the photosensitizer. We were able to estimate the relative production of singlet oxygen depending on the type of micelles used. By monitoring the leakage of a fluorescent probe contained in the liposomes, which allowed us to evaluate their permeability, it was possible to measure the effects of singlet oxygen production on the integrity of the liposome membrane. In parallel, we monitored the oxidation of the lipids of the liposomes during the irradiation of pheophorbide-a by mass spectrometry. Taken together, these results allowed us to see which parameters influence the PDT efficiency of micelles encapsulating a photosensitizer, and to identify, among those studied, the micelles with the greatest effect on the integrity of model membranes.
Goffinet, Étienne. "Clustering multi-blocs et visualisation analytique de données séquentielles massives issues de simulation du véhicule autonome". Thesis, Paris 13, 2021. http://www.theses.fr/2021PA131090.
Validation of advanced driving-assistance systems remains one of the biggest challenges car manufacturers must tackle to provide safe driverless cars. The reliable validation of these systems requires assessing the quality and consistency of their reactions to a broad spectrum of driving scenarios. In this context, large-scale simulation systems bypass the limitations of physical on-track testing and produce large quantities of high-dimensional time-series data. The challenge is to find valuable information in these multivariate unlabelled datasets, which may contain noisy, sometimes correlated or non-informative variables. This thesis proposes several model-based tools for univariate and multivariate time-series clustering, based either on a dictionary approach or on a Bayesian non-parametric framework. The objective is to automatically find relevant and natural groups of driving behaviors and, in the multivariate case, to perform model selection and dimension reduction of multivariate time series. The methods are tested on simulated datasets and applied to industrial use cases from Groupe Renault.
Asof, Marwan. "Etude du comportement mécanique des massifs rocheux fracturés en blocs (méthode à l'équilibre limite) : réalisation et application". Vandoeuvre-les-Nancy, INPL, 1991. http://www.theses.fr/1991INPL083N.
Texto completoLu, Yang. "Analyse de survie bivariée à facteurs latents : théorie et applications à la mortalité et à la dépendance". Thesis, Paris 9, 2015. http://www.theses.fr/2015PA090020/document.
This thesis comprises three essays on identification and estimation problems in bivariate survival models with individual and common frailties. The first essay proposes a model to capture the mortality dependence of the two spouses in a couple. It allows us to disentangle two types of dependence: the broken-heart syndrome and the dependence induced by common risk factors. An analysis of their respective effects on joint insurance premia is also proposed. The second essay shows that, under reasonable model specifications that take into account the longevity effect, we can identify the joint distribution of the long-term care and mortality risks from the observation of cohort mortality data only. A numerical application to French population data is proposed. The third essay analyses the tail of the joint distribution for general bivariate survival models with proportional frailty. We show that, under appropriate assumptions, the distribution of the joint residual lifetimes converges, upon normalization, to a limit distribution. This can be used to analyze the mortality and long-term care risks at advanced ages. In parallel, the heterogeneity distribution among survivors also converges to a semi-parametric limit distribution. Properties of the limit distributions, their identifiability from the data, as well as their implications are discussed.
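A minimal sketch of the shared-frailty mechanism underlying such models: two lifetimes that are independent given a common gamma frailty, which induces positive dependence between them (all hazards and the frailty variance are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 100_000
# common frailty Z ~ Gamma(k, 1/k), so E[Z] = 1; higher variance = more dependence
k = 2.0
Z = rng.gamma(shape=k, scale=1.0 / k, size=n)

# given Z, the two lifetimes are independent exponentials with hazards Z * h_i
h1, h2 = 0.02, 0.03                  # baseline hazards (illustrative)
T1 = rng.exponential(1.0 / (Z * h1))
T2 = rng.exponential(1.0 / (Z * h2))

# the shared frailty makes the lifetimes positively correlated
print(f"corr(T1, T2) = {np.corrcoef(T1, T2)[0, 1]:.2f}")
```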
Wronski, Maciej. "Couplage du contact et du frottement avec la mécanique non linéaire des solides en grandes déformations : application à l'étude des blocs de mousse en polyuréthane". Compiègne, 1994. http://www.theses.fr/1994COMPD712.
Larvet, Tiphaine. "Subduction dynamics of ridge-free oceanic plate : Implication for the Tethys domain lato sensu". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS322.pdf.
Plate tectonics relates the movement of rigid plates at the Earth's surface to mantle convection. Although upwelling flows such as mantle plumes can interact with plates by mechanically eroding their bases and increasing their gravitational potential, they do not provide sufficient forces to break up a continental plate in the absence of far-field extensional forces or other weakening mechanisms such as the injection of magma dykes. Mantle convection can also exert viscous friction at the base of tectonic plates, which can drive, or resist, plate motion. Nevertheless, the lithosphere-asthenosphere boundary is among the mechanically weakest regions of the mantle; therefore, the main link between plate motion and mantle convection in terms of driving force is the subduction of oceanic lithosphere slabs. These subducting slabs drive both plate motion at the surface and mantle convection, and they are strong enough to transmit forces from the deep Earth to the surface. This thesis therefore studies the relationship between subduction dynamics and continental breakup. While it has long been recognized that subduction can lead to continental breakup in the upper plate through weakening by fluid percolation and small-scale convection, very few studies focus on the dynamics of continental breakup in the lower plate in response to the slab-pull force. This mechanism has been proposed for the breakup of Gondwana during the Permian and for the opening of the South China Sea in the Oligocene. In both cases, continental breakup of the lower plate must occur after the mid-ocean ridge has ceased activity or when subduction becomes normal to the ridge; otherwise, oceanic plate subduction would be accommodated by accretion at the mid-ocean ridge. I set up a series of 2D numerical simulations of subducting ridge-free plates to study, by means of a parametric approach, when and where the continental plate breaks up as a function of the relative motion of the plates. Given the importance of the volume forces produced by the sinking slab, special care was taken to account for the effect of mineralogical changes on density in the simulations. The simulations present four modes of continental breakup: upper plate, lower plate, both plates, or absent. Focusing on lower-plate continental lithosphere breakup, the parametric study shows that the sharp increase in density of the sinking slab related to the 410 km phase transition, in addition to the gravitational potential energy of the continental lithosphere, can cause continental rifting in the lower subducting plate. However, the simulations also show that this mechanism requires the lower plate to move at the same speed as the underlying mantle (i.e. no significant horizontal basal shear on the continent). The slab-drag model appears to be a viable mechanism for continental breakup of the lower plate, and the conditions limiting this process in terms of timing and relative motion make its potential geological record an important constraint on the dynamics of the system. Furthermore, the simulations demonstrate that there is a significant time lag between ridge subduction and continental breakup (i.e. the time required for the sinking slab to reach the 410 km discontinuity). These last two points provide new constraints on paleogeographic reconstructions of the motion of the Cimmerian blocks during the Permian. Based on the results of this first set of simulations and the extensive literature documenting the opening of the South China Sea, I conducted a second study adapted to the regional geodynamic context. This allows me to propose a new conceptual model that combines ridge inversion, continental breakup related to slab pull and subduction reversal to reconcile the geological and geophysical data of this region. The end of this manuscript discusses the limitations of my results and provides suggestions for addressing them.
Bargui, Henda. "Modélisation des comportements mécaniques et hydrauliques de massifs rocheux simulés par des assemblages de blocs rigides : Introduction d'un couplage hydro-mécanique". Phd thesis, Ecole Nationale des Ponts et Chaussées, 1997. http://tel.archives-ouvertes.fr/tel-00529406.
This research aims at modelling the hydro-mechanical behaviour of fissured rock masses by improving and extending a discrete element model called BRIG3D. This model simulates a fissured rock mass as a set of rigid blocks interacting along their interfaces. Interface deformation is related to the relative displacement of the corresponding blocks. Under external loads, the set of blocks moves until equilibrium is reached. The computation of this equilibrium has been improved by redefining the model's description of rigid block movement, interface position and the stress distribution along an interface. To describe flow through block interfaces, a boundary element model has been developed. Flow through each interface is assumed to be laminar, stationary and planar. This hydraulic model has then been coupled with the mechanical model BRIG3D and used to analyse hydro-mechanical rock mass behaviour under varying loads; in particular, a study of a dam foundation has been carried out.
El, Haj Abir. "Stochastics blockmodels, classifications and applications". Thesis, Poitiers, 2019. http://www.theses.fr/2019POIT2300.
This PhD thesis focuses on the analysis of weighted networks, where each edge is associated with a weight representing its strength. We introduce an extension of the binary stochastic block model (SBM), called the binomial stochastic block model (bSBM). This question is motivated by the study of co-citation networks in a text mining context where the data are represented by a graph. Nodes are words, and each edge joining two words is weighted by the number of documents in the corpus citing this pair of words simultaneously. We develop an inference method based on a variational expectation-maximization (VEM) algorithm to estimate the parameters of the model as well as to classify the words of the network. Then, we adopt a method based on maximizing an integrated classification likelihood (ICL) criterion to select the optimal model and the number of clusters. Furthermore, we develop a variational approach to analyse the given network, and we compare the two approaches. Applications based on real data are used to show the effectiveness of the two methods and to compare them. Finally, we develop an SBM with several attributes to deal with node-weighted networks. We motivate this approach by an application that aims at developing a tool to help specify the different cognitive treatments performed by the brain during the preparation of writing.
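A minimal generative sketch of the bSBM idea: edge weights drawn as binomial counts whose success probability depends only on the endpoint clusters (sizes, proportions and probabilities are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(11)
n, Q = 50, 3
pi = np.array([0.5, 0.3, 0.2])            # cluster proportions
z = rng.choice(Q, size=n, p=pi)
P = np.array([[0.50, 0.05, 0.05],
              [0.05, 0.40, 0.05],
              [0.05, 0.05, 0.30]])        # per-block success probabilities
m = 20                                    # number of trials, e.g. corpus documents

# weight of edge (i, j) ~ Binomial(m, P[z_i, z_j]); keep the graph undirected
W = rng.binomial(m, P[z[:, None], z[None, :]])
W = np.triu(W, 1)
W = W + W.T
print(W[:5, :5])
```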
Ghazal, Rima. "Modélisation de la stabilité des blocs rocheux isolés sur la paroi des excavations souterraines avec prise en compte des contraintes initiales et du comportement non linéaire des joints". Electronic Thesis or Diss., Paris, ENMP, 2013. http://www.theses.fr/2013ENMP0007.
Failure of rock blocks located at the surface of underground excavations is a common problem in discontinuous rock masses. Since exact methods that take into account all blocks and their interactions are computationally expensive, the Isolated Blocks method is usually adopted. It consists in studying each block individually, considering it to be rigid and the surrounding rock mass to be rigid and fixed. Nevertheless, none of the existing methods based on this approach takes into account initial stresses and joint behavior rigorously. In this thesis, a new method providing significant improvements over conventional Isolated Blocks methods is developed. Assuming that initial stresses are known, the excavation process is modeled by unloading the block's free face. Stresses acting on the faces in contact with the rock mass are then resolved by taking into account force and moment balance equations, joint behavior and rigid-body movement. This leads to a linear system in which the block's translation and rotation vectors are the only unknowns. Two models are proposed: the first assumes linear elastic joint behavior, so stability is evaluated a posteriori. The second, more realistic model assumes joint behavior to be hyperbolic in the normal direction and elastoplastic in the tangential direction, while also accounting for dilatancy. This non-linear problem is solved numerically by explicit integration in kinematic time with constant deconfining steps. Moreover, thanks to the surface integration technique used, any block geometry can be studied. The proposed method is validated and compared to other conventional methods. Parametric studies show the influence of initial stresses and of the joints' mechanical properties on stability. Rock support modeling is also integrated into the code. Finally, the new method is applied to study an assemblage of blocks around an underground excavation and is compared to a model that takes all the blocks into account with the Distinct Element Method. It is also used to reproduce an actual block failure case.
Ossamy, Rodrigue Bertrand. "An algorithmic and computational approach to local computations". Bordeaux 1, 2005. http://www.theses.fr/2005BOR13067.
Beyer-Berjot, Laura. "Développement d'une formation en parcours de soin simulé en chirurgie colorectale laparoscopique". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM5071/document.
Background: Few studies have assessed simulation in laparoscopic colorectal surgery (LCS), and simulation has never been designed in a care pathway approach (CPA) manner. Objectives: To design a CPA to training in LCS, involving virtual-patient perioperative training and a virtual competency-based curriculum for intraoperative training; to implement such a CPA; and to determine whether such training may improve patient management. Methods: 1) A CPA to training in appendicitis was designed and implemented. All residents of our department were trained, and 38 patients undergoing appendectomy were prospectively included before (n=21) and after (n=17) CPA. 2) A CPA to training in LCS was designed in accordance with enhanced recovery (ER) recommendations, and a curriculum in LCS was validated. All residents of our department were trained, and 20 patients were prospectively included before (n = 10) and after (n = 10) CPA. Results: 1) All residents were trained. Pre/intraoperative data were comparable between groups of patients. Times to liquid and solid diet were reduced after CPA (7 h (2-20) vs. 4 (4-6); P=0.004, and 17 h (4-48) vs. 6 (4-24); P=0.005) without changing postoperative morbidity or length of stay (LS). 2) Residents' participation in LCS improved after CPA (0% (0-100) vs. 82.5% (10-100); P = 0.006). Pre/intraoperative data were comparable between groups of patients. Compliance with ER improved at day 2 in post-training patients (3 (30%) vs. 8 (80%); P = 0.035). Postoperative morbidity and LS were comparable. Conclusion: A CPA to training in LCS has been designed and implemented. It improved compliance with ER and residents' participation without adversely altering patients' outcomes.
Cenni, Fabio. "Modélisation à haut niveau de systèmes hétérogènes, interfaçage analogique /numérique". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00721972.
Salmi, Zahia. "Modélisation de séries chronologiques non linéaires et modèles ARMA faibles". Thèse, 2003. http://hdl.handle.net/1866/14580.
Texto completoEmpereur, Mot Luc. "La fragmentation naturelle des massifs rocheux : modèles de blocs et bases de données tridimensionnelles : réalisation, exploration géométrique et applications". Phd thesis, 2000. http://tel.archives-ouvertes.fr/tel-00723710.
Texto completo