Dissertations / Theses on the topic 'Partitions de Markov'


Consult the top 26 dissertations / theses for your research on the topic 'Partitions de Markov.'


1

Kenny, Robert. "Orbit complexity and computable Markov partitions." University of Western Australia. School of Mathematics and Statistics, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0231.

Abstract:
Markov partitions provide a 'good' mechanism of symbolic dynamics for uniformly hyperbolic systems, forming the classical foundation for the thermodynamic formalism in this setting and remaining useful in the modern theory. Usually, however, one takes Bowen's 1970s general construction for granted, or restricts to cases with simpler geometry (as on surfaces) or more algebraic structure. This thesis examines several questions on the algorithmic content of (topological) Markov partitions, starting with the pointwise, entropy-like, topological conjugacy invariant known as orbit complexity. The relation between the orbit complexity definitions of Brudno and Galatolo is examined in general compact spaces, and used in Theorem 2.0.9 to bound the decrease in some of these quantities under semiconjugacy. A corollary, and a pointwise analogue of facts about metric entropy, is that any Markov partition produces symbolic dynamics matching the original orbit complexity at each point. A Lebesgue-typical value for orbit complexity near a hyperbolic attractor is also established (with some use of Brin-Katok local entropy), and is technically distinct from typicality statements discussed by Galatolo, Bonanno and their co-authors. Both of our results are proved by adapting classical entropy arguments of Bowen. Chapter 3 onwards considers the axiomatisation and computable construction of Markov partitions. We propose a framework of 'abstract local product structures'
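The symbolic coding that a Markov partition induces can be illustrated on the simplest example, the doubling map on the unit interval (a toy sketch of the general mechanism, not code from the thesis): the two-interval partition {[0, 1/2), [1/2, 1)} is Markov, itineraries are binary expansions, and a point is recovered from its symbol sequence.

```python
# Illustrative sketch: symbolic coding of the doubling map T(x) = 2x mod 1
# with the Markov partition P0 = [0, 1/2), P1 = [1/2, 1). The itinerary of
# x is its binary expansion, so x can be reconstructed from its symbols.

def itinerary(x, n):
    """First n symbols of the orbit of x under the doubling map."""
    symbols = []
    for _ in range(n):
        symbols.append(0 if x < 0.5 else 1)
        x = (2 * x) % 1.0
    return symbols

def decode(symbols):
    """Reconstruct a point from its itinerary (binary expansion)."""
    return sum(s * 2.0 ** -(k + 1) for k, s in enumerate(symbols))

x = 0.37109375  # = 95/256, a dyadic rational, so the coding is exact
code = itinerary(x, 20)
assert abs(decode(code) - x) < 2 ** -20
```

For hyperbolic maps in higher dimension the same principle applies, but constructing the rectangles is precisely the hard, non-algorithmic step the thesis examines.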
2

Praggastis, Brenda L. "Markov partitions for hyperbolic toral automorphisms /." Thesis, Connect to this title online; UW restricted, 1992. http://hdl.handle.net/1773/5773.

3

Jeandenans, Emmanuelle. "Difféomorphismes hyperboliques des surfaces et combinatoires des partitions de Markov." Dijon, 1996. http://www.theses.fr/1996DIJOS032.

Abstract:
This thesis deals with orientation-preserving surface diffeomorphisms that satisfy Axiom A and the strong transversality condition. The first part studies the simplest among them: those whose invariant manifolds draw no small loops. In this part, we give explicitly the topological semi-conjugacy between such a diffeomorphism and the pseudo-Anosov representative of its isotopy class. A second part is concerned with the combinatorics of geometrized Markov partitions (where one keeps track of the direction in which the images of the rectangles cross the rectangles). We establish a necessary and sufficient condition for the genus of such a partition (i.e. the genus of a compact surface containing the partition and all its iterates) to be finite, through a fine analysis of the behaviour of the iterates of the rectangles, in particular near the periodic points lying on the boundary of the initial rectangles. The third part complements the second: given a geometrized Markov partition of finite genus, we show that there is no topological obstruction to constructing a compact surface endowed with a diffeomorphism satisfying Axiom A and strong transversality that admits the prescribed geometrized Markov partition. To do so, we embed the rectangles of the partition and their first iterates in a compact surface built for this purpose, and then extend the diffeomorphism defined by the geometrized partition to a homeomorphism of this surface.
4

Cruz, Diaz Inti. "An Algorithmic Classification of Generalized Pseudo-Anosov Homeomorphisms via Geometric Markov Partitions." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2023. http://www.theses.fr/2023UBFCK083.

Abstract:
This thesis aims to classify generalized pseudo-Anosov homeomorphisms up to topological conjugacy using an algorithmic approach, which entails obtaining finite, computable invariants for each conjugacy class. A Markov partition of a generalized pseudo-Anosov homeomorphism is a decomposition of the surface into a finite number of rectangles with disjoint interiors, such that the image of each rectangle intersects any rectangle of the partition along a finite number of horizontal sub-rectangles. Every generalized pseudo-Anosov homeomorphism has a Markov partition, and, using the surface's orientation, any Markov partition can be endowed with a geometrization: the rectangles are labeled and an orientation is chosen on the stable and unstable leaves of each rectangle.

The geometric type of a geometric Markov partition was defined by Bonatti and Langevin in their book "Difféomorphismes de Smale des surfaces" to classify saddle-type basic pieces of structurally stable diffeomorphisms on surfaces. A geometric type is an abstract combinatorial object that generalizes the incidence matrix of a Markov partition: it records not only the number of times the image of a rectangle intersects each other rectangle of the family, but also the order of these intersections and the change of orientation induced by the homeomorphism. This thesis employs the geometric type of a geometric Markov partition to classify the conjugacy classes of pseudo-Anosov homeomorphisms. Our main results can be summarized as follows.

The geometric type is a complete conjugacy invariant: two generalized pseudo-Anosov homeomorphisms are topologically conjugate through an orientation-preserving homeomorphism if and only if they admit geometric Markov partitions with the same geometric type.

Realization: geometric types are defined broadly, and not every abstract geometric type corresponds to a pseudo-Anosov homeomorphism. A geometric type T belongs to the pseudo-Anosov class if there exists a generalized pseudo-Anosov homeomorphism with a geometric Markov partition of geometric type T. Our second result is a computable, combinatorial criterion for determining whether an abstract geometric type belongs to the pseudo-Anosov class.

Equivalent representations: every generalized pseudo-Anosov homeomorphism has infinitely many geometric Markov partitions with different geometric types. Our third result is an algorithm that decides whether two geometric types in the pseudo-Anosov class are realized by topologically conjugate generalized pseudo-Anosov homeomorphisms.
5

Wong, Chi-hung, and 黃志雄. "Hand-written Chinese character recognition by hidden Markov models and radical partition." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31220058.

6

Wingate, David. "Solving Large MDPs Quickly with Partitioned Value Iteration." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd437.pdf.

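The idea named in the title above can be sketched under simplifying assumptions (this is a generic block-sweep scheme, not Wingate's prioritized algorithm): partition the MDP's state space into blocks and perform Bellman backups block by block, revisiting blocks only while values keep changing.

```python
import numpy as np

def partitioned_value_iteration(P, R, gamma, blocks, tol=1e-8):
    """P: (A, S, S) transition tensor, R: (S,) rewards,
    blocks: list of state-index arrays partitioning {0..S-1}."""
    S = R.shape[0]
    V = np.zeros(S)
    dirty = list(range(len(blocks)))          # blocks still needing a sweep
    while dirty:
        b = dirty.pop(0)
        idx = blocks[b]
        # Bellman backup restricted to the states of this block
        Q = R[idx][None, :] + gamma * P[:, idx, :] @ V   # shape (A, |idx|)
        newV = Q.max(axis=0)
        if np.max(np.abs(newV - V[idx])) > tol:
            V[idx] = newV
            dirty = list(range(len(blocks)))  # crude: re-sweep all blocks
    return V
```

A smarter scheme would prioritize which block to sweep next instead of re-queuing all of them; the point here is only that backups can be confined to one block at a time.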
7

Wong, Chi-hung. "Hand-written Chinese character recognition by hidden Markov models and radical partition /." Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B19669380.

8

Smith, Adam Nicholas. "Bayesian Analysis of Partitioned Demand Models." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1497895561381294.

9

Hadriche, Abir. "Caractérisation du répertoire dynamique macroscopique de l'activité électrique cérébrale humaine au repos." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4724/document.

Abstract:
We propose an algorithm based on a set-oriented approach to dynamical systems for extracting a coarse-grained organization of the brain's state space from EEG signals. We use it to compare the state-space organization of large-scale simulations of brain dynamics with actual resting-state brain dynamics in healthy subjects and multiple sclerosis (MS) patients.
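The set-oriented extraction of a coarse-grained organization can be illustrated by a minimal sketch (a hypothetical 1-D version, not the thesis's algorithm): partition the signal range into boxes and estimate a coarse-grained Markov transition matrix from box-to-box transition counts along the trajectory.

```python
import numpy as np

def coarse_transition_matrix(signal, n_boxes):
    """Estimate a coarse-grained transition matrix from a 1-D time series
    by partitioning its range into n_boxes equal boxes."""
    lo, hi = signal.min(), signal.max()
    # map each sample to a box index in {0, ..., n_boxes - 1}
    boxes = np.minimum(((signal - lo) / (hi - lo + 1e-12) * n_boxes).astype(int),
                       n_boxes - 1)
    T = np.zeros((n_boxes, n_boxes))
    for a, b in zip(boxes[:-1], boxes[1:]):
        T[a, b] += 1.0
    row = T.sum(axis=1, keepdims=True)
    # normalize rows that were visited; leave unvisited rows at zero
    return np.divide(T, row, out=np.zeros_like(T), where=row > 0)

rng = np.random.default_rng(0)
T = coarse_transition_matrix(rng.standard_normal(5000), n_boxes=8)
```

Real EEG state spaces are high-dimensional, so the actual method must partition a multivariate space; this 1-D toy only shows the counting step.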
10

Joder, Cyril. "Alignement temporel musique-sur-partition par modèles graphiques discriminatifs." Phd thesis, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00664260.

Abstract:
This thesis studies the problem of temporally aligning a music recording with its corresponding score, a task with numerous applications in the automatic indexing of music documents. We adopt a probabilistic approach and propose the use of discriminative graphical models, namely conditional random fields, for the alignment, casting it as a sequence-labeling problem. This class of models is more flexible than the hidden Markov models or hidden semi-Markov models commonly used in this field. In particular, it allows the use of features (acoustic descriptors) extracted from overlapping sequences of audio frames, instead of disjoint observations. We take advantage of this property to introduce features that implicitly model the tempo at the lowest level of the model. We propose three model structures of increasing complexity, corresponding to different levels of precision in modeling the durations of musical events. Three types of acoustic descriptors are used, to locally characterize the harmony, the note onsets and the tempo of the recording. A series of experiments on a database of classical piano and pop music validates the high accuracy of our models: with the best of the proposed systems, more than 95% of note onsets are detected within 100 ms of their true position. Several classical acoustic features, computed from different audio representations, are used to measure the instantaneous correspondence between a point in the score and a frame of the recording, and these descriptors are compared on the basis of their alignment performance.
We then turn to the design of new features by learning a linear transformation from the symbolic representation to an arbitrary time-frequency representation of the audio. We explore two strategies, minimum divergence and maximum likelihood, for learning the optimal transformation. Experiments show that this approach can improve alignment accuracy regardless of the audio representation used. We then study various adjustments needed to confront the systems with realistic use cases. In particular, a reduction in complexity is obtained through an original hierarchical pruning strategy. This method exploits the hierarchical structure of music to perform approximate decoding in several passes; in our experiments it reduces complexity more than the classical beam-search method. We also examine a modification of the proposed models that makes them robust to possible structural differences between the score and the recording. Finally, the scalability properties of the models are studied.
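As a point of comparison for the alignment task (an illustrative baseline, not one of the thesis's CRF models), a minimal dynamic-time-warping alignment between a score-derived and an audio-derived feature sequence can be sketched as:

```python
import numpy as np

def dtw_path(score_feats, audio_feats):
    """Minimal DTW alignment between two feature sequences (rows = frames).
    Returns the optimal monotone path as a list of (i, j) index pairs."""
    n, m = len(score_feats), len(audio_feats)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(score_feats[i - 1] - audio_feats[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # backtrack from the end, always taking the cheapest predecessor
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

Unlike the discriminative models above, DTW has no explicit duration model, which is exactly the limitation the thesis's feature and model design addresses.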
11

Baptista, Diogo Pedro Ferreira Nascimento. "Iteradas de aplicações do plano no plano." Doctoral thesis, Universidade de Évora, 2008. http://hdl.handle.net/10174/12257.

Abstract:
In this work we study the iterates of maps of the plane. Using symbolic dynamics techniques for two-dimensional maps, based on the kneading theory of Milnor and Thurston and on the symbolic-dynamics formalism developed by Sousa Ramos, we address several qualitative aspects of the dynamics of the Lozi maps. Through the symbolic dynamics introduced by Yutaka Ishii, and by correcting the symbolic sequence that characterizes the first tangency between stable and unstable manifolds, we reformulate the boundary of the parameter region corresponding to Lozi maps equivalent to the Smale horseshoe. We then present a method for constructing the basin of attraction of the attractor of any Lozi map. Still using symbolic dynamics, we introduce a method based on continued-fraction expansions that allows us to compute the largest Lyapunov exponent of a Lozi map. By introducing the notions of critical point and, subsequently, of kneading sequence for the Lozi maps, we construct a Markov partition of their phase space. This makes possible a complete characterization of the parameter space through the notion of kneading curve, and we show that these curves are isentropic. Consequently, we describe the family of Lozi maps in terms of topological entropy and construct a new topological invariant, the second invariant.
12

Sörensen, Kristina. "Clustering in Financial Markets : A Network Theory Approach." Thesis, KTH, Optimeringslära och systemteori, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-150577.

Abstract:
In this thesis we consider graph partitioning of a particular kind of complex network referred to as power-law graphs. In particular, we focus our analysis on the market graph, constructed from time series of price returns on the American stock market. Two different methods, originating from clustering analysis in social networks and from image segmentation, are applied to obtain graph partitions, and the results are evaluated in terms of the structure and quality of the partition. Along with the market graph, power-law graphs from three different theoretical graph models are considered. This study highlights topological features common to many power-law graphs as well as their differences and limitations. Our results show that the market graph possesses a clear clustered structure only for higher correlation thresholds. By studying the internal structure of the graph clusters we found that they could serve as an alternative to the traditional sector classification of the market. Finally, partitions for different time series were considered to study the dynamics and stability of the partition structure. Even though the results from this part were not conclusive, we think this could be an interesting topic for future research.
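The market-graph construction described above can be sketched in a few lines (synthetic returns and an assumed threshold, not the thesis's data or code): assets become nodes, and an edge joins two assets whose return correlation exceeds the threshold.

```python
import numpy as np

def market_graph(returns, threshold):
    """returns: (n_assets, n_days) array of price returns.
    Returns the boolean adjacency matrix of the thresholded
    correlation graph."""
    C = np.corrcoef(returns)
    A = C >= threshold
    np.fill_diagonal(A, False)   # no self-loops
    return A

# synthetic returns driven by a shared market factor
rng = np.random.default_rng(1)
common = rng.standard_normal(250)
returns = 0.7 * common + 0.3 * rng.standard_normal((10, 250))
A = market_graph(returns, threshold=0.5)
```

Varying the threshold changes the graph's density, which is why the clustered structure reported above appears only at the higher thresholds.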
13

Ocakli, Mehmet. "A Video Tracker System For Traffic Monitoring And Analysis." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608712/index.pdf.

Abstract:
In this study, a video tracker system for traffic monitoring and analysis is developed. The system is able to detect and track vehicles as they move through the camera's field of view, which makes it possible to perform traffic analysis of the scene, for example to optimize traffic flows and identify potential accidents. The scene inspected in this study is assumed to be stationary in order to achieve a high-performance solution; this assumption allows moving objects to be detected more accurately and a priori information about the scene to be collected. A new algorithm is proposed for the multi-vehicle tracking problem that can deal with problems such as occlusion, brief loss of an object, or inaccurate object detection. Two different tracking methods are used together in the developed system, namely the multi-model Kalman tracker and the Markov scene-partition tracker. By combining these vehicle trackers with the developed occlusion-reasoning approach, track continuity is maintained in situations such as target loss and occlusion. The system collects a priori information about the junction and then uses it for scene modeling to increase tracking performance. The proposed system is evaluated on real-world image sequences. The results demonstrate that the proposed multi-vehicle tracking system is capable of tracking a target in a complex environment and can overcome occlusion and inaccurate detections as well as abrupt changes in trajectory.
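The multi-model Kalman tracker mentioned above builds on the standard Kalman filter; a minimal 1-D constant-velocity variant can be sketched as follows (illustrative noise parameters, not the thesis's tracker).

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=0.05, r=0.25):
    """1-D constant-velocity Kalman filter.
    State x = [position, velocity]; measurements are noisy positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # motion model
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process-noise covariance
    R = np.array([[r]])                     # measurement-noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    track = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        track.append(float(x[0, 0]))
    return track
```

A multi-model tracker runs several such filters with different motion models in parallel and weighs their predictions, which is what handles abrupt trajectory changes.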
14

Cuvillier, Philippe. "On temporal coherency of probabilistic models for audio-to-score alignment." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066532/document.

Abstract:
This thesis deals with the automatic alignment of audio recordings with their corresponding music scores. We study algorithmic solutions for this problem in the framework of probabilistic models that represent the hidden evolution on the music score as a stochastic process. We begin by investigating the theoretical foundations of the design of such models through an axiomatic approach based on a peculiarity of the application: music scores provide a nominal duration for each event, which is a hint about its actual, unknown duration. Modeling this specific temporal structure through stochastic processes is thus the main problem we address. We define temporal coherency as compliance with such prior information and refine this abstract notion by stating two criteria of coherency. Focusing on hidden semi-Markov models, we demonstrate that coherency is guaranteed by specific mathematical conditions on the probabilistic design, and that fulfilling these prescriptions significantly improves the precision of alignment algorithms. These conditions are derived by combining two fields of mathematics, Lévy processes and total positivity of order 2, which is why the second part of this work is a theoretical investigation that extends existing results in the related literature.
15

Cuvillier, Philippe. "On temporal coherency of probabilistic models for audio-to-score alignment." Electronic Thesis or Diss., Paris 6, 2016. http://www.theses.fr/2016PA066532.

16

Viricel, Clement. "Contributions au développement d'outils computationnels de design de protéine : méthodes et algorithmes de comptage avec garantie." Thesis, Toulouse, INSA, 2017. http://www.theses.fr/2017ISAT0019/document.

Abstract:
This thesis is focused on two intrinsically related subjects: the computation of the normalizing constant of a Markov random field and the estimation of the binding affinity of protein-protein interactions. First, to tackle this #P-complete counting problem, we developed Z*, based on the pruning of negligible potential quantities. It has been shown to be more efficient than various state-of-the-art methods on instances derived from protein-protein interaction models. Then, we developed #HBFS, an anytime guaranteed counting algorithm which proved to be even better than its predecessor. Finally, we developed BTDZ, an exact algorithm based on tree decomposition; BTDZ has already proven its efficiency on instances from coiled-coil protein interactions. These algorithms all rely on methods stemming from graphical models: local consistencies, variable elimination and tree decomposition. With the help of existing optimization algorithms, Z* and Rosetta energy functions, we developed an open-source package that estimates the binding affinity of a set of mutants in a protein-protein interaction. We statistically analyzed our estimates on a database of binding affinities and compared them with two state-of-the-art approaches; our tool turns out to be qualitatively better than these methods.
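The quantity at the heart of this entry, the normalizing constant Z of a Markov random field, can be computed by brute-force enumeration on toy instances (a sketch of the problem statement only; the Z* algorithm described above prunes this exponential sum rather than enumerating it):

```python
import itertools
import numpy as np

def brute_force_Z(unary, pairwise, edges):
    """Normalizing constant of a pairwise MRF by full enumeration.
    unary: list of (k,) potential arrays, one per variable;
    pairwise: dict mapping an edge (i, j) to a (k, k) potential array."""
    n = len(unary)
    k = len(unary[0])
    Z = 0.0
    for config in itertools.product(range(k), repeat=n):
        w = 1.0
        for i in range(n):
            w *= unary[i][config[i]]
        for (i, j) in edges:
            w *= pairwise[(i, j)][config[i], config[j]]
        Z += w
    return Z

# A 2-variable chain where Z can be checked by hand:
u = [np.array([1.0, 2.0]), np.array([1.0, 1.0])]
pw = {(0, 1): np.array([[1.0, 0.5], [0.5, 1.0]])}
print(brute_force_Z(u, pw, edges=[(0, 1)]))  # 1 + 0.5 + 1 + 2 = 4.5
```

The sum has k^n terms, which is why guaranteed pruning and tree-decomposition methods such as those developed in the thesis are needed at realistic sizes.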
17

Vincent, Thomas. "Modèles hémodynamiques spatiaux adaptatifs pour l'imagerie cérébrale fonctionnelle." Paris 11, 2010. http://www.theses.fr/2010PA112365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les approches développées dans cette thèse s'inscrivent au sein des méthodes d'analyse en imagerie cérébrale fonctionnelle (ICF) cherchant à caractériser la spécialisation des structures cérébrales. La technique centrale d'ICF fut l'imagerie par résonance magnétique fonctionnelle (IRMf) qui fournit une mesure indirecte, hémodynamique, de l'activité neuronale. Les méthodes d'analyse portant sur ces données se divisent classiquement en : (i) une tâche de localisation des activations et (ii) une tâche d'estimation de la fonction de réponse hémodynamique (FRH) faisant le lien entre les stimulations du paradigme et le signal d'IRMf observé. Cette thèse traite les tâches (i) et (ii) simultanément en un modèle de détection-estimation conjointe (DEC), respectant l'interdépendance évidente de ces deux processus. L'approche DEC a été ici étendue pour exprimer un modèle de corrélation spatiale sur les niveaux de réponse locaux associés à la FRH, rendant l'approche multivariée tant pour la détection que pour l'estimation. Dans le cadre bayésien, cette modélisation s'opère par l'expression d'un a priori par champ de Markov discret faisant intervenir un facteur de régularisation. Un traitement du cerveau entier non supervisé pour ce paramètre a été mis en place, prenant en compte l'hétérogénéité des géométries des régions cérébrales. L'approche est validée sur la surface corticale, mais également dans le volume, à travers plusieurs analyses de groupe dans des conditions d'acquisition différentes. Ces dernières ont permis d'évaluer l'impact de la méthode en termes de significativité des activations ainsi que son positionnement par rapport à l'approche classique.
The approaches developed in this PhD take place in the analysis of functional brain imaging, seeking to characterize the specialization of brain structures. The central modality was functional magnetic resonance imaging (fMRI), which provides an indirect, hemodynamic measure of neural activity. Data analysis methods are conventionally divided into: (i) a localization task for activations and (ii) an estimation task, i.e., characterizing the hemodynamic response function (HRF) linking the stimulations provided by the paradigm to the observed fMRI signal. This PhD addresses tasks (i) and (ii) simultaneously in a joint detection-estimation (JDE) model, respecting the obvious interdependence of these two processes. The JDE approach has been extended here to express a model of spatial correlation on the local response levels associated with the HRF, making the approach multivariate for the detection as well as the estimation task. In the Bayesian framework, this modeling is achieved through a discrete Markov field prior involving a regularization factor. An unsupervised whole-brain treatment of this parameter has been developed, adaptively taking into account the heterogeneity of the geometries of brain regions. The approach is validated on the cortical surface, but also in the volume, through several group analyses under different acquisition conditions. These were used to assess the impact of the method in terms of significance of activations, and its positioning relative to the classical approach.
18

Chen, Qian. "Bayesian Methods for Estimation, Inference and Forecasting of Flexible Models for Value-at-Risk and Tail Conditional Expectations." Thesis, The University of Sydney, 2011. http://hdl.handle.net/2123/7863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Forecasting financial risk and risk measurement methods have been of increasing interest to financial market regulators and financial institutions in the past two decades. While parametric and semi-parametric models have been widely reviewed in the academic literature, non-parametric methods are popular in practice among financial institutions. This thesis examines forecasting models for Value-at-Risk (VaR) and conditional Value-at-Risk for financial return series. The aims of this thesis are to: 1. estimate and forecast the potential skewness and dynamics in higher moments of conditional return distributions; 2. develop flexible parametric models that can accurately forecast portfolio tail risk levels; 3. examine the impacts of asymmetry in the volatility, and in the shape of the conditional return distribution, on risk level forecasting; 4. derive an easily applicable backtesting method for conditional VaR or expected shortfall; 5. improve the efficiency and accuracy of Bayesian computational schemes for parameter estimation and forecasting. To achieve the above goals, this thesis first proposes a parametric approach to estimating and forecasting Value-at-Risk (VaR) and Expected Shortfall (ES) for a heteroscedastic financial return series. A GJR-GARCH model is used for the volatility process, capturing the leverage effect. To account for potential skewness and heavy tails, the model assumes an asymmetric Laplace (AL) distribution as the conditional distribution of the financial return series. Furthermore, dynamics in higher moments are captured by allowing for a time-varying shape parameter in this distribution. An adaptive Markov chain Monte Carlo (MCMC) sampling scheme is used for estimation, employing the Metropolis-Hastings (MH) algorithm with a mixture of Gaussian proposal distributions. A simulation study shows accurate estimation and improved inference of parameters in comparison with a single-Gaussian-proposal MH method. 
We illustrate the model by applying it to forecast return series from four international stock market indices, as well as two exchange rates, and generating one-step-ahead forecasts of VaR and ES. We apply standard and non-standard tests to these forecasts, as well as to those from some competing methods, and find that the proposed model performs favourably compared to many popular competitors; in particular, it is the only conservative model of risk among the models considered in this work over the period studied, which includes the recent financial crisis. However, an AL conditional distribution may forecast risk too conservatively, and over-estimate the risk levels by a factor of two. In other words, the model implies the necessity for financial institutions to set aside up to twice as much regulatory capital as they need. With fixed total capital, the capital available to invest is reduced, leading to a lowered profit potential. To address this dilemma, this study develops and employs a two-sided Weibull (TW) distribution to capture potential skewness and fat-tailed behaviour in the conditional financial return distribution for the purposes of risk measurement and management, specifically focusing on the forecasting of VaR and conditional VaR measures. Four volatility model specifications, including both symmetric and nonlinear versions, are considered to capture heteroscedasticity. An adaptive Bayesian MCMC scheme is devised for estimation, inference, and forecasting. A range of conditional return distributions (TW, AL, symmetric, and skewed Student t) are combined with the four volatility specifications to forecast risk measures. 
The study finds that the GARCH-type volatility specification is much less important than that of the conditional distribution and, while the Student t distribution performs particularly well on VaR forecasting, the two-sided Weibull performs at least equally well for VaR, but the most favourably for conditional VaR forecasting, both prior to as well as during and after the recent financial crisis. Nonetheless, the TW distribution can be bimodal, while the conditional distribution of real financial return series is known to be uni-modal. To address this issue, this study develops a partitioned distribution, combining the Weibull tails with a uni-modal AL centre. The proposed distribution is combined with the GJR-GARCH volatility model, to estimate and forecast the VaR and conditional VaR. The estimation is via an adaptive MCMC sampling scheme and the MH algorithm, with a more general and flexible mixture of Student t proposal distributions. A simulation study demonstrates the estimation is marginally closer to the true values than with the mixture of Gaussian proposal distributions. The model is illustrated via application to real financial return series, generating one-day-ahead forecasts, and is compared with several competing models. The forecasts are evaluated by formal and non-formal backtesting methods. The model-fitting performances are demonstrated by a range of residual tests. We find the partitioned distribution forecasts financial tail risks slightly less accurately than the TW, but is most favoured by the residual tests.
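The VaR and conditional VaR (expected shortfall) measures forecast throughout this thesis can be made concrete with a small empirical sketch. The snippet below is not the thesis's Bayesian GJR-GARCH-AL/TW machinery: it simulates a plain GJR-GARCH(1,1) with Gaussian innovations (all parameter values hypothetical) and reads off empirical VaR and ES from the simulated loss distribution:

```python
import random

def var_es(returns, alpha=0.01):
    """Empirical one-period Value-at-Risk and Expected Shortfall at level
    alpha, reported as positive loss quantities."""
    losses = sorted(-r for r in returns)
    k = max(1, int(round(alpha * len(losses))))
    tail = losses[-k:]            # the k worst losses
    var = tail[0]                 # (1 - alpha)-quantile of the losses
    es = sum(tail) / len(tail)    # mean loss beyond VaR
    return var, es

def simulate_gjr_garch(n, omega=1e-6, a=0.04, gamma=0.1, b=0.9, seed=1):
    """GJR-GARCH(1,1): negative returns (leverage effect) add gamma to the
    ARCH coefficient. Starts at the unconditional variance."""
    rng = random.Random(seed)
    h = omega / (1 - a - gamma / 2 - b)
    r, out = 0.0, []
    for _ in range(n):
        h = omega + (a + gamma * (r < 0)) * r * r + b * h
        r = rng.gauss(0.0, h ** 0.5)
        out.append(r)
    return out

returns = simulate_gjr_garch(10000)
var99, es99 = var_es(returns, alpha=0.01)  # ES exceeds VaR by construction
```

Since ES averages the losses beyond the VaR threshold, `es99 >= var99` always holds, which is the basic coherence property that makes conditional VaR attractive for backtesting.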
19

Kéchichian, Razmig. "Structural priors for multiobject semi-automatic segmentation of three-dimensional medical images via clustering and graph cut algorithms." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00967381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We develop a generic Graph Cut-based semiautomatic multiobject image segmentation method principally for use in routine medical applications, ranging from tasks involving few objects in 2D images to fairly complex near whole-body 3D image segmentation. The flexible formulation of the method allows its straightforward adaptation to a given application. In particular, the graph-based vicinity prior model we propose, defined as shortest-path pairwise constraints on the object adjacency graph, can be easily reformulated to account for the spatial relationships between objects in a given problem instance. The segmentation algorithm can be tailored to the runtime requirements of the application and the online storage capacities of the computing platform by an efficient and controllable Voronoi tessellation clustering of the input image which achieves a good balance between cluster compactness and boundary adherence criteria. Qualitative and quantitative comprehensive evaluation and comparison with the standard Potts model confirm that the vicinity prior model brings significant improvements in the correct segmentation of distinct objects of identical intensity, the accurate placement of object boundaries and the robustness of segmentation with respect to clustering resolution. Comparative evaluation of the clustering method with competing ones confirms its benefits in terms of runtime and quality of produced partitions. Importantly, compared to voxel segmentation, the clustering step improves both overall runtime and memory footprint of the segmentation process up to an order of magnitude virtually without compromising the segmentation quality.
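As a geometric intuition for the clustering step described above, the following toy sketch assigns grid pixels to their nearest seed, i.e. a plain Voronoi tessellation (the thesis's method additionally balances cluster compactness against image-boundary adherence, which this simplified version ignores; the grid size and seed positions are hypothetical):

```python
def voronoi_clusters(width, height, seeds):
    """Label each pixel of a width x height grid with the index of its
    nearest seed under squared Euclidean distance: a plain Voronoi
    tessellation of the pixel grid."""
    labels = {}
    for x in range(width):
        for y in range(height):
            labels[(x, y)] = min(
                range(len(seeds)),
                key=lambda k: (x - seeds[k][0]) ** 2 + (y - seeds[k][1]) ** 2,
            )
    return labels

labels = voronoi_clusters(8, 8, seeds=[(1, 1), (6, 6)])
# pixels near (1,1) fall in cluster 0, pixels near (6,6) in cluster 1
```

Graph Cut segmentation is then run on these clusters instead of on individual voxels, which is where the order-of-magnitude runtime and memory savings reported above come from.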
20

Jiménez, Rojas Francisco. "Los grupos de empresa y la relación individual de trabajo en el marco de una economía productiva descentralizada." Doctoral thesis, Universidad de Murcia, 2012. http://hdl.handle.net/10803/87344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
La organización productiva descentralizada y flexible que, bajo el impulso de las nuevas tecnologías y la globalización, viene sustituyendo a partir del último cuarto del siglo XX al fordismo de inspiración keynesiana, está deteriorando los mercados laborales, lo que supone una precarización de las condiciones de empleo, un notable repliegue de los «Estados del bienestar» y la desactivación del factor trabajo. Superado el tradicional principio de «unicidad» empresarial, un empresario «complejo» y múltiple –el grupo de empresas-, caracterizado por su dificultad identificatoria, absorbe un protagonismo creciente, en un contexto normativo-laboral casi desregulado, en el que al margen del fraude, la dirección unitaria de las empresas agrupadas no implica deducir de su funcionamiento una responsabilidad (solidaria). En esa «cierta unidad económica» que constituye el grupo, se detecta un punto de conflicto o desconexión, entre las facultades empresariales decisorias -unidad de decisión- y las organizativas –dependencia y ajenidad de frutos-.
The decentralized and flexible productive organization, boosted by globalization and the new information and knowledge technologies, has been replacing the Keynesian-inspired Fordist model since the last quarter of the 20th century; it has also been deteriorating labour markets, which involves a precarization of employment conditions, a marked retreat of the “welfare states” and the deactivation of the labour factor. Once the traditional principle of business uniqueness has been overcome, a complex and multiple employer arises: the corporate group. This employer is characterized by the difficulty of identifying it, and acquires an increasingly prominent role within an almost deregulated labour regulatory context in which, fraud aside, the unitary management of the grouped companies does not imply deducing a (joint and several) liability from their operation. Within that “certain economic unit” constituted by the group, a point of conflict, or disconnection, is detected between the employer's decision-making powers (unity of decision) and the organizational ones (dependence and another's appropriation of the fruits of the work).
21

"Model-based clustering with network covariates by combining a modified product partition model with hidden Markov random field." Thesis, 2012. http://library.cuhk.edu.hk/record=b5549146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
乘積型劃分模型最近被擴展為容許個體有協變量的隨機聚類模型,然而協變量受限與對個體性質的描述。隨著科技發展,於越來越多生物醫學或社會研究的聚類問題中,我們需要考慮聚類對象間兩兩關連的額外資料,如基因間的調節關係或人際關係中的社交網絡。為此我們提出一個基於模型的方法,綜合乘積型劃分模型的一種改型與隱馬可夫隨機場對有網絡和協變量信息的對象做聚類。統計推論以貝葉斯方法進行。模型計算以馬可夫鏈蒙地卡羅運算法則進行。為了使馬可夫鏈能更好地混和,使用循序分配合併分裂取樣器進行群體移動以減少困於區域性頂點的機會。
為了測試本文提出的新方法的聚類性能，我們在兩個合成數據集上進行了模擬實驗。該實驗涵括多種類型的應變量，協變量網絡結構。結果顯示該方法在大部分實驗條件下都具有高正確聚類率。我們還將此方法應用於兩個真實數據集。第一個真實數據集利用學術期刊間相互引用的信息幫助對學術期刊的分門別類。第二個真實數據集合併酵母中基因的表達、轉錄因子結合位點和基因間的調控網絡信息，已對基因做詳細的功能分類。這兩個基於真實數據的實驗都給出諸多有意義的結果。
The product partition model was recently extended for the covariate-dependent random partition of subjects, where the covariates are limited to properties of individual subjects. For many clustering problems in biomedical or social studies, we often have extra clustering information from the pairwise association among subjects, such as the regulatory relationship between genes or the social network among people. Here we propose a model-based method for clustering with network information by combining a modified product partition model with a hidden Markov random field. The Bayesian approach is used for statistical inference. Markov Chain Monte Carlo algorithms are used to compute the model. In order to improve the mixing of the chain, the Sequentially-Allocated Merge-Split Sampler is adapted to perform group moves in an effort to lower the chance of trapping in local modes.
The new method is tested on two synthesized data sets to evaluate its performance on different types of response variables, covariates and networks. The correct clustering rate is satisfactory under a wide range of conditions. We also applied the new method to two real data sets. The first real data set is the journal data, where cross-citation information among journals is used to group journals into different categories. The second real data set involves the gene expression, motif binding and gene network of yeast, where the goal is to find detailed gene functional groups. Both experiments yielded interesting results.
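The network prior at the heart of the proposed model can be illustrated with a toy Potts-type score: under a hidden Markov random field, a cluster assignment gains (unnormalized) log-prior mass for every network edge whose endpoints share a label. A minimal sketch with a hypothetical 5-node chain network, not the thesis's full MCNC model:

```python
def potts_log_prior(labels, edges, beta=1.0):
    """Unnormalized log-prior of a cluster assignment under a Potts-type
    hidden Markov random field: each network edge whose two endpoints
    carry the same label contributes +beta."""
    return beta * sum(labels[i] == labels[j] for i, j in edges)

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]          # toy 5-node chain network
same = potts_log_prior([0, 0, 0, 1, 1], edges)    # 3 concordant edges
split = potts_log_prior([0, 1, 0, 1, 0], edges)   # 0 concordant edges
```

In the full model this network term is combined with the covariate-dependent product partition prior and the response likelihood, and the MCMC sampler trades these terms off when proposing merge-split group moves.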
Detailed summary in vernacular field only.
Fung, Ling Hiu.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2012.
Abstracts also in Chinese.
Abstract --- p.i
Acknowledgement --- p.iv
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Technical Background --- p.7
Chapter 2.1 --- Variable notation --- p.8
Chapter 2.2 --- Two exemplary models for the response variable --- p.10
Chapter 2.3 --- PPMx --- p.12
Chapter 2.3.1 --- PPM - definition and its equivalence to DPM --- p.12
Chapter 2.3.2 --- PPMx - extension with covariates --- p.15
Chapter 2.3.3 --- Posterior inference --- p.18
Chapter 2.4 --- HMRF --- p.19
Chapter 2.4.1 --- Definition --- p.19
Chapter 2.4.2 --- Constrained Dirichlet Process Mixture --- p.21
Chapter 3 --- Model-based Clustering with Network Covariates --- p.27
Chapter 3.1 --- Design of the model --- p.27
Chapter 3.2 --- The Bayesian MCNC model --- p.30
Chapter 3.3 --- MCMC computing --- p.31
Chapter 3.4 --- Performance evaluation criteria --- p.37
Chapter 4 --- Simulation study --- p.39
Chapter 4.1 --- Network --- p.39
Chapter 4.2 --- Covariates --- p.41
Chapter 4.3 --- The Phase model (M1) --- p.42
Chapter 4.4 --- The Normal model (M2) --- p.52
Chapter 4.5 --- Comparing correct clustering percentage and correct co-occurrence percentage --- p.62
Chapter 5 --- Real data --- p.68
Chapter 5.1 --- Journal cross-citation data --- p.68
Chapter 5.2 --- Gene Network of yeast data --- p.76
Chapter 6 --- Conclusions --- p.89
Chapter A --- p.91
Chapter A.1 --- Covariates --- p.91
Chapter A.1.1 --- Continuous covariates --- p.91
Chapter A.1.2 --- Categorical covariates --- p.94
Chapter A.1.3 --- Count covariates --- p.96
Chapter A.2 --- Phase model --- p.98
Chapter A.2.1 --- Prior specification --- p.99
Chapter A.2.2 --- Data generation --- p.99
Chapter A.2.3 --- Posterior estimation --- p.100
Chapter A.3 --- Normal model --- p.111
Chapter A.3.1 --- Prior specification --- p.111
Chapter A.3.2 --- Data generation --- p.112
Chapter A.3.3 --- Posterior estimation --- p.112
Chapter A.4 --- Journal dataset --- p.115
22

"Structural equation models with continuous and polytomous variables: comparisons on the bayesian and the two-stage partition approaches." 2003. http://library.cuhk.edu.hk/record=b5891707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Chung Po-Yi.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2003.
Includes bibliographical references (leaves 33-34).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Bayesian Approach --- p.4
Chapter 2.1 --- Model Description --- p.5
Chapter 2.2 --- Identification --- p.6
Chapter 2.3 --- Bayesian Analysis of the Model --- p.8
Chapter 2.3.1 --- Posterior Analysis --- p.8
Chapter 2.3.2 --- The Gibbs Sampler --- p.9
Chapter 2.3.3 --- Conditional Distributions --- p.10
Chapter 2.4 --- Bayesian Estimation --- p.13
Chapter 3 --- Two-stage Partition Approach --- p.15
Chapter 3.1 --- First Stage: PRELIS --- p.15
Chapter 3.2 --- Second Stage: LISREL --- p.17
Chapter 3.2.1 --- Model Description --- p.17
Chapter 3.2.2 --- Identification --- p.17
Chapter 3.2.3 --- LISREL Analysis of the Model --- p.18
Chapter 4 --- Comparison --- p.19
Chapter 4.1 --- Simulation Studies --- p.19
Chapter 4.2 --- Real Data Studies --- p.28
Chapter 5 --- Conclusion & Discussion --- p.30
Chapter A --- Tables for the Two Approaches --- p.35
Chapter B --- Manifest variables in the ICPSR examples --- p.51
Chapter C --- PRELIS & LISREL Scripts for Simulation Studies --- p.52
23

WANG, YI NUO, and 王一諾. "Experiment on the influence of Fire resistance on market selling non-load-bearing metal stud partition walls to a standard fire." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/16425982163590593207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Architecture
102
As society develops, building projects are becoming larger, taller and more complicated. Traditional labour-intensive construction is being replaced by new methods, such as non-load-bearing metal-stud calcium silicate board walls. These walls have many advantages, such as a unified construction method and shorter construction time. Both sides of the Taiwan Strait, as well as other countries, have explicit standards for the fire-resistance testing of structural building components; however, these standards do not cover fire wall assemblies with switch boxes exposed to a standard fire. Moreover, some materials sold on the market do not match the quality of laboratory materials. These safety risks exist in daily life. Through a literature review and experiments with market-sold materials, this study investigates the fire resistance of market-sold non-load-bearing metal-stud calcium silicate board wall assemblies with switch boxes exposed to a standard fire, and the differences between market-sold and laboratory materials.
24

Desjardins, Guillaume. "Improving sampling, optimization and feature extraction in Boltzmann machines." Thèse, 2013. http://hdl.handle.net/1866/10550.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
L’apprentissage supervisé de réseaux hiérarchiques à grande échelle connaît présentement un succès fulgurant. Malgré cette effervescence, l’apprentissage non supervisé représente toujours, selon plusieurs chercheurs, un élément clé de l’Intelligence Artificielle, où les agents doivent apprendre à partir d’un nombre potentiellement limité de données. Cette thèse s’inscrit dans cette pensée et aborde divers sujets de recherche liés au problème d’estimation de densité par l’entremise des machines de Boltzmann (BM), modèles graphiques probabilistes au coeur de l’apprentissage profond. Nos contributions touchent les domaines de l’échantillonnage, l’estimation de fonctions de partition, l’optimisation ainsi que l’apprentissage de représentations invariantes. Cette thèse débute par l’exposition d’un nouvel algorithme d’échantillonnage adaptatif, qui ajuste (de façon automatique) la température des chaînes de Markov sous simulation, afin de maintenir une vitesse de convergence élevée tout au long de l’apprentissage. Lorsqu’utilisé dans le contexte de l’apprentissage par maximum de vraisemblance stochastique (SML), notre algorithme engendre une robustesse accrue face à la sélection du taux d’apprentissage, ainsi qu’une meilleure vitesse de convergence. Nos résultats sont présentés dans le domaine des BMs, mais la méthode est générale et applicable à l’apprentissage de tout modèle probabiliste exploitant l’échantillonnage par chaînes de Markov. Tandis que le gradient du maximum de vraisemblance peut être approximé par échantillonnage, l’évaluation de la log-vraisemblance nécessite un estimé de la fonction de partition. Contrairement aux approches traditionnelles qui considèrent un modèle donné comme une boîte noire, nous proposons plutôt d’exploiter la dynamique de l’apprentissage en estimant les changements successifs de log-partition encourus à chaque mise à jour des paramètres. 
Le problème d’estimation est reformulé comme un problème d’inférence similaire au filtre de Kalman, mais sur un graphe bi-dimensionnel, où les dimensions correspondent aux axes du temps et au paramètre de température. Sur le thème de l’optimisation, nous présentons également un algorithme permettant d’appliquer, de manière efficace, le gradient naturel à des machines de Boltzmann comportant des milliers d’unités. Jusqu’à présent, son adoption était limitée par son haut coût computationnel ainsi que sa demande en mémoire. Notre algorithme, Metric-Free Natural Gradient (MFNG), permet d’éviter le calcul explicite de la matrice d’information de Fisher (et son inverse) en exploitant un solveur linéaire combiné à un produit matrice-vecteur efficace. L’algorithme est prometteur : en termes du nombre d’évaluations de fonctions, MFNG converge plus rapidement que SML. Son implémentation demeure malheureusement inefficace en temps de calcul. Ces travaux explorent également les mécanismes sous-jacents à l’apprentissage de représentations invariantes. À cette fin, nous utilisons la famille de machines de Boltzmann restreintes “spike & slab” (ssRBM), que nous modifions afin de pouvoir modéliser des distributions binaires et parcimonieuses. Les variables latentes binaires de la ssRBM peuvent être rendues invariantes à un sous-espace vectoriel, en associant à chacune d’elles un vecteur de variables latentes continues (dénommées “slabs”). Ceci se traduit par une invariance accrue au niveau de la représentation et un meilleur taux de classification lorsque peu de données étiquetées sont disponibles. Nous terminons cette thèse sur un sujet ambitieux : l’apprentissage de représentations pouvant séparer les facteurs de variations présents dans le signal d’entrée. Nous proposons une solution à base de ssRBM bilinéaire (avec deux groupes de facteurs latents) et formulons le problème comme l’un de “pooling” dans des sous-espaces vectoriels complémentaires.
Despite the current widescale success of deep learning in training large scale hierarchical models through supervised learning, unsupervised learning promises to play a crucial role towards solving general Artificial Intelligence, where agents are expected to learn with little to no supervision. The work presented in this thesis tackles the problem of unsupervised feature learning and density estimation, using a model family at the heart of the deep learning phenomenon: the Boltzmann Machine (BM). We present contributions in the areas of sampling, partition function estimation, optimization and the more general topic of invariant feature learning. With regards to sampling, we present a novel adaptive parallel tempering method which dynamically adjusts the temperatures under simulation to maintain good mixing in the presence of complex multi-modal distributions. When used in the context of stochastic maximum likelihood (SML) training, the improved ergodicity of our sampler translates to increased robustness to learning rates and faster per epoch convergence. Though our application is limited to BM, our method is general and is applicable to sampling from arbitrary probabilistic models using Markov Chain Monte Carlo (MCMC) techniques. While SML gradients can be estimated via sampling, computing data likelihoods requires an estimate of the partition function. Contrary to previous approaches which consider the model as a black box, we provide an efficient algorithm which instead tracks the change in the log partition function incurred by successive parameter updates. Our algorithm frames this estimation problem as one of filtering performed over a 2D lattice, with one dimension representing time and the other temperature. On the topic of optimization, our thesis presents a novel algorithm for applying the natural gradient to large scale Boltzmann Machines. 
Up until now, its application had been constrained by the computational and memory requirements of computing the Fisher Information Matrix (FIM), which is square in the number of parameters. The Metric-Free Natural Gradient algorithm (MFNG) avoids computing the FIM altogether by combining a linear solver with an efficient matrix-vector operation. The method shows promise in that the resulting updates yield faster per-epoch convergence, despite being slower in terms of wall clock time. Finally, we explore how invariant features can be learnt through modifications to the BM energy function. We study the problem in the context of the spike & slab Restricted Boltzmann Machine (ssRBM), which we extend to handle both binary and sparse input distributions. By associating each spike with several slab variables, latent variables can be made invariant to a rich, high dimensional subspace resulting in increased invariance in the learnt representation. When using the expected model posterior as input to a classifier, increased invariance translates to improved classification accuracy in the low-label data regime. We conclude by showing a connection between invariance and the more powerful concept of disentangling factors of variation. While invariance can be achieved by pooling over subspaces, disentangling can be achieved by learning multiple complementary views of the same subspace. In particular, we show how this can be achieved using third-order BMs featuring multiplicative interactions between pairs of random variables.
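The adaptive parallel tempering sampler summarized above builds on the standard replica-exchange scheme: several Metropolis chains target the same distribution raised to different inverse temperatures, and adjacent chains propose to swap states so that the hot chains' free exploration reaches the cold chain. The following is a minimal non-adaptive sketch on a bimodal toy target (temperature ladder and step size are hypothetical; the thesis's contribution, not shown here, is tuning the ladder automatically during learning):

```python
import math
import random

def parallel_tempering(log_p, temps, n_steps, step=1.0, seed=0):
    """Minimal parallel tempering: one Gaussian random-walk Metropolis
    update per chain (chain k targets p(x)^(1/temps[k])), then a swap
    proposal between a random pair of adjacent temperatures. Returns the
    trace of the cold chain (temps[0] must be 1.0)."""
    rng = random.Random(seed)
    x = [0.0] * len(temps)
    trace = []
    for _ in range(n_steps):
        for k, t in enumerate(temps):
            prop = x[k] + rng.gauss(0.0, step)
            if math.log(rng.random()) < (log_p(prop) - log_p(x[k])) / t:
                x[k] = prop
        k = rng.randrange(len(temps) - 1)  # propose swapping chains k, k+1
        d = (1 / temps[k] - 1 / temps[k + 1]) * (log_p(x[k + 1]) - log_p(x[k]))
        if math.log(rng.random()) < d:
            x[k], x[k + 1] = x[k + 1], x[k]
        trace.append(x[0])
    return trace

# Bimodal target: equal mixture of N(-3, 1) and N(3, 1), unnormalized.
log_p = lambda v: math.log(math.exp(-0.5 * (v + 3) ** 2) + math.exp(-0.5 * (v - 3) ** 2))
draws = parallel_tempering(log_p, temps=[1.0, 2.0, 4.0, 8.0], n_steps=20000)
```

A single Metropolis chain at temperature 1 would typically get stuck in whichever mode it finds first; with the temperature ladder, the cold chain visits both modes, which is the improved mixing the thesis's adaptive scheme maintains automatically during SML training.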
25

Dharmasena, Kalu Arachchillage Senarath. "The Non-alcoholic Beverage Market in the United States: Demand Interrelationships, Dynamics, Nutrition Issues and Probability Forecast Evaluation." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-7911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
There are many different types of non-alcoholic beverages (NAB) available in the United States today compared to a decade ago. Additionally, the needs of beverage consumers have evolved over the years centering attention on functionality and health dimensions. These trends in volume of consumption are a testament to the growth in the NAB industry. Our study pertains to ten NAB categories. We developed and employed a unique cross-sectional and time-series data set based on Nielsen Homescan data associated with household purchases of NAB from 1998 through 2003. First, we considered demographic and economic profiling of the consumption of NAB in a two-stage model. Race, region, age and presence of children and gender of household head were the most important factors affecting the choice and level of consumption. Second, we used expectation-prediction success tables, calibration, resolution, the Brier score and the Yates partition of the Brier score to measure the accuracy of predictions generated from qualitative choice models used to model the purchase decision of NAB by U.S. households. The Yates partition of the Brier score outperformed all other measures. Third, we modeled demand interrelationships, dynamics and habits of NAB consumption estimating own-price, cross-price and expenditure elasticities. The Quadratic Almost Ideal Demand System, the synthetic Barten model and the State Adjustment Model were used. Soft drinks were substitutes and fruit juices were complements for most of non-alcoholic beverages. Investigation of a proposed tax on sugar-sweetened beverages revealed the importance of centering attention not only to direct effects but also to indirect effects of taxes on beverage consumption. Finally, we investigated factors affecting nutritional contributions derived from consumption of NAB. Also, we ascertained the impact of the USDA year 2000 Dietary Guidelines for Americans associated with the consumption of NAB. 
Significant factors affecting caloric and nutrient intake from NAB were price, employment status of household head, region, race, presence of children and the gender of the household food manager. Furthermore, we found that the USDA nutrition intervention program was successful in reducing caloric and caffeine intake from consumption of NAB. The away-from-home intake of beverages and potential impacts of NAB advertising are not captured in our work. In future work, we plan to address these limitations.
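Among the forecast-evaluation tools listed above, the Brier score and its partitions are easy to state concretely. The sketch below computes the Brier score and Murphy's reliability/resolution/uncertainty partition for forecasts grouped by distinct forecast value; Yates's partition, the one favoured in the thesis, is a closely related covariance-based decomposition of the same score (the toy forecasts and outcomes are hypothetical):

```python
def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def murphy_partition(probs, outcomes):
    """Murphy's partition of the Brier score, grouping observations by
    distinct forecast value: BS = reliability - resolution + uncertainty."""
    n = len(probs)
    base = sum(outcomes) / n                     # base rate of the event
    groups = {}
    for p, y in zip(probs, outcomes):
        groups.setdefault(p, []).append(y)
    rel = sum(len(ys) * (p - sum(ys) / len(ys)) ** 2
              for p, ys in groups.items()) / n   # calibration error
    res = sum(len(ys) * (sum(ys) / len(ys) - base) ** 2
              for ys in groups.values()) / n     # sharpness of forecasts
    unc = base * (1 - base)                      # outcome variance
    return rel, res, unc

probs = [0.2, 0.2, 0.8, 0.8, 0.8]
outcomes = [0, 1, 1, 1, 0]
bs = brier_score(probs, outcomes)
rel, res, unc = murphy_partition(probs, outcomes)  # bs == rel - res + unc
```

The identity BS = reliability - resolution + uncertainty separates calibration from discrimination, which is why such partitions are more informative for model comparison than the raw score alone.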
26

Peyret, Thomas. "Développement de modèles prédictifs de la toxicocinétique de substances organiques." Thèse, 2013. http://hdl.handle.net/1866/9231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les modèles pharmacocinétiques à base physiologique (PBPK) permettent de simuler la dose interne de substances chimiques sur la base de paramètres spécifiques à l’espèce et à la substance. Les modèles de relation quantitative structure-propriété (QSPR) existants permettent d’estimer les paramètres spécifiques au produit (coefficients de partage (PC) et constantes de métabolisme) mais leur domaine d’application est limité par leur manque de considération de la variabilité de leurs paramètres d’entrée ainsi que par leur domaine d’application restreint (c.-à-d., substances contenant CH3, CH2, CH, C, C=C, H, Cl, F, Br, cycle benzénique et H sur le cycle benzénique). L’objectif de cette étude est de développer de nouvelles connaissances et des outils afin d’élargir le domaine d’application des modèles QSPR-PBPK pour prédire la toxicocinétique de substances organiques inhalées chez l’humain. D’abord, un algorithme mécaniste unifié a été développé à partir de modèles existants pour prédire les PC de 142 médicaments et polluants environnementaux aux niveaux macro (tissu et sang) et micro (cellule et fluides biologiques) à partir de la composition du tissu et du sang et de propriétés physicochimiques. L’algorithme résultant a été appliqué pour prédire les PC tissu:sang, tissu:plasma et tissu:air du muscle (n = 174), du foie (n = 139) et du tissu adipeux (n = 141) du rat pour des médicaments acides, basiques et neutres ainsi que pour des cétones, esters d’acétate, éthers, alcools, hydrocarbures aliphatiques et aromatiques. Un modèle de relation quantitative propriété-propriété (QPPR) a été développé pour la clairance intrinsèque (CLint) in vivo (calculée comme le ratio du Vmax (μmol/h/kg poids de rat) sur le Km (μM)) de substrats du CYP2E1 (n = 26), en fonction du PC n-octanol:eau, du PC sang:eau et du potentiel d’ionisation. 
Les prédictions du QPPR, représentées par les limites inférieures et supérieures de l’intervalle de confiance à 95% à la moyenne, furent ensuite intégrées dans un modèle PBPK humain. Subséquemment, l’algorithme de PC et le QPPR pour la CLint furent intégrés avec des modèles QSPR pour les PC hémoglobine:eau et huile:air pour simuler la pharmacocinétique et la dosimétrie cellulaire d’inhalation de composés organiques volatiles (COV) (benzène, 1,2-dichloroéthane, dichlorométhane, m-xylène, toluène, styrène, 1,1,1 trichloroéthane et 1,2,4 trimethylbenzène) avec un modèle PBPK chez le rat. Finalement, la variabilité de paramètres de composition des tissus et du sang de l’algorithme pour les PC tissu:air chez le rat et sang:air chez l’humain a été caractérisée par des simulations Monte Carlo par chaîne de Markov (MCMC). Les distributions résultantes ont été utilisées pour conduire des simulations Monte Carlo pour prédire des PC tissu:sang et sang:air. Les distributions de PC, avec celles des paramètres physiologiques et du contenu en cytochrome P450 CYP2E1, ont été incorporées dans un modèle PBPK pour caractériser la variabilité de la toxicocinétique sanguine de quatre COV (benzène, chloroforme, styrène et trichloroéthylène) par simulation Monte Carlo. Globalement, les approches quantitatives mises en œuvre pour les PC et la CLint dans cette étude ont permis l’utilisation de descripteurs moléculaires génériques plutôt que de fragments moléculaires spécifiques pour prédire la pharmacocinétique de substances organiques chez l’humain. La présente étude a, pour la première fois, caractérisé la variabilité des paramètres biologiques des algorithmes de PC pour étendre l’aptitude des modèles PBPK à prédire les distributions, pour la population, de doses internes de substances organiques avant de faire des tests chez l’animal ou l’humain.
Physiologically-based pharmacokinetic (PBPK) models simulate the internal dose metrics of chemicals based on species-specific and chemical-specific parameters. The existing quantitative structure-property relationships (QSPRs) allow to estimate the chemical-specific parameters (partition coefficients (PCs) and metabolic constants) but their applicability is limited by their lack of consideration of variability in input parameters and their restricted application domain (i.e., substances containing CH3, CH2, CH, C, C=C, H, Cl, F, Br, benzene ring and H in benzene ring). The objective of this study was to develop new knowledge and tools to increase the applicability domain of QSPR-PBPK models for predicting the inhalation toxicokinetics of organic compounds in humans. First, a unified mechanistic algorithm was developed from existing models to predict macro (tissue and blood) and micro (cell and biological fluid) level PCs of 142 drugs and environmental pollutants on the basis of tissue and blood composition along with physicochemical properties. The resulting algorithm was applied to compute the tissue:blood, tissue:plasma and tissue:air PCs in rat muscle (n = 174), liver (n = 139) and adipose tissue (n = 141) for acidic, neutral, zwitterionic and basic drugs as well as ketones, acetate esters, alcohols, ethers, aliphatic and aromatic hydrocarbons. Then, a quantitative property-property relationship (QPPR) model was developed for the in vivo rat intrinsic clearance (CLint) (calculated as the ratio of the in vivo Vmax (μmol/h/kg bw rat) to the Km (μM)) of CYP2E1 substrates (n = 26) as a function of n-octanol:water PC, blood:water PC, and ionization potential). The predictions of the QPPR as lower and upper bounds of the 95% mean confidence intervals were then integrated within a human PBPK model. 
Subsequently, the PC algorithm and QPPR for CLint were integrated along with a QSPR model for the hemoglobin:water and oil:air PCs to simulate the inhalation pharmacokinetics and cellular dosimetry of volatile organic compounds (VOCs) (benzene, 1,2-dichloroethane, dichloromethane, m-xylene, toluene, styrene, 1,1,1-trichloroethane and 1,2,4 trimethylbenzene) using a PBPK model for rats. Finally, the variability in the tissue and blood composition parameters of the PC algorithm for rat tissue:air and human blood:air PCs was characterized by performing Markov chain Monte Carlo (MCMC) simulations. The resulting distributions were used for conducting Monte Carlo simulations to predict tissue:blood and blood:air PCs for VOCs. The distributions of PCs, along with distributions of physiological parameters and CYP2E1 content, were then incorporated within a PBPK model, to characterize the human variability of the blood toxicokinetics of four VOCs (benzene, chloroform, styrene and trichloroethylene) using Monte Carlo simulations. Overall, the quantitative approaches for PCs and CLint implemented in this study allow the use of generic molecular descriptors rather than specific molecular fragments to predict the pharmacokinetics of organic substances in humans. In this process, the current study has, for the first time, characterized the variability of the biological input parameters of the PC algorithms to expand the ability of PBPK models to predict the population distributions of the internal dose metrics of organic substances prior to testing in animals or humans.
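The final step the abstract describes, propagating partition-coefficient variability through a PBPK model by Monte Carlo simulation, can be illustrated in miniature. The sketch below is a hypothetical demonstration only: the lognormal distribution parameters and the one-line steady-state relation (Css = Pb × Cair) are invented for illustration and are not taken from the thesis's actual models.

```python
import random
import statistics

# Hypothetical illustration: propagate variability in a blood:air
# partition coefficient (Pb) through a toy steady-state inhalation
# relation, Css = Pb * Cair, by Monte Carlo sampling.
# MU and SIGMA below are invented values, not from the thesis.

random.seed(42)

C_AIR = 1.0           # inhaled air concentration (arbitrary units)
MU, SIGMA = 2.0, 0.3  # log-scale mean / sd of the assumed Pb distribution

# Draw 10,000 Pb samples and compute the corresponding steady-state
# blood concentrations.
samples = [random.lognormvariate(MU, SIGMA) * C_AIR for _ in range(10_000)]

srt = sorted(samples)
mean_css = statistics.mean(samples)
p05, p95 = srt[500], srt[9500]  # empirical 5th and 95th percentiles

print(f"mean Css = {mean_css:.2f}, 90% interval = ({p05:.2f}, {p95:.2f})")
```

The population distribution of the internal dose metric (here, Css), rather than a single point estimate, is what such a simulation yields; the thesis applies the same idea with full PBPK models and MCMC-derived input distributions.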
