Dissertations on the topic "Agrégation de modèles"
Consult the top 50 dissertations for research on the topic "Agrégation de modèles".
Trifi, Amine. "Essais en agrégation, convergence et limites en temps continu des modèles GARCH." Paris 1, 2007. http://www.theses.fr/2007PA010057.
Melquiond, Adrien. "Agrégation de peptides amyloïdes par des simulations numériques." Paris 7, 2007. http://www.theses.fr/2007PA077053.
More than twenty human diseases, including Alzheimer's disease and dialysis-related amyloidosis, are associated with the pathological self-assembly of soluble proteins into transient cytotoxic oligomers and amyloid fibrils. Because this process is very complex, the detailed aggregation paths and structural characterization of the intermediate species remain to be determined. In this work, we first review our current understanding of the dynamics and free energy surface of the assembly of small amyloid-forming peptides (KFFE and the fragment 22-27 of amylin) using the OPEP coarse-grained protein force field (Optimized Potential for Efficient peptide-structure Prediction) coupled to the activation-relaxation technique (ART) and molecular dynamics (MD). Next, we present replica exchange MD simulations (REMD) on the dimers of Alzheimer's peptides Abeta (1-40) and Abeta (1-42) and discuss the role of amino acids 23-28 in fibril formation. Finally, we probe the first steps of Abeta (16-22) oligomer dissociation in the presence of N-methylated inhibitors by MD-OPEP simulations.
Chebaro, Yassmine. "Agrégation des peptides et protéines amyloïdes par simulations numériques." Paris 7, 2010. http://www.theses.fr/2010PA077120.
The aggregation of amyloid peptides and proteins is the hallmark of neurodegenerative diseases. In this thesis, we were interested in Alzheimer's disease and prion-related diseases. The two proteins implicated in these pathologies are the Abeta peptides and the prion protein, respectively. The self-assembly of these soluble proteins into toxic oligomers is difficult to characterize using current experimental techniques, and the formation of amyloid fibrils is a very slow process in vitro and in vivo. The use of in silico simulation methods is therefore a useful tool for a better understanding of the pathological assembly. In this work, we used the coarse-grained force field OPEP, based on a reduced representation of the amino acids, allowing access to longer simulation times in order to identify the structural modifications associated with amyloid assembly. First, we review the coarse-grained model and its associated energy function, followed by its validation on numerous systems. Then, we present the results of OPEP on the identification of the structural and thermodynamical properties of the amyloid peptide Abeta(16-35), the study of the inhibition mechanism on a preformed fibril in the presence of N-methylated inhibitors, and finally the study of de novo synthesized model peptides of amyloid aggregation. The last part of the work concerns the study of the prion protein and the effect of the T183A mutation, which causes Creutzfeldt-Jakob disease, on its structure and stability in the monomeric and dimeric forms of the protein, as well as the aggregation properties of the H1 helix of the prion protein in its monomeric, dimeric and tetrameric states.
Bourel, Mathias. "Agrégation de modèles en apprentissage statistique pour l'estimation de la densité et la classification multiclasse." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4076/document.
Ensemble methods in statistical learning combine several base learners built from the same data set in order to obtain a more stable predictor with better performance. Such methods have been extensively studied in the supervised context for regression and classification. In this work we consider the extension of these approaches to density estimation. We suggest several new algorithms in the same spirit as bagging and boosting. We show the efficiency of combined density estimators through extensive simulations. We also give theoretical results for one of our algorithms (Random Averaged Shifted Histogram) by means of asymptotic convergence under mild conditions. A second part is devoted to extensions of boosting algorithms to the multiclass case. We propose a new algorithm (Adaboost.BG) accounting for the margin of the base classifiers and show its efficiency by simulations, comparing it to the most widely used methods in this context on several datasets from the machine learning benchmark. Partial theoretical results are given for our algorithm, such as the exponential decrease to zero of the misclassification error on the learning set.
Ghattas, Badih. "Agrégation d'arbres de décision binaires : Application à la prévision de l'ozone dans les Bouches du Rhône." Aix-Marseille 2, 2000. http://www.theses.fr/2000AIX22003.
Moulahi, Bilel. "Définition et évaluation de modèles d'agrégation pour l'estimation de la pertinence multidimensionnelle en recherche d'information." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30254/document.
The main research topic of this document revolves around the information retrieval (IR) field. Traditional IR models rank documents by computing single scores separately with respect to one single objective criterion. Recently, an increasing number of IR studies has triggered a resurgence of interest in redefining the algorithmic estimation of relevance, which implies a shift from topical to multidimensional relevance assessment. In our work, we specifically address the multidimensional relevance assessment and evaluation problems. To tackle this challenge, state-of-the-art approaches are often based on linear combination mechanisms. However, these methods rely on the unrealistic hypothesis of additivity and independence of the relevance dimensions, which makes them unsuitable in many real situations where criteria are correlated. Other techniques from the machine learning area have also been proposed. The latter learn a model from example inputs and generalize it to combine the different criteria. Nonetheless, these methods tend to offer only limited insight into how to consider the importance of and the interaction between the criteria. In addition to the sensitivity of the parameters used within these algorithms, it is quite difficult to understand why one criterion is preferred over another. To address this problem, we proposed a model based on a multi-criteria aggregation operator that is able to overcome the problem of additivity. Our model is based on a fuzzy measure that offers semantic interpretations of the correlations and interactions between the criteria. We have adapted this model to multidimensional relevance estimation in two scenarios: (i) a tweet search task and (ii) two personalized IR settings. The second line of research focuses on the integration of the temporal factor in the aggregation process, in order to consider the changes of document collections over time.
To do so, we have proposed a time-aware IR model combining the temporal relevance criterion with the topical one. We then performed a time series analysis to identify the temporal nature of queries, and we proposed an evaluation framework within a time-aware IR setting.
Thill, Antoine. "Agrégation des particules : structure, dynamique et simulation : application au cas d'un écoulement stratifié : l'estuaire du Rhône." Phd thesis, Aix-Marseille 3, 1999. http://tel.archives-ouvertes.fr/tel-00109390.
To carry out this study, several tools first had to be developed. Two new experimental methods were devised to measure the structure of aggregates that escape existing techniques. Beyond providing a better knowledge of the system, these methods make it possible to develop and validate a new model of aggregate growth kinetics. This numerical model takes into account the fractal dimension of the aggregates as well as its variability. It is validated by comparison with data from the literature and from experiments.
A field study in the estuary of the Grand Rhône was carried out under contrasting hydrodynamic conditions (low water, mean discharge and flood). It yielded, for the first time, a series of particle size measurements along the entire mixing zone. It is established that the largest particles (above 5 microns) show an evolution controlled by dilution, sedimentation and possibly resuspension. In contrast, the smaller particles (2 to 5 microns) show an increase in their concentration along the mixing zone. In the early stages of mixing, this increase is linked to the fragmentation of aggregates. It can then be shown i) that colloid aggregation can explain this increase only if the colloids do not react with the larger size fractions and exhibit a reactivity much higher than the average reactivity (alpha: 0.009), and ii) that primary production is a likely mechanism to explain this increase.
Hozé, Nathanaël. "Modélisation et méthodes d'analyse de la diffusion et agrégation au niveau moléculaire pour l'organisation sous-cellulaire." Paris 6, 2013. http://www.theses.fr/2013PA066695.
In the present PhD thesis, we study diffusion and aggregation in the context of cellular biology. Our goal is to obtain physical laws of several processes such as particle assembly or laws of diffusion in microdomains, in order to determine how subcellular processes are constructed from elementary molecular organization. This change of scale can be formulated and analyzed using several tools such as partial differential equations, statistical physics, stochastic processes and numerical simulations. We present here several methods and apply them to study questions in biophysics, neurobiology and cellular biology. Examples are receptor trafficking on the cellular membrane, nuclear organization and the dynamics of viral assembly. In the first part, to obtain an estimation of the effective diffusion coefficient of a Brownian particle moving between obstacles, we compute the mean time for a Brownian particle to arrive at a narrow opening defined as the region of minimal distance between two disks of identical radius. The method relies on the Möbius conformal transformation applied to the Laplace equation. Using this result, we develop statistical methods to solve a reverse engineering problem which consists in recovering parameters of a stochastic equation from a large number of short trajectories. Applying this method to superresolution data of receptor trafficking, we identify novel molecular organizations, described as potential wells (the deterministic part of the SDE). We next solve a different question: how is it possible to reconstruct surfaces from a large sample of short stochastic trajectories? By using Ito's formula, we derive a new class of nonlinear partial differential equations that allow us to reconstruct the surface. This section is illustrated with numerical examples. In the second part, we focus on an aspect of nuclear organization, and our goal is to model and analyze telomere dynamics (the ends of the chromosomes) in the cell nucleus.
Early experimental findings reveal that yeast telomeres organize dynamically into a small number of clusters, yet this process remains poorly understood. Thus, we use a statistical physics approach to study the joint dynamics of the 32 telomeres, which we model as independent Brownian particles that can also form aggregates. We estimate the number of clusters and the number of telomeres per cluster using exact formulas that we derive from our model. We identify the relevant parameters by comparing our results with experimental data. In particular, we show that a single parameter, the ratio of the association to the dissociation rate, is sufficient to account for telomere clustering in various conditions. Finally, we develop an empirical model to study particle aggregation at a single nucleation site. The distribution of particles in small clusters before arrival is a key ingredient in deriving kinetic laws. We derive these laws using first a deterministic model and then a stochastic jump process, which also allows us to obtain an explicit expression for the mean time for the nucleation site to be filled. We discuss some applications to HIV capsid formation.
Coppin, David. "Agrégation de la convection dans un modèle de circulation générale : mécanismes physiques et rôle climatique." Electronic Thesis or Diss., Paris 6, 2017. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2017PA066057.pdf.
This thesis focuses on the study of convective aggregation in the LMDZ5A general circulation model, used in a Radiative-Convective Equilibrium (RCE) configuration. The instability of the RCE allows us to look at the mechanisms controlling the initiation of convective aggregation and its dependence on sea surface temperature (SST). At low SSTs, a coupling between the large-scale circulation and the radiative effects of low clouds is needed to trigger self-aggregation. At high SSTs, the coupling between the large-scale circulation and the surface fluxes controls this initiation. When the atmosphere is coupled to a slab ocean mixed layer, SST gradients facilitate the initiation of convective aggregation. Except for the high-cloud radiative effects, triggering mechanisms are less crucial, and convection also becomes less dependent on the SST. The impact of convective aggregation on climate sensitivity and surface temperature is also analyzed. Convective aggregation is found to increase the area of dry clear-sky zones and thus tends to cool the system very efficiently. However, the negative feedback associated with an increase in aggregation is generally balanced by offsetting changes in SST gradients and low clouds that tend to increase the climate sensitivity. In contrast, at shorter timescales, the coupling between the ocean and convective aggregation also controls the strength of convective aggregation and can reverse its effect. Thus the impact of convective aggregation may not be as strong as what can be inferred from experiments with uniform SSTs. These results emphasize the importance of considering ocean-atmosphere coupling when studying the role of aggregation in climate.
Bel, Liliane. "Sur la réduction des modèles linéaires : analyse de données en automatique." Paris 11, 1985. http://www.theses.fr/1985PA112306.
Two state space model reduction methods are studied: the aggregation method and the balanced state space representation method. In the case of aggregation, a new method of selecting eigenvalues is proposed, which is both geometrical and sequential. Problems of robustness of aggregation are raised and resolved in some particular cases. The balanced state space representation is approached by means of controllability and observability degrees, and the notion of perturbability degree is introduced. We then study the application of these two methods to reduced-order compensator design. The two methods are finally applied to a model of the Ariane launch vehicle in flight.
Vinatier, Gérald. "Relation entre l'agrégation et la toxicité dans des modèles de maladies neurodégénératives chez la drosophile." Versailles-St Quentin en Yvelines, 2012. http://www.theses.fr/2012VERS0006.
A common feature of neurodegenerative diseases is the formation of different forms of protein aggregates. During my Ph.D., I tried to establish the relationship that links aggregation and toxicity using Drosophila models of Spinocerebellar ataxia types 3 and 7. These models recapitulate the main characteristics of the corresponding human polyglutamine diseases, making Drosophila a good model to study aggregation and toxicity. My results demonstrate that aggregates in their biochemical definition (high-molecular-mass objects that resist denaturation by boiling SDS) are the major protein species responsible for the toxicity. One hypothesis is that aggregates inhibit proteasomal activity. I therefore evaluated proteasomal activity in Drosophila expressing a mutant form of Ataxin 3. I have shown that mutated Ataxin 3 is responsible for a decrease in proteasomal activity, which can be blocked by a suppressor of ataxin 3-induced toxicity. However, it seems that proteasomal inhibition is not necessary for the toxicity, even if I cannot exclude that this reduction of protein degradation participates in the toxicity of polyglutamine proteins.
Chaari, Ali. "Modulation de l'agrégation des protéines amyloïdes par de petites molécules : modèle du lysozyme." Versailles-St Quentin en Yvelines, 2012. http://www.theses.fr/2012VERS0003.
At least twenty human proteins can fold abnormally to form pathological deposits that are associated with several degenerative diseases. Despite extensive investigation of amyloid fibrillogenesis and the toxicity of certain aggregate forms, its detailed molecular mechanisms remain unknown. During my PhD, I analysed the aggregation process of lysozyme at pH 2 and 57°C by different techniques. Particular attention was focused on exploring the inhibitory activity of natural products such as nicotine, dopamine, resveratrol, rutin and tyrosol against the fibrillation of hen lysozyme, using fluorescence spectroscopy, atomic force microscopy, infrared spectroscopy and dynamic light scattering. We found that the formation of amyloid fibrils in vitro was inhibited by all products in a dose-dependent manner. Moreover, they were also capable of robustly disaggregating pre-formed oligomers. Based upon structural analysis, we demonstrate that these natural products inhibit aggregation with the same efficacy but remodel oligomers and amyloid fibrils differently. We also tested the effect of these products on the aggregation of alpha-synuclein, and the results demonstrate that the formation of alpha-synuclein amyloid fibrils was inhibited by all products in a dose-dependent manner. Thus, it appears that nicotine, dopamine, resveratrol, rutin and tyrosol are generic inhibitors of amyloid fibril formation and can remodel different conformers of amyloid proteins.
Charton, Eric. "Génération de phrases multilingues par apprentissage automatique de modèles de phrases." Phd thesis, Université d'Avignon, 2010. http://tel.archives-ouvertes.fr/tel-00622561.
Hélias, Arnaud. "Agrégation/abstraction de modèles pour l'analyse et l'organisation de réseaux de flux : application à la gestion des effluents d'élevage à La Réunion." Montpellier, ENSA, 2003. http://www.theses.fr/2003ENSA0030.
The development of intensive livestock production, notably on Reunion Island, increases animal waste production, which can no longer be neglected because of environmental and legislative constraints. The key points in evaluating animal waste management scenarios are the modelling of spreading decisions, their causes, and their consequences. This Ph.D. thesis concerns the dynamic representation of producers (i.e., livestock units) and consumers (i.e., crops) in the waste network. Consequently, imprecise stock dynamics and discrete decision-making models have to be confronted. Our approach consists of i) modelling with the timed automata formalism, joined with ii) the definition of an automated, generic procedure for approximating continuous models, with imprecision on the initial state and the input variables. The discrete abstraction approach is illustrated on a carbon substrate anaerobic digestion process. For each (production or consumption) unit, a model is defined. Then, the spreading decisions are studied by confronting the models via model-checking tools, which allow an automated verification of the system's properties. The Kronos software was used as a tool dedicated to Timed Computation Tree Logic (TCTL). The model parameters are given for livestock units and crops in the Reunion Island context, and the approach is illustrated by the study of the functioning of a sample farm. This is realized by an iterative procedure between tests and the interpretation of results against agronomic knowledge. Lastly, this study shows how the approach can be used to represent a farm network and the stock supply of a waste treatment plant.
El, Attar Ali. "Estimation robuste des modèles de mélange sur des données distribuées." Phd thesis, Nantes, 2012. https://archive.bu.univ-nantes.fr/pollux/show/show?id=b22726f5-f19e-4c6e-9dcb-451bb7a968e8.
This work proposes a contribution to probabilistic model estimation in the setting of distributed, decentralized, data-sharing computer systems. Such systems are developing over the internet, and also exist as sensor networks, for instance. Our general goal consists in estimating a probability distribution over a data set which is distributed into subsets located on the nodes of a distributed system. More precisely, we aim at estimating the global distribution by aggregating local distributions estimated on these local subsets. Our proposal exploits the following assumption: all distributions are modelled as Gaussian mixtures. Our contribution is a solution that is both decentralized and statistically robust to outlier local Gaussian mixture models. The proposed process only requires the mixture parameters, rather than the original data.
Doyon, Julien. "Simulation numérique d'agrégats fractals en milieu de microgravité." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28588/28588.pdf.
Conrad, Arnaud. "Accumulation de molécules organiques modèles par des agrégats biologiques de type boues activées." Nancy 1, 2004. http://docnum.univ-lorraine.fr/public/SCD_T_2004_0259_CONRAD.pdf.
Segard, Bertrand-David. "Myopathies myofibrillaires liées aux gènes de la desmine et de l'alpha-B-cristalline : étude de nouveaux modèles cellulaires inductibles." Paris 7, 2013. http://www.theses.fr/2013PA077090.
Myofibrillar myopathies are a group of rare genetic diseases affecting skeletal muscles, with mainly adult onset and slow evolution. Although these pathologies can be associated with different genes, they share the following characteristics: disruption of the myofibril network, accumulation of degradation products, and aggregation of various proteins. We have created a cellular model for studying the impact of the expression of pathogenic proteins in the skeletal muscle context. This model has shown excellent characteristics and has been used for the study of six variants of desmin (Des-S46Y, Des-D399Y and Des-S460I) and αB-crystallin (αBC-R120G, αBC-Q151X and αBC-464delCT) involved in these pathologies. The model highlights the importance of the full cellular context and the quantity of proteins expressed with respect to aggregation. During these experiments, we were able to define the response of these variants to different stresses likely encountered during muscle activity (thermal, oxidative and mechanical). In addition, one compound, N-acetyl-L-cysteine, has shown effective anti-aggregation action in several tested situations. The spectrum of responses of these protein variants to stress and to the anti-aggregation compounds tested shows that the therapeutic approach must be finely analysed and prepared according to many parameters. All of the models and results achieved allow us to orient the future stages of preclinical research on myofibrillar myopathies associated with the DES and CRYAB genes.
Kaakai, Sarah. "Nouveaux paradigmes en dynamique de populations hétérogènes : modélisation trajectorielle, agrégation, et données empiriques." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066553/document.
This thesis deals with the probabilistic modeling of heterogeneity in human populations and of its impact on longevity. Over the past few years, numerous studies have shown a significant increase in geographical and socioeconomic inequalities in mortality. New issues have emerged from this paradigm shift that traditional demographic models are not able to solve, and whose formalization requires a careful analysis of the data in a multidisciplinary environment. Using the framework of population dynamics, this thesis aims at illustrating this complexity from different points of view. We explore the link between heterogeneity and non-linearity in the presence of composition changes in the population, from a mathematical modeling viewpoint. The population dynamics, called Birth Death Swap, is built as the solution of a stochastic equation driven by a Poisson measure, using a more general pathwise comparison result. When swaps occur at a faster rate than demographic events, an averaging result is obtained by stable convergence and comparison. In particular, the aggregated population converges towards a nonlinear dynamic. In the second part, the impact of heterogeneity on aggregate mortality is studied from an empirical viewpoint, using English population data structured by age and socioeconomic circumstances. Based on numerical simulations, we show how a reduction in one cause of death could be compensated in the presence of heterogeneity. The last point of view is an interdisciplinary survey on the determinants of longevity, accompanied by an analysis of the evolution of the tools used to analyze it and of new modeling issues in the face of this paradigm shift.
Igel, Angélique. "Mécanismes d'inactivation des protéines amyloïdes." Paris 7, 2013. http://www.theses.fr/2013PA077101.
Amyloid fibers are insoluble protein assemblies associated with neurodegenerative diseases. Given the biophysical properties of these fibers, which confer on them a very high resistance, and the spectre of their transmissibility, medical and surgical procedures could potentially transmit amyloidoses through the inoculation of nucleation seeds. The objective of this work is to evaluate the mechanisms of inactivation of Aβ and prion assemblies, in order to understand the mechanisms of inactivation of amyloid proteins. First, we evaluated the evolution of the quaternary structure of prion assemblies from three strains (263K, vCJD and 139A) after decontamination treatments. All results demonstrate that prion inactivation is strain dependent. This intrinsic strain property may be due to different structuring of PrP protomers within the assemblies. Finally, approaches similar to those used in the prion field were applied to evaluate the resistance of amyloid assemblies from Alzheimer's disease (Aβ peptide). Our preliminary results on the inactivation of synthetic Aβ peptide show that this peptide, in its fibrillar state, possesses properties conferring on it a high resistance. To deepen our results in a peptide model that has undergone maturation in vivo, we designed a new cellular model expressing the Aβ40 peptide. This new tool has recently become operational and seems promising for the study of the properties of the Aβ peptide.
Teisseire, Jérémie. "Tack de matériaux modèles." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2006. http://tel.archives-ouvertes.fr/tel-00175670.
The study carried out on the first material showed that, in addition to fingering and cavitation, the failure mechanisms observed in Newtonian liquids, a fracture mechanism can also appear, the fracture being localized at the interface between the solid plate and the viscoelastic material. A theoretical model, notably involving the kinetics of cavitation, was developed to interpret the succession of these mechanisms and to describe the tensile curves. The good agreement between predictions and experimental results validates the importance of the role of kinetics and allows us to explain the appearance of fractures despite the prior growth of cavities.
The second system studied comes from the deformulation of industrial adhesives. We first studied the influence of the particle fraction on the rheology of the blends. We observed an evolution of the rheological parameters, which we compared to the evolution of the adhesion of the blends. We were thus able to correlate the presence of a second force plateau, frequently observed for real adhesives, with the particle content of the material. Finally, this study allowed us to propose the optimal failure mode for an adhesive material.
Cugniet, Patrick. "Etude de l'agrégation de particules solides en milieu non mouillant. Interprétation et modélisation." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2003. http://tel.archives-ouvertes.fr/tel-00012127.
The modelling of aggregation-fragmentation phenomena uses the population balance approach and takes into account the hydrodynamics of the suspension as well as the physico-chemical aspects specific to non-wettability. One of the most important findings of this study concerns the reduction of fragmentation due to the presence of gas bridges between solid particles immersed in a non-wetting liquid medium.
Delort, Florence. "Utilisation de modèles cellulaires et animaux dans l’étude des mécanismes moléculaires impliqués dans les desminopathies." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCC051.
Desminopathies, which belong to the myofibrillar myopathies (MFM), are caused by mutations in the DES gene, encoding the desmin protein. Desmin is an intermediate filament protein essential for the structural and functional alignment and anchorage of myofibrils, as well as for the positioning of cell organelles and, notably, signaling events. Among the roughly seventy known mutations of the DES gene, some produce different phenotypes within the same family, suggesting that environmental factors influence disease states. In addition, patient muscle proteins show oxidative features, supporting a link between oxidative stress, protein aggregation and abnormal protein deposition in MFM. To improve our understanding of desminopathies, we developed skeletal muscle cell models, in an isogenic context, to observe desmin behavior during the formation of intra-cytoplasmic aggregates. These cells express human desmin carrying either the wild-type sequence or pathological mutations. We had previously demonstrated that only the D399Y substitution, located in the 2B domain of desmin, produces an aggregative response induced by oxidative stress; two other mutations, in the head and the tail of the protein, do not. Furthermore, pretreatment with NAC, or with pro-autophagic molecules, prevents this aggregation. Continuing this work, I studied the cellular phenotype of lines expressing other mutations of the 2B domain. I showed that, in response to stress, DesQ389P and DesD399Y share a common aggregative behavior without a common structural modification. DesR406W also forms aggregates, but smaller and more dispersed in the cytoplasm. Moreover, this aggregation is also prevented by NAC pretreatment in these three cell lines. In contrast, there is no stress-inducible aggregation in DesA357P. In addition, analysis of the intra-cytoplasmic redox state and of desmin post-translational modifications in our cell models reveals mutation-specific variability that can be corrected by NAC.
To analyze the anti-aggregative capacity of NAC in vivo, animal models were also built. Overexpression of desmin D399Y leads to physiopathological characteristics similar to those observed in patients. In conclusion, our results confirm that each mutation leads to its own pathological molecular mechanisms, and it will be important to integrate these mutation-specific behaviors when considering future treatments
Charton, Éric. "Génération de phrases multilingues par apprentissage automatique de modèles de phrases." Thesis, Avignon, 2010. http://www.theses.fr/2010AVIG0175/document.
Natural Language Generation (NLG) is the natural language processing task of generating natural language text from a machine representation. In this thesis, we present an NLG system architecture relying on statistical methods. The originality of our proposal is its ability to use a corpus as a learning resource for sentence production. This method offers several advantages: it simplifies the implementation and design of a multilingual NLG system capable of producing sentences with the same meaning in several languages, and it improves the adaptability of an NLG system to a particular semantic field. In our proposal, sentence generation is achieved through the use of sentence models obtained from a training corpus. Extracted sentences are abstracted by a labelling step based on various information extraction and text mining methods, such as named entity recognition, co-reference resolution, semantic labelling and part-of-speech tagging. Sentence generation is then performed by a sentence realisation module, which selects a sentence model adapted to a communicative intent and transforms it to generate a new sentence. Two methods are proposed to transform a sentence model into a generated sentence, according to the semantic content to express. In this document, we describe the complete labelling system applied to encyclopaedic content to obtain the sentence models. We then present two models of sentence generation. The first substitutes the semantic content for the content of an original sentence. The second finds numerous proto-sentences, structured as Subject-Verb-Object, each able to express part of a whole communicative intent, and then aggregates the selected proto-sentences into a more complex one. Our experiments with various configurations of the system have shown that this new approach to NLG has interesting potential
D'Orange, Marie. "Utilisation de nouveaux modèles rongeurs de tauopathie pure, obtenus par transfert de gène, pour caractériser le lien entre l’agrégation de Tau et sa toxicité." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS339/document.
Tauopathies are neurodegenerative diseases characterized by the aggregation of the Tau protein. Despite this common hallmark, tauopathies exhibit a wide variety of clinical and anatomo-pathological presentations, which may result from different pathological mechanisms. One hypothesized common mechanism, however, implicates small oligomeric aggregates as drivers of Tau-induced toxicity. The aim of this project was to develop models of sporadic and genetic tauopathies, using adeno-associated viruses to mediate gene transfer of human Tau into the rat brain. Three different constructs were used, each giving rise to a specific phenotype. First, hTAUWT overexpression led to a strong hyperphosphorylation of the protein, associated with neurotoxicity in the absence of any significant aggregation. Its co-expression with the pro-aggregation peptide TauRD-ΔK280 in the hTAUProAggr group strongly promoted its aggregation, with neuroprotective effects. The hTAUP301L construct led to aggregation into soluble species as well as mature aggregates, accompanied by intermediate toxicity. These results support the hypothesis that soluble oligomeric species are key players in Tau-induced neurodegeneration. These fast-developing models, obtained through similar overexpression of human Tau, thus recapitulate the phenotypic variability observed in human tauopathies. They should prove useful in the future for studying the mechanisms underlying the toxicity of various Tau species, and could also serve to study the specificity and selectivity of Tau-directed tracers for positron emission tomography (PET) imaging
Kirchner, Sara. "Approche multi-échelle de l'agrégation dans le procédé de précipitation de boehmite." Phd thesis, Toulouse, INPT, 2015. http://oatao.univ-toulouse.fr/15134/1/Kirchner.pdf.
Mabon, Gwennaëlle. "Estimation non-paramétrique adaptative pour des modèles bruités." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB020/document.
In this thesis, we are interested in nonparametric adaptive density estimation problems in the convolution model. This framework corresponds to additive measurement error models, meaning that we observe a noisy version of the random variable of interest. To carry out our study, we follow the model selection paradigm developed by Birgé and Massart, or criteria based on Lepski's method. The thesis is divided into two parts. In the first, the main goal is to build adaptive estimators in the convolution model when both the random variables of interest and the errors are distributed on the nonnegative real line. We propose adaptive estimators of the density and of the survival function, then of linear functionals of the target density. This part ends with a linear density aggregation procedure. The second part deals with adaptive density estimation in the convolution model when the noise distribution is unknown and supported on the real line. To make this problem identifiable, we assume that we have at hand either a preliminary sample of the noise or repeated observations. We can then derive adaptive estimators under mild assumptions on the noise distribution. This methodology is applied to linear mixed models and to the problem of estimating the density of a sum of random variables when the latter are observed with an additive noise
Normand, Raoul. "Modèles déterministes et aléatoires d'agrégation limitée et phénomène de gélification." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00631419.
De, lozzo Matthias. "Modèles de substitution spatio-temporels et multifidélité : Application à l'ingénierie thermique." Thesis, Toulouse, INSA, 2013. http://www.theses.fr/2013ISAT0027/document.
This PhD thesis deals with the construction of surrogate models in transient and steady states in the context of thermal simulation, with few observations and many outputs. First, we design a robust construction of a recurrent multilayer perceptron to approximate a spatio-temporal dynamic. We use an average of neural networks resulting from a cross-validation procedure, whose associated data splitting allows the parameters of these models to be adjusted on a test set without any loss of information. Moreover, the construction of this perceptron can be distributed according to its outputs. This construction is applied to modelling the temporal evolution of the temperature at different points of an aeronautical equipment. Then, we propose a mixture of Gaussian process models in a multifidelity framework, where a high-fidelity observation model is completed by many observation models of lower, non-comparable fidelities. Particular attention is paid to the specification of the trends and adjustment coefficients present in these models. Different kriging and co-kriging models are combined according to a partition or a weighted aggregation based on a robustness measure associated with the most reliable design points. This approach is used to model the temperature at different points of the equipment in steady state. Finally, we propose a penalized criterion for the heteroscedastic regression problem. This tool is built for projection estimators and applied with the Haar wavelet. We also give numerical results for different noise specifications and possible dependencies in the observations
Ben, Amira Wael. "Comportement hydrodynamique des nanoparticules au cours de la séparation magnétique." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4797/document.
We present in this thesis a study of the hydrodynamic behavior of nanoparticles suspended in a ferrofluid and subjected to a magnetic field gradient. The goal is to characterize the magnetic separation of a colloidal suspension, within a project to design an HGMS (High Gradient Magnetic Separation) water treatment system that can also have other applications. We develop a mathematical model describing the Lagrangian tracking of nanoparticles in a carrier fluid. It is based on the Fokker-Planck and Langevin descriptions and takes into account the magnetic and hydrodynamic interactions between particles, as well as the geometry of the formed aggregates. From numerical simulation, we find that the separation time depends strongly on the size and length of the aggregates formed during the separation process. The study of the aggregation kinetics shows the existence of a regime with dynamic scaling. In irreversible aggregation, linear chains of particles are formed and their average size evolves in time according to a scaling law with a small exponent. The growth exponent of the average chain size is consistent with the Smoluchowski coagulation equation with a homogeneous kernel. Using the asymptotic long-time behavior of a solution of the Smoluchowski equations, we highlight a characteristic aggregation time and show that this time is a similarity variable on which the separation time depends. We also show that the scaling law remains valid for nanoparticles in a Poiseuille flow, with the average size following a power law as a function of the Reynolds number
Maillard, Guillaume. "Hold-out and Aggregated hold-out." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM005.
In statistics, it is often necessary to choose between different estimators (estimator selection) or to combine them (aggregation). For risk-minimization problems, a simple method, called hold-out or validation, is to leave out some of the data, using it to estimate the risk of the estimators, in order to select the estimator with minimal risk. This method requires the statistician to arbitrarily select a subset of the data to form the "validation sample". The influence of this choice can be reduced by averaging several hold-out estimators (aggregated hold-out, Agghoo). In this thesis, the hold-out and Agghoo are studied in various settings. First, theoretical guarantees for the hold-out (and Agghoo) are extended to two settings where the risk is unbounded: kernel methods and sparse linear regression. Secondly, a comprehensive analysis of the risk of both methods is carried out in a particular case: least-squares density estimation using Fourier series. It is proved that aggregated hold-out can perform better than the best estimator in the given collection, something that is clearly impossible for a procedure, such as the hold-out or cross-validation, which selects only one estimator
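The hold-out and its aggregated variant can be sketched in a few lines. The following is an illustrative Python sketch, not the procedure studied in the thesis: the candidate estimators (one-dimensional ridge regressions with different penalties) and all numerical values are made up for the example.

```python
import numpy as np

def ridge_fitter(lam):
    """Candidate estimator: 1-D ridge regression with penalty lam."""
    def fit(X, y):
        w = X.T @ y / (X.T @ X + lam)  # closed-form ridge slope
        return lambda x: w * x
    return fit

def hold_out(fitters, X, y, val_idx):
    """Hold-out: train each candidate on the complement of val_idx,
    keep the one with smallest empirical risk on the validation set."""
    train = np.setdiff1d(np.arange(len(y)), val_idx)
    models = [f(X[train], y[train]) for f in fitters]
    risks = [np.mean((m(X[val_idx]) - y[val_idx]) ** 2) for m in models]
    return models[int(np.argmin(risks))]

def agghoo(fitters, X, y, splits):
    """Aggregated hold-out (Agghoo): average the hold-out estimators
    obtained from several independent validation splits."""
    chosen = [hold_out(fitters, X, y, v) for v in splits]
    return lambda x: np.mean([m(x) for m in chosen], axis=0)

# Toy data: y = 2x + noise; three candidate penalties; five random splits.
rng = np.random.default_rng(0)
X = rng.normal(size=200)
y = 2.0 * X + rng.normal(size=200)
fitters = [ridge_fitter(lam) for lam in (0.01, 1.0, 100.0)]
splits = [rng.choice(200, size=50, replace=False) for _ in range(5)]
predict = agghoo(fitters, X, y, splits)
```

Averaging the selected estimators over several splits, rather than keeping a single one, is what allows Agghoo to outperform any fixed member of the collection in favorable cases.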
Ahmed, Ahmed. "Utilisation de l'ingénierie dirigée par les modèles pour l'agrégation continue de données hétérogènes : application à la supervision de réseaux de gaz." Thesis, Paris, ENSAM, 2018. http://www.theses.fr/2018ENAM0049/document.
Over the last decade, information technology and industrial infrastructures have evolved from monolithic systems to heterogeneous, autonomous, and widely distributed systems. Most systems cannot exist in complete isolation and need to share their data in order to increase business productivity. In fact, we are moving towards larger complex systems where millions of systems and applications need to be integrated, so an inexpensive and fast interoperability solution becomes an essential need. Existing solutions impose standards or middleware to handle this issue; however, they are not sufficient and often require specific ad-hoc developments. This work therefore proposes the study and development of a generic, modular, agnostic and extensible interoperability architecture based on modeling principles and software engineering practices. It aims to promote interoperability and data exchange between heterogeneous systems in real time, without requiring systems to comply with specific standards or technologies. The industrial use cases for this work take place in the context of the French gas distribution network. The theoretical and empirical validation of our proposal corroborates the assumption that interoperability between heterogeneous systems can be achieved by combining separation of concerns and model-driven engineering. The cost and time needed to achieve interoperability are also reduced by promoting re-usability and extensibility
Vledouts, Alexandre. "Fragmentation explosive d'anneaux et de coques." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4362.
We study the fragmentation of cohesive objects through a set of model experiments. In a first experiment, we force the radial extension of a ring made of cohesive grains (magnetic beads, or polymer beads bonded by liquid bridges). Because of the extension, the grains separate from each other and then, because of the cohesion, begin an aggregation dynamics leading to the formation of fragments of different sizes. We show that the distribution of fragment sizes formed in this way converges to a Gamma distribution, and we propose a model explaining the self-similar evolution of the distribution. In a second part, we introduce a distribution of bond strengths between the grains. Fragmentation in the presence of "defects" leads to a broader size distribution, following the same law. In a last study, a spherical liquid shell is forced to extend by the combustion of a reactive gas mixture confined inside it. The observed drop size distribution conforms to the same family of distribution laws found for the rings of cohesive grains. In this last experiment, the role of defects is played by the hydrodynamic instabilities developing at the surface of the liquid shell under extension. The width of the drop size distribution thus depends on the strength of the extension of the liquid shell
René, Malika. "Impact de traitements à haute pression isostatique ou dynamique sur des systèmes protéiques modèles : recherche d'indicateurs biologiques de traitement." Montpellier 2, 2009. http://www.theses.fr/2009MON20012.
Biological indicators for food processing were studied with a view to characterising the effects of high-pressure processing on model proteins, namely (i) Staphylococcus aureus enterotoxin A (SEA) and (ii) whey protein isolate. The effects of isostatic high pressure applied to SEA solutions in Tris-HCl buffer (20 mM, pH 7.4), in the presence or absence of chaotropic, dissociating or chelating agents (urea 8 M, sodium dodecyl sulfate 30 mM or ethylene diamine tetraacetic acid 1 mM), were investigated. Among the methods tested (i.e., SEA toxicity on Caco-2 cells; SEA superantigenicity as evaluated by the proliferation of rat T lymphocytes; SEA immunoreactivity), SEA superantigenicity displayed the finest changes in the biological responses that SEA could induce after processing at 600 MPa for 15 min at 20°C or 45°C. In parallel, the intrinsic protein fluorescence was studied by spectrofluorimetry under pressure between 10 and 600 MPa, at 20°C or 45°C, or in the presence of urea at 20°C. Pressurisation up to 600 MPa at 20°C induced structural modifications that were reversible, without changes in the SEA superantigenic response, indicating some baro-resistance of the toxin. In contrast, processing at 600 MPa and 45°C, or at 20°C in the presence of urea (at 0.1 or 600 MPa), induced partially reversible structural changes accompanied by an increase in SEA superantigenicity. Dynamic high pressure (also called ultra-high-pressure homogenization, UHPH) at pressure levels above 200 MPa applied to dispersions of whey protein isolate induced protein aggregates of submicron and controlled sizes. The behaviour of the proteins on TC7 cell monolayers was investigated both by assessing the amount of protein transported through the cells and by using confocal fluorescence spectroscopy.
Transport kinetics indicated that beta-lactoglobulin (the major whey protein), and probably the protein aggregates induced by UHPH, enter the cell layers and progress inside the cells as a function of exposure time
Kervegant, Françoise. "Contribution aux simplifications quantitatives des modélisations de sûreté." Compiègne, 1991. http://www.theses.fr/1991COMPD397.
Houdard, Antoine. "Some advances in patch-based image denoising." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT005/document.
This thesis studies non-local methods for image processing and their application to various tasks such as denoising. Natural images contain redundant structures, and this property can be used for restoration purposes. A common way to exploit this self-similarity is to separate the image into "patches", which can then be grouped, compared and filtered together. In the first chapter, "global denoising" is reframed in the classical formalism of diagonal estimation and its asymptotic behaviour is studied in the oracle case. Precise conditions on both the image and the global filter are introduced to ensure and quantify convergence. The second chapter is dedicated to the study of Gaussian priors for patch-based image denoising. Such priors are widely used for image restoration. We propose some ideas to answer the following questions: why are Gaussian priors so widely used, and what information do they encode about the image? The third chapter proposes a probabilistic high-dimensional mixture model on the noisy patches. This model adopts a sparse modeling which assumes that the data lie on group-specific subspaces of low dimensionality. This yields a denoising algorithm with state-of-the-art performance. The last chapter explores different ways of aggregating the patches together, and proposes a framework that expresses patch aggregation in the form of a least squares problem
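The uniform aggregation that the last chapter generalizes can be written as a least squares problem: find the image minimizing the sum of squared distances between its patches and the denoised patches, whose solution is a per-pixel average of the patch values covering that pixel. A minimal 1-D sketch (illustrative only, not the thesis code):

```python
import numpy as np

def aggregate_patches(patches, starts, n):
    """Least-squares aggregation of overlapping 1-D patches: minimizing
    sum_i ||u[starts[i]:starts[i]+p] - patches[i]||^2 over the signal u
    gives, at each position, the average of the patch values covering it."""
    p = patches.shape[1]
    num = np.zeros(n)   # sum of patch values covering each position
    den = np.zeros(n)   # number of patches covering each position
    for patch, s in zip(patches, starts):
        num[s:s + p] += patch
        den[s:s + p] += 1.0
    return num / np.maximum(den, 1.0)

# Overlapping patches cut from a constant signal reconstruct it exactly.
sig = np.full(6, 3.0)
starts = np.array([0, 2, 3])
patches = np.stack([sig[s:s + 3] for s in starts])
out = aggregate_patches(patches, starts, 6)
```

Weighted variants simply replace the per-pixel counts by per-patch weights in the same normal equations.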
Claude, Mathilde. "Agrégation thermique de l’ovalbumine et modulation de l’allergénicité." Thesis, Nantes, 2016. http://www.theses.fr/2016NANT4027/document.
Food allergies constitute a public health problem. Many allergens have already been identified, but how proteins should be processed in order to be better tolerated by the immune system remains unknown. Egg, one of the most important causes of food allergy among children, is present in a wide range of food products and is likely to undergo various thermal treatments. The aim of this work was to evaluate the effect of the thermal aggregation of proteins on the allergenicity of one major egg allergen, ovalbumin (OVA). A study using a murine model showed that aggregation of OVA modulates the allergenicity of the protein by influencing both phases of the allergic reaction. During sensitization, IgG production increased. Aggregation of ovalbumin also decreased its IgE-binding capacity and its ability to activate basophils, both in the murine model and for IgE from egg-allergic patients. Beyond the aggregation phenomenon itself, the structure of the aggregates modifies the allergenicity of ovalbumin. Aggregates formed under repulsive electrostatic conditions (linear aggregates of a few tens of nanometers) induced lower IgE production during the sensitization phase in a murine model, and consequently a weaker elicitation phase, compared with aggregates formed under different conditions (spherical aggregates of a few tens of micrometers). Aggregation of ovalbumin also modified basophil degranulation after digestion and transport across the epithelial barrier. These results provide insight into the tolerance to cooked egg and suggest ways to adapt food processing for children at risk of developing allergy
Montuelle, Lucie. "Inégalités d'oracle et mélanges." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112364/document.
This manuscript focuses on two functional estimation problems. For each problem, a non-asymptotic guarantee on the performance of the proposed estimator is provided through an oracle inequality. In the conditional density estimation setting, mixtures of Gaussian regressions with exponential weights depending on the covariate are used. The model selection principle, through penalized maximum likelihood estimation, is applied and a condition on the penalty is derived; if the chosen penalty is proportional to the model dimension, this condition is satisfied. The procedure is accompanied by an algorithm combining the EM and Newton algorithms, tested on synthetic and real data sets. In the framework of regression with sub-Gaussian noise, aggregating linear estimators using exponential weights yields an oracle inequality in deviation, thanks to PAC-Bayesian techniques. The main advantage of the proposed estimator is that it is easy to compute. Furthermore, taking the infinity norm of the regression function into account allows a continuum to be established between sharp and weak oracle inequalities
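Aggregation with exponential weights assigns each candidate estimator a weight decreasing exponentially with its empirical risk, so the aggregate concentrates on low-risk candidates as the temperature decreases. A minimal sketch (the risks and temperature below are made up for illustration):

```python
import numpy as np

def exponential_weights(risks, temperature):
    """Exponentially weighted aggregation: candidate k receives weight
    proportional to exp(-risk_k / temperature); the weights sum to one."""
    r = np.asarray(risks, dtype=float)
    w = np.exp(-(r - r.min()) / temperature)  # shift by the min for stability
    return w / w.sum()

# Three candidate estimators with (hypothetical) empirical risks.
w = exponential_weights([0.1, 0.5, 2.0], temperature=0.2)

# The aggregate prediction is the weighted mean of the candidates'
# predictions (preds has shape (n_candidates, n_points)).
aggregate = lambda preds: np.tensordot(w, preds, axes=1)
```

At high temperature the weights become uniform (model averaging); at low temperature the procedure approaches selection of the empirical risk minimizer.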
Eugène, Sarah. "Stochastic modelling in molecular biology : a probabilistic analysis of protein polymerisation and telomere shortening." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066193/document.
This PhD dissertation proposes a stochastic analysis of two questions of molecular biology in which randomness is a key feature of the processes involved: protein polymerisation in neurodegenerative diseases on the one hand, and telomere shortening on the other. Self-assembly of proteins into amyloid aggregates is an important biological phenomenon associated with human diseases such as prion diseases, Alzheimer's, Huntington's and Parkinson's diseases, amyloidosis and type-2 diabetes. The kinetics of amyloid assembly show an exponential growth phase preceded by a lag phase of variable duration, as seen both in bulk experiments and in experiments that mimic the small volume of the cells concerned. After an introduction to protein polymerisation in chapter I, we investigate in chapter II the origins and the properties of the observed variability in the lag phase of amyloid assembly, which is not accounted for by deterministic nucleation-dependent mechanisms. To tackle this issue, a minimal stochastic model is proposed: simple, but capable of describing the characteristics of amyloid growth curves. Two populations of chemical components are considered in this model: monomers and polymerised monomers. Initially there are only monomers, and a monomer can then be polymerised in two ways: either two monomers collide and combine into two polymerised monomers, or a monomer is polymerised upon encountering an already polymerised monomer. Efficient as it is, this simple model does not fully explain the variability observed in the experiments, and in chapter III we extend it to take into account other relevant mechanisms of the polymerisation process that may have an impact on the fluctuations. In both chapters, asymptotic results involving different time scales are obtained for the corresponding Markov processes. First- and second-order results for the starting instant of nucleation are derived from these limit theorems.
These results rely on a scaling analysis of a population model and the proof of a stochastic averaging principle for a model related to an Ehrenfest urn model. In the second part, a stochastic model for telomere shortening is proposed. In eukaryotic cells, chromosomes are shortened with each occurring mitosis, because the DNA polymerases are unable to replicate the chromosome down to the very end. To prevent potentially catastrophic loss of genetic information, these chromosomes are equipped with telomeres at both ends (repeated sequences that contain no genetic information). After many rounds of replication however, the telomeres are progressively nibbled to the point where the cell cannot divide anymore, a blocked state called replicative senescence. The aim of this model is to trace back to the initial distribution of telomeres from measurements of the time of senescence
Aniorté, Philippe. "Bases d'informations généralisées : modèle agrégatif, version et hypertext." Toulouse 3, 1990. http://www.theses.fr/1990TOU30001.
Ouni, Zaïd. "Statistique pour l’anticipation des niveaux de sécurité secondaire des générations de véhicules." Thesis, Paris 10, 2016. http://www.theses.fr/2016PA100099/document.
Road safety is a global, European and French priority. Because light vehicles (or simply "vehicles") are obviously one of the main actors of road activity, improving road safety necessarily requires analyzing their characteristics in terms of traffic road accidents (or simply "accidents"). If new vehicles are developed in engineering departments and validated in the laboratory, it is the reality of real-life accidents that ultimately characterizes them in terms of secondary safety, i.e., that demonstrates which level of security they offer their occupants in case of an accident. This is why car makers want to rank generations of vehicles according to their real-life levels of safety. We address this problem by exploiting a French accident data set called BAAC (Bulletin d'Analyse d'Accident Corporel de la Circulation). In addition, fleet data are used to associate a generational class (GC) with each vehicle. We develop two methods for ranking GCs in terms of secondary safety. The first yields contextual rankings, i.e., rankings of GCs in specified contexts of accident. The second yields global rankings, i.e., rankings of GCs determined relative to a distribution of accident contexts. For the contextual ranking, we proceed by "scoring": we look for a score function that associates a real number with any combination of a GC and an accident context; the smaller this number, the safer the GC in the given context. The optimal score function is estimated by ensemble learning, in the form of an optimal convex combination of score functions produced by a library of ranking-by-scoring algorithms. An oracle inequality illustrates the performance of the resulting meta-algorithm. The global ranking is also based on scoring: we look for a score function that associates a real number with any GC; the smaller this number, the safer the GC. Causal arguments are used to adapt the above meta-algorithm by averaging out the context.
The results of the two ranking procedures are in line with the experts’ expectations
Bellec, Pierre C. "Sharp oracle inequalities in aggregation and shape restricted regression." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLG001/document.
This PhD thesis studies two fields of statistics: aggregation of estimators and shape constrained regression. Shape constrained regression studies the regression problem (finding a function that approximates a set of points well) under a shape constraint: the function must have a specific "shape", for instance be nondecreasing or convex, the two most studied examples. We study two estimators: an estimator based on aggregation methods, and the least squares estimator under a convex shape constraint. Oracle inequalities are obtained for both estimators, and we construct confidence sets that are adaptive and honest. Aggregation of estimators studies the following problem: if several methods are proposed for the same task, how can one construct a new method that mimics the best of them? We study this problem in three settings: aggregation of density estimators, aggregation of affine estimators, and aggregation on the regularization path of the Lasso
Deschemps, Antonin. "Apprentissage machine et réseaux de convolutions pour une expertise augmentée en dosimétrie biologique." Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. http://www.theses.fr/2023URENS104.
Biological dosimetry is the branch of health physics dealing with the estimation of ionizing radiation doses from biomarkers. The current gold standard (defined by the IAEA) relies on estimating how frequently dicentric chromosomes appear in peripheral blood lymphocytes. Variations in acquisition conditions and chromosome morphology make this a challenging object detection problem. Furthermore, the need for an accurate estimation of the average number of dicentrics per cell means that a large number of images has to be processed. Human counting is intrinsically limited, as the cognitive load is high and the number of specialists insufficient in the context of a large-scale exposure. The main goal of this PhD is to exploit recent developments in computer vision brought by deep learning, especially in object detection. The main contribution of this thesis is a proof of concept for a dicentric chromosome detection model. This model aggregates several U-Net models to reach a high level of performance and to quantify its prediction uncertainty, which is a stringent requirement in a medical setting.
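As an illustration of the aggregation idea only (the thesis's actual pipeline is not detailed in this abstract; the mock probability maps and function name below are assumptions), averaging the outputs of several models yields both a consensus prediction and a disagreement-based uncertainty map:

```python
import numpy as np

def ensemble_predict(prob_maps):
    """Aggregate per-model detection probability maps.

    prob_maps: iterable of (H, W) arrays of sigmoid outputs, one per model.
    Returns the pixelwise mean (the prediction) and the pixelwise standard
    deviation across models (a simple uncertainty proxy).
    """
    stacked = np.asarray(list(prob_maps), dtype=float)
    return stacked.mean(axis=0), stacked.std(axis=0)

# Three mock "model" outputs on a 2x2 image patch
maps = [np.array([[0.9, 0.1], [0.8, 0.2]]),
        np.array([[0.8, 0.2], [0.9, 0.1]]),
        np.array([[1.0, 0.0], [0.7, 0.3]])]
mean_p, unc = ensemble_predict(maps)
```

Pixels where the models disagree get a high standard deviation, which can be surfaced to the human expert for review rather than trusted blindly.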
Erabit, Nicolas. "Caractérisation expérimentale et modélisation de la dénaturation et de l’agrégation de la beta-lactoglobuline au cours d’un traitement thermique de type industriel." Electronic Thesis or Diss., Paris, AgroParisTech, 2012. http://www.theses.fr/2012AGPT0094.
This work aims to model the formation of whey protein aggregates during continuous thermo-mechanical treatment (heat exchanger), integrating the physicochemical properties of the product (protein content and mineral content). The simulation work is supported by knowledge from the literature and by experiments carried out to complement it. The work proceeded in two steps, at two scales. At laboratory scale, samples were submitted to well-controlled and almost homogeneous thermo-mechanical treatments. This served as a database to develop a mechanistic model of the irreversible aggregation of beta-lactoglobulin in solution as a function of time, temperature, and shear. This first step yields a model for a single treatment profile, but not yet for a pilot-scale heat treatment. The hypothesis is that the dispersion of aggregate sizes in continuous heat treatment is partly due to the distribution of residence times: proteins in the slowest parts of the fluid have more time to aggregate. Experiments were then carried out on a continuous heat-treatment pilot.
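For orientation only: a common starting point for such mechanistic models is first-order denaturation kinetics with an Arrhenius-type temperature dependence. The sketch below uses illustrative parameter values and deliberately omits the shear, concentration, and mineral effects that the thesis's model accounts for:

```python
import math

def native_fraction(times, temp_K, k_ref=0.01, T_ref=343.15, Ea=260e3, R=8.314):
    """First-order denaturation with Arrhenius temperature dependence.

    dN/dt = -k(T) * N,  with  k(T) = k_ref * exp(-(Ea/R) * (1/T - 1/T_ref)).
    Parameter values are illustrative placeholders, not fitted constants.
    Returns the remaining native fraction N(t)/N(0) at each time (seconds).
    """
    k = k_ref * math.exp(-Ea / R * (1.0 / temp_K - 1.0 / T_ref))
    return [math.exp(-k * t) for t in times]

# Hotter than the reference temperature, so denaturation accelerates
frac = native_fraction([0, 30, 60], temp_K=353.15)
```

Coupling such a local rate law with a residence-time distribution (as hypothesized in the abstract) amounts to averaging the predicted conversion over the distribution of times fluid elements spend in the exchanger.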
Sonet, Virginie. "Les usages sociaux et les logiques économiques de l'audiovisuel sur smartphone." Thesis, Paris 2, 2014. http://www.theses.fr/2014PA020049.
Watching and delivering video content on smartphones is a complex phenomenon because it is still taking shape as we investigate it. The two dimensions of mobility and hybridization characterize this new media territory, and this thesis explains how they contribute to the construction of its social uses and economic logics. Through several qualitative surveys based on interviews with users and professionals, and a long-term observation of TV offers on this new screen, this research analyzes how users on the one hand and the broadcasting industry (TV networks) on the other seize this new screen. This new audiovisual field is put into perspective by drawing up an overview of its offers and of the first observed uses. An original reading of the evolution of the audiovisual field through the prism of technological, economic, and usage disruptions is then presented, and the construction of our scientific position is described by explaining the interdisciplinary approach. This research highlights that mobility contributes to the appropriation of the smartphone as an audiovisual screen, essentially through the dimensions of context as well as technological, commercial, and social constraints. It also analyzes how the techno-economic environment generated by the mobile platforms (Apple and Google) constrains the deployment of French TV networks’ business models on this new screen. Through the dimension of hybridization, this thesis explains that uses expand through the interlacing of communication, connection, and audiovisual practices. TV networks therefore try to win the attention of smartphone users by providing enhanced applications, by spreading onto social networking sites, and by developing interactive systems between the smartphone and the TV set. But these new aggregations often come with adverse economic implications.
Sebbar, Mehdi. "On unsupervised learning in high dimension." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLG003.
In this thesis, we discuss two topics: high-dimensional clustering on the one hand and estimation of mixing densities on the other. The first chapter is an introduction to clustering. We present various popular methods and focus on one of the main models of our work, the mixture of Gaussians. We also discuss the problems raised by high-dimensional estimation (Section 1.3) and the difficulty of estimating the number of clusters (Section 1.1.4). In what follows, we briefly present the concepts discussed in this manuscript. Consider a mixture of K Gaussians in ℝ^p. A common approach to estimating the parameters is the maximum likelihood estimator. Since this problem is not convex, we cannot guarantee the convergence of classical methods such as gradient descent or Newton's algorithm. However, by exploiting the biconvexity of the negative log-likelihood, the iterative 'Expectation-Maximization' (EM) procedure described in Section 1.2.1 can be used. Unfortunately, this method is not well suited to the challenges posed by high dimension. In addition, the number of clusters must be known in order to use it. Chapter 2 presents three methods that we developed to address the problems described above. The work presented there was not pursued in depth, for various reasons. The first method, which could be called 'graphical lasso on Gaussian mixtures', consists in estimating the inverses of the covariance matrices Σ (Section 2.1) under the assumption that they are sparse. We adapt the graphical lasso method of [Friedman et al., 2007] to each component of the mixture and evaluate this method experimentally. The other two methods address the problem of estimating the number of clusters in the mixture. The first is a penalized estimate of the matrix of posterior probabilities T ∈ ℝ^{n x K}, whose component (i, j) is the probability that the i-th observation belongs to the j-th cluster.
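The EM procedure mentioned above (Section 1.2.1) can be sketched in one dimension with two components. This is the textbook algorithm, not the thesis's code, and the toy data are an assumption:

```python
import math
import random

def em_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture.

    Alternates the E-step (posterior responsibilities T[i][j]) and the
    M-step (closed-form updates of weights, means, variances).
    """
    mu = [min(x), max(x)]; var = [1.0, 1.0]; pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        T = []
        for xi in x:
            p = [pi[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(xi - mu[j]) ** 2 / (2 * var[j])) for j in range(2)]
            s = sum(p)
            T.append([pj / s for pj in p])
        # M-step: weighted moment updates
        for j in range(2):
            nj = sum(t[j] for t in T)
            pi[j] = nj / len(x)
            mu[j] = sum(t[j] * xi for t, xi in zip(T, x)) / nj
            var[j] = max(sum(t[j] * (xi - mu[j]) ** 2
                             for t, xi in zip(T, x)) / nj, 1e-6)
    return pi, mu, var

random.seed(0)
x = ([random.gauss(-3, 1) for _ in range(200)]
     + [random.gauss(3, 1) for _ in range(200)])
pi, mu, var = em_1d(x)
```

On this well-separated toy mixture EM recovers the two means; the abstract's point is precisely that such behavior degrades in high dimension and requires knowing the number of components in advance.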
Unfortunately, this method proved too computationally expensive (Section 2.2.1). Finally, the second method considered penalizes the weight vector π in order to make it sparse; it shows promising results (Section 2.2.2). In Chapter 3, we study the maximum likelihood estimator of the density of n i.i.d. observations, under the assumption that it is well approximated by a mixture with a large number of components. The main focus is on statistical properties with respect to the Kullback-Leibler loss. We establish risk bounds taking the form of sharp oracle inequalities, both in deviation and in expectation. A simple consequence of these bounds is that the maximum likelihood estimator attains the optimal rate ((log K)/n)^{1/2}, up to a possible logarithmic correction, in the problem of convex aggregation when the number K of components is larger than n^{1/2}. More importantly, under the additional assumption that the Gram matrix of the components satisfies the compatibility condition, the obtained oracle inequalities yield the optimal rate in the sparsity scenario: if the weight vector is (nearly) D-sparse, we get the rate (D log K)/n. As a natural complement to our oracle inequalities, we introduce the notion of nearly-D-sparse aggregation and establish matching lower bounds for this type of aggregation. Finally, in Chapter 4, we propose an algorithm that performs the Kullback-Leibler aggregation of the components of a dictionary, as discussed in Chapter 3. We compare its performance with different methods: the kernel density estimator, the 'Adaptive Dantzig' estimator, SPADES, and the EM algorithm with the BIC criterion. We then propose a method for building the dictionary of densities and study it numerically. This thesis was carried out within the framework of a CIFRE agreement with the company ARTEFACT.
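The Kullback-Leibler aggregation problem of Chapters 3 and 4 — fitting only the mixture weights over a fixed dictionary of densities — is convex in the weights and admits a simple EM-style multiplicative update. A sketch under assumed toy data (an illustration, not the author's algorithm):

```python
import math
import random

def kl_aggregate(x, densities, n_iter=200):
    """Maximize the log-likelihood over mixture weights of a fixed dictionary.

    densities: list of density functions f_j; the components are fixed and
    only the weight vector on the simplex is fitted (a convex problem).
    """
    K = len(densities)
    w = [1.0 / K] * K
    F = [[f(xi) for f in densities] for xi in x]  # precompute f_j(x_i)
    for _ in range(n_iter):
        # EM-style update: average the posterior responsibilities
        new = [0.0] * K
        for row in F:
            s = sum(wj * fj for wj, fj in zip(w, row))
            for j in range(K):
                new[j] += w[j] * row[j] / s
        w = [nj / len(x) for nj in new]
    return w

def gauss(mu):
    return lambda t: math.exp(-(t - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

random.seed(1)
x = [random.gauss(0, 1) for _ in range(300)]
w = kl_aggregate(x, [gauss(0), gauss(5)])  # true density is the first component
```

When the true density is (nearly) a sparse combination of the dictionary, the fitted weight vector concentrates on few components, which is the sparsity scenario driving the (D log K)/n rate discussed above.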