Dissertations on the topic "Méthode RAGE"
Format your source in APA, MLA, Chicago, Harvard and other citation styles
Consult the top 50 dissertations for research on the topic "Méthode RAGE".
Next to each entry in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if the corresponding details are available in the metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Barret, Julien. "Clonage, ingénierie et transfert de grands fragments de génome chez Bacillus subtilis." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0458.
Genome engineering of microorganisms has become a standard in microbial biotechnology. In 2010, promising synthetic biology technologies using yeast as a platform for the assembly and engineering of synthetic bacterial genomes, followed by their transplantation into a recipient cell, emerged. These technologies have led to the creation of the first synthetic cells and opened new avenues towards the construction of cells with fully controlled biological properties. Transferring these tools to microorganisms of industrial interest such as the Gram+ bacterium Bacillus subtilis (Bsu), a model in the biotechnology sector, would be a major step forward. This is precisely the aim of the ANR "Bacillus 2.0" project, which brings together two INRAE teams and aims to adapt all these synthetic biology tools to Bsu so as to be able to go from computer-aided design of semi-synthetic Bsu genomes to the production of new industrial strains. However, initial work on this project showed that the entire Bsu genome could not be cloned and maintained in yeast in its current state. These results threatened to call into question the feasibility of the entire project and, in particular, the relevance of using yeast as a platform for assembling the semi-synthetic Bsu genome. The goal of my thesis was to demonstrate that yeast remained a relevant host for the Bacillus 2.0 project. It was divided into three parts. In the first part, a genome cloning method recently developed in the laboratory, called CReasPy-Fusion, was progressively adapted to Bsu. The results obtained showed (i) the possible transfer of plasmid DNA between bacterial protoplasts and yeast spheroplasts, (ii) the efficiency of a CRISPR-Cas9 system carried by yeast cells to capture/modify this plasmid DNA during Bsu/yeast fusion, and then (iii) the efficiency of the same system to capture genomic fragments of about a hundred kb from three different strains. Fluorescence microscopy observations were also carried out, revealing two types of interaction that would enable the transition from protoplast/spheroplast contact to cloned bacterial DNA in yeast. In the second part of my thesis, the CReasPy-Fusion method was used in an attempt to clone large Bsu genome fragments in yeast. Genomic fragments of up to ~1 Mb could be cloned in yeast, but their capture required the prior addition of a large number of ARS to the Bsu genome to stabilize the genetic constructs. The final part was the adaptation of the RAGE method to Bsu. This method allows the transfer, not of a whole genome, but of portions of bacterial genomes from yeast to the bacteria to be edited. Proof of concept was achieved by exchanging a 155 kb genome fragment with a reduced 44 kb version. In conclusion, the work carried out during this thesis has shown the relevance of using yeast as an engineering platform for large-scale modifications of the Bsu genome. On the one hand, we have shown that fragments of around 100 kb can be cloned in yeast, modified and transferred into a recipient cell to generate Bsu mutants. This strategy offers a real alternative to genome transplantation. On the other hand, we have shown that large fragments of the Bsu genome (up to 1 Mb) can also be cloned in yeast, provided they contain numerous ARS in their sequences. Thanks to these results, cloning a reduced Bsu genome in yeast has once again become an achievable goal.
Gonzalez Camacho, Juan-Manuel. "Modélisation stochastique d'une irrigation à la raie." Montpellier 2, 1991. http://www.theses.fr/1991MON20302.
Dong, Jia. "Fast Estimation of Bit Error Rate of any digital Communication System." Télécom Bretagne, 2013. http://www.telecom-bretagne.eu/publications/publication.php?idpublication=14186.
This thesis is related to Bit Error Rate (BER) estimation for any digital communication system. In many communication system designs, the BER is a Key Performance Indicator (KPI). The popular Monte-Carlo (MC) simulation technique is well suited to any system, but at the expense of long simulation times when dealing with very low error rates. In this thesis, we propose to estimate the BER by using the Probability Density Function (PDF) estimation of the soft observations of the received bits. First, we studied a non-parametric PDF estimation technique named the Kernel method. Simulation results in the context of several digital communication systems are proposed. Compared with the conventional MC method, the proposed Kernel-based estimator provides good precision even for high SNR with a very limited number of data samples. Second, the Gaussian Mixture Model (GMM), which is a semi-parametric PDF estimation technique, is used to estimate the BER. Compared with the Kernel-based estimator, the GMM method provides better performance in the sense of minimum variance of the estimator. Finally, we investigated blind estimation of the BER, i.e., estimation when the sent data are unknown. We denote this case as unsupervised BER estimation. The Stochastic Expectation-Maximization (SEM) algorithm combined with the Kernel or GMM PDF estimation methods was used to solve this issue. By analyzing the simulation results, we show that the obtained BER estimate can be very close to the real values. This is quite promising since it could enable real-time BER estimation on the receiver side without decreasing the user bit rate with pilot symbols, for example.
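To make the kernel-based idea concrete, the following minimal sketch (our illustration, not the thesis code) estimates the BER of a toy BPSK link over an AWGN channel by fitting a Gaussian kernel density to the soft outputs and integrating its tail; the SNR, sample size and channel model are assumptions chosen for the example.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)

# Toy BPSK link over AWGN, all-ones transmitted: an error corresponds to a
# soft observation falling below zero. SNR and sample size are assumptions.
snr_db = 7.0
sigma = np.sqrt(1.0 / (2.0 * 10.0 ** (snr_db / 10.0)))
soft = 1.0 + sigma * rng.normal(size=2000)   # far fewer samples than plain MC needs

# Kernel (Parzen) estimate of the PDF of the soft outputs, integrated over
# the error region {soft < 0}.
kde = gaussian_kde(soft)
ber_kde = kde.integrate_box_1d(-np.inf, 0.0)

ber_exact = norm.sf(1.0 / sigma)             # closed form for this toy channel
print(f"KDE estimate: {ber_kde:.2e}   exact: {ber_exact:.2e}")
```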
Hu, Peng. "Méthodes particulaires et applications en finance." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14530/document.
This thesis is concerned with the analysis of interacting particle models for computational finance. The manuscript is organized in four chapters, each of which can be read separately. The first chapter provides an overview of the thesis, outlines the motivation and summarizes the major contributions. The second chapter gives a general introduction to the theory of interacting particle methods, with an overview of their applications to computational finance. We survey the main techniques and results on interacting particle systems and explain how they can be applied to the numerical solution of a variety of financial applications; to name a few: pricing complex path-dependent European options, computing sensitivities, pricing American options, as well as numerically solving partially observed control and estimation problems. The pricing of American options relies on solving a backward evolution equation, termed the Snell envelope in stochastic control and optimal stopping theory. The third and fourth chapters focus on the analysis of the Snell envelope and its variation in several particular cases. Different types of particle models are proposed and studied.
Jegourel, Cyrille. "Rare event simulation for statistical model checking." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S084/document.
In this thesis, we consider two problems that statistical model checking must cope with. The first problem concerns heterogeneous systems, which naturally introduce complexity and non-determinism into the analysis. The second problem concerns rare properties, difficult to observe, and therefore to quantify. On the first point, we present original contributions for the formalism of composite systems in the BIP language. We propose SBIP, a stochastic extension, and define its semantics. SBIP allows recourse to the stochastic abstraction of components and eliminates non-determinism. This double effect has the advantage of reducing the size of the initial system by replacing it with a system whose semantics is purely stochastic, a necessary requirement for standard statistical model checking algorithms to be applicable. The second part of this thesis is devoted to the verification of rare properties in statistical model checking. We present a state-of-the-art algorithm for models described by a set of guarded commands. Lastly, we motivate the use of importance splitting for statistical model checking and set up an optimal splitting algorithm. Both methods pursue a common goal of reducing the variance of the estimator and the number of simulations. Nevertheless, they are fundamentally different, the first tackling the problem through the model and the second through the properties.
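As a concrete illustration of the splitting idea invoked here, the sketch below (ours, not the thesis implementation) estimates a Gaussian tail probability by decomposing the rare event {X > 4} into nested intermediate levels; the levels, particle count and Metropolis kernel are assumptions chosen for the toy example.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy problem: estimate p = P(X > 4) for X ~ N(0,1) by splitting the rare
# event into nested, more likely intermediate events {X > 1} ... {X > 4}.
levels = [1.0, 2.0, 3.0, 4.0]
n = 10_000

x = rng.normal(size=n)
p_hat = 1.0
for lev in levels:
    survivors = x[x > lev]
    p_hat *= survivors.size / n              # conditional transition probability
    x = rng.choice(survivors, size=n)        # replicate survivors back to n ...
    for _ in range(10):                      # ... and decorrelate them with a few
        prop = x + 0.5 * rng.normal(size=n)  # Metropolis steps targeting N(0,1)
        log_acc = -0.5 * (prop**2 - x**2)    # restricted to {x > lev}
        accept = (prop > lev) & (np.log(rng.uniform(size=n)) < log_acc)
        x = np.where(accept, prop, x)

print(f"splitting estimate: {p_hat:.2e}   exact: {norm.sf(4.0):.2e}")
```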
Walter, Clément. "Using Poisson processes for rare event simulation." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC304/document.
This thesis addresses the issue of extreme event simulation. Starting from an original understanding of splitting methods, a new theoretical framework is proposed, independent of any particular algorithm. This framework is based on a point process associated with any real-valued random variable and allows probability, quantile and moment estimators to be defined without any hypothesis on this random variable. The artificial selection of thresholds in splitting vanishes, and the estimator of the probability of exceeding a threshold is in fact an estimator of the whole cumulative distribution function up to the given threshold. These estimators are based on the simulation of independent and identically distributed replicas of the point process, so they allow for the use of massively parallel computer clusters. Suitable practical algorithms are thus proposed. Finally, it can happen that these advanced statistics still require too many samples. In this context, the computer code is considered as a random process with known distribution. The point process framework makes it possible to handle this additional source of uncertainty and to estimate easily the conditional expectation and variance of the resulting random variable. It also defines new SUR enrichment criteria designed for extreme event probability estimation.
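The point-process view can be illustrated with the classical "last particle" estimator, which the sketch below implements for an exponential toy variable where exact conditional resampling is available thanks to memorylessness (a hedged example of ours: the distribution, particle count and threshold are assumptions, and a general code would replace the exact resampling with an MCMC move).

```python
import numpy as np

rng = np.random.default_rng(2)

# X ~ Exp(1), estimate p = P(X > q). Memorylessness gives exact conditional
# resampling: (X | X > m) ~ m + Exp(1).
n, q = 100, 10.0                        # exact answer: exp(-10) ~ 4.5e-5
x = rng.exponential(size=n)
m_count = 0
while x.min() <= q:
    i = np.argmin(x)
    x[i] = x.min() + rng.exponential()  # move the lowest particle upward
    m_count += 1

# Each move multiplies the estimated survival probability by (1 - 1/n);
# m_count is a realization of a Poisson(-n log p) random variable.
p_hat = (1.0 - 1.0 / n) ** m_count
print(f"estimate: {p_hat:.2e}   exact: {np.exp(-q):.2e}")
```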
Chang, Bingbing. "Évaluation du stress au travail et méthodes de prévention." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS214.
This thesis aims to develop and validate experimentally a method of assessing occupational stress, inspired by the Job Demands-Resources model, in a company and among PhD students. Two questionnaires customized to these two populations (employees vs. PhD students) were created and validated. The results show that our two questionnaires are reliable, valid and adaptable to various professional contexts. Measures of coping, well-being and lifestyle allow for an overall assessment of the stressful situation and its repercussions on the individual. Stress profiles and coping combinations identified by cluster analysis provide a better understanding of the complexities of psychological problems. The final part is devoted to a protocol of induction of psychological stress in a laboratory setting, to evaluate and treat a state of stress inherent in a professional context. This stress induction protocol, inspired by the Trier Social Stress Test, was set up to study the associations among personality, coping, emotions and heart rate variability. Trait anxiety, neuroticism, extraversion and conscientiousness play important roles in stress, coping and heart rate variability. Fear and shame were identified as the negative emotions of stress. Mitigation of certain positive emotions was revealed under stress. These studies offer new directions for the prevention of psychological risks and serve as a basis for an innovative, inexpensive tool, adaptable to a wider variety of professions, for the detection and management of occupational stress.
Imhoff, Jean-François. "Modélisation magnétique et mécanique des machines électriques par la méthode des éléments finis." Grenoble INPG, 1989. http://www.theses.fr/1989INPG0115.
Reynal, Sylvain. "Phase transitions in long-range spin models : the power of generalized ensembles." Cergy-Pontoise, 2005. http://tel.archives-ouvertes.fr/docs/00/04/84/71/PDF/tel-00010256.pdf.
This thesis uses generalized-ensemble Monte Carlo methods to explore the critical behavior of spin chains with algebraically decaying interactions. The first part of the thesis investigates the phase diagram of a long-range Potts chain using a multicanonical algorithm. A new method based on spinodal points is proposed to detect the order of phase transitions. The boundary between first- and second-order transitions is located with unprecedented accuracy using this method, and a new, unusual finite-size effect is observed. The second part of the thesis formulates a new, versatile multicanonical method that includes cluster updates, considerably extending the range of attainable lattice sizes. The method is shown to be far more accurate than standard multicanonical methods. It is applied to the investigation of finite-size effects at first-order transitions, where strong evidence suggests that the mixed-phase configuration has a fractal dimension depending on the decay parameter of the interaction. Finally, a long-range Ising chain with bimodal random fields is studied. The existence of a tricritical point for slowly decaying interactions is demonstrated.
Saadani, Ahmed. "Méthodes de simulations rapides du lien radio pour les systèmes 3G." Phd thesis, Télécom ParisTech, 2003. http://pastel.archives-ouvertes.fr/pastel-00000651.
Jung, Matthieu. "Evolution du VIH : méthodes, modèles et algorithmes." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20052/document.
Nucleotide sequence data enable the inference of phylogenetic trees, or phylogenies, describing the evolutionary relationships of the sequences. Combining these sequences with their sampling date or country of origin allows inferring the temporal or spatial localization of their common ancestors. These data and methods are widely used with viral sequences, and particularly with the human immunodeficiency virus (HIV), to trace the viral epidemic history over time and throughout the globe. Using sequences sampled at different points in time (or heterochronous sequences) is also a means to estimate their substitution rate, which characterizes the speed of evolution. The most commonly used methods to achieve these tasks are accurate, but are computationally heavy since they are based on complex models, and can only handle a few hundred sequences. With an increasing number of sequences available in the databases, often several thousand for a given study, the development of fast and accurate methods becomes essential. Here, we present a new distance-based method, named Ultrametric Least Squares, which is based on the principle of least squares (very popular in phylogenetics) to estimate the substitution rate of a set of heterochronous sequences and the dates of their most recent common ancestors. We demonstrate that the criterion to be optimized is piecewise parabolic, and provide an efficient algorithm to find the global minimum. Using sequences sampled at different locations also helps to trace the transmission chains of an epidemic. In this respect, we used all available sequences (~3,500) of HIV-1 subtype C, responsible for nearly 50% of global HIV-1 infections, to estimate its major migratory flows on a worldwide scale and its geographic origin. Innovative tools, based on the principle of parsimony, combined with several statistical criteria were used to synthesize and interpret the information in a large phylogeny representing all the studied sequences. Finally, the temporal and geographical origins of HIV-1 subtype C in Senegal were further explored, more specifically for men who have sex with men.
Tola, Pascal. "Détection visible de l'EXAFS : une nouvelle méthode de détection de la structure fine des spectres d'absorption X." Nancy 1, 1992. http://www.theses.fr/1992NAN10002.
Doyen, Matthieu. "Méthodes probabilistes pour le monitoring cardio-respiratoire des nouveau-nés prématurés." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S049/document.
The surveillance of premature newborns placed in intensive care units has led to the notion of monitoring and to the acquisition of many physiological signals. While this information is well used for the diagnosis and prevention of emergency situations, it must be acknowledged that, to date, it is less the case for predictive purposes. This is mainly due to the difficulty of extracting reliable information in real time, without any visual control, from non-stationary signals. This thesis aims to propose robust methods, adapted to the context of neonatal intensive care units and to real time. For this purpose, a set of generic methods, applied to cardiac variability but capable of being adapted to other physiological constants such as respiration, was developed and tested in a clinical context. Four main parts illustrate these points: - The proposal of an original multicharacteristic probabilistic real-time detection method for the robust detection of events of interest in noisy physiological signals. Generic, this solution is applied to robust QRS complex detection in ECG signals. It is based on the real-time calculation of several posterior probabilities of the signal properties before merging them in a decision node using the weighted Kullback-Leibler divergence. Compared to two classic methods from the literature on two noisy databases, it has a lower detection error rate (20.91% vs. 29.02% (wavelets) and 33.08% (Pan-Tompkins) on the test database). - The proposal of using hidden semi-Markovian models for the segmentation of temporal periods with the most reliable event detections. Compared to two methods from the literature, the proposed solution achieves better performance, the error criterion obtained being significantly lower (between -21.37% and -74.98% depending on the database and approach evaluated). - The selection of an optimal detector for the monitoring of apnea-bradycardia events, in terms of reliability and earliness, based on ECG data obtained from newborns. The performance of the selected detector is compared to the alarms generated by an industrial continuous monitoring device traditionally used in neonatology units (Philips IntelliVue monitor). The method based on abrupt changes of the RR average achieves the best results in terms of time (3.99 s vs. 11.53 s for the IntelliVue monitor) and reliability (error criterion of 43.60% vs. 80.40%). - The design and development of the SYNaPSE (SYstem for Noninvasive Physiological Signal Explorations) software platform for the acquisition of various physiological signals in large quantities, and in a non-invasive way, within the care units. The modular design of this platform, as well as its real-time properties, allows simple and fast integration of complex signal processing methods. Its translational interest is shown in the analysis of a database aimed at studying the impact of bilirubin on cardiac variability.
Hajj, Paméla El. "Méthodes d'aide à la décision thérapeutique dans les cas des maladies rares : intérêt des méthodes bayésiennes et application à la maladie de Horton." Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS037/document.
In recent years, scientists have had difficulty studying rare diseases by conventional methods, because the sample size needed in such studies to reach a conventional frequentist power is not adapted to the number of available patients. After systematically searching the literature and characterizing the different methods used in the context of rare diseases, we noted that most of the proposed methods are deterministic and globally unsatisfactory, because it is difficult to correct for the insufficient statistical power. More attention has been paid to Bayesian models, which, through a prior distribution combined with a current study, enable decisions to be drawn from a posterior distribution. Determination of the prior distribution in a Bayesian model is challenging; we describe the process of determining the prior, including the possibility of considering information from historical controlled trials and/or data coming from other studies sufficiently close to the subject of interest. First, we describe a Bayesian model that aims to test the non-inferiority hypothesis of a trial based on the hypothesis that methotrexate is more effective than corticosteroids alone. On the other hand, our work rests on the use of the epsilon-contamination method, which is based on contaminating a prior that is not entirely satisfactory with a series of distributions drawn from information on other studies sharing close conditions, treatments or even populations. Contamination is a way to include the proximity of the information provided by these studies.
Silva Lopes, Laura. "Méthodes numériques pour la simulation d'évènements rares en dynamique moléculaire." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC1045.
In stochastic dynamical systems, such as those encountered in molecular dynamics, rare events naturally appear as events due to low-probability stochastic fluctuations. Examples of rare events in our everyday life include earthquakes and major floods. In chemistry, protein folding, ligand unbinding from a protein cavity, and the opening or closing of channels in cell membranes are examples of rare events. Simulation of rare events has been an important field of research in biophysics over the past thirty years. The events of interest in molecular dynamics generally involve transitions between metastable states, which are regions of the phase space where the system tends to stay trapped. These transitions are rare, making the use of a naive, direct Monte Carlo method computationally impracticable. To deal with this difficulty, sampling methods have been developed to efficiently simulate rare events. Among them are splitting methods, which consist in dividing the rare event of interest into successive nested, more likely events. Adaptive Multilevel Splitting (AMS) is a splitting method in which the positions of the intermediate interfaces, used to split reactive trajectories, are adapted on the fly. The surfaces are defined such that the probability of transition between them is constant, which minimizes the variance of the rare event probability estimator. AMS is a robust method that requires few user-defined parameters and is therefore easy to use. This thesis focuses on the application of the adaptive multilevel splitting method to molecular dynamics. Two kinds of systems are studied. The first contains simple models that allowed us to improve the way AMS is used. The second contains more realistic and challenging systems, where AMS is used to get a better understanding of the molecular mechanisms. Hence, the contributions of this thesis include both methodological and numerical results. We first validate the AMS method by applying it to the paradigmatic alanine dipeptide conformational change. We then propose a new technique combining AMS and importance sampling to efficiently sample the initial conditions ensemble when using AMS to obtain the transition time. This is validated on a simple one-dimensional problem, and our results show its potential for applications in complex multidimensional systems. A new way to identify reaction mechanisms is also proposed in this thesis. It consists in applying clustering techniques to the ensemble of reactive trajectories generated by the AMS method. The implementation of the AMS method for NAMD was improved during this thesis work. In particular, this manuscript includes a tutorial on how to use AMS in NAMD. The use of the AMS method allowed us to study two complex molecular systems. The first consists in the analysis of the influence of the water model (TIP3P and TIP4P/2005) on the β-cyclodextrin and ligand unbinding process. In the second, we apply the AMS method to sample unbinding trajectories of a ligand from the N-terminal domain of the Hsp90 protein.
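For readers unfamiliar with AMS, the following self-contained sketch (our illustration, with an overdamped Langevin particle in a double well as an assumed toy system; the temperature, wells and replica count are not taken from the thesis) shows the core loop: kill the replica with the lowest maximal level, branch a survivor at that level, and accumulate the (N-1)/N factors.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: overdamped Langevin dynamics in the double well V(x) = (x^2-1)^2,
# estimating the probability that a trajectory started at x0 = -0.9 reaches
# B = {x > 0.95} before falling back into A = {x < -0.95}.
beta, dt, x0 = 6.0, 1e-3, -0.9
grad_v = lambda x: 4.0 * x * (x * x - 1.0)

def run(x_start):
    """Euler-Maruyama simulation until the path enters A or B."""
    path = [x_start]
    x = x_start
    while -0.95 <= x <= 0.95:
        x += -grad_v(x) * dt + np.sqrt(2.0 * dt / beta) * rng.normal()
        path.append(x)
    return np.array(path)

n_rep = 50
paths = [run(x0) for _ in range(n_rep)]
p_hat = 1.0
while True:
    scores = np.array([p.max() for p in paths])  # level reached by each replica
    if scores.min() > 0.95:                      # every replica reached B: stop
        break
    worst = int(np.argmin(scores))               # kill one replica (k = 1 variant)
    z = scores[worst]
    p_hat *= (n_rep - 1) / n_rep
    # Branch a surviving replica: copy it up to its first passage above level z,
    # then let it evolve independently from that point on.
    donor = paths[rng.choice([i for i in range(n_rep) if i != worst])]
    cut = int(np.argmax(donor > z))
    paths[worst] = np.concatenate([donor[:cut], run(donor[cut])])

print(f"estimated probability of reaching B before A: {p_hat:.2e}")
```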
Gérard, Anthony. "Bruit de raie des ventilateurs axiaux : Estimation des sources aéroacoustiques par modèles inverse et Méthodes de contrôle." Phd thesis, Université de Poitiers, 2006. http://tel.archives-ouvertes.fr/tel-00162200.
Gérard, Anthony. "Bruit de raie des ventilateurs axiaux : estimation des sources aéroacoustiques par modèles inverses et méthodes de contrôle." Thèse, Université de Sherbrooke, 2006. http://savoirs.usherbrooke.ca/handle/11143/1798.
Mosnier, Jean-Paul. "Spectre d'émission X d'ions silicium par la méthode "faisceau-feuille"." Paris 6, 1986. http://www.theses.fr/1986PA066025.
Dubois, Franck. "Cuprates à intercroissance double couche pérovskite-couche AO type NaCl, substitués par une terre rare de "petite taille" : cristallochimie et propriétés électriques." Orléans, 2001. http://www.theses.fr/2001ORLE2023.
Gaudrat, Véronique. "Quelques méthodes pour l'optimisation de la coulée continue de l'acier dans le cas non stationnaire." Paris 9, 1987. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1987PA090032.
Guinet, Mickaël. "Première détermination de la constante de Boltzmann par une méthode optique." Phd thesis, Université Paris-Nord - Paris XIII, 2006. http://tel.archives-ouvertes.fr/tel-00121051.
In this thesis, I describe the solutions adopted to obtain a laser beam with perfectly controlled frequency, widely tunable around 10 µm and of constant intensity. I also describe the options chosen for the temperature control of the ammonia gas.
The results obtained are very encouraging: after only two years, we have already obtained a first measurement of the Boltzmann constant with a relative uncertainty of 1.9×10⁻⁴: k = 1.38065(26)×10⁻²³ J·K⁻¹.
In this manuscript, I also present several avenues for improvement in the short and medium term.
Finally, I present a Ramsey-Bordé fringe experiment referenced to the primary frequency standard located in Paris (at SYRTE). The absolute frequency measurement chain reaches a resolution of 10⁻¹⁴, and the uncertainty on the measurement of the central fringe is also of the order of 10⁻¹⁴. In the medium term, this measurement system referenced to the primary standard could be used to control the frequency of the CO2 laser in future measurements of the Boltzmann constant.
Guinet, Mickaël. "Première détermination de la constante de Boltzmann par une méthode optique." Phd thesis, Paris 13, 2006. http://www.theses.fr/2006PA132029.
Turati, Pietro. "Méthodes de simulation adaptative pour l'évaluation des risques de système complexes." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC032/document.
Risk assessment is conditioned on the knowledge and information available at the moment of the analysis. Modeling and simulation are ways to explore and understand system behavior, to identify critical scenarios and to avoid surprises. A number of simulations of the model are run with different initial and operational conditions to identify scenarios leading to critical consequences and to estimate their probabilities of occurrence. For complex systems, the simulation models can be: i) high-dimensional; ii) black-box; iii) dynamic; and iv) computationally expensive to run, preventing the analyst from running the simulations for the multiple conditions that need to be considered. The present thesis presents advanced frameworks for simulation-based risk assessment. The methods developed within these frameworks are designed to limit the computational cost required by the analysis, in order to keep them scalable to complex systems. In particular, all the proposed methods share the powerful idea of automatically focusing and adaptively driving the simulations towards those conditions that are of interest for the analysis, i.e., towards risk-oriented information. The advantages of the proposed methods have been shown on different applications including, among others, a gas transmission subnetwork, a power network and the Advanced Lead Fast Reactor European Demonstrator (ALFRED).
Beaumont, Catherine. "Application de la méthode du maximum de vraisemblance restreint (REML) à l'estimation des paramètres génétiques des trois premières lactations en race Montbéliarde." Paris 11, 1988. http://www.theses.fr/1988PA112064.
Genetic parameters of the first three lactations of Montbéliarde cows were estimated for the main dairy traits (milk, fat, protein and "useful" yields adjusted for lactation length, and fat and protein contents). The literature on variance and covariance component estimators was reviewed, emphasizing the optimal properties of the Restricted Maximum Likelihood (REML) estimator. Two data sets were analysed, including records on 30,751 cows born from 128 young sires and 52 proven sires. Daughter performances from the most widely used proven sires were incorporated in order to improve the degree of connectedness between herds. The model fitted young sires as random, and proven sires, herd-year, season-year of calving, age at first calving and length of the previous lactation as fixed effects. The relationships among young bulls were included. Meyer's "short-cut" algorithm was first considered. Since it gave estimates that lay outside the parameter space, a different approach, an "E-M"-type algorithm, was used. All genetic correlations were larger than 0.89. Correlations between the first and third lactations were slightly lower than the others. Heritabilities of milk, fat, protein and useful yields ranged from 0.17 to 0.27. Phenotypic correlations between successive lactations were higher than 0.6 and those between lactations 1 and 3 lower than 0.55. Heritabilities of fat and protein contents were higher than 0.44, with phenotypic correlations stable at about 0.7. It was concluded that, in this breed, the "repeatability model", which considers all lactation records as a single trait, could be considered in genetic evaluation procedures for dairy traits without significant losses in efficiency.
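As a hedged illustration of how such variance components translate into heritabilities, the sketch below simulates a balanced half-sib (sire) design and recovers h² by a simple ANOVA method-of-moments estimate; the design sizes and the true h² are made-up values, and the thesis itself works with REML on unbalanced field data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Balanced half-sib (sire model) toy design: y_ij = s_i + e_ij, where the
# sire variance is one quarter of the additive genetic variance.
n_sires, n_daughters, h2_true = 200, 40, 0.25
sigma_p2 = 1.0                                   # phenotypic variance
sigma_s2 = h2_true / 4.0 * sigma_p2              # sire variance = h2/4 * sigma_p2
sigma_e2 = sigma_p2 - sigma_s2

s = rng.normal(0.0, np.sqrt(sigma_s2), n_sires)
y = s[:, None] + rng.normal(0.0, np.sqrt(sigma_e2), (n_sires, n_daughters))

# ANOVA (method-of-moments) estimates of the variance components.
msb = n_daughters * y.mean(axis=1).var(ddof=1)   # between-sire mean square
msw = y.var(axis=1, ddof=1).mean()               # within-sire mean square
sigma_s2_hat = (msb - msw) / n_daughters
h2_hat = 4.0 * sigma_s2_hat / (sigma_s2_hat + msw)
print(f"true h2 = {h2_true}, estimated h2 = {h2_hat:.3f}")
```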
Pigneur, Judith. "Mise au point d’une méthode intégrée d’analyse des impacts des filières de matières premières minérales." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC093.
The subject of the thesis is the development of an integrated method for analyzing the social and environmental costs of depletion in metal value chains. The thesis defended is that the depletion of metallic resources, beyond the question of the economic limits of exploitation, is a multiplier of the social and environmental costs generated by our current modes of production and consumption. The thesis was carried out as part of the Bureau for Appraisal of Social Impacts for Citizen information (Basic) research and development project on the development of sustainability indicators. The thesis is part of a transdisciplinary approach, combining an approach from the management sciences, namely the analysis of global value chains (GVC), mobilized to understand the influence of the organization of globalized chains on social and environmental impacts, and an economics-based approach, that of social costs, as developed by Karl William Kapp, which looks at the costs of social and environmental damage inherent in our economic system. The thesis aims to contribute both to the reinforcement of the methodological framework of social costs developed by the Basic, and to the emerging research on depletion within the field of the development of sustainability indicators. The research aims to explore the links between depletion, the increased social and environmental costs of exploitation, and the influence of globalized chains on the occurrence of these costs. This general problem is divided into two parts. A first, theoretical part, composed of chapters 1 and 2, contributes to defining the depletion of metals in a perspective of strong sustainability and to formalizing a framework for evaluating the social costs consistent with this definition. In chapter 1, depletion is redefined as two joint, continuous and irreversible phenomena: 1) a loss of quantity (loss of material throughout the supply chain) and quality (diminution of ore grades and difficulties of recycling) of the resource; 2) a multiplier of the environmental, health and social impacts of metal value chains. Chapter 2 proposes a new framework to account for depletion and its social costs, linking the work of ecological economics and institutional economics based on the work of Karl William Kapp. This methodology develops an approach focused on studying the causes of social costs and the levers to reduce these costs. A second, empirical part applies the evaluation framework to the case study of the neodymium chain used in Nd-Fe-B magnets. This case study shows that, although the depletion of rare earth reserves is not perceived as an imminent danger, the social costs of depletion are already significant and actions could be implemented to reduce these costs. This case study demonstrates the social and ecological relevance of the analysis of the social costs of depletion.
Bonnaud, Céline. "Vers une méthode de recyclage et de valorisation des aimants permanents à base de terres rares par électrochimie en milieux liquides ioniques." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAI039/document.
Rare earth elements (REE) are currently essential for the development of new technologies; from everyday objects to green energy devices, they are especially used in permanent magnets. NdFeB and SmCo permanent magnets represent more than 50% of the REE market and offer a high recycling opportunity. The present study focuses on their recycling, mainly via electrochemistry in ionic liquid (IL) medium, which makes it possible to reach the reduction potentials of REE (< -2 V vs. NHE). Samarium, neodymium, dysprosium, praseodymium and cobalt were successfully electrodeposited in [C1C4Pyrr][Tf2N] according to a potentiometric method, at 25 °C and on a glassy carbon electrode. Two general recycling methods are finally proposed and have been applied to industrial permanent magnets. The first uses only electrochemistry and is based on a first magnet electrodissolution step in the IL, followed by selective electroplating to recover the transition metals. Electrodeposition of REE could then be possible. The second method starts with an acid leaching of the magnet in order to efficiently separate the transition metals from the REE via the formation of phosphate salts. Dissolution of the obtained salts in the IL would then make it possible to electrodeposit the metal ions.
Dalloz, Richard, and Jean Huot. "L'entraînement sportif par la méthode des créneaux : Application et validation sur le terrain." Nancy 1, 1988. http://www.theses.fr/1988NAN10482.
Meyer, Françoise. "Élaboration de couches minces métalliques sur semiconducteur par méthodes ioniques : caractérisation des films et de leurs interfaces." Paris 11, 1989. http://www.theses.fr/1989PA112297.
This thesis reports a study of metal/semiconductor contacts prepared by Ion Beam Sputter Deposition in ultrahigh vacuum systems. Mo/GaAs, Ni/GaAs, Mo/Si and W/Si contacts were investigated, and numerous characterizations were carried out to determine the influence of the deposition method on the properties of the layers and of the interfaces, and in particular the effects of fast particles on the growing film. Growth mechanisms of metal layers and thermal stability were studied by in situ Auger analyses. Silicide formation was observed at rather low temperatures: 350°C and 450°C for MoSi2 and WSi2, respectively. The use of xenon leads to better electrical results (resistivity of the films and characteristics of Schottky diodes) than that of argon. We correlate fairly well the embedded argon atoms with the incident ions backscattered by the target and striking the growing film with an energy higher than the threshold incorporation energy. We evidence a linear relationship between the argon content and the resistivity of W films. An evaluation of the energy of backscattered and sputtered species allows a better understanding of the origin of stresses in films deposited by sputtering.
Jacquemart, Damien. "Contributions aux méthodes de branchement multi-niveaux pour les évènements rares, et applications au trafic aérien." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S186/document.
The thesis deals with the design and mathematical analysis of reliable and accurate Monte Carlo methods for estimating the (very small) probability that a Markov process reaches a critical region of the state space before a deterministic final time. The underlying idea behind the multilevel splitting methods studied here is to design an embedded sequence of intermediate, more and more critical regions, in such a way that reaching an intermediate region, given that the previous intermediate region has already been reached, is not so rare. In practice, trajectories are propagated, selected and replicated as soon as the next intermediate region is reached, and it is easy to accurately estimate the transition probability between two successive intermediate regions. The bias due to the time discretization of the Markov process trajectories is corrected using perturbed intermediate regions, as proposed by Gobet and Menozzi. An adaptive version would consist in the automatic design of the intermediate regions, using empirical quantiles. However, it is often difficult, if not impossible, to remember where (in which state) and when (at which time instant) each successful trajectory reached the empirically defined intermediate region. The contribution of the thesis consists in using a first population of pilot trajectories to define the next threshold, in using a second population of trajectories to estimate the probability of exceeding this empirically defined threshold, and in iterating these two steps (definition of the next threshold, and evaluation of the transition probability) until the critical region is reached. The convergence of this adaptive two-step algorithm is studied in the asymptotic framework of a large number of trajectories. Ideally, the intermediate regions should be defined in terms of the spatial and temporal variables jointly (for example, as the set of states and times for which a scalar function of the state exceeds a time-dependent threshold). The alternative point of view proposed in the thesis is to keep the intermediate regions as simple as possible, defined in terms of the spatial variable only, and to make sure that trajectories that manage to exceed a threshold at an early time instant are replicated more than trajectories that exceed the same threshold at a later time instant. The resulting algorithm combines importance sampling and multilevel splitting. Its performance is evaluated in the asymptotic framework of a large number of trajectories, and in particular a central limit theorem is obtained for the relative approximation error.
Dalce, Rejane. "Méthodes de localisation par le signal de communication dans les réseaux de capteurs sans fil en intérieur." Thesis, Toulouse, INSA, 2013. http://www.theses.fr/2013ISAT0020/document.
The development of Wireless Sensor Networks has given new life to research in the field of localization. Many proposals have been made, which can be classified as either range-free or range-based solutions. The range-free category relies on a priori knowledge of the network, while the latter uses the available hardware to measure signal characteristics from which distance information can be derived. Although the origin of the information can vary, all proposals introduce either a new protocol, a novel algorithm or a new and improved physical layer. Our work led to the definition of a new protocol and an efficient algorithm. Aside from allowing the nodes to collect Time Of Flight related data, the Parallel Symmetric Double-Sided Two-Way Ranging protocol (PSDS-TWR) reduces overhead and energy consumption, making the localization service affordable for the network. The performance of this protocol, in terms of duration, has been studied using a homemade simulator named DokoSim. We also introduce an algorithm based on rings and linear search. This inter-Ring Localization Algorithm (iRingLA) achieves a localization error of less than 2 m in 70% of the cases while being tested on our Chirp Spread Spectrum based prototype.
Youbi, Abdelaziz. "Méthode d'analyse des réponses cardio-vasculaire de sujets à des changements posturaux et d'activité." Paris 6, 1988. http://www.theses.fr/1988PA066607.
Sayin, Halil. "Balance sympatho-vagale chez le rat éveillé : méthodes d'étude et application à la fibrillation atriale." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1149/document.
The aim of the present work was (1) to compare the different methods currently used to assess sympathovagal balance (SVB) in rats, and (2) to assess the effect of SVB alterations towards vagal predominance on atrial electrical instability in aging spontaneously hypertensive rats (SHR). The electrocardiogram was measured in conscious rats using chronically implanted telemetric probes. The reference method to estimate SVB is based on the calculation of the ratio of intrinsic heart rate (HR) to resting HR. Depending on whether the index is greater or lower than 1, one can conclude sympathetic or vagal predominance, respectively. Intrinsic HR is obtained after the combined administration of selective antagonists of both branches of the autonomic nervous system, i.e. a β-adrenergic blocker (atenolol) and a muscarinic receptor antagonist (methylatropine). Other methods (autonomic tones measured separately, calculation of indices derived from heart rate variability analysis) provide inconsistent or conflicting results. The chronic infusion of an acetylcholinesterase inhibitor (pyridostigmine) in aging SHRs induced relative vagal hypertonia (SVB = 0.81±0.02) in comparison with untreated rats (SVB = 1.06±0.01), along with sinus bradycardia and an increased frequency and duration of atrial tachyarrhythmia episodes. These studies highlight the value of the reference method for evaluating SVB in conscious rats. Potentiation of endogenous vagal activity aggravates atrial electrical instability in aging SHRs, consistent with a pathogenic role of the parasympathetic nervous system in this model.
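A minimal worked example of the reference index follows (our sketch; the heart rates are hypothetical). Note that for the quoted values to place vagal predominance below 1, the ratio must be taken as resting HR over intrinsic HR, which is what the function below assumes.

```python
# Sketch of the reference sympathovagal balance (SVB) index (hypothetical
# numbers). Intrinsic HR is the rate measured after combined beta-adrenergic
# (atenolol) and muscarinic (methylatropine) blockade. For SVB < 1 to indicate
# vagal predominance, as in the values quoted above, the index is assumed
# here to be resting HR divided by intrinsic HR.
def svb_index(resting_hr: float, intrinsic_hr: float) -> float:
    """SVB > 1: sympathetic predominance; SVB < 1: vagal predominance."""
    return resting_hr / intrinsic_hr

resting, intrinsic = 330.0, 390.0      # beats/min, hypothetical conscious rat
print(f"SVB = {svb_index(resting, intrinsic):.2f}")   # 0.85 -> vagal predominance
```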
Bernhardt, Stéphanie. "Performances et méthodes pour l'échantillonnage comprimé : Robustesse à la méconnaissance du dictionnaire et optimisation du noyau d'échantillonnage." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS443/document.
In this thesis, we are interested in two different low-rate sampling schemes that challenge Shannon's theory: the sampling of finite rate of innovation signals and compressed sensing. Recently it has been shown that, using an appropriate sampling kernel, finite rate of innovation signals can be perfectly sampled even though they are non-bandlimited. In the presence of noise, reconstruction is achieved by a model-based estimation procedure. In this thesis, we consider the estimation of the amplitudes and delays of a finite stream of Dirac pulses using an arbitrary kernel, and the estimation of a finite stream of arbitrary pulses using the Sum of Sincs (SoS) kernel. In both scenarios, we derive the Bayesian Cramér-Rao Bound (BCRB) for the parameters of interest. The SoS kernel is an interesting kernel since it is totally configurable by a vector of weights. In the first scenario, based on convex optimization tools, we propose a new kernel minimizing the BCRB on the delays, while in the second scenario we propose a family of kernels which maximizes the Bayesian Fisher Information, i.e., the total amount of information about each of the parameters in the measurements. The advantage of the proposed family is that it can be user-adjusted to favor either of the estimated parameters. Compressed sensing is a promising emerging domain which outperforms the classical limit of the Shannon sampling theory if the measurement vector can be approximated as the linear combination of few basis vectors extracted from a redundant dictionary matrix. Unfortunately, in realistic scenarios, the knowledge of this basis, or equivalently of the entire dictionary, is often uncertain, i.e. corrupted by a Basis Mismatch (BM) error. The related estimation problem is based on the matching of continuous parameters of interest to a discretized parameter set over a regular grid. Generally, the parameters of interest do not lie on this grid and there exists an estimation error even at high Signal to Noise Ratio (SNR). This is the off-grid (OG) problem. The consequence of the BM and OG mismatch problems is that the estimation accuracy, in terms of Bayesian Mean Square Error (BMSE), of popular sparse-based estimators collapses even if the support is perfectly estimated and in the high SNR regime. This saturation effect considerably limits the effective viability of these estimation schemes. In this thesis, the BCRB is derived for the CS model with unstructured BM and OG. We show that even though both problems share a very close formalism, they lead to different performances. In the biased-dictionary-based estimation context, we propose and study analytical expressions of the Bayesian Mean Square Error (BMSE) of the estimation of the grid error at high SNR. We also show that this class of estimators is efficient and thus reaches the Bayesian Cramér-Rao Bound (BCRB) at high SNR. The proposed results are illustrated in the context of line spectra analysis for several popular sparse estimators. We also study the Expected Cramér-Rao Bound (ECRB) on the estimation of the amplitude for a small OG error and show that it follows well the behavior of practical estimators in a wide SNR range. In the context of BM and OG errors, we propose two new estimation schemes, called Bias-Correction Estimator (BiCE) and Off-Grid Error Correction (OGEC) respectively, and study their statistical properties in terms of theoretical bias and variances.
Both estimators are essentially based on an oblique projection of the measurement vector and act as a post-processing estimation layer for any sparse-based estimator, considerably mitigating the BM (respectively OG) degradation. The proposed estimators are generic, since they can be associated with any sparse-based estimator, fast, and have good statistical properties. To illustrate our results and propositions, they are applied in the challenging context of the compressive sampling of finite rate of innovation signals.
Qin, Liang. "Application of irreversible Monte Carlo in realistic long-range systems." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE009.
This thesis studies the behavior of event-chain Monte Carlo (ECMC) in long-range particle systems. In the first two chapters, we introduce established methods for molecular simulation, highlighting their difficulties in dealing with Coulomb interactions, and give the basics of ECMC. The third chapter presents our framework for Coulomb system sampling using ECMC. Under the tin-foil convention, the formulation of electrostatics consisting of pairwise terms can be directly applied to the cell-veto method. Together with dipole factorization, we obtain an O(N log N)-per-sweep algorithm for dipole systems. Chapters four and five describe our development of a scientific application called JeLLyFysh for molecular simulation through ECMC. Its mediator design and stream processing of all operations can best accommodate future extensions. Using JeLLyFysh, we profile the performance of ECMC for large water systems in chapter six. The resulting dynamics imply that a more sophisticated scheme is needed to equilibrate the polarization. Finally, in chapter seven, we test the sampling strategy with sequential direction changes. The dipole evolution exhibits distinct dynamics, and the set of direction choices and the order in which they are selected both prove crucial for mixing the dipole orientation.
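To give a feel for how ECMC moves differ from reversible Metropolis updates, here is a minimal sketch for hard rods on a ring, a standard toy system chosen by us for illustration (the thesis treats long-range and Coulomb models, for which the event rates are far more involved).

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal event-chain Monte Carlo for hard rods of diameter sigma on a ring
# of circumference L: a chain of total displacement `budget` moves to the
# right, and on each rod-rod contact the motion is transferred ("lifted")
# to the neighbor that was hit.
n, L, sigma = 10, 20.0, 0.5

def ecmc_chain(x, budget):
    i = rng.integers(len(x))                       # rod that starts the chain
    while budget > 0.0:
        j = (i + 1) % len(x)                       # right neighbor on the ring
        gap = max((x[j] - x[i]) % L - sigma, 0.0)  # free flight before contact
        step = min(gap, budget)
        x[i] = (x[i] + step) % L
        budget -= step
        if budget > 0.0:                           # collision event: lift
            i = j
    return x

x = np.linspace(0.0, L, n, endpoint=False)         # valid initial configuration
for _ in range(10_000):
    x = ecmc_chain(x, budget=1.0)

gaps = (np.roll(x, -1) - x) % L                    # neighbor order is preserved
assert np.all(gaps >= sigma - 1e-9), "overlap detected"
print("sample configuration:", np.round(np.sort(x), 2))
```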
Hafienne, Imen. "Matériaux dopés terres rares pour la détection tout-optique de méthane." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMC220.
This work is devoted to the development of an all-optical gas sensor based on the conversion of an IR signal to a visible signal, making it possible to transport the probe signal through a silica optical fiber over large distances, thus considerably increasing the scope of possible applications. The envisaged sensor consists of two main parts: the first is an infrared source around 3.5 µm that probes the methane absorption, and the second is a frequency converter consisting of an Er3+-doped fiber. The principle of the sensor is, first of all, to develop an IR source around 3.5 µm. This IR signal is passed through a methane cell to probe its absorption, after which the transmitted signal is injected into an Er3+-doped fiber. In this fiber, the IR signal transmitted through the gas cell is converted into visible light at 660 nm by the excited-state absorption phenomenon, following the simultaneous excitation of the fiber at 808 nm (pump) and at 3.5 µm (probe). This converted 660 nm signal can then be transported over long distances via conventional silica fibers. A description of up-conversion phenomena in rare-earth-doped materials is given for three matrices: sulphide glasses, ZBLAN and KPb2Cl5. A comparison is made between these three Er3+-doped materials to identify the best material for an efficient frequency conversion process. The results of the energy conversion from 3.4 µm to 660 nm are discussed, on the one hand in three bulk samples and on the other hand in Er3+-doped fibers, and, in both cases, compared to the results of a specific simulation. In order to develop an efficient IR source around 3.5 µm probing the methane absorption, four different materials (Pr3+-doped GeGaSbS (2S2G), Pr3+-doped GeGaSbSe (2S2G), Dy3+-doped GeGaSbSe (2S2G) and Pr3+-Yb3+ doped ZBLAN) were comprehensively studied. A simulation aimed at optimizing the IR fluorescence in these four fibers is described in detail. Model results were successfully validated by comparison with experimental measurements. Finally, the all-optical sensor prototype was assembled, including the IR source made with a Pr3+-doped GeGaSbSe (2S2G) fiber and the frequency conversion fiber consisting of an Er3+-doped GeGaSbS (2S2G) fiber. Several methane detection tests were conducted to find the best operating conditions for the IR source and the frequency converter, so as to reach the best detection sensitivity. Finally, a simultaneous CO2 detection test with this same prototype was successfully validated by taking advantage of the wide emission band of the 3H6 -> 3H4 Pr3+ ion transition, covering part of both the methane and carbon dioxide absorptions, and of the 4.3 µm to 808 nm conversion also permitted by the same Er3+-doped converter.
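The detection principle rests on the attenuation of the IR probe by the gas; a hedged Beer-Lambert sketch follows, in which the absorption coefficient, path length and concentrations are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Illustrative Beer-Lambert calculation behind absorption-based gas sensing
# (all numbers below are made up for the example): the IR probe intensity
# transmitted through the methane cell decays exponentially with
# concentration and path length.
alpha = 8.0          # effective absorption coefficient near 3.5 um [1/(m*vol%)]
path = 0.3           # optical path length through the gas cell [m]

def transmission(concentration_pct: float) -> float:
    """Fraction of the 3.5 um probe transmitted through the cell."""
    return float(np.exp(-alpha * path * concentration_pct))

for c in (0.0, 0.5, 1.0, 2.0):
    print(f"CH4 = {c:4.1f} vol%  ->  transmitted fraction = {transmission(c):.3f}")
```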
Benkaddour, Mourad. "Céramiques conductrices oxydes à base de bismuth, terre rare et vanadium : élaboration, microstructure et propriétés électriques." Lille 1, 2000. https://pepite-depot.univ-lille.fr/RESTREINT/Th_Num/2000/50376-2000-408.pdf.
Lelièvre, Nicolas. "Développement des méthodes AK pour l'analyse de fiabilité. Focus sur les évènements rares et la grande dimension." Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC045/document.
Engineers increasingly use numerical models to replace experimentation during the design of new products. With the increase in computer performance and numerical power, these models are more and more complex and time-consuming, for a better representation of reality. In practice, optimization is very challenging when considering real mechanical problems, since they exhibit uncertainties. Reliability is an interesting metric of the failure risks of designed products due to uncertainties. The estimation of this metric, the failure probability, requires a high number of evaluations of the time-consuming model and thus becomes intractable in practice. To deal with this problem, surrogate modeling is used here, and more specifically AK-based methods, to enable the approximation of the physical model with far fewer time-consuming evaluations. The first objective of this thesis work is to discuss the mathematical formulations of design problems under uncertainties. This formulation has a considerable impact on the solution identified by the optimization during the design process of new products. A definition of both concepts of reliability and robustness is also proposed. These works are presented in a publication in the international journal Structural and Multidisciplinary Optimization (Lelièvre et al., 2016). The second objective of this thesis is to propose a new AK-based method to estimate failure probabilities associated with rare events. This new method, named AK-MCSi, presents three enhancements of AK-MCS: (i) sequential Monte Carlo simulations to reduce the time associated with the evaluation of the surrogate model, (ii) a new, stricter stopping criterion on learning evaluations to ensure the good classification of the Monte Carlo population, and (iii) a multipoint enrichment permitting the parallelization of the evaluation of the time-consuming model. This work has been published in Structural Safety (Lelièvre et al., 2018). The last objective of this thesis is to propose new AK-based methods to estimate the failure probability of a high-dimensional reliability problem, i.e. a problem defined by both a time-consuming model and a high number of input random variables. Two new methods, AK-HDMR1 and AK-PCA, are proposed to deal with this problem, based respectively on a functional decomposition and on a dimensionality reduction technique. AK-HDMR1 was submitted to Reliability Engineering and Structural Safety on 1 October 2018.
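For orientation, the sketch below implements a bare-bones AK-MCS loop from the literature (not the thesis's AK-MCSi): a Kriging surrogate from scikit-learn is enriched at the Monte Carlo sample with the smallest value of the U learning function until min U >= 2; the limit state g = 10 - x1² - x2² and all sizes are assumptions chosen so that the exact answer, exp(-5), is known.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(6)

# Failure when g(x) <= 0, with standard normal inputs; the exact failure
# probability is P(x1^2 + x2^2 >= 10) = exp(-5) ~ 6.7e-3.
def g(x):
    return 10.0 - np.sum(x**2, axis=1)

population = rng.normal(size=(50_000, 2))        # Monte Carlo population
doe_x = rng.normal(size=(12, 2))                 # small initial design
doe_y = g(doe_x)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True)
for _ in range(200):                             # enrichment budget
    gp.fit(doe_x, doe_y)
    mu, sd = gp.predict(population, return_std=True)
    u = np.abs(mu) / np.maximum(sd, 1e-12)       # U learning function
    if u.min() >= 2.0:                           # classification deemed reliable
        break
    best = population[np.argmin(u)]              # most ambiguous sample
    doe_x = np.vstack([doe_x, best])
    doe_y = np.append(doe_y, g(best[None, :]))

pf_hat = np.mean(mu <= 0)                        # classification by the surrogate
print(f"AK-MCS: {pf_hat:.4f}  exact: {np.exp(-5):.4f}  calls to g: {len(doe_y)}")
```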
Estecahandy, Maïder. "Méthodes accélérées de Monte-Carlo pour la simulation d'événements rares. Applications aux Réseaux de Petri." Thesis, Pau, 2016. http://www.theses.fr/2016PAUU3008/document.
Full text of the source
The dependability analysis of safety instrumented systems is an important industrial concern. To be able to carry out such safety studies, TOTAL has been developing the dependability software GRIF since the eighties. To take into account the increasing complexity of the operating context of its safety equipment, TOTAL is more and more frequently led to use the MOCA-RP engine of the GRIF Simulation package. Indeed, MOCA-RP makes it possible to estimate quantities associated with complex aging systems modeled in Petri nets thanks to standard Monte Carlo (MC) simulation. Nevertheless, deriving accurate estimators, such as the system unavailability, for very reliable systems involves rare event simulation, which requires very long computing times with MC. The common fast Monte Carlo methods do not seem appropriate to address this issue: many of them were originally designed to improve only the estimate of the unreliability and/or are well-suited only for Markovian processes. Therefore, the work accomplished in this thesis pertains to the development of acceleration methods adapted to the problem of performing safety studies modeled in Petri nets, estimating in particular the unavailability. More specifically, we propose an extension of the "Méthode de Conditionnement Temporel" to accelerate the individual failures of the components, and we introduce the Dissociation Method as well as the Truncated Fixed Effort Method to increase the occurrence of their simultaneous failures. We then combine the first technique with the two others, and also associate them with the Randomized Quasi-Monte Carlo method. Through different sensitivity studies and benchmark experiments, we assess the performance of the acceleration methods and observe a significant improvement of the results compared with MC. Furthermore, we discuss the choice of the confidence interval method to be used when considering rare event simulation, an unfamiliar topic in the field of dependability. Lastly, an application to an industrial case illustrates the potential of our solution methodology.
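To illustrate one family of ideas mentioned above, the sketch below implements a fixed-effort multilevel splitting estimator on a toy random walk: a fixed number of trials is launched at each level, and the rare-event probability is estimated as the product of the conditional level-crossing frequencies. The dynamics and thresholds are invented for the example and are unrelated to the GRIF/MOCA-RP Petri-net models.

import numpy as np

rng = np.random.default_rng(1)
levels = [2.0, 4.0, 6.0, 8.0]  # nested thresholds toward the rare set
N = 1000                       # fixed simulation effort per stage
p_hat = 1.0
starts = [0.0]                 # entrance states at the current level

for lev in levels:
    hits = []
    for _ in range(N):
        x = starts[rng.integers(len(starts))]  # restart from a random entrance state
        for _ in range(500):                   # run until crossing or absorption
            x += rng.normal(-0.1, 1.0)         # negative drift makes the event rare
            if x >= lev:
                hits.append(x)
                break
            if x <= -2.0:                      # absorbed below: trial fails
                break
    if not hits:
        p_hat = 0.0
        break
    p_hat *= len(hits) / N                     # conditional crossing frequency
    starts = hits

print(f"Fixed-effort splitting estimate: {p_hat:.3e}")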
Beaucaire, Paul. "Application des méthodes fiabilistes à l'analyse et à la synthèse des tolérances." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00836759.
Full text of the source
Cheikh, Khalfa Nadia. "Détection de ruptures de signaux physiologiques en situation in vivo via la méthode FDpV : cas de la fréquence cardiaque et de l'activité électrodermale de marathoniens." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066653/document.
Full text of the source
This thesis was carried out in a multidisciplinary framework combining experimental protocol design, instrumentation, in vivo measurements, detection of physiological change instants, and identification and preprocessing of measurement artefacts for marathon runners. We considered the analysis of heart rate variability (HRV) and electrodermal activity (EDA) recorded during a semi-marathon, including pre- and post-competition periods. Throughout this thesis, change detection in the HRV and EDA was carried out on the mean and the trend using the Filtered Derivative with p-Value (FDpV) method. This segmentation method is based on a dynamical approach using a piecewise stationary model. As a result, it allowed us to introduce an index of cardiac regulation for semi-marathon runners. Tracking of physiological state changes along the affective dimension, i.e. "stress" and motivation, was also proposed via change detection on the tonic component of the EDA, which reflects its general trend throughout a semi-marathon. This enabled us to characterize the start and finish phases of a race, which are key elements in any competition. Special attention was given to the tonic component of the EDA, reflecting the overall level of affective activation: we compared three methods of tonic level extraction while taking potential artefacts into account. This work focused on case studies; it can be generalized to a cohort and include more physiological parameters such as VO2 or EEG. Hence, a classification of stress states may also be considered, providing further significant features for characterizing in vivo physiological data for sport performance optimization.
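The two-step logic of a filtered-derivative-with-p-value detector can be sketched in a few lines: a sliding-window difference of means flags candidate change points, then a p-value test between adjacent segments discards false alarms. Window size, thresholds and the toy signal below are illustrative choices, not the tuning used for the marathon recordings.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
sig = np.concatenate([rng.normal(m, 1.0, 400) for m in (0.0, 1.5, 0.5)])

A = 50  # sliding-window half-width
D = np.array([sig[k:k + A].mean() - sig[k - A:k].mean()
              for k in range(A, len(sig) - A)])

# Step 1: candidate change points = local maxima of |D| above a threshold.
cand = [k + A for k in range(1, len(D) - 1)
        if abs(D[k]) > 0.5 and abs(D[k]) >= abs(D[k - 1]) and abs(D[k]) >= abs(D[k + 1])]

# Step 2: keep candidates whose adjacent segments differ significantly.
bounds = [0] + cand + [len(sig)]
kept = [cand[i] for i in range(len(cand))
        if ttest_ind(sig[bounds[i]:bounds[i + 1]],
                     sig[bounds[i + 1]:bounds[i + 2]]).pvalue < 1e-3]
print("Detected change points near:", kept, "(true changes at 400 and 800)")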
Ogloblinsky, Marie-Sophie. "Statistical strategies leveraging population data to help with the diagnosis of rare diseases." Electronic Thesis or Diss., Brest, 2024. http://www.theses.fr/2024BRES0039.
Full text of the source
High genetic heterogeneity and complex modes of inheritance in rare diseases pose the challenge of identifying causal variants from n-of-one sequencing data with standard analysis methods. To tackle this issue, the PSAP method uses gene-specific null distributions of CADD pathogenicity scores to assess the probability of observing a given genotype in a healthy population. The goal of this work was to address the lack of diagnosis in rare diseases through statistical strategies. We propose PSAP-genomic-regions, an extension of the PSAP method to the non-coding genome, using as testing units predefined regions reflecting functional constraint at the scale of the whole genome. We implemented PSAP-genomic-regions and the initial PSAP-genes in Easy-PSAP, a user-friendly and versatile Snakemake workflow accessible to both researchers and clinicians. When applied to families affected by male infertility, Easy-PSAP allowed the prioritization of relevant candidate variants in known and novel genes. We then focused on digenism, the simplest mode of complex inheritance, which involves the simultaneous alteration of two genes for a disease to develop. We reviewed and benchmarked current methods in the literature for detecting digenism and put forward new strategies to improve the diagnosis of this complex mode of inheritance.
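The core of the PSAP idea can be illustrated with a toy null distribution: the pathogenicity score observed in a patient is compared against the gene-specific scores seen in a large healthy population, yielding a per-genotype probability. The numbers below are simulated stand-ins, not real CADD scores or the actual PSAP calibration.

import numpy as np

rng = np.random.default_rng(3)
# Hypothetical gene-specific null: best CADD-like score per healthy individual.
null_scores = rng.gamma(shape=2.0, scale=4.0, size=100_000)

def psap_like_pvalue(observed, null):
    """Probability of a score at least this extreme under the healthy null."""
    return (np.sum(null >= observed) + 1) / (len(null) + 1)

for s in (10.0, 25.0, 40.0):
    print(f"score {s:5.1f} -> p = {psap_like_pvalue(s, null_scores):.2e}")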
Thao, Nguyen Thi Phuong. "Analyse statistique et analyse spatiale des valeurs extrêmes de précipitation : application de cette méthode pour cartographie des caractéristiques pluviométriques de la région Cévennes-Vivarais." Grenoble INPG, 1993. http://www.theses.fr/1993INPG0034.
Full text of the source
Omar, Ahmad. "Développement et validation d'un modèle aux éléments discrets de comportement du béton sous chargement dynamique." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENI014/document.
Full text of the source
This work concerns the analysis of the vulnerability of sensitive concrete structures subjected to severe dynamic actions, such as impacts due to natural hazards or human factors. The objective is to develop a numerical tool that can objectively describe the dynamic behaviour of concrete. To this end, a 3D discrete element method (DEM) model was developed and used to perform the analysis. The first part of this thesis focuses on the simulation of quasi-static uniaxial compression and tension tests. A moment transfer law (MTL) was introduced to overcome the problem of overly brittle compressive behaviour. The identification procedure of the modified DEM model was then optimized so as to reproduce the macroscopic behaviour of concrete very well. Finally, the model was validated by properly representing the real quasi-static behaviour of different types of concrete. The second part of the study deals with the simulation of dynamic Hopkinson-bar tension tests on concrete. The results showed that a local rate effect has to be introduced to reproduce the strain-rate dependency, which would then be a material-intrinsic effect. The parameters of the model were then identified. Finally, simulations run at high strain rates showed consistent results with respect to the experimental behaviour.
Saggadi, Samira. "Simulation d'évènements rares par Monte Carlo dans les réseaux hautement fiables." Thesis, Rennes 1, 2013. http://www.theses.fr/2013REN1S055.
Full text of the source
Network reliability determination is an NP-hard problem. For instance, in telecommunications, one wishes to evaluate the probability that a selected group of nodes can communicate. In this context, a set of disconnected nodes can lead to critical financial and security consequences, so a precise estimation of the reliability is needed. In this work, we are interested in the study and calculation of the reliability of highly reliable networks. In this case, the unreliability is very small, which makes the standard Monte Carlo approach impractical, because it requires a very large number of iterations. To obtain a good estimation of system reliability at minimum cost, we have developed new simulation techniques based on variance reduction using importance sampling.
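A minimal sketch of the variance-reduction idea on the classic five-link bridge network: link failures are sampled with an inflated probability, and each disconnection event is reweighted by the likelihood ratio so the estimator stays unbiased. The failure probabilities and the biasing parameter are arbitrary illustrative values.

import numpy as np

rng = np.random.default_rng(4)
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]  # bridge network, s=0, t=3
q = 1e-3    # true failure probability of each link
q_is = 0.3  # inflated failure probability used for sampling

def connected(up):
    """Depth-first search from s over working links; True if t is reached."""
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for (a, b), ok in zip(edges, up):
            if ok and u in (a, b):
                v = b if u == a else a
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return 3 in seen

n, acc = 50_000, 0.0
for _ in range(n):
    up = rng.random(len(edges)) >= q_is  # biased link states (True = working)
    if not connected(up):
        # Likelihood ratio: true probability of this state / sampling probability.
        acc += np.prod(np.where(up, (1 - q) / (1 - q_is), q / q_is))
print(f"IS estimate of two-terminal unreliability: {acc / n:.3e}")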
Chronopoulos, Dimitrios. "Prediction of the vibroacoustic response of aerospace composite structures in a broadband frequency range." PhD thesis, Ecole Centrale de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00787864.
Full text of the source
Stepan, Jiri. "Etude du transfert du rayonnement polarisé dans l'atmosphère solaire." PhD thesis, Observatoire de Paris, 2008. http://tel.archives-ouvertes.fr/tel-00608761.
Full text of the source
Courreau, Jean-François. "Etude génétique des qualités de travail dans l'espèce canine : application des méthodes de la génétique quantitative aux épreuves de concours de chiens de défense en race berger belge." Paris 11, 2004. http://www.theses.fr/2004PA112250.
Full text of the source
The aim of the present study was to evaluate the heritability (h²) of defence-capacity traits in the Belgian shepherd dog, to calculate the breeding values of the dogs, and to assess the role of particular environmental factors. 15,772 competition results from 2,427 defence dogs were used. A competition included 6 to 19 tests and, according to their difficulty, 5 levels. The tests were grouped together to form 8 general ability measures: jumping, following at heel, fetching an object, attacking, guarding, obedience, biting and global success. The analysis was performed in parallel on two types of variables obtained from the marks of the tests: firstly, on scores calculated after the dogs had been ranked within a competition; secondly, on the marks raised to a specific power. The genetic parameters were estimated with a mixed animal model using the restricted maximum likelihood (REML) method, and the effects of the model were estimated by the BLUE method. The most pertinent results were obtained with the scores. The h² estimates are low (0.07 for following at heel and global success) to moderate (0.13-0.18 for the other criteria). The repeatability of results was relatively high (0.39-0.59). The genetic correlations between abilities were moderate to high, except for jumping, which appears to be independent. Males performed better than females, the Malinois was the best variety, and performance was optimal between 3 and 7 years of age. Breeding values were calculated for 2,284 competitors; the precision was moderate (dc = 0.25). These results indicate that it should be possible to select Belgian shepherd dogs for behavioural characteristics.
Labouret, Stéphane. "Détermination du taux de vide d'un champ de bulles de cavitation ultrasonore par une méthode hyperfréquence : corrélation du taux de vide et de la puissance du bruit de cavitation." Valenciennes, 1998. https://ged.uphf.fr/nuxeo/site/esupversions/e5dc6fee-0e7e-4a6e-8d10-b4209ceaf46f.
Full text of the source
Beluffi, Camille. "Search for rare processes with a Z+bb signature at the LHC, with the matrix element method." Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAE022/document.
Full text of the source
This thesis presents a detailed study of the final state with a Z boson decaying into two leptons, recorded with the CMS detector at the LHC. In order to tag this topology, sophisticated b-jet tagging algorithms have been used, and the calibration of one of them, the Jet Probability (JP) tagger, is presented. A study of the tagger's degradation at high energy was carried out and led to a small performance gain. This investigation is followed by the search for the associated production of the standard model (SM) Higgs boson with a Z boson, the Higgs decaying into two b quarks (ZH channel), using the Matrix Element Method (MEM) and two b-taggers: JP and Combined Secondary Vertex (CSV). The MEM is an advanced tool that produces an event-by-event discriminating variable, called the weight; to apply it, several sets of transfer functions have been produced. The final results give an observed limit on the ZH production cross section times the H -> bb branching ratio of 5.46×σSM when using the CSV tagger and 4.89×σSM when using the JP algorithm: a small excess of data is observed only for CSV. Finally, based on the previous analysis, a model-independent search for new physics has been performed. The goal was to discriminate all the SM processes in order to categorize the Zbb phase space, with a recursive approach using the MEM weights. Free parameters had to be tuned in order to find the best exclusion limit on the ZH signal strength. For the best configuration, the computed limit was 3.58×σSM, close to the one obtained with the dedicated search.
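To make the MEM weight concrete, the toy below builds an event-by-event weight by convolving a one-dimensional process density (a stand-in for the squared matrix element) with a Gaussian transfer function, then forms a signal-over-background discriminant. Everything here, densities, resolution and integration range included, is a simplified illustration, not the CMS computation.

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def mem_weight(x_reco, process_pdf, resolution=10.0):
    """Integrate the process density times the detector transfer function."""
    integrand = lambda y: process_pdf(y) * norm.pdf(x_reco, loc=y, scale=resolution)
    return quad(integrand, 0.0, 300.0)[0]

signal_pdf = lambda y: norm.pdf(y, loc=125.0, scale=5.0)  # resonance-like shape
background_pdf = lambda y: np.exp(-y / 80.0) / 80.0       # falling spectrum

for m in (90.0, 125.0, 160.0):
    ws, wb = mem_weight(m, signal_pdf), mem_weight(m, background_pdf)
    print(f"reco mass {m:5.1f} -> discriminant {ws / (ws + wb):.3f}")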