Theses on the topic "Échantillonage adaptatif"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the 41 best theses for your research on the topic "Échantillonage adaptatif".
Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse theses on a wide variety of disciplines and organize your bibliography correctly.
Zhong, Anruo. "Machine learning and adaptive sampling to predict finite-temperature properties in metallic materials at the atomic scale". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP107.
The properties and behaviors of materials under extreme conditions are essential for energy systems such as fission and fusion reactors. However, accurately predicting the properties of materials at high temperatures remains challenging. Direct measurements of these properties are constrained by experimental instrument limitations, and atomic-scale simulations based on empirical force fields are often unreliable due to a lack of accuracy. This problem can be addressed using machine learning techniques, which have recently become widely used in materials research. Machine learning force fields achieve the accuracy of ab initio calculations; however, their implementation in sampling methods is limited by high computational costs, typically several orders of magnitude greater than those of traditional force fields. To overcome this limitation, this thesis has two objectives: (i) developing machine learning force fields with a better accuracy-efficiency trade-off, and (ii) creating accelerated sampling methods to facilitate the use of computationally expensive machine learning force fields and accurately estimate free energy. For the first objective, we enhance the construction of machine learning force fields by focusing on three key factors: the database, the descriptor of local atomic environments, and the regression model. Within the framework of Gaussian process regression, we propose and optimize descriptors based on Fourier-sampled kernels and novel sparse point selection methods for kernel regression. For the second objective, we develop a fast and robust Bayesian sampling scheme for estimating the fully anharmonic free energy, which is crucial for understanding temperature effects in crystalline solids, utilizing an improved adaptive biasing force method. This method performs a thermodynamic integration from a harmonic reference system, where numerical instabilities associated with zero frequencies are screened off. The proposed sampling method significantly improves convergence speed and overall accuracy. We demonstrate the efficiency of the improved method by calculating the second-order derivatives of the free energy, such as the elastic constants, which are computed several hundred times faster than with standard methods. This approach enables the prediction of the thermodynamic properties of tungsten and Ta-Ti-V-W high-entropy alloys at temperatures that cannot be investigated experimentally, up to their melting point, with ab initio accuracy by employing accurate machine learning force fields. An extension of this method allows for the sampling of a specified metastable state without transitions between different energy basins, thereby providing the formation and binding free energies of defective configurations. This development helps to explain the mechanism behind the observation of voids in tungsten, which cannot be explained by existing ab initio calculations. The free energy profile of vacancies in the Ta-Ti-V-W system is also computed for the first time. Finally, we validate the application of this free energy sampling method to liquids. The accuracy and numerical efficiency of the proposed computational framework, which combines machine learning force fields and enhanced sampling methods, open up numerous possibilities for the reliable prediction of finite-temperature material properties.
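The adaptive biasing force (ABF) mechanism at the heart of such sampling schemes can be illustrated in miniature. The Python sketch below is our own one-dimensional toy on a double-well potential, far simpler than the thesis's solid-state setting: it accumulates the running mean force per bin, applies its opposite as a bias to flatten the landscape, and recovers the free energy by integrating the estimated mean force.

```python
import numpy as np

rng = np.random.default_rng(0)
grad_U = lambda x: 4 * x**3 - 4 * x       # dU/dx for the double well x^4 - 2x^2

bins = np.linspace(-2.0, 2.0, 41)
force_sum = np.zeros(bins.size - 1)       # per-bin sum of instantaneous forces
count = np.zeros(bins.size - 1)

x, dt, beta = -1.0, 1e-3, 3.0
for _ in range(200_000):
    b = min(max(np.digitize(x, bins) - 1, 0), count.size - 1)
    force_sum[b] += grad_U(x)
    count[b] += 1
    bias = force_sum[b] / count[b]        # running estimate of the mean force
    # Overdamped Langevin step; the bias cancels the estimated mean force
    x += (-grad_U(x) + bias) * dt + np.sqrt(2 * dt / beta) * rng.standard_normal()
    x = min(max(x, -2.0), 2.0)

mean_force = force_sum / np.maximum(count, 1)
free_energy = np.cumsum(mean_force) * (bins[1] - bins[0])   # integrate dA/dx
print(f"barrier estimate: {free_energy.max() - free_energy.min():.2f}")
```

In this one-dimensional toy the recovered profile is simply the potential itself; the interest of ABF lies in higher-dimensional systems, where the mean force along a reaction coordinate is not available in closed form.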
Sedki, Mohammed. "Échantillonnage préférentiel adaptatif et méthodes bayésiennes approchées appliquées à la génétique des populations". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2012. http://tel.archives-ouvertes.fr/tel-00769095.
Sedki, Mohammed Amechtoh. "Échantillonnage préférentiel adaptatif et méthodes bayésiennes approchées appliquées à la génétique des populations". Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20041/document.
This thesis consists of two parts which can be read independently. The first part is about the Adaptive Multiple Importance Sampling (AMIS) algorithm presented in Cornuet et al. (2012), which provides a significant improvement in stability and effective sample size due to the introduction of a recycling procedure. These numerical properties are particularly suited to the Bayesian paradigm in population genetics, where the modeling involves a large number of parameters. However, the consistency of the AMIS estimator remains largely open. In this work, we provide a novel Adaptive Multiple Importance Sampling scheme corresponding to a slight modification of the Cornuet et al. (2012) proposal that preserves the above-mentioned improvements. Finally, using limit theorems on triangular arrays of conditionally independent random variables, we give a consistency result for the final particle system returned by our new scheme. The second part of this thesis lies in the ABC paradigm. Approximate Bayesian Computation has been successfully used in population genetics models to bypass the calculation of the likelihood. These algorithms provide an accurate estimator by comparing the observed dataset to a sample of datasets simulated from the model. Although parallelization is easily achieved, computation times for ensuring a suitable approximation quality of the posterior distribution are still long. To alleviate this issue, we propose a sequential algorithm adapted from Del Moral et al. (2012) which runs twice as fast as traditional ABC algorithms. Its parameters are calibrated to minimize the number of simulations from the model.
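The recycling idea that distinguishes AMIS from plain adaptive importance sampling fits in a few lines. In the Python toy below (our own illustration on a one-dimensional target, not the thesis's scheme), every past draw is re-weighted against the mixture of all proposals used so far before the proposal is re-adapted by moment matching.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
target = stats.norm(3.0, 0.5).pdf          # toy target density

mu, sigma = 0.0, 2.0                       # initial Gaussian proposal
samples, proposals = [], []

for t in range(10):
    samples.append(rng.normal(mu, sigma, size=500))
    proposals.append((mu, sigma))
    # Recycling: re-weight EVERY past draw against the mixture of all
    # proposals used so far
    x = np.concatenate(samples)
    mix = np.mean([stats.norm(m, s).pdf(x) for m, s in proposals], axis=0)
    w = target(x) / mix
    w /= w.sum()
    # Adapt the proposal to the weighted sample (moment matching)
    mu = np.sum(w * x)
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2))

ess = 1.0 / np.sum(w ** 2)                 # effective size of the recycled system
print(f"mean~{mu:.2f}, sd~{sigma:.2f}, ESS~{ess:.0f}")
```

Re-weighting all past draws against the evolving proposal mixture is precisely what makes the estimator's consistency delicate, since every weight is recomputed at every stage.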
Bonneau, Mathieu. "Échantillonnage adaptatif optimal dans les champs de Markov, application à l'échantillonnage d'une espèce adventice". Toulouse 3, 2012. http://thesesups.ups-tlse.fr/1909/.
This work is divided into two parts: (i) the theoretical study of the problem of adaptive sampling in Markov Random Fields (MRF) and (ii) the modeling of the problem of weed sampling in a crop field and the design of adaptive sampling strategies for this problem. For the first point, we first modeled the problem of finding an optimal sampling strategy as a finite-horizon Markov Decision Process (MDP). Then, we proposed a generic algorithm for computing an approximate solution to any finite-horizon MDP with known model. This algorithm, called Least-Squares Dynamic Programming (LSDP), combines the concepts of dynamic programming and reinforcement learning, as sketched below. It was then adapted to compute adaptive sampling strategies for any type of MRF distribution and observation costs. An experimental evaluation of this algorithm was performed on simulated problems. For the second point, we first modeled the weed spatial distribution in the MRF framework. Second, we built a cost model adapted to the weed sampling problem. Finally, both models were used together to design adaptive sampling strategies with the LSDP algorithm. Based on real-world data, these strategies were compared to a simple heuristic and to static sampling strategies classically used for weed sampling.
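To convey the flavor of combining backward dynamic programming with least-squares function fitting (a rough caricature of what an LSDP-style method does, on a toy one-dimensional problem of our own rather than the thesis's MRF setting):

```python
import numpy as np

rng = np.random.default_rng(1)
H, n_actions = 5, 2
phi = lambda s: np.array([1.0, s, s * s])        # polynomial features of the state

def step(s, a):
    """Toy known model: action 1 drifts up, action 0 drifts down; cost = |s|."""
    s2 = s + (0.5 if a == 1 else -0.5) + 0.1 * rng.standard_normal()
    return s2, -abs(s2)

# weights[h][a] parameterizes Q_h(s, a) ~ weights[h][a] . phi(s)
weights = [np.zeros((n_actions, 3)) for _ in range(H + 1)]

for h in reversed(range(H)):                     # backward induction over time
    S = rng.uniform(-2.0, 2.0, size=200)         # states sampled for the fit
    X = np.array([phi(s) for s in S])
    for a in range(n_actions):
        targets = []
        for s in S:
            s2, r = step(s, a)
            q_next = max(weights[h + 1][b] @ phi(s2) for b in range(n_actions))
            targets.append(r + q_next)
        # Least-squares fit of the sampled Bellman targets
        weights[h][a], *_ = np.linalg.lstsq(X, np.array(targets), rcond=None)

greedy = lambda h, s: int(np.argmax([weights[h][a] @ phi(s) for a in range(n_actions)]))
print(greedy(0, 1.0), greedy(0, -1.0))  # the learned policy drives the state to 0
```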
Oudot, Steve. "Echantillonnage et maillage de surfaces avec garanties". Palaiseau, Ecole polytechnique, 2005. http://www.theses.fr/2005EPXX0060.
Yan, Alix. "Restauration d'images corrigées par optique adaptative pour l'observation astronomique et de satellites : approche marginale par échantillonnage". Electronic Thesis or Diss., Université Paris sciences et lettres, 2023. http://www.theses.fr/2023UPSLO012.
Adaptive-optics-corrected image restoration is particularly difficult, as it suffers from poor knowledge of the point spread function (PSF). One efficient approach is to marginalize the object out of the problem, and to estimate the PSF and (object and noise) hyper-parameters only, before the deconvolution. Recent works have applied this marginal deconvolution, combined with a parametric model for the PSF, to astronomical and satellite images. This thesis aims at extending this previous method, using Markov chain Monte Carlo (MCMC) algorithms. This enables us to derive uncertainties on the estimates, as well as to study posterior correlation between the parameters. We present detailed results on simulated and experimental astronomical and satellite data. We also provide elements on the impact of a support constraint on the object.
Kourda, Ferid. "Simulation d'alimentation à découpage sur micro-ordinateur". Lyon, INSA, 1989. http://www.theses.fr/1989ISAL0031.
Burlion, Laurent. "Contribution à l'analyse et à la commande de systèmes non linéaires à commande échantillonnée". Phd thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00461750.
Claisse, Alexandra. "Modèle de reconstruction d'une surface échantillonnée par une méthode de ligne de niveau, et applications". Paris 6, 2009. https://tel.archives-ouvertes.fr/tel-00443640.
Claisse, Alexandra. "Modèle de reconstruction d'une surface échantillonnée par un méthode de ligne de niveau, et applications". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2009. http://tel.archives-ouvertes.fr/tel-00443640.
Chiky, Raja. "Résumé de flux de données distribués". Paris, ENST, 2009. https://pastel.hal.science/pastel-00005137.
In this thesis, we consider a distributed computing environment, describing a collection of multiple remote sensors that feed a unique central server with numeric and uni-dimensional data streams (also called curves). The central server has a limited memory but should be able to compute aggregated values over any subset of the stream sources from a large time horizon including old and new data streams. Two approaches are studied to reduce the size of the data: (1) spatial sampling only considers a random sample of the sources observed at every instant; (2) temporal sampling considers all sources but samples the instants to be stored. In this thesis, we propose a new approach for summarizing temporally a set of distributed data streams: from the observation of what is happening during a period t-1, we determine a data collection model to apply to the sensors for period t. The computation of aggregates involves statistical inference in the case of spatial sampling and interpolation in the case of temporal sampling. To the best of our knowledge, there is no method for estimating interpolation errors at each timestamp that would take into account some curve features such as the knowledge of the integral of the curve during the period. We propose two approaches: one uses the past of the data curve (naive approach) and the other uses a stochastic process for interpolation (stochastic approach).
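The contrast between the two sampling schemes is easy to demonstrate. A minimal Python sketch (our own toy with synthetic sensor curves, not the thesis's estimators) applies temporal sampling by storing one instant out of five per source, then reconstructs the aggregate by linear interpolation:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100)                                      # time instants
curves = [np.sin(0.1 * t + p) + 0.1 * rng.standard_normal(t.size)
          for p in rng.uniform(0, 6.28, size=20)]       # 20 sensor curves

# Temporal sampling: keep every source but store only 1 instant out of 5
kept_t = t[::5]
stored = [c[::5] for c in curves]

# The server recovers unstored instants by linear interpolation, then aggregates
estimates = [np.interp(t, kept_t, s) for s in stored]
error = np.abs(np.sum(curves, axis=0) - np.sum(estimates, axis=0))
print(f"max aggregation error: {error.max():.3f}")
```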
Shah, Kashif. "Model adaptation techniques in machine translation". Phd thesis, Université du Maine, 2012. http://tel.archives-ouvertes.fr/tel-00718226.
Achibi, Mohamed. "Estimation des probabilités associées aux événements rares aéronautique". Paris 6, 2012. http://www.theses.fr/2012PA066133.
The target of this PhD dissertation is to propose a new approach to improve the forecasting methods regarding the lifespan of the mechanical parts of a turbofan engine. Probability models and statistical methods are introduced into the mechanical analyses to take into account the uncertainties on the parameters that influence the lifespan. Two very important subjects are developed in this doctoral thesis. The first one is about modeling the dependence of the data that are the inputs of the lifespan's numerical simulator. The second deals with the estimation of extreme quantiles, which we estimate from the outputs of the simulation. The input parameters for a deterministic mechanical computation (length and depth of a defect) are represented by realizations of random variables. This calls for a modeling of their dependence, independently of their marginal laws. The theory of copulas and Cox models supply the natural framework for this study. We thus developed a new model to estimate the dependence structure of a vector describing the geometrical properties of defects. Copulas make it possible to model the dependence between the components of the vector, whereas the Cox model captures the effect of a covariate on the marginal laws of these components. In this PhD dissertation, we are also interested in response surfaces, or surrogate models, allowing us to simplify the lifespan's mechanical simulation. Designs of experiments are considered and we focus in particular on quantile regression. An emphasis is placed on the estimation of extreme quantiles.
Gandar, Benoît. "Apprentissage actif pour l'approximation de variétés". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00954409.
Texto completoZgheib, Rawad. "Algorithmes adaptatifs d'identification et de reconstruction de processus AR à échantillons manquants". Phd thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00273585.
Texto completoLopez, David. "Diagrammes de Voronoï et surfaces évolutives". Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0247.
In this work, we propose to address the problem of tracking a deformable surface, typically the free surface of a liquid. This surface domain sees its geometry and topology evolve over time by displacement of its vertices, so the elements of the mesh (edges and facets) are all potentially contracted or expanded and require remeshing. To do so, we propose to use a technique based on restricted Voronoi diagrams. Voronoi diagrams offer a space partition and, more particularly, a partition of the considered surface domain that allows us, among other things, to optimize the distribution of a sample set over the domain and to define a specific triangulation: the restricted Delaunay triangulation. This remeshing solution is only effective when certain conditions are met. Therefore, the first work consisted in implementing an analysis of the restricted cell configurations to ensure that the dual object meets the definition of a triangulated manifold and that it is homeomorphic to the initial domain. For less favourable configurations, we have developed a method to correct the partition automatically. Based on the previous analysis, the method proposes a new minimal approximation for each of the faulty cells; thus we can limit the number of vertices used, contrary to classical Delaunay refinement. A second work proposes to improve the proximity between the initial mesh and the result of the remeshing: new vertex positions are finely adjusted to minimize the approximation error, which is here expressed as local volume differences. These tools are combined with a sampling strategy that allows a constant sampling density to be maintained throughout the deformation, and thus we propose a new method to track free surfaces in incompressible fluid simulation.
Poisson, Antonin. "Spectroscopie adaptative à deux peignes de fréquences". Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00852102.
Texto completoLassoued, Imed. "Adaptive monitoring and management of Internet traffic". Nice, 2011. http://www.theses.fr/2011NICE4110.
Traffic measurement allows network operators to achieve several purposes such as traffic engineering, network resource provisioning and management, accounting, and anomaly detection. However, existing solutions suffer from different problems, namely the problem of scalability to high speeds, the problem of detecting changes in network conditions, and the problem of missing meaningful information in the traffic. The main consequence of this trend is an inherent disagreement between existing monitoring solutions and the increasing needs of management applications. Hence, increasing monitoring capabilities presents one of the most challenging issues and an enormous undertaking in a large network. This challenge becomes increasingly difficult to meet with the remarkable growth of the Internet infrastructure, the increasing heterogeneity of user behaviour, and the emergence of a wide variety of network applications. In this context, we present the design of an adaptive centralized architecture that provides visibility over the entire network through a network-wide cognitive monitoring system. We consider the following important requirements in the design of our network-wide monitoring system. The first underscores the fact that vendors do not want to implement sophisticated sampling schemes that give good results only under certain circumstances. They want to implement simple and robust solutions that are well described by some form of standard (i.e. sFlow, NetFlow). Thus, we decided to design a new solution that builds on existing monitoring techniques and tries to coordinate responsibilities between the different monitors in order to improve the overall accuracy. The second requirement stipulates that the monitoring system should provide general information on the entire network. To do so, we adopt a centralized approach that provides visibility over the entire network. Our system investigates the different local measurements and correlates their results in order to address the trade-off between accuracy and monitoring constraints. And the last requirement indicates that the monitoring system should address the scalability problem and respect monitoring constraints. To this end, our system relies on a network configuration module that provides a responsive solution able to detect changes in network conditions and adapt the different sampling rates to the network state. At the same time it avoids unnecessary details and oscillations in the traffic in order to keep the resulting overhead within the desired bounds. The network reconfiguration module deals with local monitoring tools and adjusts sampling rates automatically and periodically in order to coordinate responsibilities and distribute the work between the different monitors.
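A feedback loop that periodically rescales per-monitor sampling rates to hold the measurement overhead near a budget can be prototyped in a few lines. The Python sketch below is our own simplified, single-knob illustration (uniform rescaling, synthetic Poisson traffic), not the reconfiguration module described above:

```python
import numpy as np

rng = np.random.default_rng(3)
budget = 2000                             # sampled packets allowed per period
rates = np.full(4, 0.01)                  # initial per-monitor sampling rates

for period in range(8):
    traffic = rng.poisson([50_000, 20_000, 80_000, 10_000])  # packets observed
    sampled = rng.binomial(traffic, rates)                   # packets exported
    overhead = sampled.sum()
    # Multiplicative feedback: rescale so the expected overhead meets the
    # budget next period, capping each rate at 1.0
    rates = np.minimum(rates * budget / max(overhead, 1), 1.0)
    print(f"period {period}: overhead={overhead:5d}, rates={np.round(rates, 4)}")
```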
Soare, Marta. "Sequential resources allocation in linear stochastic bandits". Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10147/document.
This thesis is dedicated to the study of resource allocation problems in uncertain environments, where an agent can sequentially select which action to take. After each step, the environment returns a noisy observation of the value of the selected action. These observations guide the agent in adapting his resource allocation strategy towards reaching a given objective. In the most typical setting of this kind, the stochastic multi-armed bandit (MAB), it is assumed that each observation is drawn from an unknown probability distribution associated with the selected action and gives no information on the expected value of the other actions. This setting has been widely studied and optimal allocation strategies were proposed to solve various objectives under the MAB assumptions. Here, we consider a variant of the MAB setting where there exists a global linear structure in the environment and by selecting an action, the agent also gathers information on the value of the other actions. Therefore, the agent needs to adapt his resource allocation strategy to exploit the structure in the environment. In particular, we study the design of sequences of actions that the agent should take to reach objectives such as: (i) identifying the best value with a fixed confidence and using a minimum number of pulls, or (ii) minimizing the prediction error on the value of each action. In addition, we investigate how the knowledge gathered by a bandit algorithm in a given environment can be transferred to improve the performance in other similar environments.
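The information sharing induced by the linear structure is what a sketch can make concrete: every pull updates a single least-squares estimate of the parameter vector, hence the value estimates of all arms at once. The Python toy below (our own crude uncertainty-driven allocation, not one of the thesis's algorithms) illustrates this for best-arm identification:

```python
import numpy as np

rng = np.random.default_rng(4)
arms = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [0.9, 0.1]])
theta = np.array([0.8, 0.5])                  # unknown to the agent

A, b = 1e-3 * np.eye(2), np.zeros(2)          # regularized least-squares stats
for _ in range(200):
    # Pull the arm with the largest predicted uncertainty x^T A^{-1} x,
    # a crude proxy for the allocation strategies studied in the thesis
    A_inv = np.linalg.inv(A)
    k = int(np.argmax([x @ A_inv @ x for x in arms]))
    reward = arms[k] @ theta + 0.1 * rng.standard_normal()
    A += np.outer(arms[k], arms[k])           # one pull informs ALL arm values
    b += reward * arms[k]

theta_hat = np.linalg.solve(A, b)
print("estimated best arm:", int(np.argmax(arms @ theta_hat)))
```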
Mabika, Bienvenu. "Analyse bayésienne des données de survie : Application à des essais cliniques en pharmacologie". Rouen, 1999. http://www.theses.fr/1999ROUES100.
Two distinct sections constitute this thesis: the first section deals with Bayesian procedures for survival data and the second with the application of Bayesian methods. The methods are illustrated with examples from mortality studies in cardiology and cancer research where a new treatment is compared to a standard treatment. In the first section, we study a Bayesian framework allowing the comparison of two Weibull survival distributions with unequal shape parameters, in the case of right-censored survival data obtained for two independent samples. For a family of appropriate priors we give the posterior distributions and the highest posterior density intervals of the relevant parameters, allowing one to search for a conclusion of clinical superiority of the treatment. We introduce a Bayesian estimator of the survival function. We propose and study a Bayesian test for equivalence of two survival functions, together with an algorithm using Gibbs sampling to carry out this test. Moreover, the predictive distributions are used to obtain an early stopping rule in the case of interim analyses. Using criteria for determining the number of patients, we propose two approaches. We generalize the Weibull model by a Bayesian approach with covariates; this approach is similar to the Cox model. The inference methods used appeal to Markov chain Monte Carlo methods.
Chamroo, Afzal. "Contribution à l'étude des Systèmes à Fonctionnement par Morceaux : Application à l'Identification en Ligne et à la commande en Temps Réel". Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2006. http://tel.archives-ouvertes.fr/tel-00374158.
Dubourg, Vincent. "Méta-modèles adaptatifs pour l'analyse de fiabilité et l'optimisation sous contrainte fiabiliste". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00697026.
Texto completoHmida, Hmida. "Extension des Programmes Génétiques pour l’apprentissage supervisé à partir de très larges Bases de Données (Big data)". Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED047.
In this thesis, we investigate the adaptation of GP to overcome the data volume hurdle in Big Data problems. GP is a well-established meta-heuristic for classification problems but is hampered by its computing cost. First, we conduct an extensive review, enriched with an experimental comparative study, of training set sampling algorithms used for GP. Then, based on the results of the previous study, we propose some extensions based on hierarchical sampling. The latter combines active sampling algorithms on several levels and has proven to be an appropriate solution for sampling techniques that cannot deal with large datasets (like TBS) and for applying GP to a Big Data problem such as Higgs boson classification. Moreover, we formulate a new sampling approach called "adaptive sampling", based on controlling the sampling frequency depending on the learning process, through fixed, deterministic and adaptive control schemes. Finally, we present how an existing GP implementation (DEAP) can be adapted by distributing evaluations on a Spark cluster. Then, we demonstrate how this implementation can be run on tiny clusters by sampling. Experiments show the great benefits of using Spark as the parallelization technology for GP.
Bouhlel, Mohamed Amine. "Optimisation auto-adaptative en environnement d’analyse multidisciplinaire via les modèles de krigeage combinés à la méthode PLS". Thesis, Toulouse, ISAE, 2016. http://www.theses.fr/2016ESAE0002/document.
Aerospace turbomachinery consists of a plurality of blades. Their main function is to transfer energy between the air and the rotor. The bladed disks of the compressor are particularly important because they must satisfy both the requirements of aerodynamic performance and mechanical resistance. Mechanical and aerodynamic optimization of blades consists in searching for a set of parameterized aerodynamic shapes that ensures the best compromise between a set of constraints. This PhD introduces a surrogate-based optimization method well adapted to high-dimensional problems. This kind of high-dimensional problem is very similar to Snecma's problems. Our main contributions can be divided into two parts: Kriging model development, and enhancement of an existing optimization method to handle high-dimensional problems under a large number of constraints. Concerning Kriging models, we propose a new formulation of the covariance kernel which is able to reduce the number of hyper-parameters in order to accelerate the construction of the metamodel. One of the known limitations of Kriging models concerns the estimation of their hyper-parameters. This estimation becomes more and more difficult as the number of dimensions increases. In particular, the initial design of experiments (for surrogate model construction) requires an important number of points and therefore the inversion of the covariance matrix becomes time consuming. Our approach consists in reducing the number of parameters to estimate using the Partial Least Squares regression method (PLS). This method provides information about the linear relationship between input and output variables. This information is integrated into the Kriging model kernel while maintaining the symmetry and the positivity properties of the kernels. Thanks to this approach, the construction of these new models, called KPLS, is very fast because of the low number of new parameters to estimate. When the covariance kernel used is of an exponential type, the KPLS method can be used to initialize the parameters of classical Kriging models, to accelerate the convergence of the parameter estimation. The final method, called KPLS+K, allows the accuracy of the model to be improved for multimodal functions. The second main contribution of this PhD is to develop a global optimization method to tackle high-dimensional problems under a large number of constraint functions thanks to the KPLS or KPLS+K method. Indeed, we extended the self-adaptive optimization method called "Efficient Global Optimization, EGO" to high-dimensional problems under constraints. Several enrichment criteria have been tested. This method allows known global optima to be estimated on academic problems with up to 50 input variables. The proposed method is tested on two industrial cases: the first one, "MOPTA", from the automotive industry (with 124 input variables and 68 constraint functions), and the second one a turbine blade from the Snecma company (with 50 input variables and 31 constraint functions). The results show the effectiveness of the method in handling industrial problems. We also highlight some important limitations.
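The dimension-reduction idea behind KPLS can be mimicked with off-the-shelf tools: project the inputs onto a few PLS directions that capture the input-output relationship, then fit the kriging model in that reduced space, leaving only a handful of length-scales to estimate. The Python sketch below is our own loose illustration (KPLS proper folds the PLS weights into the kernel itself rather than transforming the inputs):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(10)
d, n = 20, 80                                # many input variables, few runs
X = rng.uniform(-1, 1, size=(n, d))
y = X[:, 0] + 2 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(n)

# PLS extracts the few directions that explain the input-output relationship
pls = PLSRegression(n_components=2).fit(X, y)
Z = pls.transform(X)                         # n x 2 instead of n x 20

# Kriging in the reduced space: 2 length-scales to estimate instead of 20
gp = GaussianProcessRegressor(kernel=RBF([1.0, 1.0]), normalize_y=True).fit(Z, y)
X_new = rng.uniform(-1, 1, size=(3, d))
print(gp.predict(pls.transform(X_new)))
```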
Rolland, Thierry. "Adaptation des méthodes d'échantillonnage et d'analyse en rivières méditerranéennes du Sud-est de la France : étude de l'hétérogénéité spatio-temporelle de l'épilithon et de la dérive algale". Aix-Marseille 3, 1995. http://www.theses.fr/1995AIX30071.
Gong, Li. "On-demand Development of Statistical Machine Translation Systems". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112338/document.
Statistical Machine Translation (SMT) produces results that make it a preferred choice in most machine-assisted translation scenarios. However, the development of such high-performance systems involves the costly processing of very large-scale data. New data are constantly made available while the constructed SMT systems are usually static, so that incorporating new data into existing SMT systems forces system developers to re-train systems from scratch. In addition, the adaptation process of SMT systems is typically based on some available held-out development set and is performed once and for all. In this thesis, we propose an on-demand framework that tackles the three above problems jointly, making it possible to develop SMT systems on a per-need basis, with incremental updates, and to adapt existing systems to each individual input text. The first main contribution of this thesis is devoted to a new on-demand word alignment method that aligns training sentence pairs in isolation. This property allows SMT systems to compute information on a per-need basis and to seamlessly incorporate new available data into an existing SMT system without re-training the whole system. The second main contribution of this thesis is the integration of contextual sampling strategies to select translation examples from large-scale corpora that are similar to the input text, so as to build adapted phrase tables.
Gabillon, Victor. "Algorithmes budgétisés d'itérations sur les politiques obtenues par classification". Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10032/document.
This dissertation is motivated by the study of a class of reinforcement learning (RL) algorithms called classification-based policy iteration (CBPI). Contrary to standard RL methods, CBPI does not use an explicit representation of the value function. Instead, it uses rollouts and estimates the action-value function of the current policy at a collection of states. Using a training set built from these rollout estimates, the greedy policy is learned as the output of a classifier. Thus, the policy generated at each iteration of the algorithm is no longer defined by a (approximated) value function, but instead by a classifier. In this thesis, we propose new algorithms that improve the performance of the existing CBPI methods, especially when they have a fixed budget of interaction with the environment. Our improvements are based on the following two shortcomings of the existing CBPI algorithms: 1) the rollouts that are used to estimate the action-value functions must be truncated and their number is limited, and thus we have to deal with a bias-variance tradeoff when estimating the rollouts, and 2) the rollouts are allocated uniformly over the states in the rollout set and the available actions, while a smarter allocation strategy could guarantee a more accurate training set for the classifier. We propose CBPI algorithms that address these issues, respectively, by: 1) the use of a value function approximation to improve the accuracy (balancing the bias and variance) of the rollout estimates, and 2) adaptively sampling the rollouts over the state-action pairs.
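The CBPI loop itself is compact: estimate Q-values by truncated rollouts at a set of states, label each state with its greedy action, and train a classifier that becomes the next policy. The Python sketch below is our own minimal toy (uniform rollout allocation, a trivial chain environment), i.e. the baseline scheme that the thesis improves upon, not its algorithms:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
actions = [0, 1]

def step(s, a):
    """Toy chain: action 1 moves right, action 0 resets; reward at state 5."""
    s2 = min(s + 1, 5) if a == 1 else 0
    return s2, float(s2 == 5)

def rollout(policy, s, a, depth=8):
    """Truncated rollout estimate of Q(s, a) under the current policy."""
    s, r = step(s, a)
    total = r
    for _ in range(depth):
        s, r = step(s, policy(s))
        total += r
    return total

policy = lambda s: int(rng.integers(2))            # initial random policy
for _ in range(5):                                 # policy iterations
    S = rng.integers(0, 6, size=100)               # rollout states
    q = np.array([[np.mean([rollout(policy, s, a) for _ in range(5)])
                   for a in actions] for s in S])
    labels = q.argmax(axis=1)                      # greedy actions = labels
    clf = DecisionTreeClassifier(max_depth=2).fit(S.reshape(-1, 1), labels)
    policy = lambda s, c=clf: int(c.predict([[s]])[0])

print([policy(s) for s in range(6)])               # learned action per state
```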
Al-Tahir, Ali Abdul Razzaq. "Synthèse d’observateur d'état et commande non-linéaire à retour de sortie des systèmes électriques". Caen, 2016. http://www.theses.fr/2016CAEN2070.
The research work developed in this thesis has been mainly devoted to observation and sensorless control problems in electrical systems. Three major contributions have been carried out using the high-gain concept and output feedback adaptive nonlinear control for online UPS. In this thesis, we dealt with the synthesis of sampled high-gain observers for nonlinear systems, with application to PMSMs and DFIGs. We particularly focus on two constraints: the sampling effect and tracking unmeasured mechanical and magnetic state variables. The first contribution consists in a high-gain observer design that performs a relatively accurate estimation of both mechanical and magnetic state variables using the available measurements of the stator currents and voltages of the PMSM. We propose a global exponential observer having a state predictor for a class of nonlinear globally Lipschitz systems. In the second contribution, we propose a novel non-standard HGO design for a non-injective feedback relation, with application to variable-speed DFIG-based WPGS. Meanwhile, a reduced system model is analyzed, supported by an observability test to check whether it is possible to synthesize a state observer for sensorless control. In the last contribution, an adaptive observer for state and parameter estimation is designed for a class of state-affine systems, with application to output feedback adaptive nonlinear control of a three-phase AC/DC boost power converter for online UPS systems. Basically, the problem focuses on a cascade nonlinear adaptive controller that is developed making use of Lyapunov theory. The parameter uncertainties are handled by practical control laws under backstepping design techniques with adaptation capability.
Carpentier, Alexandra. "De l'échantillonage optimal en grande et petite dimension". Thesis, Lille 1, 2012. http://www.theses.fr/2012LIL10041/document.
During my PhD, I had the chance to learn and work under the great supervision of my advisor Rémi (Munos) in two fields that are of particular interest to me. These domains are Bandit Theory and Compressed Sensing. While studying these domains I came to the conclusion that they are connected if one looks at them through the prism of optimal sampling. Both these fields are concerned with strategies on how to sample the space in an efficient way: Bandit Theory in low dimension, and Compressed Sensing in high dimension. In this dissertation, I present most of the work my co-authors and I produced during the three years of my PhD.
Pastel, Rudy. "Estimation de probabilités d'évènements rares et de quantiles extrêmes : applications dans le domaine aérospatial". Phd thesis, Université Européenne de Bretagne, 2012. http://tel.archives-ouvertes.fr/tel-00728108.
Texto completoCastellanos, Silva Abraham. "Compensation adaptative par feedback pour le contrôle actif de vibrations en présence d’incertitudes sur les paramètres du procédé". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT050/document.
In this thesis, solutions for the design of robust Active Vibration Control (AVC) systems are presented. The thesis report is composed of two main parts. In the first part of the thesis, uncertainty issues in Active Vibration Control systems are examined. In addition to the uncertainties on the frequency of the disturbances, it has been found that the presence of low-damped complex zeros raises difficult design problems even if the plant and models are perfectly known. Solutions for linear control in this context have been proposed. In order to reduce the uncertainties in the identification of low-damped complex zeros, an improved closed-loop identification procedure has been developed. To handle the uncertainties on the disturbance frequency, adaptation has in any case to be used. The second part is concerned with the further development and/or improvement of the now classical direct adaptive feedback compensation algorithms using the Youla-Kucera controller parametrization. Two new solutions have been proposed in this context. The first one results from the improvement of a previous work (Landau et al., 2005). The contributions range from a new robust central controller design to the optional use of over-parameterization of the Q-FIR filter, which aims to ensure a small waterbed effect for the output sensitivity function, thereby reducing unwanted amplification. The second algorithm presents a mixed direct/indirect structure which uses a Q-IIR filter. The improvements come mainly from the effect of the Q-filter denominator, which is obtained from a disturbance identification. In addition, this solution drastically simplifies the design of the central controller. The algorithms have been tested, compared and validated on an international benchmark setup available at the Control Systems Department of GIPSA-Lab, Grenoble, France.
Scheidt, Céline. "Analyse statistique d'expériences simulées : Modélisation adaptative de réponses non régulières par krigeage et plans d'expériences, Application à la quantification des incertitudes en ingénierie des réservoirs pétroliers". Phd thesis, Université Louis Pasteur (Strasbourg) (1971-2008), 2006. https://publication-theses.unistra.fr/public/theses_doctorat/2006/SCHEIDT_Celine_2006.pdf.
Quantification of uncertainty in reservoir performance is an essential phase of oil field evaluation and production. Due to the large number of parameters and the physical complexity of the reservoir, fluid flow models can be computationally time consuming. Traditional uncertainty management is thus routinely performed using proxy models of the fluid flow simulator, following experimental design methodology. However, this approach often ignores the irregularity of the response. The objective of the thesis is to construct non-linear proxy models of the fluid flow simulator. Contrary to classical experimental designs, which assume a polynomial behavior of the response, we build evolutive experimental designs to fit gradually the potentially non-linear shape of the uncertainty. This methodology combines the advantages of experimental design with geostatistical methods. Starting from an initial trend of the uncertainty, the method iteratively determines new simulations that might bring crucial information for updating the estimation of the uncertainty. Four criteria for adding new simulations are proposed. We suggest performing simulations at the extrema and the null-derivative points of the approximation in order to better characterize irregularity. In addition, we propose an original way to increase the prior predictivity of the approximation using pilot points. The pilot points are also good candidates for simulation. This methodology allows for an efficient modeling of highly non-linear responses, while reducing the number of simulations compared to Latin hypercubes. This work can potentially improve the efficiency of decision making under uncertainty.
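The evolutive-design loop can be prototyped with a standard Gaussian process library. The Python sketch below is our own toy, using predictive-variance maximization as the single enrichment criterion (whereas the thesis also targets extrema and null-derivative points and uses pilot points): one simulation is added at a time where the kriging model is most uncertain.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

f = lambda x: np.sin(3 * x) + 0.5 * np.sign(x - 0.5)   # irregular toy response
X = np.array([[0.0], [0.5], [1.0]])                    # small initial design
y = f(X.ravel())
grid = np.linspace(0, 1, 200).reshape(-1, 1)

for _ in range(10):                                    # evolutive design loop
    gp = GaussianProcessRegressor(kernel=Matern(), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    x_new = grid[np.argmax(sd)]                        # most uncertain point
    X = np.vstack([X, x_new])                          # run one more simulation
    y = np.append(y, f(x_new[0]))

print(f"{len(X)} simulator runs, final max predictive sd = {sd.max():.3f}")
```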
Zhu, Boyao. "Identification and metamodeling characterization of singularities in composite, highly dispersive media". Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2024. http://www.theses.fr/2024ECDL0006.
Structural health monitoring (SHM) plays a crucial role in many industrial fields to ensure the safety, reliability, and performance of critical structures. The development of various types of sensors, data analysis, and wireless communication systems enables the in situ collection of data attesting to the real-time state of structures within SHM modules, supporting more accurate and automated decision-making processes. However, SHM modules require databases characterizing safe and damaged structures. Simulations based on numerical modelling, such as finite element methods, are often used to construct these databases. However, this approach is very time-consuming, especially when the finite element model is complex, which is often the case due to the increasing complexity of structures. This thesis falls within this framework. Indeed, it deals with the problem of efficiently obtaining damage-sensitive features of complex composite structures. More specifically, it aims to define and develop efficient numerical tools to support SHM of complex composite structures. Hence, model reduction and metamodeling approaches based on the Wave Finite Element (WFE) and Kriging methods, respectively, are proposed and investigated. The main objective of the present work is thus to assess the potential of the combination of WFE and kriging metamodeling to be useful and efficient in predicting the structural and dynamic characteristics of complex composite structures. This efficiency is quantified by the prediction accuracy and the cost involved. Based on the predicted dynamic properties, damage-sensitive indicators (such as amplitudes, natural frequencies, phase shifts) are defined and exploited to evaluate the health status of the considered structures. Based on the accomplished studies, it is shown that the proposed strategy, namely the Kriging-based WFEM, can ensure an interesting efficiency, resulting in a suitable accuracy of predictions of the structural and dynamical properties while involving a smaller cost than WFEM-based calculations. Moreover, the proposed strategy has kept the same sensitivity levels of the dynamic properties to the considered damages (cracks and delamination) with the associated indexes. The strategy proved to be more efficient when using the adaptive sampling scheme with kriging.
Jacquemart, Damien. "Contributions aux méthodes de branchement multi-niveaux pour les évènements rares, et applications au trafic aérien". Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S186/document.
The thesis deals with the design and mathematical analysis of reliable and accurate Monte Carlo methods in order to estimate the (very small) probability that a Markov process reaches a critical region of the state space before a deterministic final time. The underlying idea behind the multilevel splitting methods studied here is to design an embedded sequence of intermediate, more and more critical regions, in such a way that reaching an intermediate region, given that the previous intermediate region has already been reached, is not so rare. In practice, trajectories are propagated, selected and replicated as soon as the next intermediate region is reached, and it is easy to accurately estimate the transition probability between two successive intermediate regions. The bias due to time discretization of the Markov process trajectories is corrected using perturbed intermediate regions, as proposed by Gobet and Menozzi. An adaptive version would consist in the automatic design of the intermediate regions, using empirical quantiles. However, it is often difficult if not impossible to remember where (in which state) and when (at which time instant) each successful trajectory reached the empirically defined intermediate region. The contribution of the thesis consists in using a first population of pilot trajectories to define the next threshold, in using a second population of trajectories to estimate the probability of exceeding this empirically defined threshold, and in iterating these two steps (definition of the next threshold, and evaluation of the transition probability) until the critical region is reached. The convergence of this adaptive two-step algorithm is studied in the asymptotic framework of a large number of trajectories. Ideally, the intermediate regions should be defined in terms of the spatial and temporal variables jointly (for example, as the set of states and times for which a scalar function of the state exceeds a time-dependent threshold). The alternative point of view proposed in the thesis is to keep intermediate regions as simple as possible, defined in terms of the spatial variable only, and to make sure that trajectories that manage to exceed a threshold at an early time instant are replicated more than trajectories that exceed the same threshold at a later time instant. The resulting algorithm combines importance sampling and multilevel splitting. Its performance is evaluated in the asymptotic framework of a large number of trajectories, and in particular a central limit theorem is obtained for the relative approximation error.
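For intuition, classical adaptive multilevel splitting with quantile-based thresholds fits in a short script. The Python sketch below is our own compact variant that reuses one population for both threshold placement and probability estimation (whereas the thesis deliberately separates these two roles into pilot and estimation populations); it estimates the probability that a random walk's running maximum exceeds a high level before a finite horizon.

```python
import numpy as np

rng = np.random.default_rng(7)
n, horizon, target = 1000, 100, 30.0   # 30 ~ 3 sigma of the walk's endpoint

state = np.zeros(n)                    # position at the last level crossing
t0 = np.zeros(n, dtype=int)            # time already consumed at that crossing
p, level = 1.0, -np.inf

while level < target:
    best = np.empty(n)                 # highest level reached by each path
    arg_s, arg_t = np.empty(n), np.empty(n, dtype=int)
    for i in range(n):
        x = bs = path_best = state[i]
        bt = t0[i]
        for t in range(t0[i], horizon):
            x += rng.standard_normal()
            if x > path_best:
                path_best, bs, bt = x, x, t + 1
        best[i], arg_s[i], arg_t[i] = path_best, bs, bt
    level = min(np.quantile(best, 0.8), target)   # empirical-quantile threshold
    survivors = np.flatnonzero(best >= level)
    p *= survivors.size / n                       # conditional transition prob.
    idx = rng.choice(survivors, size=n)           # replicate surviving paths
    state, t0 = arg_s[idx], arg_t[idx]

# The reflection principle gives roughly 2.7e-3 for the continuous analogue
print(f"P(running max > {target}) ~ {p:.2e}")
```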
Capel, Eliott. "La grande révolution terrestre du Silurien-Dévonien : diversité et évolution des premières plantes terrestres". Electronic Thesis or Diss., Université de Lille (2022-....), 2022. https://pepite-depot.univ-lille.fr/ToutIDP/EDSMRE/2022/2022ULILR059.pdf.
Plants underwent an extensive Silurian-Devonian diversification during their progressive colonization of terrestrial surfaces (440-360 Ma). Nonetheless, the tempo and mode of this radiation remain controversial, and the drivers of diversity have yet to be clearly identified. This thesis, through a series of newly compiled datasets of plant macrofossils, and via a wide array of quantitative methods, characterizes temporal and spatial dynamics. It further evaluates the biases that may alter our perception of this landmark event. Firstly, a four-factor model was found adequate to describe the underlying structure of early vegetation dynamics. The pattern suggests ecological shifts during transition phases, further corroborated through an in-depth characterization of global plant diversity patterns. Nevertheless, the general pattern of Silurian-Devonian plant diversity was found to depend heavily on sampling effort, although several signals of diversification and extinction seemed to be dissociated from it, implying real underlying biological signals. A subsequent continental-scale study further demonstrated that, in addition to sampling heterogeneity, geological incompleteness remained an important element driving apparent early land plant diversity patterns. This bias is not easily corrected, even with the most advanced sampling-standardization methods. Furthermore, paleogeographical discrepancies were assessed to uncover a possible spatial component in early land plant radiation. This led to the discovery of a climatologically driven plant distribution and dispersion, further enhanced during colder periods. Lastly, this thesis includes a review of an Early Devonian plant fossil assemblage from northern France, providing taxonomically up-to-date and well-dated occurrences to integrate into future studies.
Wendland, David. "The equation of state of the Hydrogen-Helium mixture with application to the Sun". Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL1029/document.
The study of the thermodynamic properties of a multi-component quantum Coulomb system is of fundamental theoretical interest and has, beyond that, a wide range of applications. The Hydrogen-Helium mixture can be found in interstellar nebulae and giant planets; however, the most prominent example is the Sun. Here the interaction between the electrons and the nuclei is almost purely electrostatic. In this work we study the equation of state of the Hydrogen-Helium mixture starting from first principles, meaning the fundamental Coulomb interaction of its constituent particles. In this context we develop numerical methods to study the few-particle clusters appearing in the theory by using the path integral language. To capture the effects of the long-range Coulomb interaction between the fundamental particles, we construct a new version of Mayer diagrammatics, which is appropriate for our purposes. In a first step, we improve the scaled-low-temperature (SLT) equation of state, valid in the limit of low density and low temperature, by taking three-body terms into account, and we compare the predictions to the well-established OPAL equation of state. Higher densities are accessed by direct inversion of the density equations and by the use of cluster functions that include screening effects. These cluster functions put the influence of screening on the ionization, until now treated ad hoc, on a theoretically well-grounded basis. We also inspect other equilibrium quantities such as the speed of sound and the internal energy. In the last part we calculate the equation of state of the Hydrogen-Helium mixture including the charged He+ ions in the screening process. Our work gives insights into the physical content of previous phenomenological descriptions and helps to better determine their range of validity. The equation of state derived in this thesis is expected to be very precise as well as reliable for conditions found in the Sun.
Ben-Hamou, Anna. "Concentration et compression sur alphabets infinis, temps de mélange de marches aléatoires sur des graphes aléatoires". Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC197/document.
This document presents the problems I have been interested in during my PhD thesis. I begin with a concise presentation of the main results, followed by three relatively independent parts. In the first part, I consider statistical inference problems on an i.i.d. sample from an unknown distribution over a countable alphabet. The first chapter is devoted to the concentration properties of the sample's profile and of the missing mass. This is a joint work with Stéphane Boucheron and Mesrob Ohannessian. After obtaining bounds on variances, we establish Bernstein-type concentration inequalities and exhibit a vast domain of sampling distributions for which the variance factor in these inequalities is tight. The second chapter presents a work in progress with Stéphane Boucheron and Elisabeth Gassiat, on the problem of universal adaptive compression over countable alphabets. We give bounds on the minimax redundancy of envelope classes, and construct a quasi-adaptive code on the collection of classes defined by a regularly varying envelope. In the second part, I consider random walks on random graphs with prescribed degrees. I first present a result obtained with Justin Salez, establishing the cutoff phenomenon for non-backtracking random walks. Under certain degree assumptions, we precisely determine the mixing time, the cutoff window, and show that the profile of the distance to equilibrium converges to the Gaussian tail function. Then I consider the problem of comparing the mixing times of the simple and non-backtracking random walks. The third part is devoted to the concentration properties of weighted sampling without replacement and corresponds to a joint work with Yuval Peres and Justin Salez.
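The missing mass studied in the first chapter has a classical empirical counterpart, the Good-Turing estimator: the fraction of the sample consisting of symbols seen exactly once. The Python toy below, our own illustration rather than anything from the thesis, compares it with the true missing mass under a heavy-tailed sampling distribution:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(8)
probs = 1.0 / np.arange(1, 2001) ** 1.5        # Zipf-like sampling distribution
probs /= probs.sum()

n, true_mm, good_turing = 500, [], []
for _ in range(200):
    sample = rng.choice(probs.size, size=n, p=probs)
    counts = Counter(sample)
    # True missing mass: total probability of the symbols never observed
    true_mm.append(sum(p for k, p in enumerate(probs) if k not in counts))
    # Good-Turing estimator: fraction of the sample made of singletons
    good_turing.append(sum(1 for c in counts.values() if c == 1) / n)

print(f"missing mass : {np.mean(true_mm):.4f} +/- {np.std(true_mm):.4f}")
print(f"Good-Turing  : {np.mean(good_turing):.4f}")
```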
Meynaoui, Anouar. "New developments around dependence measures for sensitivity analysis : application to severe accident studies for generation IV reactors". Thesis, Toulouse, INSA, 2019. http://www.theses.fr/2019ISAT0028.
As part of safety studies for nuclear reactors, numerical simulators are essential for understanding, modelling and predicting physical phenomena. However, the information on some of the input variables of the simulator is often limited or uncertain. In this framework, Global Sensitivity Analysis (GSA) aims at determining how the variability of the input parameters affects the value of the output or the quantity of interest. The work carried out in this thesis aims at proposing new statistical methods based on dependence measures for the GSA of numerical simulators. We are particularly interested in HSIC-type dependence measures (Hilbert-Schmidt Independence Criterion). After Chapters 1 and 2, which introduce the general context and motivations of the thesis in French and English versions respectively, Chapter 3 first presents a general review of HSIC measures in a theoretical and methodological framework. Subsequently, new developments around the estimation of HSIC measures from an alternative sample, inspired by importance sampling techniques, are proposed. As a result of these theoretical developments, an efficient methodology for GSA in the presence of uncertainties in the input probability distributions is developed in Chapter 4. The relevance of the proposed methodology is first demonstrated on an analytical case before being applied to the MACARENa simulator modeling a ULOF (Unprotected Loss Of Flow) accident scenario on a sodium-cooled fast neutron reactor. Finally, Chapter 5 deals with the development of an independence test aggregating several parametrizations of HSIC kernels, allowing a wider spectrum of dependencies between the inputs and the output to be captured. The optimality of this methodology is first demonstrated from a theoretical point of view. Then, its performance and practical interest are illustrated on several analytical examples as well as on the test case of the MACARENa simulator.
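A basic HSIC estimate takes only a few lines: with Gaussian kernel Gram matrices K and L for the input and output samples and the centering matrix H, the biased V-statistic is tr(KHLH)/n². The sketch below (our own minimal version with fixed kernel bandwidths, not the thesis's aggregated test) contrasts a dependent and an independent pair:

```python
import numpy as np

def hsic(x, y, sx=1.0, sy=1.0):
    """Biased (V-statistic) HSIC estimate with Gaussian kernels."""
    n = x.size
    K = np.exp(-np.subtract.outer(x, x) ** 2 / (2 * sx ** 2))
    L = np.exp(-np.subtract.outer(y, y) ** 2 / (2 * sy ** 2))
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(9)
x = rng.standard_normal(300)
y_dep = np.sin(2 * x) + 0.1 * rng.standard_normal(300)
y_ind = rng.standard_normal(300)
print(f"dependent pair  : {hsic(x, y_dep):.5f}")
print(f"independent pair: {hsic(x, y_ind):.5f}")
```

A nonzero population HSIC characterizes dependence (for characteristic kernels), which is what makes it usable as a screening measure in GSA.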
Chen, Long. "Méthodes itératives de reconstruction tomographique pour la réduction des artefacts métalliques et de la dose en imagerie dentaire". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112015/document.
This thesis contains two main themes: the development of new iterative approaches for metal artifact reduction (MAR), and dose reduction in dental CT (Computed Tomography). The metal artifacts are mainly due to beam hardening, scatter and photon starvation in the presence of metal against a contrasting background, such as metallic dental implants in teeth. The first issue concerns data correction on account of these effects. The second one involves reducing the radiation dose delivered to a patient by decreasing the number of projections. At first, the polychromatic spectra of the X-ray beam and the scatter can be modeled by a non-linear direct model in statistical methods for the purpose of metal artifact reduction. However, reconstruction by statistical methods is too time consuming. Consequently, we proposed an iterative algorithm with a linear direct model based on data correction (beam hardening and scatter). We introduced a new beam-hardening correction that requires no knowledge of the spectrum of the X-ray source or the linear attenuation coefficients of the materials, as well as a new scatter estimation method based on the measurements. Later, we continued to study iterative approaches to dose reduction, since over-exposure or unnecessary exposure to irradiation during a CT scan increases the patient's risk of radio-induced cancer. In practice, it may be useful to be able to reconstruct an object larger than the field of view of the scanner. In this case, we proposed an iterative algorithm on super-short scans over multiple scans, which contain a minimal set of projections for an optimal dose. Furthermore, we introduced a new scanning mode with variant angular sampling to reduce the number of projections in a single scan. This was adapted to the properties and predefined regions of interest of the scanned object. It needed fewer projections than the standard scanning mode with uniform angular sampling to reconstruct the object. All of our approaches for MAR and dose reduction have been evaluated on real data. Thanks to our MAR methods, the quality of the reconstructed images was improved noticeably. Besides, they did not introduce new artifacts compared to the state-of-the-art MAR method NMAR [Meyer et al. 2010]. We could clearly reduce the number of projections with the proposed new scanning mode and the scheme of super-short scans over multiple scans in this particular case.
Baltcheva, Irina. "Contrôle adaptatif et autoréglage : applications de l'approximation stochastique". Thèse, 2004. http://hdl.handle.net/1866/16644.
Texto completoEbert, Maximilian. "Shifting the boundaries of experimental studies in engineering enzymatic functions : combining the benefits of computational and experimental methods". Thèse, 2016. http://hdl.handle.net/1866/19025.
Texto completoL'industrie chimique mondiale est en pleine mutation, cherchant des solutions pour rendre la synthèse organique classique plus durable. Une telle solution consiste à passer de la catalyse chimique classique à la biocatalyse. Bien que les avantages des enzymes incluent leur stéréo, régio et chimiosélectivité, cette sélectivité réduit souvent leur promiscuité. Les efforts requis pour adapter la fonction enzymatique aux réactions désirées se sont révélés d'une efficacité modérée, de sorte que des méthodes rapides et rentables sont nécessaires pour générer des biocatalyseurs qui rendront la production chimique plus efficace. Dans l’ère de la bioinformatique et des outils de calcul pour soutenir l'ingénierie des enzymes, le développement rapide de nouvelles fonctions enzymatiques devient une réalité. Cette thèse commence par un examen des développements récents sur les outils de calcul pour l’ingénierie des enzymes. Ceci est suivi par un exemple de l’ingénierie des enzymes purement expérimental ainsi que de l’évolution des protéines. Nous avons exploré l’espace mutationnel d'une enzyme primitive, la dihydrofolate réductase R67 (DHFR R67), en utilisant l’ingénierie semi-rationnelle des protéines. La conception rationnelle d’une librarie de mutants, ou «Smart library design», impliquait l’association covalente de monomères de l’homotétramère DHFR R67 en dimères afin d’augmenter la diversité de la librairie d’enzymes mutées. Le criblage par activité enzymatique a révélé un fort biais pour le maintien de la séquence native dans un des protomères tout en tolérant une variation de séquence élevée pour le deuxième. Il est plausible que les protomères natifs procurent l’activité observée, de sorte que nos efforts pour modifier le site actif de la DHFR R67 peuvent n’avoir été que modérément fructueux. Les limites des méthodes expérimentales sont ensuite abordées par le développement d’outils qui facilitent la prédiction des points chauds mutationnels, c’est-à-dire les sites privilégiés à muter afin de moduler la fonction. Le développement de ces techniques est intensif en termes de calcul, car les protéines sont de grandes molécules complexes dans un environnement à base d’eau, l’un des solvants les plus difficiles à modéliser. Nous présentons l’identification rapide des points chauds mutationnels spécifiques au substrat en utilisant l'exemple d’une enzyme cytochrome P450 industriellement pertinente, la CYP102A1. En appliquant la technique de simulation de la dynamique moléculaire par la force de polarisation adaptative, ou «ABF», nous confirmons les points chauds mutationnels connus pour l’hydroxylation des acides gras tout en identifiant de nouveaux points chauds mutationnels. Nous prédisons également la conformation du substrat naturel, l’acide palmitique, dans le site actif et nous appliquons ces connaissances pour effectuer un criblage virtuel d'autres substrats de cette enzyme. Nous effectuons ensuite des simulations de dynamique moléculaire pour traiter l’impact potentiel de la dynamique des protéines sur la catalyse enzymatique, qui est le sujet de discussions animées entre les experts du domaine. Avec la disponibilité accrue de structures cristallines dans la banque de données de protéines (PDB), il devient clair qu’une seule structure de protéine n’est pas suffisante pour élucider la fonction enzymatique. 
Nous le démontrons en analysant quatre structures cristallines que nous avons obtenues d’une enzyme β-lactamase, parmi lesquelles un réarrangement important des résidus clés du site actif est observable. Nous avons réalisé de longues simulations de dynamique moléculaire pour générer un ensemble de structures suggérant que les structures cristallines ne reflètent pas nécessairement la conformation de plus basse énergie. Enfin, nous étudions la nécessité de compléter de manière informatisée un hémisphère où l’expérimental n’est actuellement pas possible, à savoir la prédiction de la migration des gaz dans les enzymes. À titre d'exemple, la réactivité des enzymes cytochrome P450 dépend de la disponibilité des molécules d’oxygène envers l’hème du site actif. Par le biais de simulations de la dynamique moléculaire de type Simulation Implicite du Ligand (ILS), nous dérivons le paysage de l’énergie libre de petites molécules neutres de gaz pour cartographier les canaux potentiels empruntés par les gaz dans les cytochromes P450 : CYP102A1 et CYP102A5. La comparaison pour les gaz CO, N2 et O2 suggère que ces enzymes évoluent vers l’exclusion du CO inhibiteur. De plus, nous prédisons que les canaux empruntés par les gaz sont distincts des canaux empruntés par le substrat connu et que ces canaux peuvent donc être modifiés indépendamment les uns des autres.
The chemical industry worldwide is at a turning point, seeking solutions to make classical organic synthesis more sustainable. One such solution is to shift from classical catalysis to biocatalysis. Although the advantages of enzymes include their stereo-, regio-, and chemoselectivity, their selectivity often reduces versatility. Past efforts to tailor enzymatic function towards desired reactions have met with moderate effectiveness, such that fast and cost-effective methods are in demand to generate biocatalysts that will render fine and bulk chemical production more benign. In the wake of bioinformatics and computational tools to support enzyme engineering, the fast development of new enzyme functions is becoming a reality. This thesis begins with a review of recent developments on computational tools for enzyme engineering. This is followed by an example of purely experimental enzyme engineering and protein evolution. We explored the mutational space of a primitive enzyme, the R67 dihydrofolate reductase (DHFR), using semi-rational protein engineering. 'Smart library design' involved fusing monomers of the homotetrameric R67 DHFR into dimers, to increase the diversity in the resulting mutated enzyme libraries. Activity-based screening revealed a strong bias for maintenance of the native sequence in one protomer with tolerance for high sequence variation in the second. It is plausible that the native protomers procure the observed activity, such that our efforts to modify the enzyme active site may have been only moderately fruitful. The limitations of experimental methods are then addressed by developing tools that facilitate computational mutational hotspot prediction. Developing these techniques is computationally intensive, as proteins are large molecular objects and work in aqueous media, one of the most complex solvents to model. We present the rapid, substrate-specific identification of mutational hotspots using the example of the industrially relevant P450 cytochrome CYP102A1. Applying the adaptive biasing force (ABF) molecular dynamics simulation technique, we confirm the known mutational hotspots for fatty acid hydroxylation and identify a new one. We also predict a catalytic binding pose for the natural substrate, palmitic acid, and apply that knowledge to perform virtual screening for further substrates for this enzyme. We then perform molecular dynamics simulations to address the potential impact of protein dynamics on enzyme catalysis, which is the topic of heated discussions among experts in the field. With the availability of more crystal structures in the Protein Data Bank, it is becoming clear that a single protein structure is not sufficient to elucidate enzyme function. We demonstrate this by analyzing four crystal structures we obtained of a β-lactamase enzyme, among which a striking rearrangement of key active site residues was observed. We performed long molecular dynamics simulations to generate a structural ensemble that suggests that crystal structures do not necessarily reflect the conformation of lowest energy. Finally, we address the need to computationally complement an area where experimentation is not currently possible, namely the prediction of gas migration into enzymes. As an example, the reactivity of P450 cytochrome enzymes depends on the availability of molecular oxygen at the active-site heme.
Using the Implicit Ligand Sampling (ILS) molecular dynamics simulation technique, we derive the free energy landscape of small neutral gas molecules to map potential gas channels in cytochrome P450 CYP102A1 and CYP102A5. Comparison of CO, N2 and O2 suggests that those enzymes evolved towards exclusion of the inhibiting CO. In addition, we predict that gas channels are distinct from known substrate channels and therefore can be engineered independently from one another.