Dissertations on the topic „Modèle à grande échelle“
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for research on the topic "Modèle à grande échelle".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.
Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.
Stref, Philippe. „Application à grande échelle d'un modèle hydrodispersif tridimensionnel“. Montpellier 2, 1987. http://www.theses.fr/1987MON20068.
Côté, Benoit. „Modèle d’évolution de galaxies pour simulations cosmologiques à grande échelle“. Doctoral thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/25550.
Honour roll of the Faculté des études supérieures et postdoctorales, 2014-2015.
We present a semi-analytical model (SAM) designed to be used in a large-scale hydrodynamical simulation as a sub-grid treatment in order to generate the evolution of galaxies in a cosmological context. The ultimate goal of this project is to study the chemical enrichment history of the intergalactic medium (IGM) and the interactions between galaxies and their surrounding. Presently, the SAM takes into account all the ingredients needed to compute the evolution of low- and intermediate-mass galaxies. This includes the accretion of the galactic halo and the IGM, radiative cooling, star formation, chemical enrichment, and the production of galactic outflows driven by the mechanical energy and the radiation of massive stars. The physics of interstellar bubbles is applied to every stellar population which forms in the model in order to link the stellar activity to the production of outflows driven by mechanical energy. We use up-to-date stellar models to generate the evolution of each stellar population as a function of their mass, metallicity, and age. This enables us to include, in the enrichment process, the stellar winds from massive stars, Type II, Ib, and Ic supernovae, hypernovae, the stellar winds from low- and intermediate-mass stars in the asymptotic giant branch, and Type Ia supernovae. With these ingredients, our model can reproduce the abundances of several elements observed in the stars located in the solar neighborhood. More generally, our SAM reproduces the current stellar-to-dark-halo mass relation observed in galaxies. It can also reproduce the metallicity, the hydrogen mass fraction, and the specific star formation rate observed in galaxies as a function of their stellar mass. Our model is also consistent with observations which suggest that low-mass galaxies are more affected by stellar feedback than higher-mass galaxies. Moreover, the model can reproduce the periodic and the stable behaviors observed in the star formation rate of galaxies. All these results show that our SAM is sufficiently qualified to treat the evolution of low- and intermediate-mass galaxies inside a large-scale cosmological simulation.
Côté, Benoît. „Modèle de vents galactiques destiné aux simulations cosmologiques à grande échelle“. Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27873/27873.pdf.
Drissi, Mohamed. „Un modèle de propagation de feux de végétation à grande échelle“. PhD thesis, Université de Provence - Aix-Marseille I, 2013. http://tel.archives-ouvertes.fr/tel-00931806.
Côté, Benoit. „Modèle de vents galactiques destiné aux simulations cosmologiques à grande échelle“. Master's thesis, Université Laval, 2010. http://hdl.handle.net/20.500.11794/22281.
Drissi, Mohamed. „Un modèle de propagation de feux de végétation à grande échelle“. Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4704.
The present work is devoted to the development of a hybrid model for predicting the rate of spread of wildland fires at a large scale, taking into account the local heterogeneities related to vegetation, topography, and meteorological conditions. Some methods for generating amorphous networks, representative of real vegetation landscapes, are proposed. Mechanisms of heat transfer from the flame front to the virgin fuel are modeled: radiative preheating from the flame and embers, convective preheating from hot gases, radiative heat losses and piloted ignition of the receptive vegetation item. Flame radiation is calculated by combining the solid flame model with the Monte Carlo method and by taking into account its attenuation by the atmospheric layer between the flame and the receptive vegetation. The model is applied to simple configurations where the fire spreads on a flat or inclined terrain, with or without a constant wind. Model results are in good agreement with literature data. A sensitivity study is conducted to identify the most influential parameters of the model. Finally, the model is validated by comparing predicted fire patterns with those obtained from a prescribed burning in Australia and from a historical fire that occurred in Corsica in 2009, showing a very good agreement in terms of fire patterns, rate of spread, and burned area.
Guenot, Damien. „Simulation des effets instationnaires à grande échelle dans les écoulements décollés“. École nationale supérieure de l'aéronautique et de l'espace (Toulouse ; 1972-2007), 2004. http://www.theses.fr/2004ESAE0009.
Der volle Inhalt der QuelleOudinet, Johan. „Approches combinatoires pour le test statistique à grande échelle“. Paris 11, 2010. http://www.theses.fr/2010PA112347.
This thesis focuses on the development of combinatorial methods for testing and formal verification, particularly probabilistic approaches, because exhaustive verification is often not tractable for complex systems. For model-based testing, I guide the random exploration of the model to ensure that the expected coverage criterion is satisfied with a desired probability, regardless of the underlying topology of the explored model. Regarding model-checking, I show how to generate a random number of finite paths to check if a property is satisfied with a certain probability. In the first part, I compare different algorithms for generating paths in an automaton uniformly at random. Then I propose a new algorithm that offers a good compromise, with a sub-linear space complexity in the path length and an almost-linear time complexity. This algorithm allows the exploration of large models (tens of millions of states) by generating long paths (hundreds of thousands of transitions). In a second part, I present a way to combine partial order reduction and on-the-fly generation techniques to explore concurrent systems without constructing the global model, but relying on models of the components only. Finally, I show how to bias the previous algorithms to satisfy other coverage criteria. When a criterion is not based on paths, but on a set of states or transitions, we use a mixed solution to ensure both various ways of exploring those states or transitions and satisfaction of the criterion with a desired probability.
Perret, Gaële. „Etude de l' asymétrie cyclone-anticyclone dans les sillages de grande échelle“. Phd thesis, Paris 6, 2005. http://pastel.archives-ouvertes.fr/pastel-00002367.
Hussein, Mohammad. „Un modèle d'exécution à base d'agents mobiles pour l'optimisation dynamique de requêtes réparties à grande échelle“. Toulouse 3, 2005. http://www.theses.fr/2005TOU30203.
Praga, Alexis. „Un modèle de transport et de chimie atmosphérique à grande échelle adapté aux calculateurs massivement parallèles“. Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30012/document.
We present in this thesis the development of a large-scale bi-dimensional atmospheric transport scheme designed for parallel architectures with scalability in mind. The current version, named Pangolin, contains a bi-dimensional advection and a simple linear chemistry scheme for stratospheric ozone and will serve as a basis for a future Chemistry Transport Model (CTM). For mass preservation, a van Leer finite volume scheme was chosen for advection and extended to 2D with operator splitting. To ensure mass preservation, winds are corrected in a preprocessing step. We aim at addressing the "pole issue" of the traditional regular latitude-longitude grid by presenting a new quasi area-preserving grid mapping the sphere uniformly. The parallelization of the model is based on the advection operator, and a custom domain-decomposition algorithm is presented here to attain load-balancing in a message-passing context. To run efficiently on current and future parallel architectures, algebraic features of the grid are exploited in the advection scheme and parallelization algorithm to favor the cheaper costs of flops versus data movement. The model is validated on algebraic test cases and compared to other state-of-the-art schemes using a recent benchmark. Pangolin is also compared to the CTM of Météo-France, MOCAGE, using a linear ozone scheme and isentropic coordinates.
Vega, Baez Germàn Eduardo. „Développement d'applications à grande échelle par composition de méta-modèles“. Université Joseph Fourier (Grenoble), 2005. http://www.theses.fr/2005GRE10278.
Model Driven Software Engineering (MDSE) is a Software Engineering approach that addresses the ever increasing complexity of software development and maintenance through a unified conceptual framework in which the whole software life cycle is seen as a process of model production, refinement and integration. This thesis contributes to this MDSE trend. We focus mainly on the issues raised by the complexity and diversity of the domains of expertise involved in large size software applications, and we propose to address these issues in an MDSE perspective. A domain is an expertise area, potentially shared by many different software applications. The knowledge and know-how in a domain are major assets. This expertise can be formalized and reused when captured by a Domain Specific Language (DSL). We propose an approach in which the target system is described by different models, written in different DSLs. In this approach, composing these different models allows for modeling complex applications covering different domains simultaneously. Our approach is an original contribution in that each DSL is specified by a meta model precise enough to build, in a semi-automatic way, a domain virtual machine; it is this virtual machine that interprets the domain models. Then, it is possible to compose these meta models to define new and more complex domains. Meta model composition increases modularity and reuse, and allows building domains with a much larger functional scope than is possible with traditional approaches.
Guidard, Vincent. „Assimilation multi-échelle dans un modèle météorologique régional“. Phd thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00569483.
Oueslati, Boutheina. „Interaction entre convection nuageuse et circulation de grande échelle dans les tropiques“. Toulouse 3, 2012. http://thesesups.ups-tlse.fr/1795/.
The spurious double intertropical convergence zone (ITCZ) is a systematic bias affecting state-of-the-art coupled general circulation models (GCMs); there is still no consensus on its causes. The goal of this thesis is to shed some light on this outstanding problem toward the improvement of climate model performances. This work emphasizes the roles of coupled ocean-atmosphere and dynamics-thermodynamics feedbacks in the ITCZ structure. The first step was to study the response of the atmospheric GCMs ARPEGE-climat and LMDz in aquaplanet configuration to a range of SST latitudinal distributions. The purpose was to investigate the existence of multiple precipitation regimes, explore their characteristics and untangle the mechanisms at play in regime transition. The transition from the double regime with two ITCZs to the single regime with only one ITCZ at the equator was analyzed. In both models, the transition between these regimes is mainly driven by changes in the low-level convergence that are forced by the atmospheric boundary layer temperature gradients. Model-dependent dry and moist feedbacks intervene to reinforce or weaken the effect of the temperature forcing. Dry dynamical feedbacks are mainly driven by horizontal advection of cold subtropical air. Moist thermodynamic feedbacks are only active in LMDz; they act as negative feedbacks on low-level convergence and are associated with cooling in the stratospheric cold top and in the boundary layer by convective downdrafts. Moist processes play a crucial role in the ITCZ structure through their influence on the vertical profile of convective heating and modulation of moisture-convection feedbacks, two variables that are very sensitive to the convection scheme and, in particular, to lateral convective entrainment. The influence of lateral convective entrainment on the ITCZ structure is analyzed through a hierarchy of model configurations (coupled ocean-atmosphere, atmospheric and aquaplanet) using the CNRM-CM5 GCM. The sensitivity of the ITCZ structure to this parameter is robust across our hierarchy of models. In response to an increased entrainment rate, the realistic simulations exhibit a weakening of the southern side of the double ITCZ over the southeastern Pacific. The change in ITCZ configuration is associated with a more realistic representation of the tropical circulation driven by feedbacks between large-scale dynamics and deep convection. Together with vertical dynamics, SST and associated coupled feedbacks drive the ITCZ location. Sensitivity experiments to lateral entrainment show that ocean-atmosphere feedbacks amplify the double ITCZ bias. A multi-model analysis using CMIP5 GCMs shows that the double ITCZ bias has become small in atmosphere-only simulations, and that coupled atmosphere-ocean feedbacks account for a large part of this bias in coupled simulations.
Hoyos-Idrobo, Andrés. „Ensembles des modeles en fMRI : l'apprentissage stable à grande échelle“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS029/document.
In medical imaging, collaborative worldwide initiatives have begun the acquisition of hundreds of Terabytes of data that are made available to the scientific community, in particular functional Magnetic Resonance Imaging --fMRI-- data. However, this signal requires extensive fitting and noise reduction steps to extract useful information. The complexity of these analysis pipelines yields results that are highly dependent on the chosen parameters. The computation cost of this data deluge is worse than linear: as datasets no longer fit in cache, standard computational architectures cannot be efficiently used. To speed up the computation time, we considered dimensionality reduction by feature grouping. We use clustering methods to perform this task. We introduce a linear-time agglomerative clustering scheme, Recursive Nearest Agglomeration (ReNA). Unlike existing fast agglomerative schemes, it avoids the creation of giant clusters. We then show empirically how this clustering algorithm yields very fast and accurate models, enabling large datasets to be processed on a budget. In neuroimaging, machine learning can be used to understand the cognitive organization of the brain. The idea is to build predictive models that are used to identify the brain regions involved in the cognitive processing of an external stimulus. However, training such estimators is a high-dimensional problem, and one needs to impose some prior to find a suitable model. To handle large datasets and increase stability of results, we propose to use ensembles of models in combination with clustering. We study the empirical performance of this pipeline on a large number of brain imaging datasets. This method is highly parallelizable, it has lower computation time than state-of-the-art methods, and we show that it requires fewer data samples to achieve better prediction accuracy. Finally, we show that ensembles of models improve the stability of the weight maps and reduce the variance of prediction accuracy.
Cordonnier, Guillaume. „Modèles à couches pour simuler l'évolution de paysages à grande échelle“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM072/document.
The development of new technologies allows the interactive visualization of virtual worlds showing an increasing amount of details and spatial extent. The production of plausible landscapes within these worlds becomes a major challenge, not only because of the important part that terrain features and ecosystems play in the quality and realism of 3D sceneries, but also because of the editing complexity of large landforms at mountain range scales. Interactive authoring is often achieved by coupling editing techniques with computationally and time demanding numerical simulation, whose calibration is harder as the number of non-intuitive parameters increases. This thesis explores new methods for the simulation of large-scale landscapes. Our goal is to improve both the control and the realism of the synthetic scenes. Our strategy to increase the plausibility consists in building our methods on physically and geomorphologically inspired laws: we develop new solving schemes, which, combined with intuitive control tools, improve user experience. By observing phenomena triggered by compression areas within the Earth's crust, we propose a method for the intuitive control of the uplift based on a metaphor of sculpting the tectonic plates. Combined with new efficient methods for fluvial and glacial erosion, this allows for the fast sculpting of large mountain ranges. In order to visualize the resulting landscapes within human sight, we demonstrate the need to combine the simulation of various phenomena with different time spans, and we propose a stochastic simulation technique to solve this complex cohabitation. This methodology is applied to the simulation of geological processes such as erosion interleaved with ecosystem formation. This method is then implemented on the GPU, combining long term effects (snow fall, phase changes of water) with highly dynamic ones (avalanches, skier impact). Our methods allow the simulation of the evolution of large scale, visually plausible landscapes, while accounting for user control. These results were validated by user studies as well as comparisons with data obtained from real landscapes.
Mellah, Kohi Meryem. „Modèle de caractérisation d'une bibliothèque CMOS : définition d'une sélection optimale d'éléments“. Montpellier 2, 1995. http://www.theses.fr/1995MON20156.
Der volle Inhalt der QuelleAudinot, Timothée. „Développement d’un modèle de dynamique forestière à grande échelle pour simuler les forêts françaises dans un contexte non-stationnaire“. Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0179.
Context. Since the industrial revolution, European forests have shown expansion of their area and growing stock. This expansion, together with climate change, drives changes in the processes of forest dynamics. The emergence of a European bioeconomy strategy suggests new developments of forest management strategies at European and national levels. Simulating future forest resources and their management with large-scale models is therefore essential to provide strategic planning support tools. In France, forest resources show high diversity as compared with other European countries' forests. The MARGOT forest dynamic model (MAtrix model of forest Resource Growth and dynamics On the Territory scale) was developed by the national forest inventory (IFN) in 1993 to simulate French forest resources from data of this inventory, but has been the subject of restricted developments, and simulations remain limited to a time horizon shorter than 30 years, under "business as usual" management scenarios, and not taking into account non-stationary forest and environmental contexts. Aims. The general ambition of this thesis was to devote a significant development effort to the MARGOT model, in order to tackle current forestry issues. The specific objectives were: i) to assess the capacity of MARGOT to describe French forest expansion over a long retrospective period (1971-2016), ii) to take into account the heterogeneity of forests at large scale in a holistic way, iii) to account for the impacts of forest densification in demographic dynamic processes, iv) to encompass external climatic forcing in forest growth, v) in a very uncertain context, to be able to quantify NFI sampling uncertainty in model parameters and simulations with respect to the magnitude of other trends considered. The development of forest management scenarios remained outside the scope of this work. Main results. A generic method for partitioning forests according to their geographic and compositional heterogeneity has been implemented. This method is intended to be applied to other European forest contexts. A method of propagating sampling uncertainty to model parameters and simulations has been developed from data resampling and error modelling approaches. An original approach to integrating density-dependence in demographic processes has been developed, based on a density metric and the reintroduction of forest stand entities adapted to the model. A strategy for integrating climate forcing of model demographic parameters was developed based on an input-output coupling approach with the process-based model CASTANEA, for a subset of French forests including oak, beech, Norway spruce, and Scots pine forests. All of these developments significantly reduced the prediction bias of the initial model. Conclusions. These developments make MARGOT a much more reliable forest resource assessment tool, and are based on an original modeling approach that is unique in Europe. The use of historical forest statistics will make it possible to evaluate the model and simulate the carbon stock of French forests over a longer time horizon (over 100 years). Intensive simulations to assess the performance of this new model remain to be done.
Martin, Armel. „Influence des ondes de gravité de montagne sur l'écoulement de grande échelle en présence de niveaux critiques“. Phd thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://tel.archives-ouvertes.fr/tel-00812517.
Lopez, Alain. „Réduction de Grosstalk, fenêtre inductive et modèles équivalents de lignes de transmission couplées“. Montpellier 2, 2004. http://www.theses.fr/2004MON20112.
Aouad, Lamine. „Contribution à l'algorithmique matricielle et évaluation de performances sur les grilles de calcul, vers un modèle de programmation à grande échelle“. Lille 1, 2005. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2005/50376-2005-Aouad.pdf.
Lambaerts, Julien. „Les effets dynamiques de l'humidité dans les modèles idéalisés de l'atmosphère de grande échelle“. Paris 6, 2011. http://www.theses.fr/2011PA066514.
Jenni, Sandra. „Calage de la géométrie des réseaux de fractures aux données hydrodynamiques de production d'un champ pétrolier“. Paris 6, 2005. http://www.theses.fr/2005PA066024.
Assouline, Béatrice Gillian. „Mise au point d'un nouveau modèle d'étude des gènes exprimés au cours du développement du pancréas : le poisson médaka, Oryzias Latipes“. Paris 7, 2005. http://www.theses.fr/2005PA077003.
Nguyen, Thi Thanh Tam. „Codèle : Une Approche de Composition de Modèles pour la Construction de Systèmes à Grande Échelle“. PhD thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00399655.
Guziolowski, Carito. „Étude des réseaux biologiques à grande échelle par modélisation statique et résolution des contraintes“. Rennes 1, 2010. http://www.theses.fr/2010REN1S006.
Several approaches exist for modelling genetic regulatory networks in order to elucidate the dynamics of a biological system. However, these approaches concern small-scale models. In this thesis we use a formal approach to large-scale regulatory networks that models the variations in the concentrations of the molecules of a cell between two steady states. The consistency between the network topology and gene-expression data is tested using a causal consistency rule. The results of this approach are: testing the consistency between the data and a network, diagnosing the regions of the network that are inconsistent with the experimental data, and inferring the variations of the elements of the network. Our method reasons over the global topology of the network using efficient algorithms based on decision diagrams, dependency graphs, or answer set programming. We have proposed programs and bioinformatics tools based on these algorithms that automate this reasoning. The approach was validated using transcriptional networks of the species E. coli and S. cerevisiae, and the signalling network of the EWS-FLI1 oncogene. Our main results are: (1) a high validation rate for the predictions of the variations of the network molecules, (2) effective manual and automatic corrections of the model and/or data, (3) automatic inference of the roles of transcription factors, and (4) automatic reasoning about the causes that influence important phenotypes in signalling networks.
Fernandez-Abrevaya, Victoria. „Apprentissage à grande échelle de modèles de formes et de mouvements pour le visage 3D“. Electronic Thesis or Diss., Université Grenoble Alpes, 2020. https://theses.hal.science/tel-03151303.
Data-driven models of the 3D face are a promising direction for capturing the subtle complexities of the human face, and a central component to numerous applications thanks to their ability to simplify complex tasks. Most data-driven approaches to date were built either from a relatively limited number of samples or by synthetic data augmentation, mainly because of the difficulty in obtaining large-scale and accurate 3D scans of the face. Yet, there is a substantial amount of information that can be gathered when considering publicly available sources that have been captured over the last decade, whose combination can potentially bring forward more powerful models. This thesis proposes novel methods for building data-driven models of the 3D face geometry, and investigates whether improved performances can be obtained by learning from large and varied datasets of 3D facial scans. In order to make efficient use of a large number of training samples we develop novel deep learning techniques designed to effectively handle three-dimensional face data. We focus on several aspects that influence the geometry of the face: its shape components including fine details, its motion components such as expression, and the interaction between these two subspaces. We develop in particular two approaches for building generative models that decouple the latent space according to natural sources of variation, e.g. identity and expression. The first approach considers a novel deep autoencoder architecture that allows to learn a multilinear model without requiring the training data to be assembled as a complete tensor. We next propose a novel non-linear model based on adversarial training that further improves the decoupling capacity. This is enabled by a new 3D-2D architecture combining a 3D generator with a 2D discriminator, where both domains are bridged by a geometry mapping layer. As a necessary prerequisite for building data-driven models, we also address the problem of registering a large number of 3D facial scans in motion. We propose an approach that can efficiently and automatically handle a variety of sequences while making minimal assumptions on the input data. This is achieved by the use of a spatiotemporal model as well as a regression-based initialization, and we show that we can obtain accurate registrations in an efficient and scalable manner. Finally, we address the problem of recovering surface normals from natural images, with the goal of enriching existing coarse 3D reconstructions. We propose a method that can leverage all available image and normal data, whether paired or not, thanks to a new cross-modal learning architecture. Core to our approach is a novel module that we call deactivable skip connections, which allows to transfer the local details from the image to the output surface without hurting the performance when autoencoding modalities, achieving state-of-the-art results for the task.
Sultan, Emmanuelle. „Etude de la circulation océanique à grande échelle dans l'océan indien sud par méthode inverse“. Paris, Muséum national d'histoire naturelle, 2001. http://www.theses.fr/2001MNHN0019.
Der volle Inhalt der QuelleKulunchakov, Andrei. „Optimisation stochastique pour l'apprentissage machine à grande échelle : réduction de la variance et accélération“. Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM057.
A goal of this thesis is to explore several topics in optimization for high-dimensional stochastic problems. The first task is related to various incremental approaches, which rely on exact gradient information, such as SVRG, SAGA, MISO, SDCA. While the minimization of large finite sums of functions was thoroughly analyzed, we suggest in Chapter 2 a new technique, which allows to consider all these methods in a generic fashion and demonstrate their robustness to possible stochastic perturbations in the gradient information. Our technique is based on extending the concept of estimate sequence introduced originally by Yu. Nesterov in order to accelerate deterministic algorithms. Using the finite-sum structure of the problems, we are able to modify the aforementioned algorithms to take into account stochastic perturbations. At the same time, the framework allows to derive naturally new algorithms with the same guarantees as existing incremental methods. Finally, we propose a new accelerated stochastic gradient descent algorithm and a new accelerated SVRG algorithm that is robust to stochastic noise. This acceleration essentially performs the typical deterministic acceleration in the sense of Nesterov, while preserving the optimal variance convergence. Next, we address the problem of generic acceleration in stochastic optimization. For this task, we generalize in Chapter 3 the multi-stage approach called Catalyst, which was originally aimed to accelerate deterministic methods. In order to apply it to stochastic problems, we improve its flexibility on the choice of surrogate functions minimized at each stage. Finally, given an optimization method with mild convergence guarantees for strongly convex problems, our developed multi-stage procedure accelerates convergence to a noise-dominated region, and then achieves the optimal (up to a logarithmic factor) worst-case convergence depending on the noise variance of the gradients. Thus, we successfully address the acceleration of various stochastic methods, including the variance-reduced approaches considered and generalized in Chapter 2. Again, the developed framework bears similarities with the acceleration performed by Yu. Nesterov using the estimate sequences. In this sense, we try to fill the gap between deterministic and stochastic optimization in terms of Nesterov's acceleration. A side contribution of this chapter is a generic analysis that can handle inexact proximal operators, providing new insights about the robustness of stochastic algorithms when the proximal operator cannot be exactly computed. In Chapter 4, we study properties of non-Euclidean stochastic algorithms applied to the problem of sparse signal recovery. A sparse structure significantly reduces the effects of noise in gradient observations. We propose a new stochastic algorithm, called SMD-SR, allowing to make better use of this structure. This method is a multi-step procedure which uses the stochastic mirror descent algorithm as a building block over its stages. Essentially, SMD-SR has two phases of convergence, with linear bias convergence during the preliminary phase and the optimal asymptotic rate during the asymptotic phase. Compared to the most effective existing solution to sparse stochastic optimization problems, we offer an improvement in several aspects. First, we establish the linear bias convergence (similar to the one of the deterministic gradient descent algorithm, when the full gradient observation is available), while showing the optimal robustness to noise. Second, we achieve this rate for a large class of noise models, including sub-Gaussian, Rademacher, multivariate Student distributions and scale mixtures. Finally, these results are obtained under the optimal condition on the level of sparsity, which can approach the total number of iterations of the algorithm (up to a logarithmic factor).
Quaas, Johannes. „L'effet indirect des aérosols : paramétrisation dans des modèles de grande échelle et évaluation des données satellitales“. Palaiseau, Ecole polytechnique, 2003. http://www.theses.fr/2003EPXX0046.
Der volle Inhalt der QuelleVie, Jill-Jênn. „Modèles de tests adaptatifs pour le diagnostic de connaissances dans un cadre d'apprentissage à grande échelle“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC090/document.
This thesis studies adaptive tests within learning environments. It falls within educational data mining and learning analytics, where student educational data is processed so as to optimize their learning. Computerized assessments allow us to store and analyze student data easily, in order to provide better tests for future learners. In this thesis, we focus on computerized adaptive testing. Such adaptive tests can ask a question to the learner, analyze their answer on the fly, and choose the next question to ask accordingly. This process reduces the number of questions to ask a learner while keeping an accurate measurement of their level. Adaptive tests are today massively used in practice, for example in the GMAT and GRE standardized tests, that are administered to hundreds of thousands of students. Traditionally, models used for adaptive assessment have been mostly summative: they measure or rank examinees effectively, but do not provide any other feedback. Recent advances have focused on formative assessments, that provide more useful feedback for both the learner and the teacher; hence, they are more useful for improving student learning. In this thesis, we have reviewed adaptive testing models from various research communities. We have compared them qualitatively and quantitatively. Thus, we have proposed an experimental protocol that we have implemented in order to compare the most popular adaptive testing models on real data. This led us to provide a hybrid model for adaptive cognitive diagnosis, better than existing models for formative assessment on all tried datasets. Finally, we have developed a strategy for asking several questions at the beginning of a test in order to measure the learner more accurately. This system can be applied to the automatic generation of worksheets, for example on a massive open online course (MOOC).
Guichard, Françoise. „Impact d'un ensemble de nuages sur l'environnement de plus grande échelle vu par un modèle de convection nuageuse explicite (cas GATE et TOGA-GAte)“. Toulouse, INPT, 1995. http://www.theses.fr/1995INPT034H.
Der volle Inhalt der QuelleErez, Giacomo. „Modélisation du terme source d'incendie : montée en échelle à partir d'essais de comportement au feu vers l'échelle réelle : approche "modèle", "numérique" et "expérimentale"“. Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0189.
Numerical simulations can provide valuable information to fire investigators, but only if the fire source is precisely defined. This can be done through full- or small-scale testing. The latter is often preferred because these tests are easier to perform, but their results have to be extrapolated in order to represent full-scale fire behaviour. Various approaches have been proposed to perform this upscaling. An example is pyrolysis models, which involve a detailed description of condensed phase reactions. However, these models are not ready yet for investigation applications. This is why another approach was chosen for the work presented here, employing a heat transfer model: the prediction of mass loss rate (MLR) for a material is determined based on a heat balance. This principle explains the two-part structure of this study: first, a detailed characterisation of heat transfers is performed; then, the influence of these heat transfers on thermal decomposition is studied. The first part focuses on thermal radiation because it is the leading mechanism of flame spread. Flame radiation was characterised for several fuels (kerosene, diesel, heptane, polyurethane foam and wood) and many fire sizes (from 0.3 m up to 3.5 m wide). Measurements included visible video recordings, multispectral opacimetry and infrared spectrometry, which allowed the determination of a simplified flame shape as well as its emissive power. These data were then used in a model (Monte-Carlo method) to predict incident heat fluxes at various locations. These values were compared to the measurements and showed a good agreement, thus proving that the main phenomena governing flame radiation were captured and reproduced, for all fire sizes. Because the final objective of this work is to provide a comprehensive fire simulation tool, software already available, namely Fire Dynamics Simulator (FDS), was evaluated regarding its ability to model radiative heat transfers. This was done using the data and knowledge gathered before, and showed that the code could predict incident heat fluxes reasonably well. It was thus chosen to use FDS and its radiation model for the rest of this work. The second part aims at correlating thermal decomposition to thermal radiation. This was done by performing cone calorimeter tests on polyurethane foam and using the results to build a model which allows the prediction of MLR as a function of time and incident heat flux. Larger tests were also performed to study flame spread on top of and inside foam samples, through various measurements: video processing, temperature analysis, photogrammetry. The results suggest that using small-scale data to predict full-scale fire behaviour is a reasonable approach for the scenarios being investigated. It was thus put into practice using FDS, by modifying the source code to allow for the use of a thermal model, in other words defining the fire source based on the model predicting MLR as a function of time and incident heat flux. The results of the first simulations are promising, and predictions for more complex geometries will be evaluated to validate this method.
Guendouz, Hassina. „Implémentation d'un modèle timing dans un simulateur logique junior "VLSI" et restructuration de la chaine "CAO" correspondante“. Paris 11, 1988. http://www.theses.fr/1988PA112115.
Der volle Inhalt der QuelleFagot, Christophe. „Méthodes et algorithmes pour le test intégré de circuits VLSI combinatoires“. Montpellier 2, 2000. http://www.theses.fr/2000MON20003.
Der volle Inhalt der QuelleFuchs, Frank. „Contribution à la reconstruction du bâti en milieu urbain, à l'aide d'images aériennes stéréoscopiques à grande échelle : étude d'une approche structurelle“. Paris 5, 2001. http://www.theses.fr/2001PA058004.
Der volle Inhalt der QuelleGaronne, Vincent. „Etude, définition et modélisation d'un système distribué à grande échelle : DIRAC - Distributed infrastructure with remote agent control“. Aix-Marseille 2, 2005. http://theses.univ-amu.fr.lama.univ-amu.fr/2005AIX22057.pdf.
Veber, Philippe. „Modélisation grande échelle de réseaux biologiques : vérification par contraintes booléennes de la cohérence des données“. PhD thesis, Université Rennes 1, 2007. http://tel.archives-ouvertes.fr/tel-00185895.
Hartmann, Philipp. „Effet de l'hydrodynamique sur l'utilisation de la lumière au sein de cultures de microalgues à grande échelle“. Thesis, Nice, 2014. http://www.theses.fr/2014NICE4022/document.
Microalgae are often seen as a promising candidate to contribute to energy generation in the future. However, the link between the energy contained in the biomass and the energy required to grow the microalgae, especially to mix the culture, is complex. Mixing has a direct effect on photosynthesis since it affects the way cells are successively transported between light and dark zones; in particular, the hydrodynamics modulates the frequency at which light is perceived by the cells. In this thesis the question of the nonlinear response of the photosynthesis process to varying light signals at different time scales has been investigated. Firstly, the effect of light-dark cycle frequency on the response of a mechanistic model for photosynthesis and growth has been studied. It is shown that increasing the light supply frequency enhances photosynthetic efficiency. A model for photoacclimation has been developed assuming both a change in the number and the cross section of the photosystems. The proposed concepts have been experimentally validated using a self-developed LED device to expose the green alga Dunaliella salina to light-dark cycles at different frequencies. The results support the model hypotheses, i.e. mid-term photoacclimation depends on the average light intensity. Finally, a 3D hydrodynamic model for a raceway-type culturing device has been used to compute Lagrangian trajectories numerically. Based on the trajectories, time-dependent light signals for individual cells have been calculated. Using these light signals, a photosynthesis model was integrated in order to investigate the dependency of photosynthetic efficiency on the hydrodynamic regime.
Bousserez, Nicolas. „Étude du transport intercontinental de la pollution atmosphérique entre l'Amérique du nord et l'Europe à l'aide du modèle de chimie-transport MOCAGE“. Toulouse 3, 2008. http://www.theses.fr/2008TOU30330.
This work aims to characterize the intercontinental transport of pollution between North America and Europe. We focus our study on the summer of 2004, using aircraft in situ data from the ICARTT experiment together with simulations from the MOCAGE global chemistry-transport model. In a first part we present an evaluation of the model against in situ measurements over three domains of interest: North-East United States, North Atlantic and Europe. In a second part we analyze the respective impacts of North American biomass burning and anthropogenic plumes on ozone chemistry over the North Atlantic. The last part of our study compares two different modeling strategies to take into account the high space and time variability of biomass burning sources: assimilating daily space-based CO observations into the model and using a daily emission inventory of fire emissions.
Flottes, Marie-Lise. „Contribution au test déterministe des circuits cmos : équivalences de pannes“. Montpellier 2, 1990. http://www.theses.fr/1990MON20060.
Der volle Inhalt der QuelleLoko, Houdété Odilon. „Analyse de l'impact d'une intervention à grande échelle avec le modèle de risques proportionnels de Cox avec surplus de zéros : application au projet Avahan de lutte contre le VIH/SIDA en Inde“. Thesis, Université Laval, 2014. http://www.theses.ulaval.ca/2014/30546/30546.pdf.
Der volle Inhalt der QuelleLequay, Victor. „Une approche ascendante pour la gestion énergétique d'une Smart-Grid : modèle adaptatif et réactif fondé sur une architecture décentralisée pour un système générique centré sur l'utilisateur permettant un déploiement à grande échelle“. Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1304.
The field of Energy Management Systems for Smart Grids has been extensively explored in recent years, with many different approaches being described in the literature. In collaboration with our industrial partner Ubiant, which deploys smart home solutions, we identified a need for a highly robust and scalable system that would exploit the flexibility of residential consumption to optimize energy use in the smart grid. At the same time we observed that the majority of existing works focused on the management of production and storage only, and that none of the proposed architectures are fully decentralized. Our objective was then to design a dynamic and adaptive mechanism to leverage every existing flexibility while ensuring the user's comfort and a fair distribution of the load balancing effort; but also to offer a modular and open platform with which a large variety of devices, constraints and even algorithms could be interfaced. In this thesis we realised (1) an evaluation of state-of-the-art techniques in real-time individual load forecasting, whose results led us to follow (2) a bottom-up and decentralized approach to a distributed residential load shedding system relying on a dynamic compensation mechanism to provide a stable curtailment. On this basis, we then built (3) a generic user-centered platform for energy management in smart grids allowing the easy integration of multiple devices, the quick adaptation to changing environments and constraints, and an efficient deployment.
Telescu, Mihai. „Modélisation d'ordre réduit des interconnexions de circuits VLSI“. Brest, 2007. http://www.theses.fr/2007BRES2038.
Der volle Inhalt der QuelleLntegrated circuit designers are showing a growing interest in the effects of interconnect structures. Taking these effects into consideration during simulations has become a major goal. The main objective of this PhD was the development new model order reduction mathematical tools. VLSI interconnect applications were our main priority. Our model order reduction strategy supposes an initial modeling of the origjnal system using either a Laguerre or a Kautz representation. This manuscript contains a synthetic presentation 0f these orthogonal function bases. The five order reduction methods studied during this PhD are then presented. We make available several examples of application of methods to interconnect lines. Weillçistate, among other aspects, the possibility of obtaining Iow complexity equivalent circuits from our models and the possibility of performing reduced order modeling directly from data provided by full-wave simulation
Limoge, Claire. „Méthode de diagnostic à grande échelle de la vulnérabilité sismique des Monuments Historiques : Chapelles et églises baroques des hautes vallées de Savoie“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN014/document.
The aim of this thesis is to propose a seismic vulnerability assessment method well suited to the study of a complete historical heritage, regardless of the prestige of each building. Indeed, the great seismic vulnerability of the historical heritage, often in masonry, requires preventive action in order to avoid irreparable damage. Our approach must tackle three main requirements: to develop large-scale tools of choice to prioritize the needs, to provide relevant analysis of seismic behavior on the structural scale even in the first study, and to manage the large number of uncertainties characterizing the structural assessment of old buildings. To this aim, we study the baroque churches and chapels in the high valleys of the French Savoie. They bear witness to a particularly prosperous period in the history of Savoy and a unique artistic movement adapted to a harsh environment. In this context we have therefore developed or adapted different tools in order to handle the peculiarities of the old buildings. This way we can use the techniques proposed today for modern buildings to study these ancient buildings in rustic masonry: non-linear temporal dynamic numerical modeling, in situ vibration measurements, and non-linear multi-modal analysis.
Picourlat, Fanny. „Mise à l'échelle des processus hydrologiques pour les modèles de surface continentale, de la modélisation 3D intégrée au modèle de réservoir : Application au bassin du Little Washita“. Thesis, université Paris-Saclay, 2022. http://www.theses.fr/2022UPASJ016.
As the water cycle is a driving force of climate, accurate modeling of the various continental hydrological fluxes is a major challenge in climate modeling. These flows are modeled within Land Surface Models (LSMs) with a horizontal resolution of about 100 km. At this scale, the representation of continental hydrology is simplified: lateral flows are conceptualized through reservoirs, and their influence on the spatial distribution of soil water content is neglected. Such simplifications introduce biases in the calculation of evapotranspiratory flux and river flow. A consensus is therefore observed within the scientific community on the need to improve the representation of hydrology in LSMs. In this context, the objective of this thesis is to develop an upscaling approach for hydrological processes in LSMs, ranging from an integrated 3D model to a reservoir model. Applied to the Little Washita basin (Oklahoma, USA), this approach is articulated in three steps of dimensionality reduction. First, a 3D simulation is conducted over 20 years using a physically-based integrated code. The 3D model of the basin is then reduced to a 2D equivalent hillslope model. A third step consists in reducing the 2D model to a conceptual reservoir model using simplifying assumptions. Finally, a 1D column simulation is performed using an LSM. A comparison with the conceptual model resulting from the upscaling approach allows us to identify different avenues for the development of LSM hydrology.
Legrand, Caroline. „Simulation des variations de débits et de l’activité de crue du Rhône amont à partir de l’information atmosphérique de grande échelle sur le dernier siècle et le dernier millénaire“. Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALU011.
Der volle Inhalt der QuelleFloods are often destructive natural hazards that can have considerable implications for ecosystems and societies. In many regions of the world, flood activity and intensity are expected to be amplified by ongoing climate change. However, quantifying possible changes over the coming decades is difficult. The classical approach is to estimate possible changes from hydrological projections obtained by simulation using meteorological scenarios produced for different future climate scenarios. Among other things, these meteorological scenarios have to be adapted to the spatial and temporal scales of the considered basins. They are typically produced with downscaling models from the large-scale atmospheric conditions simulated by climate models. Downscaling models are either dynamical or statistical. The possibility of producing relevant meteorological scenarios with downscaling models is taken for granted, but is rarely assessed. In this study, we assessed the ability of two modelling chains to reproduce, over the last century (1902-2009) and from large-scale atmospheric information only, the observed temporal variations in flows and flood events in the Upper Rhône River catchment (10,900 km²). The modelling chains are made up of (i) the ERA-20C atmospheric reanalysis, (ii) either the statistical downscaling model SCAMP or the dynamical downscaling model MAR, and (iii) the glacio-hydrological model GSM-SOCONT. When compared to observations, the downscaled scenarios of daily temperatures and precipitation highlight the need for a bias correction. This is the case for both downscaling models. For the dynamical downscaling chain, bias correction is additionally necessary for the temperature lapse rate scenarios to avoid irrelevant simulations of snowpack dynamics, particularly at high elevations. The observed multi-scale variations (daily, seasonal and interannual) in flows and low-frequency hydrological situations (low-flow sequences and flood events) are generally well reproduced for the period 1961-2009. For the first half of the century, the agreement with the reference flows is weaker, probably due to lower data quality (ERA-20C and flow data) and/or certain assumptions and modelling choices (e.g. calibration based on hydrological signatures, stationarity assumption). These results, and those obtained over the last century on variations in flood activity, suggest that the modelling chains can be used in other climatic contexts. In the last part, we simulated variations in flood activity over the last millennium using climate model outputs made available by the Paleoclimate Modelling Intercomparison Project (PMIP). Outputs from the climate model CESM Last Millennium Ensemble, made up of 12 members, were statistically downscaled at the daily time step over the period 850-2004 with SCAMP (for reasons of computational cost) and used as input to the GSM-SOCONT model. The simulated variations in flood activity of the Upper Rhône River over the last millennium were compared with those reconstructed from the sediment cores of Lake Bourget. The results suggest that the variations in flood activity reconstructed over this period could be due to internal climate variability alone and not to any large-scale atmospheric forcing.
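A common way to perform the kind of bias correction mentioned above is empirical quantile mapping, which maps each simulated value to the observed value at the same quantile of the calibration-period distribution. The sketch below is a generic illustration under that assumption, not necessarily the correction method used in the thesis.

```python
import numpy as np

def quantile_map(model_calib, obs_calib, model_future):
    """Empirical quantile mapping: map each simulated value to the observed
    value occupying the same quantile over the calibration period."""
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_calib, quantiles)
    obs_q = np.quantile(obs_calib, quantiles)
    # locate each value in the model distribution, read off the observed one
    ranks = np.interp(model_future, model_q, quantiles)
    return np.interp(ranks, quantiles, obs_q)

# Synthetic example: simulated temperatures with a +2 K warm bias
rng = np.random.default_rng(0)
obs = rng.normal(10.0, 5.0, 5000)
sim = rng.normal(12.0, 5.0, 5000)
corrected = quantile_map(sim, obs, sim)
print(round(sim.mean() - obs.mean(), 2), round(corrected.mean() - obs.mean(), 2))
```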
Federici, Dominique. „Simulation de fautes comportementales de systèmes digitaux décrits à haut niveau d'abstraction en VHDL“. Corte, 1999. http://www.theses.fr/1999CORT3039.
Der volle Inhalt der QuelleLaurent, Pierre. „L'univers aux grandes échelles : études de l'homogénéité cosmique et de l'énergie noire à partir des relevés de quasars BOSS et eBOSS“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS227/document.
Der volle Inhalt der QuelleThis work consists of two parts. The first one is a study of cosmic homogeneity, and the second one a measurement of the BAO scale, which provides a standard ruler that allows for a direct measurement of the expansion rate of the universe. These two analyses rely on the study of quasar clustering in the BOSS and eBOSS quasar samples, which cover the redshift range 0.9 < z < 2.8. On large scales, the measurement of statistical observables is very sensitive to systematic effects, so we studied these effects in depth. We found evidence that the target selections of BOSS and eBOSS quasars are not perfectly homogeneous, and we have corrected this effect. The measurement of the quasar correlation function provides the quasar bias in the redshift range 0.9 < z < 2.8. We obtain the most precise measurement of the quasar bias at high redshift, b = 3.85 ± 0.11, in the range 2.2 < z < 2.8 for the BOSS survey, and b = 2.44 ± 0.04 in the range 0.9 < z < 2.2 for the eBOSS survey. The Cosmological Principle states that the universe is homogeneous and isotropic on large scales. It is one of the basic assumptions of modern cosmology. By studying quasar clustering on large scales, we have proved "spatial isotropy", i.e. the fact that the universe is isotropic in each redshift bin. This has been done in the range 0.9 < z < 2.8 without any assumption of a fiducial cosmology. If we combine spatial isotropy with the Copernican Principle, which states that we do not occupy a peculiar place in the universe, it is proved that the universe is homogeneous on large scales. We provide a measurement of the fractal correlation dimension of the universe, D₂(r), which is 3 for a homogeneous distribution, and we used a new estimator inspired by the Landy-Szalay estimator for the correlation function. If we correct our measurement for quasar bias, we obtain (3 - D₂(r)) = (6.0 ± 2.1) x 10⁻⁵ between 250 h⁻¹ Mpc and 1200 h⁻¹ Mpc for eBOSS, in the range 0.9 < z < 2.2. For BOSS, we obtain (3 - D₂(r)) = (3.9 ± 2.1) x 10⁻⁵, in the range 2.2 < z < 2.8. Moreover, we have shown that the Lambda-CDM model provides a very good description of the transition from structure to homogeneity. We have also measured the position of the BAO peak in the BOSS and eBOSS quasar correlation functions, which yields a 2.5 sigma detection in both surveys. If we measure the α parameter, which corresponds to the ratio of the measured position of the peak to the predicted position in a fiducial cosmology (here Planck 2013), we measure α = 1.074 for BOSS, and α = 1.009 for eBOSS. These measurements, combined only with the local measurement of H₀, allow for constraints in parameter space on models beyond Lambda-CDM.
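The homogeneity statistic used here, the fractal correlation dimension D₂(r), can be read as the logarithmic slope of the mean neighbour count, D₂(r) = d ln N(<r) / d ln r, which tends to 3 for a homogeneous 3D distribution. The sketch below estimates it on a synthetic uniform point set; it is only a conceptual illustration, not the Landy-Szalay-inspired estimator applied to the survey data.

```python
import numpy as np
from scipy.spatial import cKDTree

def correlation_dimension(points, radii):
    """Estimate D2(r) = d ln N(<r) / d ln r from mean neighbour counts N(<r).
    For a homogeneous 3D distribution, D2 approaches 3 on large scales."""
    tree = cKDTree(points)
    # mean number of neighbours within each radius (excluding the point itself)
    counts = np.array([
        tree.query_ball_point(points, r, return_length=True).mean() - 1.0
        for r in radii
    ])
    d2 = np.gradient(np.log(counts), np.log(radii))
    return counts, d2

# Synthetic homogeneous cube of side 1000 (arbitrary units)
rng = np.random.default_rng(1)
points = rng.uniform(0.0, 1000.0, size=(20000, 3))
radii = np.logspace(1.2, 1.9, 10)   # scales well inside the box
counts, d2 = correlation_dimension(points, radii)
print(np.round(d2, 2))              # roughly 3, up to noise and boundary effects
```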
Al, Shaer Ali. „Analyse des déformations permanentes des voies ferrées : approche dynamique“. Marne-la-vallée, ENPC, 2005. https://pastel.archives-ouvertes.fr/pastel-00001592.
Der volle Inhalt der Quelle