Dissertations / Theses on the topic 'Simulation séquentielle'
Consult the top 49 dissertations / theses for your research on the topic 'Simulation séquentielle.'
Wackernagel, Hans. "Géostatistique et assimilation séquentielle de données." Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2004. http://tel.archives-ouvertes.fr/tel-00542362.
Meneveaux, Daniel. "Simulation d'éclairage dans des environnements architecturaux complexes : approches séquentielle et parallèle." Phd thesis, Université Rennes 1, 1998. http://tel.archives-ouvertes.fr/tel-00007237.
Klaudel, Witold. "Contribution aux logiciels de simulation généralisée basée sur une approche séquentielle et l'emploi de flux d'information." Châtenay-Malabry, Ecole centrale de Paris, 1989. http://www.theses.fr/1989ECAP0085.
Poli, Emmanuelle. "Stratigraphie séquentielle haute-résolution, modèles de dépôt et géométrie 2D-3D des séquences triasiques de la marge téthysienne ardéchoise." Dijon, 1997. http://www.theses.fr/1997DIJOS081.
Rivallain, Mathieu. "Étude de l'aide à la décision par optimisation multicritère des programmes de réhabilitation énergétique séquentielle des bâtiments existants." Phd thesis, Université Paris-Est, 2013. http://pastel.archives-ouvertes.fr/pastel-00861172.
Emery, Xavier. "Simulation conditionnelle de modèles isofactoriels." Phd thesis, École Nationale Supérieure des Mines de Paris, 2004. http://pastel.archives-ouvertes.fr/pastel-00001185.
Shields, Jean-Philippe. "Élaboration du modèle conceptuel flexible et extensible d'une architecture logicielle orientée-objet permettant la parallélisation et la distribution d'une architecture de simulation séquentielle." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/24474/24474.pdf.
Sakho, Ibrahima. "Synthèse et simulation d'algorithmes systoliques." Phd thesis, Grenoble INPG, 1987. http://tel.archives-ouvertes.fr/tel-00323979.
Langlais, Clothilde. "Étude de la variabilité interannuelle des échanges côte-large : simulation haute résolution de la dynamique du Golfe du Lion." Toulon, 2007. http://www.theses.fr/2007TOUL0030.
Full textQuestions of global change entail elucidation of the processes that determine the exchanges between coastal ocean and deep sea. Fluxes across, ocean shelves are dominated by complex coastal dynamics and in the microtidal site of the Gulf of Lions, mixing and dispersion are dominated by interactions between coastal processes (freshwater dynamics associated to the Rhône river discharge, coastal upwellings, dense water formation and cascading) and the North Mediterranean shelf current. To know how these processes operate and interact, a regional model of the Gulf of Lions has been integrated over the 10 year period 1990-2000. The model is based on the NEMO code, with a 1/64" resolution and 130 vertical z-levels. Two main objectives can be underlined. The numerical outputs allow us to investigate the interanmial variability of the coastal circulation and fluxes across the shelf break and then to strike a balance, of the exchanges at the Gulf of Lion's margin over a quasi-climatic period. Coastal modelling and forecasting is a major challenge for the scientific community, and this study allow us to evaluate the modelling ability of the NEMO code in a coastal and microtidal application
Aunay, Bertrand. "Apport de la stratigraphie séquentielle à la gestion et à la modélisation des ressources en eau des aquifères côtiers." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2007. http://tel.archives-ouvertes.fr/tel-00275467.
Gardet, Caroline. "Modélisation multi-échelles de réservoir et calage d'historique de production." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066645/document.
Full textIn this manuscript, we propose two multiscale algorithms for the simulation of geological reservoir models.The first algorithm is based on two-points statistics methods. It is based upon the sequential Gaussian simulation with a secondary variable. In our multiscale approach, the scale of the secondary variable is considered as a second scale of coarser resolution. It represents the trend or the average of the primary variable. In the context of history-matching, it can be shown that adjusting the properties of the geological model at the coarse scale is more effective than doing the same at the fine scale provided a suitable parameterization technique is used. Our method holds for both continuous and discrete variables. The second algorithm is rooted in texture synthesis techniques developed in computer graphics and modified to cope with reservoir simulation. This is a multipoint simulation algorithm. As such, it requires the use of a training image. It permits to simulates complex geological objects like channels or networks of fractures. However, as all multipoint algorithms, it requires significant computation times. We show how the introduction of an intermediate scale reduces the computation time and improves the reproduction of large structures. We also test two techniques to further reduce the computation time: the partial scan of the training image and the preliminary organization of the information extracted from this image
Bosch, Daphnée. "Simulation, fabrication et caractérisation électrique de transistors MOS avancés pour une intégration 3D monolithique." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT077.
Nowadays, the microelectronics industry must handle a real "data deluge" and a growing demand for added functionalities driven by the new markets of the Internet of Things, 5G and artificial intelligence. At the same time, energy becomes a major issue and new computation paradigms emerge to break with the traditional Von Neumann architecture. In this context, this PhD manuscript explores both 3D monolithic integration and nano-electronic devices for in-memory computing. First, 3D monolithic integration is considered not only as an alternative to the historic scaling of Moore's law but also as a lever for circuit diversification. The advantages of this integration are analysed in depth; in particular, an original top-tier Static Random Access Memory (SRAM) assist is proposed, significantly improving SRAM stability and performance without area overhead. Second, an original transistor architecture, called junctionless, suitable for 3D monolithic integration is studied in detail. Devices are simulated, fabricated and electrically characterised for mixed digital/analog applications. In particular, the impact of channel doping density on mismatch is tackled. Low-temperature (<500°C) junctionless process bricks are also developed and device optimization trade-offs are discussed. Third, an innovative 3D structure combining state-of-the-art devices, junctionless stacked silicon nanowires and Resistive Random Access Memories (RRAM), is envisioned. This technology is shown to enable in-memory Boolean operations through a so-called "scouting logic" approach.
Heinrich, Franz. "Modélisation, prédiction et optimisation de la consommation énergétique d'applications MPI à l'aide de SimGrid." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM018/document.
The High-Performance Computing (HPC) community is currently undergoing disruptive technology changes in almost all fields, including a switch towards massive parallelism with several thousand compute cores on a single GPU or accelerator, and new, complex networks. The energy consumption of these machines will continue to grow in the future, making energy one of the principal cost factors of machine ownership. This explains why even the classic metric "flop/s", generally used to evaluate HPC applications and machines, is widely expected to be replaced by an energy-centric metric, "flop/watt". One approach to predicting energy consumption is simulation; however, a precise performance prediction is crucial to estimate the energy faithfully. In this thesis, we contribute to the performance and energy prediction of HPC architectures. We propose an energy model which we have implemented in the open-source SimGrid simulator. We validate this model by carefully and systematically comparing it with real experiments. We leverage this contribution to both evaluate existing DVFS governors and propose new ones that are particularly designed to suit the HPC context.
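The flop/watt trade-off targeted here can be illustrated with a toy DVFS calculation: under a hypothetical power model P(f) = P_static + c·f³ (the constants below are invented for the example and have nothing to do with SimGrid's calibrated models), running a fixed amount of work at either extreme frequency wastes energy.

```python
def energy(freq_ghz, work_gflop, p_static=10.0, c=1.5):
    """Energy (J) to run `work_gflop` at `freq_ghz` (assuming 1 flop per
    cycle), under the hypothetical power model P(f) = p_static + c * f**3."""
    return (p_static + c * freq_ghz ** 3) * (work_gflop / freq_ghz)

freqs = [0.5 + 0.1 * i for i in range(26)]       # candidate DVFS states, GHz
best = min(freqs, key=lambda f: energy(f, 100.0))
# best is 1.5 GHz here: racing at 3.0 GHz wastes dynamic power,
# while crawling at 0.5 GHz pays static power for longer.
```

This is exactly the kind of trade-off a DVFS governor must resolve, which is why faithful power models matter for energy prediction.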
Hoock, Jean-Baptiste. "Contributions to Simulation-based High-dimensional Sequential Decision Making." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00912338.
Schegg, Pierre. "Autonomous catheter and guidewire navigation for robotic percutaneous coronary interventions." Electronic Thesis or Diss., Université de Lille (2022-....), 2022. http://www.theses.fr/2022ULILB007.
This thesis focuses on automatic navigation of guidewires and catheters in the coronary arteries. We first introduce the context of interventional cardiology and percutaneous coronary interventions, and justify the use of a robot for these procedures. We then link the guidewire navigation problem to the control of soft robots and review the existing literature. Our study starts with building realistic, dynamic simulations of the arteries, the heart and their interaction with the instruments. This required gathering patient data and understanding how the heartbeat motion deforms the heart and how these deformations affect guidewire navigation. Percutaneous coronary interventions are also very contact-dense, which is a challenge for both simulation speed and accuracy. We then used these simulations to frame coronary navigation as a sequential decision-making problem: we determined the corresponding state and action spaces and devised a reward. We tested a variety of control schemes on the navigation problem, including inverse-model-based control and tree-search-based control, and proposed a novel combination of both which achieves good performance in navigating the coronaries. Finally, we designed a vision algorithm and a registration scheme to transfer the algorithms to a physical robot. The closed-loop control scheme was tested on two different silicone phantoms and achieves the same success rate as navigating by hand or by tele-operating the robot.
De, Quatrebarbes Céline. "Un modèle d'équilibre général calculable pour questionner la TVA dans les pays en développement : les cas du Niger et du Sénégal." Thesis, Clermont-Ferrand 1, 2015. http://www.theses.fr/2015CLF10461.
In theory, VAT has always been considered a consumption tax (Lauré, 1957). Liable producers transfer to the government the difference between the VAT collected on sales and the VAT paid on their inputs. VAT is therefore a tax on final consumption, borne by the consumer and collected by the producer. With its tax-credit mechanism, VAT seems consistent with the principles of an optimal indirect tax for maximizing collective well-being. However, if VAT exemptions are implemented, if the tax administration is inefficient in refunding VAT credits, or simply because some producers are not liable, VAT increases the producers' tax burden and viewing VAT only as a consumption tax becomes inaccurate. In order to take these complexities into account, we built the first Computable General Equilibrium models designed to shed light on the resource-allocation and income-distribution effects of the tax in Niger and Senegal. Simulation results show that an analysis of VAT's impact cannot follow a single common line, neither from the consumer's nor from the producer's point of view.
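The VAT mechanics described above (credit on inputs, and the burden shift when a producer is exempt) can be made concrete with a toy calculation; the rate and amounts are invented for the example.

```python
def vat_remitted(sales, inputs, rate, liable=True):
    """VAT a producer remits to the government: VAT collected on sales
    minus VAT paid on inputs. An exempt (non-liable) producer remits
    nothing but cannot recover the input VAT, which then sticks as a
    production cost instead of being passed to the final consumer."""
    return rate * (sales - inputs) if liable else 0.0

rate = 0.18                                  # invented rate for illustration
# Liable producer: buys 100 of inputs, sells for 250.
print(vat_remitted(250, 100, rate))          # 27.0: tax borne by the consumer
# Exempt producer: remits 0, but the 18.0 of input VAT stays on the producer.
print(vat_remitted(250, 100, rate, liable=False))
```

The second case is precisely why, with exemptions or unrefunded credits, VAT stops being a pure consumption tax.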
Cavallera, Philippe. "Simulation de pannes temporelles dans les circuits séquentiels." Montpellier 2, 1997. http://www.theses.fr/1997MON20043.
Magnier, Janine. "Représentation symbolique et vérification formelle de machines séquentielles." Montpellier 2, 1990. http://www.theses.fr/1990MON20134.
Marquer, Yoann. "Caractérisation impérative des algorithmes séquentiels en temps quelconque, primitif récursif ou polynomial." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1121/document.
Colson's and Moschovakis's results cast doubt on the ability of the primitive recursive model to compute a value by any means possible: the model may be complete for functions, but it lacks algorithms. So the Church thesis expresses more what can be computed than how the computation is done. We use Gurevich's thesis to formalize the intuitive idea of a sequential algorithm by means of Abstract State Machines (ASMs). We formalize imperative programs by Jones' While language and by LoopC, a variation of Meyer and Ritchie's language allowing a loop to be exited when some condition is fulfilled. We say that a language characterizes an algorithmic class if the associated models of computation can simulate each other up to temporal dilation and a bounded number of temporary variables. We prove that ASMs can simulate While and LoopC, that if the space is primitive recursive then LoopC runs in primitive recursive time, and that its restriction LoopC_stat, where the bounds of the loops cannot be updated, runs in polynomial time. Conversely, one step of an ASM can be translated into a program without loops, which can be repeated enough times by inserting it into a While program for general complexity, into a LoopC program for primitive recursive complexity, and into a LoopC_stat program for polynomial complexity. So While characterizes sequential algorithms, LoopC the algorithms in primitive recursive space and time, and LoopC_stat the polynomial-time algorithms.
Winter, Audrey. "Modèles d'appariement du greffon à son hôte, gestion de file d'attente et évaluation du bénéfice de survie en transplantation hépatique à partir de la base nationale de l'Agence de la Biomédecine." Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS024/document.
Liver transplantation (LT) is the only life-saving procedure for liver failure. One of the major impediments to LT is the shortage of organs. To decrease organ shortage, donor selection criteria were expanded through the use of extended criteria donors (ECD). However, no unequivocal definition of these ECD livers was available. To address this issue, an American Donor Risk Index (DRI) was developed to qualify those grafts. But to whom should ECD grafts be given? A proper use of ECD grafts could indeed reduce organ shortage. The aim of this thesis is to establish a new graft allocation system that would allow each graft to be transplanted into the candidate whose LT will provide the greatest survival benefit, and to evaluate the matching between donors and recipients taking ECD grafts into account. The first step was the external validation of the DRI as well as the resulting Eurotransplant-DRI score. However, calibration and discrimination were not maintained on the French database. A new prognostic donor score, the DRI-Optimatch, was then developed using a Cox donor model with adjustment on recipient covariates. The model was validated by bootstrapping with correction of the performance for optimism. The second step was to explore the matching between donors and recipients in order to allocate ECD grafts optimally. Consideration should be given to donor and recipient criteria, as assessed by the DRI-Optimatch and the Model for End-stage Liver Disease (MELD) score, respectively. The sequential stratification method retained is based on the randomized controlled trial principle.
We then estimated, through hazard ratios, the survival benefit for different categories of MELD and DRI-Optimatch, compared with the group of candidates remaining on the waiting list (WL) and waiting for a transplant with a graft of better quality (lower DRI-Optimatch). In the third step, we developed an allocation system based on survival benefit, combining the two main principles of graft allocation: urgency and utility. In this system, a graft is allocated to the patient with the greatest difference between predicted post-transplant lifetime and estimated waiting time for a specific donor. This model is mainly based on two Cox models, pre-LT and post-LT, in which the event of interest is the death of the patient. For the pre-LT model, dependent censoring was taken into account: on the WL, death is often censored by another event, transplantation. A method derived from Inverse Probability of Censoring Weighting was used to weight each observation. In addition, longitudinal data and survival data were also used; a partly conditional model, estimating the effect of time-dependent covariates in the presence of dependent censoring, was therefore used for the pre-LT model. After developing this new allocation system, the fourth and final step was to evaluate it through Discrete Event Simulation (DES).
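The core allocation rule of the third step (give the graft to the candidate maximizing predicted post-transplant lifetime minus expected waiting time) reduces to a simple argmax. The sketch below uses invented toy numbers in place of the Cox-model predictions.

```python
def allocate(graft, candidates, post_lt_life, wait_time):
    """Give the graft to the candidate with the largest survival benefit:
    predicted post-transplant lifetime minus expected waiting time for
    this specific donor (utility and urgency combined)."""
    return max(candidates,
               key=lambda c: post_lt_life(c, graft) - wait_time(c, graft))

# Invented toy predictions (years); the thesis estimates these with Cox models.
post = {"A": 9.0, "B": 6.5, "C": 8.0}
wait = {"A": 4.0, "B": 0.5, "C": 1.0}
best = allocate("g1", ["A", "B", "C"],
                lambda c, g: post[c], lambda c, g: wait[c])
print(best)  # "C": benefit 7.0 vs 5.0 (A) and 6.0 (B)
```

Note how the rule can prefer a candidate with neither the best post-transplant prognosis (A) nor the shortest wait (B), which is the point of combining urgency with utility.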
Saadi, Toufik. "Résolution séquentielles et parallèles des problèmes de découpe / placement." Phd thesis, Université Panthéon-Sorbonne - Paris I, 2008. http://tel.archives-ouvertes.fr/tel-00354737.
Goffinet, Étienne. "Clustering multi-blocs et visualisation analytique de données séquentielles massives issues de simulation du véhicule autonome." Thesis, Paris 13, 2021. http://www.theses.fr/2021PA131090.
Advanced driver-assistance system validation remains one of the biggest challenges car manufacturers must tackle to provide safe driverless cars. The reliable validation of these systems requires assessing the quality and consistency of their reactions to a broad spectrum of driving scenarios. In this context, large-scale simulation systems bypass the physical on-track limitations and produce large quantities of high-dimensional time-series data. The challenge is to find valuable information in these multivariate unlabelled datasets, which may contain noisy, sometimes correlated or non-informative variables. This thesis proposes several model-based tools for univariate and multivariate time-series clustering, based on a dictionary approach or on a Bayesian nonparametric framework. The objective is to automatically find relevant and natural groups of driving behaviours and, in the multivariate case, to perform model selection and multivariate time-series dimension reduction. The methods are tested on simulated datasets and applied to industrial use cases from Groupe Renault.
Tomas, Jean. "Caractérisation par simulation de la métastabilité des circuits séquentiels : application à des structures VLSI." Bordeaux 1, 1988. http://www.theses.fr/1988BOR10580.
Stroh, Rémi. "Planification d’expériences numériques en multi-fidélité : Application à un simulateur d’incendies." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC049/document.
The presented works focus on the study of multi-fidelity numerical models, deterministic or stochastic. More precisely, the considered models have a parameter which governs the quality of the simulation, such as a mesh size in a finite-difference model or a number of samples in a Monte-Carlo model. In that case, the numerical model can run low-fidelity simulations, fast but coarse, or high-fidelity simulations, accurate but expensive. A multi-fidelity approach aims to combine results coming from different levels of fidelity in order to save computational time. The considered method is based on a Bayesian approach: the simulator is described by a state-of-the-art multilevel Gaussian-process model, which we adapt to stochastic cases in a fully Bayesian approach. This meta-model of the simulator allows estimating any quantity of interest together with a measure of uncertainty. The goal is then to choose new experiments to run in order to improve the estimations. In particular, the design must select the level of fidelity meeting the best trade-off between observation cost and information gain. To do this, we propose a sequential strategy dedicated to the case of variable costs, called Maximum Rate of Uncertainty Reduction (MRUR), which consists of choosing the input point maximizing the ratio between the uncertainty reduction and the cost. The methodology is illustrated in fire-safety science, where we estimate probabilities of failure of a fire protection system.
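The MRUR criterion is, at heart, a ratio rule. A minimal sketch, with invented candidate points, uncertainty reductions and costs standing in for the Gaussian-process quantities of the thesis:

```python
def mrur_choice(candidates, reduction, cost):
    """Maximum Rate of Uncertainty Reduction: select the candidate design
    (input point, fidelity level) maximizing uncertainty reduction per
    unit of observation cost."""
    return max(candidates, key=lambda c: reduction(c) / cost(c))

# Invented candidates: (input point, fidelity), with made-up GP quantities.
cands = [((0.2,), "coarse"), ((0.2,), "fine"), ((0.8,), "coarse")]
red = {cands[0]: 0.10, cands[1]: 0.30, cands[2]: 0.12}
cst = {cands[0]: 1.0, cands[1]: 8.0, cands[2]: 1.0}
best = mrur_choice(cands, red.get, cst.get)
print(best)  # ((0.8,), 'coarse'): rate 0.12 beats 0.10 and 0.30/8 = 0.0375
```

The fine run reduces uncertainty the most in absolute terms, but its cost makes its rate the worst, which is exactly the trade-off the criterion arbitrates.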
Benhallam, Abdelahad. "Contribution à l'étude de l'algorithme à pile pour le décodage séquentiel des codes convolutionnels : conception d'un logiciel de simulation du décodeur." Toulouse, INPT, 1988. http://www.theses.fr/1988INPT083H.
Galvao Ferreira, Afonso. "Contributions à la recherche dans des ensembles ordonnés : du séquentiel au parallèle." Phd thesis, Grenoble INPG, 1990. http://tel.archives-ouvertes.fr/tel-00336447.
Minvielle-Larrousse, Pierre. "Méthodes de simulation stochastique pour le traitement de l’information." Thesis, Pau, 2019. http://www.theses.fr/2019PAUU3005.
When a quantity of interest is not directly observed, it is usual to observe other quantities linked to it by physical laws. They can provide information about the quantity of interest if one is able to solve the inverse problem, often ill-posed, and infer the value. Bayesian inference is a powerful inversion tool, but it requires the computation of high-dimensional integrals. Sequential Monte Carlo (SMC) methods, a.k.a. interacting particle methods, are Monte Carlo methods able to sample from a sequence of probability densities of growing dimension. They have many applications, for instance in filtering, global optimization or rare-event simulation. The work has focused in particular on the extension of SMC methods to a dynamic context where the system, governed by a hidden Markov process, is also determined by static parameters that we seek to estimate. In sequential Bayesian estimation, the determination of fixed parameters raises particular difficulties: such a process is non-ergodic, the system not forgetting its initial conditions. We show how these difficulties can be overcome in an application to tracking and identification of geometric shapes by a CCD digital camera. Markov Chain Monte Carlo (MCMC) sampling steps are introduced to diversify the samples without altering the posterior distribution. For another application, in materials control, which mixes static and dynamic parameters, we proposed an original offline approach: a Particle Marginal Metropolis-Hastings (PMMH) algorithm integrating Rao-Blackwellized SMC, based on a bank of interacting ensemble Kalman filters. Other information-processing work has also been conducted: particle filtering for atmospheric re-entry vehicle tracking, 3D radar imaging by sparse regularization, and image registration by mutual information.
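As a concrete instance of the SMC methods discussed, here is a minimal bootstrap particle filter for a toy linear-Gaussian state-space model; the model and all constants are illustrative, not taken from the thesis.

```python
import numpy as np

def bootstrap_filter(obs, n_part, rng):
    """Minimal bootstrap particle filter (the basic SMC algorithm) for the
    toy model x_t = 0.9 x_{t-1} + N(0,1), y_t = x_t + N(0, 0.5**2)."""
    x = rng.normal(0.0, 1.0, n_part)                 # initial particle cloud
    means = []
    for y in obs:
        x = 0.9 * x + rng.normal(0.0, 1.0, n_part)   # propagate particles
        logw = -0.5 * ((y - x) / 0.5) ** 2           # log-likelihood weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))           # filtering mean
        x = x[rng.choice(n_part, n_part, p=w)]       # multinomial resampling
    return np.array(means)

rng = np.random.default_rng(1)
xs, ys = [0.0], []
for _ in range(50):                                  # simulate a trajectory
    xs.append(0.9 * xs[-1] + rng.normal())
    ys.append(xs[-1] + 0.5 * rng.normal())
est = bootstrap_filter(ys, 500, rng)
```

Estimating *static* parameters with such a filter is exactly where the degeneracy problems mentioned above appear, motivating the MCMC rejuvenation steps and the PMMH approach.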
Guglielmetti, Aurore. "Étude numérique du soudage par impulsion magnétique." Thèse, Compiègne, 2012. http://constellation.uqac.ca/2531/1/030358614.pdf.
Péron, Samuel. "Nature et contrôle des systèmes fluviatiles du domaine Ouest Péri-Téthysien au trias inférieur : sédimentologie de faciès, reconstitutions paléoenvironnementales et simulations climatiques." Rennes 1, 2005. http://www.theses.fr/2005REN1S074.
Lachat, Cédric. "Conception et validation d'algorithmes de remaillage parallèles à mémoire distribuée basés sur un remailleur séquentiel." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00932602.
Meinvielle, Marion. "Ajustement optimal des paramètres de forçage atmosphérique par assimilation de données de température de surface pour des simulations océaniques globales." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00681484.
Berkane, Bachir. "Vérification des systèmes matériels numériques séquentiels synchrones : application du langage Lustre et de l'outil de vérification Lesar." Phd thesis, Grenoble INPG, 1992. http://tel.archives-ouvertes.fr/tel-00340909.
Makhlouf, Azmi. "Régularité fractionnaire et analyse stochastique de discrétisations ; Algorithme adaptatif de simulation en risque de crédit." Phd thesis, Grenoble INPG, 2009. http://tel.archives-ouvertes.fr/tel-00460269.
Full textMakhlouf, Azmi. "Régularité fractionnaire et analyse stochastique de discrétisations ; Algorithme adaptatif de simulation en risque de crédit." Phd thesis, Grenoble INPG, 2009. http://www.theses.fr/2009INPG0154.
This thesis deals with three issues from numerical probability and mathematical finance. First, we study the L2 time-regularity modulus of the Z-component of a Markovian BSDE with Lipschitz-continuous coefficients but irregular terminal function g. This modulus is linked to the approximation error of the Euler scheme. We show, in an optimal way, that the order of convergence is explicitly connected to the fractional regularity of g. Second, we propose a sequential Monte-Carlo method for the efficient computation of the price of a CDO tranche, based on sequential control variates; the recoveries are assumed to be i.i.d. random variables. Third, we analyze the tracking error related to the Delta-Gamma hedging strategy. The fractional regularity of the payoff function plays a crucial role in the choice of the trading dates, in order to achieve optimal rates of convergence.
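The control-variate idea behind the second contribution can be shown on a generic textbook example, not the CDO-specific scheme of the thesis: estimating E[exp(Z)] for Z ~ N(0,1), using Z itself (whose mean is known to be 0) as the control variate.

```python
import numpy as np

def cv_estimates(rng, n):
    """Plain vs control-variate Monte-Carlo estimates of E[exp(Z)],
    Z ~ N(0,1), using Z (known mean 0) as the control variate."""
    z = rng.normal(size=n)
    f = np.exp(z)
    c = np.cov(f, z)
    beta = c[0, 1] / c[1, 1]              # variance-minimizing coefficient
    return float(f.mean()), float((f - beta * z).mean())

plain, cv = cv_estimates(np.random.default_rng(2), 100_000)
# Both estimate exp(1/2) ~ 1.6487; the control-variate estimator is unbiased
# (E[Z] = 0) and has strictly lower variance since exp(Z) and Z are correlated.
```

A *sequential* control variate, as in the thesis, recomputes such corrections along the sequence of pricing problems; the variance-reduction mechanism is the same.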
Rabineau, Marina. "UN MODELE GEOMETRIQUE ET STRATIGRAPHIQUE DES SEQUENCES DE DEPÔTS QUATERNAIRES SUR LA MARGE DU GOLFE DU LION : ENREGISTREMENT DES CYCLES CLIMATIQUES DE 100 000 ANS." Phd thesis, Université Rennes 1, 2001. http://tel.archives-ouvertes.fr/tel-00136517.
The study area, roughly 40 x 40 km, lies at the transition between the outer shelf and the upper slope in the western part of the Gulf of Lion, off Languedoc, where the shelf reaches 70 km in width. The database consists of 1) a dense very-high-resolution (Sparker) seismic survey (line spacing of 400 to 1000 m) and 2) some ten cores up to 10 m long. The approach combines an analytical study of the geological data (interpretation of the seismic profiles and cores using seismic and sequence stratigraphy) with geometric and stratigraphic modelling used to characterise, quantify and simulate the recognised sedimentary sequences. This twofold analytical and modelling approach, carried out concurrently, allows going back and forth between the original data, the geological model, the parameter values and the stratigraphic simulations that test several geological scenarios.
The analysis of the 3D geometries brings out an elementary depositional motif formed by a pair of prisms: PI (with gently dipping clinoforms, <1°, deposited on the landward part of the shelf) and PII (with steeply dipping clinoforms of about 4°, deposited on the seaward part of the shelf). This fundamentally horizontal depositional motif is recurrent and serves to organise the seismic units into major depositional sequences. It records a 100,000-year (interglacial-glacial) glacio-eustatic cycle: prism PI corresponds to the prodeltaic deposits of "high to intermediate sea level"; prism PII to the littoral deposits of the lowest sea level at the glacial maximum of the cycle. This interpretation agrees with the one proposed by Aloïsi in 1986. In total, five sequences corresponding to the last five glacio-eustatic cycles are thus recorded on the western shelf of the Gulf of Lion. The base of the studied series goes back to 540,000 years (isotopic stage 12). The very short cycles (20,000 years or less) are not clearly expressed in the depositional geometries. Hydrodynamic conditions exert a control on the geometry of the depositional sequences that can blur or erase the imprint of the climatic cycles in the sedimentary record. The Aude and Hérault canyons are broadly inactive during sea-level highstands; during lowstands, their behaviour varies (transport, erosion and deposition).
The stratigraphic simulations and the reconstruction of the prism pairs, observed and exceptionally well preserved in this part of the Gulf of Lion, give a geological meaning to, and validate, the "simple" conversion of O18/O16 isotopic curves into eustatic curves. This study also provides the first quantification of the mean (total) subsidence rate of the Gulf of Lion shelf during the middle to upper Pleistocene (540 ka). The subsidence takes the form of a seaward tilting that reaches 250 m/Ma at 70 km from the present coastline (at the shelf edge). From the geometries visible on petroleum-industry seismic profiles, an identical subsidence rate was computed for the whole Plio-Quaternary; the subsidence rate has therefore been constant for 5.3 Ma. It does not create sequences by itself but allows the preservation, stacking and recognition of the successive glacio-eustatic sequences.
Voon, Lew Yan, and Lew Fock Chong. "Contribution au test intégré déterministe : structures de génération de vecteurs de test." Montpellier 2, 1992. http://www.theses.fr/1992MON20035.
Liang, Yan. "Mise en œuvre d'un simulateur en OCCAM pour la conception d'architectures parallèles à base d'une structure multiprocesseur hiérarchique." Compiègne, 1989. http://www.theses.fr/1989COMPD176.
Simulation has become an indispensable phase in the design of parallel processing systems, since it avoids building expensive prototypes. In this work, a parallel process-oriented simulator written in the OCCAM language has been developed. Our objective is to design a simulator suited to a network of transputers for prototyping parallel processing systems by directly connecting the serial transputer channels. As a simulation example, a parallel processor system (coprocessor) based on a hierarchical master-slave structure has been realized at the processor-memory-switch level. The performance analysis is obtained via two queueing models: the former as independent M/M/1 systems and the latter as an M/M/s system. The experimental performance is measured on independent tasks and on sequential tasks, respectively. The comparison of analytic and experimental results allows us to assess the advantages and limits of the coprocessor and supports its implementation.
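The two queueing models mentioned have closed-form metrics. A small worked example, with arrival and service rates invented for illustration:

```python
import math

def mm1_metrics(lam, mu):
    """M/M/1 queue: utilisation rho, mean number in system L, and mean
    response time W obtained from Little's law W = L / lambda."""
    rho = lam / mu
    L = rho / (1 - rho)
    return rho, L, L / lam

def mms_wait_prob(lam, mu, s):
    """Erlang-C probability that an arrival must wait in an M/M/s queue."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / s                       # per-server utilisation (must be < 1)
    num = a ** s / (math.factorial(s) * (1 - rho))
    den = sum(a ** k / math.factorial(k) for k in range(s)) + num
    return num / den

rho, L, W = mm1_metrics(lam=3.0, mu=4.0)     # rho = 0.75, L = 3.0, W = 1.0
p_wait = mms_wait_prob(lam=3.0, mu=2.0, s=2) # 4.5/7, about 0.643
```

Comparing such analytic figures with measured task throughput is precisely how the simulator's results are validated in this work.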
Robert, Yves. "Algorithmique parallèle : réseaux d'automates, architectures systoliques, machines SIMD et MIMD." Habilitation à diriger des recherches, Grenoble INPG, 1986. http://tel.archives-ouvertes.fr/tel-00319876.
Corbier, Franck. "Modélisation et émulation de la partie opérative pour la recette en plateforme d'équipements automatisés." Nancy 1, 1989. http://www.theses.fr/1989NAN10505.
Duff, Christopher. "Codage d'automates et théorie des cubes intersectants." Phd thesis, Grenoble INPG, 1991. http://tel.archives-ouvertes.fr/tel-00339918.
Hatoum, Rana. "Removal of micropollutants with activated sludge : proposition of a new kinetic model and special focus on hydrodynamic reactor configuration and local mixing." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0331.
Full textOne of the main challenges currently facing environmental conservation is to reduce the discharge of micropollutants (MPs) from wastewater treatment plants outflows (WWTPs). Biological aerobic processes are the most commonly used treatment, but conventional systems were not conceived with the objective of eliminating micropollutants (MPs). In this work, the removal enhancement of seven micropollutants compounds (pharmaceuticals and industrial chemical) which cover various degree of biodegradability was studied using activated sludge process. To achieve this goal, an experimental methodology followed by a simulation strategy were developed. The systematic study of the impact of WWTPs operational parameters, such as sludge retention time (SRT), hydraulic retention time (HRT) and biomass concentration (CTSS), on the MPs removal was experimentally quantified in nine Sequencing Batch Reactor (SBR) setups. The enhancement of the SRT increased the removal of all the MPs except for two macrolides antibiotics. Application of a higher HRT did also improve MPs removal as could be expected from the measured removal rate constants. More interestingly, our results indicated that, contrary to what is postulated, the removal rate of micropollutants is not proportional to biomass concentration, especially for high ones. Compared to removal rates at 3 to 5 gTSS L- 1 of biomass, an additional increase at 8 gTSS L-1 produced only an unexpected moderate effect. New kinetics of the power law have been studied to cover the full range of biomass concentration conditions in activated sludge systems. It takes into account the limitation of the transport of micropollutants to biomass when its concentration is high. In this case, it has also been shown that the transport of micropollutants is based on molecular diffusion. This process has been added to the proposed kinetic equation. A simulation program of a wastewater treatment plant (SBR and continuous) has been developed. 
The numerical results using the new kinetics corresponded well to the experimental results. The simulations showed that, on the one hand, improving local mixing greatly favors elimination; on the other hand, staging the reactor configuration so that its hydrodynamics approach a very long cascade of reactors also greatly improves the elimination of MPs. This can be achieved by building baffled reactors with a width of 2 m and an extended length of more than 200 m. Finally, the combination of these two local and global measures makes it possible to considerably increase the elimination of moderately and slowly biodegradable MPs.
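The sub-proportional effect of biomass described in this abstract can be illustrated with a minimal power-law rate sketch. This is purely hypothetical: the exponent, rate constant, and functional form are illustrative assumptions, not the thesis' actual kinetic equation.

```python
# Hypothetical power-law removal kinetic: an exponent n < 1 captures the
# observation that the removal rate grows sub-proportionally with biomass
# concentration, especially at high concentrations.
def removal_rate(c_mp, x_tss, k=0.1, n=0.6):
    """Removal rate of a micropollutant (illustrative form only).

    c_mp  : micropollutant concentration (ug/L)
    x_tss : biomass concentration (gTSS/L)
    k, n  : assumed rate constant and power-law exponent
    """
    return k * (x_tss ** n) * c_mp

# Doubling biomass from 4 to 8 gTSS/L multiplies the rate by 2**0.6 ≈ 1.52,
# not by 2, mirroring the moderate effect reported at 8 gTSS/L.
print(removal_rate(1.0, 8.0) / removal_rate(1.0, 4.0))
```

With n = 1 the classical proportional assumption is recovered, so the exponent is the single knob separating the two regimes.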
Bacchus, Alexandre. "Représentativité de la modélisation aux éléments finis pour le diagnostic de machines synchrones de grande puissance." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10014/document.
Full text
An identification method for rotor inter-turn short-circuits and static eccentricities of an operating turboalternator is developed. This approach provides the type of the machine's functional state as well as its severity and location. The electromotive force obtained from a radial flux sensor is used to identify the machine faults. Previous works have shown that learning methods are efficient at characterizing the fault of a machine precisely. Thus, a data set of fault signatures, i.e. a prototype matrix, is built from finite element simulations of the machine over a great number of functional states. The goal of this work is therefore to identify the class (fault) of experimental measurements using the simulated output. To do so, a small-scale alternator is modelled and simulation outputs are compared to the experimental measurements. Applying a supervised classification method chosen according to the shape of the data shows that good classification rates of 79% for short-circuits and 93% for eccentricities can be achieved thanks to specific features and treatments of the simulation output. An identification rule for the fault type is designed using a hierarchical classification approach. It achieves excellent results, since all experimental measurements are assigned to the right type of fault. Finally, the use of a finite state automaton further improves short-circuit identification by taking into account the temporal evolution of the machine's functional state.
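The core idea of classifying experimental measurements against simulated fault signatures can be sketched with a toy nearest-prototype classifier. The prototypes, feature values, and classifier choice below are invented for illustration; the thesis' actual features and supervised method are not specified in this abstract.

```python
import math

# Toy sketch: finite-element simulations provide one prototype feature
# vector per functional state; an experimental measurement is assigned
# the class of its nearest prototype in feature space.
prototypes = {
    "healthy":       [0.0, 0.0],  # hypothetical 2-D feature vectors
    "short-circuit": [1.0, 0.2],
    "eccentricity":  [0.1, 1.0],
}

def classify(features):
    """Assign a measurement to the closest simulated fault signature."""
    return min(prototypes,
               key=lambda cls: math.dist(features, prototypes[cls]))

print(classify([0.9, 0.3]))  # nearest to the short-circuit prototype
```

A hierarchical variant, as described in the abstract, would first decide the fault type and only then refine severity and location within that branch.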
Piteros, Panagiotis. "De l'utilisation de méta-modèles pour la modélisation et l'analyse de la réponse radar des forêts." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC029/document.
Full text
In this work, a new approach to conducting radar observations of forests is proposed. It combines statistical methods for sensitivity analysis and adaptive design of simulation experiments with a numerical code simulating the forest backscattering, in order to build an approximate model (metamodel) with a lower computational cost. The introduction of these mathematical tools aims to assist the design and execution of radar simulations as well as the organization and analysis of their results. On the one hand, sensitivity analysis techniques were applied to rank the input parameters by importance and to identify the most significant forest parameters as well as their effects on the radar signal. On the other hand, the construction of an adaptive metamodel accelerates the simulation model while preserving the physics of the phenomenon. The operational frame of this approximate model finally serves to introduce the cognitive radar principle into our strategy. In that case, a fast analysis of the received signal is necessary to design, in real time, the new signal to be emitted. In this way, the simulated radar observations take into account, in real time, the effect of the illuminated environment, thanks to faster and more focused simulations.
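The metamodel principle described here can be sketched in a few lines: sample an expensive simulator offline at a small design of experiments, then answer later queries with a cheap interpolating surrogate. The simulator stand-in and design points below are invented for illustration; the thesis builds its metamodel for a forest-backscatter code.

```python
from bisect import bisect_left

def expensive_simulator(x):
    # stand-in for a costly physics code (hypothetical)
    return x * x

xs = [0.0, 0.5, 1.0, 1.5, 2.0]             # design of experiments
ys = [expensive_simulator(x) for x in xs]   # run once, offline

def metamodel(x):
    """Piecewise-linear surrogate: cheap to evaluate at query time."""
    i = min(max(bisect_left(xs, x), 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(metamodel(0.75))  # cheap approximation of expensive_simulator(0.75)
```

An adaptive scheme, as in the thesis, would add new design points where the surrogate's error is largest instead of fixing the grid in advance.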
Brégère, Margaux. "Stochastic bandit algorithms for demand side management Simulating Tariff Impact in Electrical Energy Consumption Profiles with Conditional Variational Autoencoders Online Hierarchical Forecasting for Power Consumption Data Target Tracking for Contextual Bandits : Application to Demand Side Management." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM022.
Full text
As electricity is hard to store, the balance between production and consumption must be strictly maintained. With the integration of intermittent renewable energies into the production mix, managing this balance becomes complex. At the same time, the deployment of smart meters makes demand response possible. More precisely, sending signals - such as changes in the price of electricity - would encourage users to modulate their consumption according to the production of electricity. The algorithms used to choose these signals have to learn consumer reactions and, at the same time, optimize them (exploration-exploitation trade-off). Our approach is based on bandit theory and formalizes this sequential learning problem. We propose a first algorithm to control the electrical demand of a homogeneous population of consumers and prove a T^(2/3) upper bound on its regret. Experiments on a real data set in which price incentives were offered illustrate these theoretical results. As a "full information" dataset is required to test bandit algorithms, a consumption data generator based on variational autoencoders is built. In order to drop the population homogeneity assumption, we propose an approach to cluster households according to their consumption profiles. These different works are finally combined to propose and test a bandit algorithm for personalized demand side management.
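The exploration-exploitation trade-off at the heart of this abstract can be illustrated with the simplest bandit strategy, epsilon-greedy. This is not the thesis' algorithm (which targets demand control with a T^(2/3) regret bound); the arms, rewards, and parameters below are illustrative assumptions.

```python
import random

# Epsilon-greedy bandit sketch: arms stand for candidate price signals,
# rewards for the observed consumption response to each signal.
def epsilon_greedy(reward_fns, n_rounds=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    n_arms = len(reward_fns)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for _ in range(n_rounds):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)                      # explore
        else:
            arm = max(range(n_arms), key=means.__getitem__)  # exploit
        r = reward_fns[arm](rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]         # running mean
    return max(range(n_arms), key=means.__getitem__)

# Two hypothetical tariff signals with Bernoulli consumer responses
arms = [lambda rng: float(rng.random() < 0.3),
        lambda rng: float(rng.random() < 0.6)]
print(epsilon_greedy(arms))  # settles on the better signal (arm 1)
```

A fixed eps keeps exploring forever; schedules that shrink eps over time are what regret analyses such as the T^(2/3) bound formalize.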
Rohmer, Tom. "Deux tests de détection de rupture dans la copule d'observations multivariées." Thèse, Université de Sherbrooke, 2014. http://hdl.handle.net/11143/5933.
Full text
Ritter, G. "Nouvelle méthodologie de simulation symbolique, permettant la vérification des circuits séquentiels décrits à des niveaux d'abstraction différents." Phd thesis, 2001. http://tel.archives-ouvertes.fr/tel-00163429.
Full text
Fiset, Stéphanie. "Traitement séquentiels et parallèles dans la dyslexie lettre-par-lettre : étude de cas et simulation chez le lecteur normal." Thèse, 2004. http://hdl.handle.net/1866/14724.
Full text
Morin, Léonard Ryo. "A dynamic sequential route choice model for micro-simulation." Thèse, 2012. http://hdl.handle.net/1866/9829.
Full text
In transportation modeling, a route choice model describes the selection of a route between a given origin and a given destination. More specifically, it consists of determining the sequence of arcs leading to the destination in a network composed of vertices and arcs, according to some selection criteria. We propose a novel route choice model based on approximate dynamic programming. The technique is applied sequentially: every time a user reaches an intersection, he/she is assumed to consider the utility of a certain number of future arcs, followed by an approximation of the rest of the path leading to the destination. The route choice model is implemented as a component of a traffic simulation model in a discrete event framework. We conduct a numerical experiment on a real traffic network model in order to analyze its performance.
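The sequential arc-selection idea can be sketched on a toy network: at each intersection, score each outgoing arc by its own cost plus a cheap approximation of the cost-to-go from the arc's head. The network, costs, and cost-to-go values below are invented for illustration and are not the thesis' calibrated model.

```python
graph = {                      # node -> {neighbor: arc cost} (hypothetical)
    "A": {"B": 1.0, "C": 4.0},
    "B": {"D": 5.0},
    "C": {"D": 1.0},
    "D": {},
}
# Approximate cost-to-go to destination "D" (assumed precomputed)
cost_to_go = {"A": 5.0, "B": 5.0, "C": 1.0, "D": 0.0}

def next_arc(node):
    """Pick the outgoing arc minimizing arc cost + approximate cost-to-go."""
    return min(graph[node],
               key=lambda nxt: graph[node][nxt] + cost_to_go[nxt])

def route(origin, destination):
    """Apply the one-step rule sequentially until the destination."""
    path, node = [origin], origin
    while node != destination:
        node = next_arc(node)
        path.append(node)
    return path

print(route("A", "D"))  # -> ['A', 'C', 'D']
```

At "A" the myopic arc (cost 1.0 to "B") loses to "C" once the approximate remainder is added, which is exactly what the lookahead-plus-approximation scheme is meant to capture.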
Lavoie, Patrick. "Contribution d'un débriefing au jugement clinique d'étudiants infirmiers lors de simulations de détérioration du patient." Thèse, 2016. http://hdl.handle.net/1866/18588.
Full text