Dissertations / Theses on the topic 'Reconstruction et simulations de flux'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Reconstruction et simulations de flux.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.
Negre, Delphine. "Rationalisation de l’Accès aux Produits Naturels Fongiques par une Approche OSMAC in silico : Cas d’étude avec la modélisation du métabolisme de Penicillium rubens." Electronic Thesis or Diss., Nantes Université, 2024. http://www.theses.fr/2024NANU4038.
Given the pressing issue of increasing antibiotic resistance threatening public health, the search for new biologically active molecules is urgent. Filamentous fungi are characterised by their ability to synthesise a wide range of natural products, driven by biosynthetic gene clusters (BGCs) that orchestrate the production of specialised metabolites. However, many products derived from these BGCs remain uncharacterised, and their chemodiversity is underexplored because their full potential cannot be activated in laboratory settings. The OSMAC (One Strain Many Compounds) approach seeks to harness this potential through variations in culture conditions. Nevertheless, this method remains complex and costly owing to its randomness and the vast number of experiments required. Optimising these processes therefore calls for the integration of more rational and efficient strategies. Using systems biology approaches, genome-scale metabolic networks (GSMNs) provide detailed modelling of metabolic pathways, the enzymes involved and the associated genes, offering a precise overview of metabolism. In this context, we propose an alternative strategy: in silico OSMAC. By reconstructing an updated GSMN for Penicillium rubens, we studied its metabolic responses under various nutritional scenarios. This modelling enabled us to assess the influence of different carbon and nitrogen sources on growth and on the production of specialised metabolites, thereby opening new prospects for optimising the production of natural products.
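The nutrient-source screening described in this abstract is the kind of analysis typically carried out with flux balance analysis (FBA) on a genome-scale metabolic network. As a purely illustrative sketch (not code from the thesis), here is how such an in silico OSMAC screen might look with the COBRApy toolbox; the model file name and the BiGG-style exchange-reaction identifiers are hypothetical, and the model's objective is assumed to be a biomass reaction.

```python
import cobra

# Hypothetical reconstructed GSMN of Penicillium rubens (file name assumed)
model = cobra.io.read_sbml_model("p_rubens_gsmn.xml")

# Assumed BiGG-style exchange reactions for three alternative carbon sources
carbon_sources = ["EX_glc__D_e", "EX_sucr_e", "EX_glyc_e"]

for source in carbon_sources:
    with model:  # the context manager reverts all bound changes on exit
        for rxn_id in carbon_sources:
            model.reactions.get_by_id(rxn_id).lower_bound = 0.0   # close all uptakes
        model.reactions.get_by_id(source).lower_bound = -10.0     # open one source
        solution = model.optimize()  # FBA: maximise the (assumed) biomass objective
        print(source, round(solution.objective_value, 3))
```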
Delestre, Olivier. "Simulation du ruissellement d'eau de pluie sur des surfaces agricoles." Phd thesis, Université d'Orléans, 2010. http://tel.archives-ouvertes.fr/tel-00587197.
Ntemos, George. "GPU-accelerated high-order scale-resolving simulations using the flux reconstruction approach." Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/59135.
Full textRidel, Mélissa. "Reconstruction du flux d'énergie et recherche de squarks et gluinos dans l'expérience D0." Phd thesis, Université Paris Sud - Paris XI, 2002. http://tel.archives-ouvertes.fr/tel-00002927.
A simulation of the readout and calibration chains of every calorimeter channel has been developed. Its output depends on 8 characteristic quantities, which were extracted by signal processing of time-domain reflectometry measurements. It will make it possible to define a calibration strategy for the calorimeter.
A clustering of the calorimetric energy deposits (cellNN) has been developed, based on cells rather than towers and exploiting the calorimeter granularity as fully as possible, notably by starting in the third electromagnetic layer, which is four times more granular than the others. The longitudinal information separates superimposed electromagnetic and hadronic particles. All the elements required for the individual reconstruction of showers are thus in place.
The energy flow then combines the cellNN clusters with the tracks reconstructed in the central cavity, keeping the best energy measurement and thereby improving the reconstruction of the energy flow of each event.
The efficiency of the current calorimeter triggers has been determined and used in a search for squarks and gluinos based on Monte Carlo events within the mSUGRA framework. A lower limit on the squark and gluino masses that DØ will reach with 100 pb-1 of luminosity is predicted using standard reconstruction tools; it can be improved by using the energy flow.
Ridel, Mélissa. "Reconstruction du flux d'énergie et recherche de squarks et gluinos dans l'expérience DØ." Paris 11, 2002. http://www.theses.fr/2002PA112101.
The DØ experiment is located at the Fermi National Accelerator Laboratory on the Tevatron proton-antiproton collider. Run II started in March 2001 after 5 years of shutdown and will allow DØ to extend its reach in searches for squarks and gluinos, particles predicted by supersymmetry. In this work I focused on decays that lead to a signature with jets and missing transverse energy. Before data taking started, I studied both software and hardware ways to improve the energy measurement, which is crucial for jets and for missing transverse energy. For each calorimeter channel, the physics and calibration signals have been simulated based on a database of the 8 parameters that describe each channel. The parameters were extracted from time reflectometry measurements. The calibration strategy can be defined using this simulation. Energy deposits in the calorimeter have been clustered with cellNN, at the cell level instead of the tower level. Efforts have been made to take advantage of the calorimeter granularity and to aim at the reconstruction of individual particle showers. CellNN starts from the third floor, which has a quadruple granularity compared to the other floors. The longitudinal information has been used to detect overlaps between electromagnetic and hadronic showers. Clusters and reconstructed tracks from the central detectors are then combined and their energies compared; the better measurement is kept. This procedure improves the reconstruction of the energy flow of each event. The efficiency of the current calorimeter triggers has been determined and used to perform a Monte Carlo search for squarks and gluinos in the mSUGRA framework. The lower bound that DØ will be able to put on squark and gluino masses with 100 pb^(-1) of integrated luminosity has been predicted. Using the energy flow instead of the standard reconstruction tools will improve this lower limit.
Marchand, Mathieu. "Flux financiers et endettement de l'État : simulations par modèle d'équilibre général calculable (MEGC)." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/24520/24520.pdf.
Full textDécossin, Étienne. "Ébullition et assèchement dans un lit de particules avec production interne de chaleur : premières expériences et simulations numériques en situation multidimensionnelle." Toulouse, INPT, 2000. http://www.theses.fr/2000INPT004H.
Didorally, Sheddia. "Prévision des flux de chaleur turbulents et pariétaux par des simulations instationnaires pour des écoulements turbulents chauffés." Thesis, Toulouse, ISAE, 2014. http://www.theses.fr/2014ESAE0015/document.
The improvement of aerothermal predictions is a major concern for aeronautic manufacturers. In line with this issue, SAS approaches are assessed on the prediction of wall and turbulent heat fluxes for heated turbulent flows. This study also aims at evaluating these advanced URANS methods with regard to DRSM models and to hybrid RANS/LES approaches such as ZDES. Firstly, we proposed to combine the SAS approach with a DRSM model in order to better reproduce both resolved and modelled Reynolds stresses. This new model, called SAS-DRSM, was implemented in the ONERA Navier-Stokes code elsA. Unsteady simulations of two heated turbulent flows encountered in an aircraft engine compartment were then performed to evaluate all the SAS models available in the code. These numerical studies demonstrated that SAS approaches improve the prediction of the flows compared to classical URANS models. They lead to fully 3D flows with many turbulent structures. These structures favour turbulent mixing and thus induce a better prediction of the wall heat fluxes. Moreover, the numerical simulations showed that SAS methods are more accurate than classical URANS models without significantly increasing calculation costs. SAS approaches cannot resolve the smallest turbulent structures, in contrast to ZDES, which provides better predictions. Finally, the investigation of the turbulent heat flux suggested that the constant turbulent Prandtl number assumption, characteristic of classical URANS models, may not be valid in some regions of the flow.
Frasson, Thomas. "Flux de chaleur hétérogène dans des simulations de convection mantellique : impact sur la géodynamo et les inversions magnétiques." Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALU027.
The Earth's magnetic field is generated within the Earth's core, where convective motions of the electrically conducting liquid iron result in a dynamo action. This process, called the geodynamo, has been maintaining a magnetic field for billions of years. Paleomagnetic evidence shows that the behaviour of the geodynamo has changed during geological times. These behaviour changes are visible through variations in the strength and stability of the magnetic dipole. Variations in the heat flux at the core-mantle boundary (CMB) due to mantle convection have been suggested as one possible mechanism capable of driving such a change of behaviour. Numerical models of mantle convection and of the geodynamo have made significant improvements in recent years. Coupling mantle convection models and geodynamo models can give insights into how the geodynamo reacts to variations in the CMB heat flux. Our current understanding of this thermal coupling between the mantle and the core is nonetheless restricted by limitations in numerical models on both the mantle and core side. On the mantle side, the orientation of the mantle with respect to the spin axis has to be better constrained in order to exploit recent simulations reproducing about 1 Gyr of mantle convection. Constraining this orientation requires aligning the maximum inertia axis of the mantle with the spin axis of the Earth, causing solid-body rotations of the mantle called true polar wander (TPW). On the core side, numerical simulations are still far from the parameter regime of the Earth, and it is not clear whether the reversing mechanism observed in these models is relevant for the Earth's core. This work aims at acquiring a more complete understanding of how lateral heterogeneities of the CMB heat flux affect the geodynamo. In a first part, we explore the impact of TPW on the CMB heat flux using two recently published mantle convection models: one model driven by a plate reconstruction and a second that self-consistently produces a plate-like behaviour. We compute the geoid in both models to correct for TPW. An alternative to TPW correction is used for the plate-driven model by simply repositioning the model in the original paleomagnetic reference frame of the plate reconstruction. We find that in the plate-driven mantle convection model, the maximum inertia axis does not show a long-term consistency with the position of the magnetic dipole inferred from paleomagnetism. TPW plays an important role in redistributing the CMB heat flux, notably at short time scales (≤ 10 Myr). Those rapid variations modify the latitudinal distribution of the CMB heat flux. A principal component analysis (PCA) is computed to obtain the dominant CMB heat flux patterns in the models.
In a second part, we study the impact of heterogeneous heat flux conditions at the top of the core in geodynamo models that expand towards more Earth-like parameter regimes than previously done. We especially focus on the heat flux distribution between the poles and the equator. More complex patterns extracted from the mantle convection models are also used. We show that an equatorial cooling of the core is the most efficient at destabilizing the magnetic dipole, while a polar cooling of the core tends to stabilize the dipole. The observed effects of heterogeneous heat flux patterns are explained through the compatibility of the thermal winds generated by the heat flux pattern with zonal flows. Notably, heat flux patterns have a more moderate effect when westward zonal flows are strong, with a destabilization of the dipole only for unrealistically large amplitudes. A parameter controlling the strength and stability of the magnetic dipole that is consistent with the reversing behaviour of the geodynamo is suggested.
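For readers unfamiliar with the PCA step mentioned in this abstract, a minimal sketch (our illustration, with an assumed array layout and a hypothetical input file) of extracting dominant patterns from a time series of flattened CMB heat-flux maps could look like this:

```python
import numpy as np

# q has shape (n_times, n_grid): one flattened CMB heat-flux map per time step
q = np.load("cmb_heat_flux_maps.npy")  # hypothetical input file

anomaly = q - q.mean(axis=0)              # remove the time-mean pattern
# SVD of the anomaly matrix yields the principal components directly:
u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
patterns = vt                              # rows: dominant spatial patterns
explained = s**2 / np.sum(s**2)            # fraction of variance per pattern
print("leading pattern explains", explained[0])
```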
Solminihac, Florence de. "Effets de perturbations magnétiques sur la dynamique de la barrière de transport dans un Tokamak : modélisation et simulations numériques." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4743/document.
In this PhD thesis we study the impact of resonant magnetic perturbations on the transport barrier dynamics in a tokamak. To this end we have performed three-dimensional turbulence simulations in the edge plasma of a tokamak, which reproduced the experimental results observed in different tokamaks. In the improved confinement regime (H mode), the transport barrier is not stable: it undergoes relaxation oscillations, which share common features with the "Edge Localized Modes" (ELMs). These ELMs have both advantages and drawbacks. On the one hand, they help to expel the impurities present in the plasma core. On the other hand, the thermal load induced on the wall during an ELM can damage the first-wall materials. For this reason, they must be controlled. This PhD thesis is part of the framework of the ITER project, currently under construction in France. On ITER, ELM control will be compulsory because of the quantity of energy released. Among the different ways of controlling ELMs, resonant magnetic perturbations (RMPs) seem promising. These resonant magnetic perturbations are created by external coils. We consider the TEXTOR tokamak case and two configurations for the external coils: first, a resonant magnetic perturbation with several harmonics, which produces a stochastic zone at the plasma edge where the magnetic island chains overlap; then, a resonant magnetic perturbation with a single harmonic, which therefore creates a single magnetic island chain. In this PhD thesis, we focus on the non-axisymmetric equilibrium created in the plasma by the resonant magnetic perturbation.
Breton, Catherine. "Reconstruction de l’histoire de l’olivier (Olea europaea subsp. Europaea) et de son processus de domestication en région méditerranéenne, étudiés sur des bases moléculaires." Aix-Marseille 3, 2006. http://www.theses.fr/2006AIX30066.
How has the olive been domesticated from its wild form, the oleaster? Molecular diversity at 14 nuclear and 2 chloroplast loci was investigated for comparison amongst a set of more than 1500 individuals from the entire Mediterranean basin. To document the history of the two taxa, a Bayesian method was used to reconstruct the ancestral lines. Diversity analysis with classic genetic tools revealed tendencies but did not provide clear-cut conclusions, because the individuals fell along a continuum. Indeed, gene flow from cultivars that were displaced from East to West during human migrations has disturbed the genetic structure on both the continent and various islands. The Bayesian method enabled us to reconstruct the ancestral lineages a posteriori, and so to screen the trees individually in order to reveal those with only a single ancestor in one lineage, rather than "hybrids" with several ancestors in several lineages. We determined 11 ancestral oleaster lineages that originate from ten geographical areas, some of which are recognised as refuge zones for plant and animal species. Cultivars studied separately from oleasters proved to possess 9 ancestral lineages. Cultivars from each lineage delimit a geographic origin corresponding to a domestication event; while some of these have already been described (Israel, Spain), others were unknown (Corsica, Cyprus, France, and Tunisia). The Bayesian method was therefore complementary to the diversity analyses in assigning and admixing each individual to one or several ancestral lineages, and the combination of methods contributed to more robust results. The olive has therefore undergone multilocal domestication, which is incomplete, as many trees remain unclassifiable.
Yureidini, Ahmed. "Reconstruction robuste des vaisseaux sanguins pour les simulations médicales interactives à partir de données patients." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2014. http://tel.archives-ouvertes.fr/tel-01010973.
Stute, Simon. "Modélisation avancée en simulations Monte Carlo de tomographie par émission de positons pour l'amélioration de la reconstruction et de la quantification." Paris 11, 2010. http://www.theses.fr/2010PA112264.
Positron emission tomography (PET) is a medical imaging technique that plays a major role in cancer diagnosis and cancer patient monitoring. However, PET images suffer from a modest spatial resolution and high noise. As a result, there is no consensus on how the metabolically active tumour volume and the tumour uptake should be characterized. In the meantime, research groups keep producing new methods for tumour characterization based on PET images, which need to be assessed in realistic conditions. A Monte Carlo simulation based method has been developed to produce PET images of cancer patients that are indistinguishable from clinical images, and for which all parameters are known. Fourteen quantification methods were compared in realistic conditions using a group of patient data simulated using this method. In addition, the proposed method was extended to simulate serial PET scans in the context of patient monitoring, including modelling of tumour changes and variability over time of non-tumour physiological uptake. In a second part of the work, Monte Carlo simulations were used to study the detection probability inside the crystals of the tomograph. A model of the crystal response was derived and included in the system matrix involved in tomographic reconstruction. The resulting reconstruction method was compared with other sophisticated methods proposed in the literature. Using simulated data, we demonstrated that the proposed method improved the noise/resolution trade-off over equivalent approaches. We illustrated the robustness of the method using clinical data. The proposed method might lead to an improved accuracy in tumour characterization from PET images.
Moriceau, Brivaëla. "Étude de la dissolution de la silice biogénique des agrégats : utilisation dans la reconstruction des flux de sédimentation de la silice biogénique et du carbone dans la colonne d'eau." Brest, 2005. http://www.theses.fr/2005BRES2034.
The dramatic increase of the carbon concentration in the atmosphere is limited by the action of some oceanic areas that act like a "carbon sink", mainly through the biological pump. Because of the strong contribution of diatoms to primary production and their ability to aggregate, we study the role of diatoms and the impact of aggregation on the biological pump. Laboratory experiments determined that the BSiO2 dissolution rate of aggregated cells is lower than that of freely suspended cells. The aggregate model used to better understand the internal parameters that cause this decrease confirmed that dissolution is slower in aggregates, and further showed that DSi diffusion in aggregates is slower than in seawater. The decrease of the BSiO2 dissolution rate is attributed to the high DSi concentrations measured inside aggregates and to the higher viability of aggregated cells. The experimental results were then combined with in situ measurements of BSiO2 fluxes in nine areas of the ocean, in a simple model that reconstructs BSiO2 fluxes in the water column. This model allows the calculation of the repartition of BSiO2 between large particles and freely suspended cells and gives a better understanding of particle dynamics. The BSiO2 fluxes reconstructed using the model were then associated with a relation between Si/C ratios and water-column depth to determine the real importance of diatoms in the biological carbon pump. We then calculated the carbon fluxes at the maximum depth of the mixed layer. Using Si fluxes as a proxy of carbon fluxes avoids difficulties due to the complexity of carbon chemistry. The role of diatoms in the export and transfer of carbon strongly depends on the way the export out of the surface layer is calculated.
Garreau, Morgane. "Simulations hémodynamiques pour l'IRM : contrôle qualité, optimisation et intégration à la pratique clinique." Electronic Thesis or Diss., Université de Montpellier (2022-....), 2023. http://www.theses.fr/2023UMONS040.
The study of hemodynamics, i.e. the dynamics of blood flow, is considered by the medical community as an essential biomarker to characterize the onset and development of cardiovascular pathologies. Historically, magnetic resonance imaging (MRI), a non-invasive and non-ionizing technique, has been used to reconstruct morphological images of biological tissues. Recent progress has made it possible to access the temporal evolution of the blood velocity field in the three spatial directions. This technique, known as 4D flow MRI, is still little used in clinical practice because of its low spatiotemporal resolution and long scan time. This thesis aims at studying how the 4D flow MRI sequence performs. To begin with, the impact of accelerated sequences (GRAPPA, compressed sensing) on reconstructed velocity fields is studied in a framework combining experimental measurements in a flow phantom and computational fluid dynamics (CFD) simulations. It is shown that the highly accelerated sequence with compressed sensing is in good agreement with numerical simulation as long as appropriate corrections are applied, namely with respect to eddy currents. Then, the impact of a sequence parameter, partial echo, is investigated. The study is conducted with a methodology coupling the simulation of the MR acquisition process with CFD, which allows the reconstruction of synthetic MR images (SMRI). This configuration is free from experimental errors and allows a focus on the errors intrinsic to the MRI process. Two realistic manufacturer sequences, without and with partial echo, are simulated for two types of flow in a numerical flow phantom. For both flows, the sequence with partial echo gives overall better results. This suggests that the mitigation of displacement artifacts made possible by the partial echo has a greater impact than the reduced MR signal it induces. Furthermore, the coupled MRI-CFD simulation appears to be a tool of interest for sequence design and optimization, and could be extended to other types of MR sequences.
Tendeng, Ndéye Léna. "Étude de modèles de transmission de la Schistosomiase : analyse mathématique, reconstruction des variables d'état et estimation des paramètres." Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0110/document.
The aim of this thesis is the mathematical analysis and parameter estimation of some metapopulation models for bilharzia transmission. We explain how the metapopulation models are built and give a full analysis of their stability. We compute the basic reproduction number R0 and show that if R0 is less than 1, then the disease-free equilibrium (DFE) is globally asymptotically stable. In case R0 is higher than 1, we prove the existence and uniqueness of an endemic equilibrium which is globally asymptotically stable. Finally, we suggest methods for the estimation of the states and parameters of the models: we build a numerical observer using Moving Horizon State Estimation (MHSE) and an analytic one by the high-gain observer method. These methods are applied to the Macdonald transmission model of bilharzia.
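As an illustration of the threshold result stated in this abstract (not the thesis' own code), the basic reproduction number of a compartmental transmission model can be computed as the spectral radius of the next-generation matrix F V^{-1}; the 2x2 matrices below are hypothetical placeholders standing in for a human-snail cross-infection model.

```python
import numpy as np

def basic_reproduction_number(F, V):
    """R0 = spectral radius of the next-generation matrix F @ inv(V).

    F: new-infection matrix, V: transition matrix, both linearized
    at the disease-free equilibrium."""
    ngm = F @ np.linalg.inv(V)
    return max(abs(np.linalg.eigvals(ngm)))

# Illustrative 2x2 example (hypothetical parameter values):
F = np.array([[0.0, 0.8],
              [0.5, 0.0]])   # cross-infection: humans <-> snails
V = np.diag([0.4, 0.6])      # removal/recovery rates
R0 = basic_reproduction_number(F, V)
print("DFE globally stable" if R0 < 1 else "endemic equilibrium exists", R0)
```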
Gagnon, Sandra. "Pseudomonas aeruginosa souche LESB58 : Étude préliminaire pour la reconstruction métabolique in silico et analyse de la distribution de flux métaboliques à l'état stationnaire." Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/28835/28835.pdf.
Briton, Jean-Philippe. "Simulations numériques de la diffusion multiple de la lumière par une méthode de Monte-Carlo et applications." Rouen, 1989. http://www.theses.fr/1989ROUES040.
Lachkar, Zouhair. "Rôle des tourbillons de méso-échelle océaniques dans la distribution et les flux air-mer de CO2 anthropique à l'échelle globale." Paris 6, 2007. http://www.theses.fr/2007PA066036.
Tendeng, Léna. "Etude de modèles de transmission de la Schistosomiase: Analyse mathématique, reconstruction des variables d'état et estimation des paramètres." Phd thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00843687.
Benoit, Didier. "Conception, reconstruction et évaluation d'une géométrie de collimation multi-focale en tomographie d'émission monophotonique préclinique." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00949951.
Full textTendeng, Ndéye Léna. "Étude de modèles de transmission de la Schistosomiase : analyse mathématique, reconstruction des variables d'état et estimation des paramètres." Electronic Thesis or Diss., Université de Lorraine, 2013. http://www.theses.fr/2013LORR0110.
Full textThe aim oh this thesis is the mathematical analysis and the estimation of the parameters of some metapopulation models for bilharzia transmission. We explain how the metapopulation models are built and give a full analysis of their stability. We compute the basic reproduction number R0. We show that if R0 is less than 1 then the Disease Free Equilibrium(DFE) is globally asymptotically stable. In case R0 is higher than 1, we prove the existence and the uniqueness of an endemic equilibrium which is globally asymptotically stable. At last,we suggest methods for the estimation of the states and the parameters for models. We build a numerical observer using the Moving Horizon State Estimation(MHSE) and an analitic one by the High Gain observer method. Applications of thes methods will be done on the Macdonald transmission model of bilharzia
Gasnault, Olivier. "MESURES DE LA COMPOSITION DES SURFACES PLANETAIRES PAR SPECTROMETRIE GAMMA ET NEUTRONIQUE -- Etudes préparatoires pour Mars et pour la Lune par simulations numériques --." Phd thesis, Université Paul Sabatier - Toulouse III, 1999. http://tel.archives-ouvertes.fr/tel-00602978.
Ngadjeu Djomzoue, Alain narcisse. "Etude des effets de gaine induites par une antenne de chauffage à la fréquence cyclotronique ionique (FCI, 30-80 MHz) et de leur impact sur les mesures par sondes dans les plasmas de fusion." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10118/document.
This work investigates the problem of probe measurements in an RF environment. DC currents flowing along magnetic field lines connected to powered ICRF antennas have been observed experimentally: a negative (i.e. net electron) current is collected on the powered ICRF antenna structure, while a positive (i.e. net ion) current is collected by magnetically connected Langmuir probes. An asymmetric model based upon a double-probe configuration was developed. The ICRF near-field effect is mimicked by a driven RF electrode at one extremity of an "active" open magnetic flux tube, where a purely sinusoidal potential is imposed. The other connection point is maintained at ground potential to model a collecting probe. This "active" flux tube can exchange transverse RF currents with surrounding "passive" tubes, whose extremities are grounded. With simple assumptions, an analytical solution is obtained. We can thus explain how DC currents are produced from RF sheaths. This model also makes it possible to model the DC current-DC voltage characteristics of a probe in the presence of RF, and thus to evaluate some plasma properties; in this case the electrode at ground potential (the probe) is biased at a given potential. The analytical results hold within certain limits.
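One step that may help the reader follow how RF sheaths rectify into DC currents: for a probe characteristic with a purely sinusoidal sheath voltage of amplitude V_RF, averaging the exponential electron branch over one RF period yields the modified Bessel function I_0 (a textbook-style result given here as our hedged reading, not a formula quoted from the thesis):

```latex
\langle I(V)\rangle = I_{\mathrm{sat}}\left[1 - I_0\!\left(\frac{V_{\mathrm{RF}}}{T_e}\right)
\exp\!\left(\frac{V - V_f}{T_e}\right)\right],
\qquad
V_f' = V_f - T_e\,\ln I_0\!\left(\frac{V_{\mathrm{RF}}}{T_e}\right)
```

with T_e in eV: the apparent floating potential of a magnetically connected probe is shifted downward by the RF, which is one way net DC currents can arise in an asymmetric double-probe circuit.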
Aubree, Flora. "Adaptation dans un monde en mouvement - adaptation des communautés et relations biodiversité-fonctionnement des écosystèmes, hétérogénéité spatiale et évolution de la tolérance au stress, migration pulsée et adaptation locale." Electronic Thesis or Diss., Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ6023.
The world is changing at an unprecedented rate in many interconnected aspects, and ecosystems are primarily concerned. The current shift in environmental conditions is accompanied by an increase in the temporal variability of environmental processes, which is also driven by anthropogenic activities. This work is part of the effort to understand how variability in key environmental processes impacts ecosystem composition and ecological and evolutionary functioning at different scales. The focus is in particular on the interplay between such variability and the process of adaptation, which is a key aspect of ecosystem dynamics. Adaptation is integral to the functioning of ecosystems, yet it is still relatively little considered. In this thesis, three biological scales are considered: the scale of the community, the scale of the species, and the scale of populations. A theoretical modeling approach is used to introduce some aspects of variability and investigate how ecological and evolutionary dynamics are impacted. At the community scale, the impact that changes in the species co-adaptation level may have on some biodiversity-ecosystem functioning (BEF) relationships (diversity-productivity, diversity-stability and diversity-response to invasion relationships) is questioned. Random and co-adapted communities are compared using adaptive dynamics methods. Results show that species co-adaptation impacts most BEF relationships, sometimes inverting the slope of the relationship. At the species scale, the evolution of stress tolerance under a tolerance-fecundity trade-off model is explored, using adaptive dynamics as well. The evolutionary outcomes are determined under different trade-offs and different stress distributions. The most critical parameters in determining the evolutionary outcomes (ESS trait value, branching) are highlighted, and they show that stress-level heterogeneity is more critical than the average stress level. At the population scale, gene flow between sub-populations of the same species is an important determinant of evolutionary dynamics. The impact that temporally variable migration patterns have on gene flow and local adaptation is questioned using both mathematical analyses and stochastic simulations of a mainland-island model (a sketch of this scenario is given below). In this model, migration occurs as recurrent "pulses". This migration pulsedness is found to not only decrease, but also increase, the effective migration rate, depending on the type of selection. Overall, migration pulsedness favours the fixation of deleterious alleles and increases maladaptation. Results also suggest that pulsed migration may leave a detectable signature across genomes. To conclude, these results are put into perspective, and elements are proposed for possible tests of the predictions with observational data. Some practical consequences they may have for ecosystem management and biological conservation are also discussed.
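As referenced above, a minimal stochastic sketch of the pulsed mainland-island scenario (our illustration; all parameter values are arbitrary) can make the comparison between continuous and pulsed migration concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

def island_allele_freq(N=1000, m=0.01, pulse_every=1, generations=5000,
                       p0=0.0, p_mainland=1.0):
    """Wright-Fisher island receiving mainland migrants.

    pulse_every=1 gives continuous migration at rate m; larger values send
    the same average number of migrants in recurrent pulses (m*pulse_every
    must stay below 1)."""
    p = p0
    for g in range(generations):
        m_eff = m * pulse_every if g % pulse_every == 0 else 0.0
        p_star = (1 - m_eff) * p + m_eff * p_mainland  # migration step
        p = rng.binomial(N, p_star) / N                # drift step
    return p

print("continuous:", island_allele_freq(pulse_every=1))
print("pulsed x50:", island_allele_freq(pulse_every=50))
```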
Moriceau, Brivaëla. "Etude de la dissolution de la silice biogénique des agrégats : Utilisation dans la reconstruction des flux de sédimentation de la silice biogénique et du carbone dans la colonne d'eau." Phd thesis, Université de Bretagne occidentale - Brest, 2005. http://tel.archives-ouvertes.fr/tel-00353481.
CO2 exchanges between the ocean and the atmosphere are governed by physicochemical laws and respond to biological demands. Physicochemical processes drive the carbon content of the ocean and the atmosphere towards an equilibrium that varies with temperature and with the mixing surface, which is linked to wind intensity. Biological processes also take part in the carbon exchanges between ocean and atmosphere. The oceanic surface layer remains undersaturated in carbon because algal blooms consume part of the dissolved carbon through photosynthesis. Part of this biomass sinks, carrying carbon towards the deep ocean layers. At the end of this sedimentation, part of the carbon is sequestered in deep waters and on average 0.3% of the carbon produced at the surface is incorporated into the sediments. This is the principle of the biological pump, which increases the transfer of CO2 towards the deep ocean.
Some unicellular algae promote carbon sedimentation more than others. These microalgae can be ballasted by minerals or incorporated into larger, fast-sinking particles. There are two types of biogenic ballast: biogenic silica (BSiO2), mainly formed by diatoms, and calcium carbonate (CaCO3), formed mostly by coccolithophorids. The dominance of oceanic primary production by diatoms, their ability to join large particles and their position at the base of a healthy food chain all seem to make diatoms the main contributor to the biological carbon pump. This contradicts recent analyses of databases of particle fluxes in the ocean, which attribute this transfer to coccolithophorids. This uncertainty largely motivated the present work.
Diatoms need orthosilicic acid (DSi) to build their frustule, and the availability of DSi in the global ocean depends essentially on the depth at which BSiO2 is recycled. At the end of a bloom, many diatoms sink as aggregates. Diatom aggregation influences not only BSiO2 recycling in surface waters but also the sedimentation and preservation of BSiO2 on the sea floor. Aggregated diatoms indeed sink rapidly through the water column, leaving little time for dissolution. The laboratory experiments presented in this study explored the influence of aggregation on the BSiO2 dissolution rate. Monospecific aggregates were formed from three different diatom species: Chaetoceros decipiens, Skeletonema costatum and Thalassiosira weissflogii. The BSiO2 dissolution rates of diatoms from the same culture were measured for aggregated and freely suspended cells, and compared. Initial dissolution rates of diatom frustules were significantly lower for aggregated cells (4.6 yr-1) than for free cells (14 yr-1). The slower BSiO2 dissolution in aggregates was attributed to (1) the high DSi concentrations inside the aggregates (between 9 and 230 µM) compared with the medium surrounding free cells, (2) a higher viability of aggregated cells and (3) a lower number of bacteria per diatom in the aggregates. The variations of the dissolution rates between aggregates seem to be explained by variable TEP concentrations.
The internal biogeochemical processes of aggregates are very poorly known. The decrease in the dissolution rate observed in the laboratory experiments could be only apparent if only the diffusion of orthosilicic acid (DSi) from the inside of the aggregate to the outside were slowed down; indeed, high DSi concentrations were measured inside the aggregates. We present a model describing BSiO2 dissolution within an aggregate and DSi diffusion from the aggregate interior towards the surrounding medium. This model simulates the evolution of the internal DSi and BSiO2 concentrations, as well as the accumulation of DSi in the medium surrounding the aggregate. The dissolution rate depends on the departure from equilibrium, which decreases with time as the internal DSi concentration rises through dissolution, and on cell viability, since only dead cells dissolve. The model shows that only the combination of genuinely slowed dissolution with slowed diffusion reproduces the internal and external DSi concentrations. It is suggested that the slowed diffusion could be due to a close association between DSi and TEP. The slowed dissolution is attributed for 16-33% to the high DSi content within the aggregate and for 33-66% to the longer viability of aggregated diatoms.
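A toy version of the dissolution-diffusion balance described in this paragraph can be written as two coupled ODEs; the sketch below is our illustration (rate constants and concentrations are placeholders, except the 4.6 yr-1 aggregated-cell dissolution rate quoted above):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-compartment aggregate model: BSiO2 of dead cells dissolves towards
# the internal DSi saturation level, and internal DSi leaks to seawater at a
# slowed effective diffusion rate.
k_diss = 4.6 / 365.0   # d^-1, dissolution rate of aggregated cells (from text)
k_out  = 0.05          # d^-1, effective DSi export rate (slowed diffusion, assumed)
dsi_sat = 1000.0       # µM, assumed DSi saturation concentration
dsi_sea = 10.0         # µM, assumed surrounding seawater DSi
viability = 0.5        # assumed fraction of living (non-dissolving) cells

def rhs(t, y):
    bsi, dsi = y  # µM-equivalent biogenic silica and internal DSi
    diss = k_diss * (1 - viability) * bsi * (1 - dsi / dsi_sat)
    leak = k_out * (dsi - dsi_sea)
    return [-diss, diss - leak]

sol = solve_ivp(rhs, (0, 60), [500.0, 50.0])  # 60 days, assumed initial state
print("internal DSi after 60 d:", sol.y[1, -1])
```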
Given the importance of diatoms in the sedimentation of organic matter (carbon and BSiO2), we used the above results to build a simplified model of BSiO2 fluxes in the water column. The sedimentation flux is described as consisting mostly of aggregates, but also of free cells and faecal pellets; the experimental BSiO2 dissolution rates measured for free cells, aggregates and faecal pellets are thus combined with in situ measurements of BSiO2 production and of deep-water BSiO2 fluxes in nine biogeochemical provinces of the ocean. Comparing the model outputs with in situ measurements characterizes the sedimentation flux in quality (sinking speed) and in quantity (partition of BSiO2 between free cells and large particles). We thus determine that 40% to 90% of the BSiO2 produced at the surface dissolves above the maximum depth of the mixed layer. Recycling dominates at all sites, whatever the calculated sinking speed. The intensity of surface recycling is attributed to the capacity of cells to remain free: independently of their ballast (BSiO2), diatoms that do not sink as aggregates or as faecal pellets of large grazers dissolve at shallow depths. The model also yields information on particle dynamics, since we could determine that 200 m is an optimal maximum depth of the mixed layer, at which bloom-termination processes such as aggregation and grazing appear to be favoured.
Our ability to understand and predict the role of the ocean in the global carbon cycle and its response to climate change strongly depends on our capacity to model the functioning of the biological pump at the global scale. Despite much progress, the mysteries surrounding the biological carbon pump are far from resolved. In this thesis, the biological pump refers to the set of mechanisms that transfer part of marine primary production below the depth of the winter mixed layer, so that the carbon will not be exchanged with the atmosphere for decades or centuries, i.e. on time scales relevant to climate change.
Depending on the region, the depth of the winter mixed layer lies between 50 and 500 metres, with a few peaks near 800 m (Levitus, 1994). These depths correspond to the mesopelagic zone, about which little is known, since in situ particle fluxes are studied with sediment traps that perform poorly in this zone. These uncertainties in the flux estimates are reflected in the hypotheses about the mechanisms of the biological pump and its spatio-temporal variations, as shown by our analysis based on the concepts of export out of the mixed layer and of transfer efficiency through the mesopelagic zone.
In this study, we sought to assess the true role of diatoms in the biological carbon pump. The whole question is at what depth the carbon carried by diatoms is remineralized: above or below the winter mixed layer?
Recently, global models have incorporated a more mechanistic description of the fluxes, replacing decreasing exponentials with a competition between particle sinking speed and remineralization rate. In this thesis we present an approach in which carbon fluxes at the bottom of the winter mixed layer are computed from the BSiO2 flux reconstruction presented above and from an empirical relation describing the evolution of the Si:C ratio with depth in different biogeochemical provinces. Combining the BSiO2 fluxes with this equation makes it possible to reconstruct carbon fluxes at any depth and to compute export efficiencies or transfer efficiencies through the mesopelagic zone. The idea is simple: using Si fluxes as a tracer of C fluxes avoids the difficulties linked to the complex chemistry to which C is subject. Moreover, if it proved impossible to reconstruct C fluxes from Si fluxes in the modern ocean, the use of sedimentary BSiO2 as a paleotracer of past productivity would be all the more compromised.
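Schematically, the conversion described here amounts to dividing the reconstructed silica flux by a depth-dependent Si:C ratio (our notation, not the thesis' exact empirical fit):

```latex
F_{\mathrm{C}}(z) = \frac{F_{\mathrm{BSiO_2}}(z)}{r_{\mathrm{Si:C}}(z)},
\qquad r_{\mathrm{Si:C}}(z)\ \text{an empirical function of depth } z
```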
The fluxes reconstructed with this semi-mechanistic approach are lower than those derived from previously published algorithms, and the proportion of exported carbon decreases as productivity increases. This reconstruction underlines the importance of seasonality. It has implications for our understanding of the functioning of the biological pump in the present ocean and for our interpretation of paleoceanographic records of its functioning in the past ocean. The role attributed to diatoms or coccolithophorids in the export or deeper transfer of carbon depends strongly on how export out of the surface layer is estimated.
Macko, Miroslav. "Expérience SuperNEMO : Études des incertitudes systématiques sur la reconstruction de traces et sur l'étalonnage en énergie. Evaluation de la sensibilité de la 0nbb avec émission de Majoron pour le Se-82." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0368/document.
The presented thesis is composed of a variety of projects which I performed within the construction phase of the SuperNEMO demonstrator during the period 2015-2018. The SuperNEMO experiment, located at the LSM underground laboratory, is designed to search for the 0nbb of 82Se. Its technology, which takes advantage of particle tracking, is unique in the field of double beta decay experiments. Event topology reconstruction is a powerful tool for the suppression of naturally occurring background radiation. Part of the thesis is dedicated to experimental work. I took part in the assembly and testing of optical modules, an integral part of the SuperNEMO calorimeter. Results of tests after the assembly of 520 optical modules are presented in the thesis. Furthermore, I present the results of a complete mapping of the 207Bi sources performed using pixel detectors, together with precise measurements of their activities made with HPGe detectors. These 207Bi sources will be used for the calibration of the calorimeter. The study played a key role in the choice of the 42 sources which were installed in the demonstrator and will take part in its calibration. Another part of the thesis contains projects focused on Monte Carlo simulations. In the first of them, I studied the vertex reconstruction precision achievable by the reconstruction algorithm developed for the SuperNEMO experiment. The precision is evaluated using different statistical methods under a variety of conditions (magnetic field, energy of electrons, angles of emission, etc.), and the factors influencing it are discussed on the basis of the achieved results. In 2018, I also performed simulations of the neutron shielding. A variety of shielding materials with different thicknesses were (in the simulation) exposed to a realistic neutron spectrum from the LSM, and the fluxes behind the shielding were estimated. It was shown that the parts of the detector made of iron should be expected to capture the vast majority of neutrons passing the shielding. I also discuss a problem, arising in standard software packages, with the simulation of the de-excitation gamma radiation emitted after thermal neutron capture. I proposed a new extended generator capable of resolving the problem and demonstrate the concept on an analytically solvable example. Along with the standard 0nbb, SuperNEMO will be capable of searching for more exotic modes of the decay. In the thesis, I present possible half-life limits achievable by SuperNEMO for 0nbb with the emission of one or two Majorons. The study is performed as a function of the activity of the internal contamination from the 208Tl and 214Bi isotopes. The measurement periods after which SuperNEMO should be able to improve the half-life limits of NEMO-3 (in case the decay is not observed) are estimated.
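For context, half-life limits of the kind quoted in this abstract are conventionally derived from the radioactive-decay counting formula; in schematic form (not the thesis' exact expression):

```latex
T_{1/2} > \ln 2 \; \frac{\varepsilon\, N_A\, m\, t}{M\, N_{\mathrm{up}}}
```

where m is the 82Se source mass, M its molar mass, ε the signal detection efficiency, t the exposure time and N_up the upper limit on the number of signal counts compatible with the observation.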
Sajjad, Saeeda. "Développement d'outils de simulation et de reconstruction de gerbes de particules pour l'astronomie gamma avec les futurs imageurs Tcherenkov." Phd thesis, Montpellier 2, 2007. http://www.theses.fr/2007MON20249.
Sajjad, Saeeda. "Développement d'outils de simulation et de reconstruction de gerbes de particules pour l'astronomie gamma avec les futurs imageurs Tcherenkov." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2007. http://tel.archives-ouvertes.fr/tel-00408835.
Cloquet, Christophe. "Optimiser l'utilisation des données en reconstruction TEP: modélisation de résolution dans l'espace image et contribution à l'évaluation de la correction de mouvement." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209887.
When the clinical picture presented by a patient is not clear, many medical imaging techniques make it possible to refine the diagnosis, sharpen the prognosis and follow the evolution of diseases over time. The same techniques are also used in fundamental research to advance our knowledge of the normal and pathological functioning of the human body. They include, for example, ultrasound, magnetic resonance imaging, X-ray computed tomography and positron emission tomography (PET).
Some of these techniques reveal the metabolism of molecules, such as glucose and certain amino acids. This is the case of positron emission tomography, in which a small quantity of molecules labelled with a radioactive element is injected into the patient. These molecules concentrate preferentially in the places of the body where they are used. Being unstable, the radioactive nuclei decay by emitting an anti-electron, also called a positron. Each positron then annihilates near its emission point with an electron of the patient's body, causing the simultaneous emission of two high-energy photons in two opposite directions. After crossing the tissues, these photons are captured by a ring of detectors surrounding the patient. From the set of collected events, a reconstruction algorithm finally produces an image of the distribution of the radioactive tracer.
Positron emission tomography makes it possible, in particular, to assess the efficacy of tumour treatments before the size of the tumours has changed, which helps in deciding whether or not to continue the ongoing treatment. In cardiology, this technique quantifies the viability of the heart muscle after an infarction and thus helps to assess the relevance of a surgical intervention.
Several factors limit the accuracy of PET images; among them are the partial volume effect and the motion of the heart.
The partial volume effect leads to blurred images, in the same way that an incorrectly focused camera lens produces blurred photographs. Photographers have two options to avoid this: either improve the focusing of their lens, or retouch the images after taking them; improving the focusing of the lens can be done in data space (adding a corrective lens in front of the objective) or in image space (adding a corrective lens behind the objective).
Cardiac motion also causes a loss of sharpness in the images, analogous to the blur in a photograph of a racing car taken with a long exposure time. Classically, the sharpness of an image can be increased by reducing the exposure time; in that case, however, fewer photons cross the lens and the resulting image is noisier.
One could then imagine obtaining better images by following the car with the camera.
In this way, the car would be both sharp and little corrupted by noise, since many photons could be detected.
In PET imaging, the partial volume effect is due to many factors, including the fact that the positron does not annihilate exactly at the point of its emission and that the detector struck by a photon is not always correctly identified. The solution requires a better modelling of the physics of the acquisition during reconstruction, which in practice is complex and requires approximations.
The loss of sharpness due to cardiac motion is classically handled by freezing the motion in several successive images over the course of a heartbeat. Such a solution, however, results in a lower number of photons per image, and therefore in increased noise. Taking the motion of the object into account during PET reconstruction would increase the sharpness while keeping the noise acceptable. One can also think of superimposing different images registered by means of the motion.
In this work, we studied methods that make the best possible use of the information provided by the detected events. To do so, we chose to base our reconstructions on a list of events containing the exact position of the detectors and the exact arrival time of the photons, instead of the classically used histogram.
Improving the resolution requires knowing the image of a point radioactive source produced by the camera.
Following earlier work, we measured this image and modelled it, for the first time, by a spatially variant, non-Gaussian and asymmetric function. We then integrated this function into a reconstruction algorithm, in image space, which is the only practical possibility for list-mode acquisitions. We then compared the results obtained with post-reconstruction image processing.
Regarding cardiac motion correction, we opted for the study of the simultaneous reconstruction of the image and of the displacement, with no external information other than the PET data and the signal of an electrocardiogram. We then chose to study the quality of these joint intensity-displacement estimators by means of their variance. We studied the minimum variance that a joint intensity-motion estimator can reach, on the basis of the PET data alone, by means of a tool called the Cramer-Rao bound. In this framework, we examined different existing ways of estimating the Cramer-Rao bound and proposed a new estimation method suited to large images. We finally showed that the variance of the classical OSEM algorithm was higher than that predicted by the Cramer-Rao bound. Concerning the joint intensity-displacement estimators, we observed a decrease in the minimum possible variance on the intensities when the displacement was parameterized on smooth spatial functions.
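To make the Cramer-Rao argument of this paragraph concrete, here is a hedged toy computation (our illustration, with a random toy system matrix) of the bound for Poisson-distributed PET data y ~ Poisson(Ax):

```python
import numpy as np

# For y ~ Poisson(A @ x), the Fisher information is F = A^T diag(1/(A x)) A,
# and the Cramer-Rao bound on unbiased estimators of x is the diagonal of F^-1.
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(200, 20))   # toy system matrix (assumed)
x = rng.uniform(1.0, 5.0, size=20)          # toy tracer intensities (assumed)

ybar = A @ x                                # expected counts per detector pair
fisher = A.T @ (A / ybar[:, None])          # A^T diag(1/ybar) A
crb = np.diag(np.linalg.inv(fisher))        # lower bound on the variances
print("CRB of first voxel:", crb[0])
```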
This work is organized as follows. The theory chapter begins by briefly sketching the historical context of positron emission tomography; we wished to stress that the evolution of ideas is romantic and linear only on a large scale. We then turn to the physical description of PET acquisition. In a second chapter, we recall some elements of estimation and approximation theory and discuss inverse problems in general and PET reconstruction in particular.
The second part addresses the lack of sharpness of the images and the solution we chose for it: an image-space model of the impulse response of the camera, taking into account its non-Gaussian, asymmetric and spatially variant characteristics. We also present the result of the comparison with a post-reconstruction deconvolution. The results presented in this chapter were published in the journal Physics in Medicine and Biology.
In a third part, we address motion correction. A first chapter sets out the context of motion correction in PET and puts the different existing methods into perspective within a unifying Bayesian framework.
A second chapter then addresses the estimation of the quality of PET images and studies the Cramer-Rao bound in particular.
The results obtained are finally summarized and placed in their context in a general conclusion.
Doctorate in Engineering Sciences
info:eu-repo/semantics/nonPublished
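A compact illustration of the Cramér-Rao machinery discussed in this abstract: for independent Poisson measurements with means ybar = A @ lam, the Fisher information is F = A^T diag(1/ybar) A, and the j-th diagonal entry of F^-1 bounds the variance of any unbiased estimator of lam[j]. The sketch below uses toy dimensions and a random system matrix, which are assumptions for illustration, not the thesis' actual model:

```python
import numpy as np

# Toy Fisher-information / Cramer-Rao computation for Poisson emission data.
# Dimensions and the random system matrix A are illustrative assumptions.
rng = np.random.default_rng(0)
n_bins, n_pix = 200, 20
A = rng.random((n_bins, n_pix)) * 0.1      # detection probabilities
lam = rng.uniform(50.0, 200.0, n_pix)      # true emission intensities

ybar = A @ lam                             # expected counts (Poisson means)
F = A.T @ (A / ybar[:, None])              # Fisher information: A^T diag(1/ybar) A
crb = np.diag(np.linalg.inv(F))            # lower bound on Var(lam_j)
print(crb[:5])
```

For realistically sized images, F is far too large to invert directly, which is precisely the difficulty that a dedicated high-dimensional estimation method, such as the one proposed in the thesis, has to address.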
Auer, Benjamin. "Modélisation et caractérisation d'un système TEMP à collimation sténopée dédié à l'imagerie du petit animal." Thesis, Strasbourg, 2017. http://www.theses.fr/2017STRAE001/document.
Full textMy thesis focuses on the development of several quantitative reconstruction methods dedicated to small-animal Single Photon Emission Computed Tomography. The work is based on modeling the acquisition process of a 4-head pinhole SPECT system using Monte Carlo simulations. The system matrix approach, combined with the OS-EM iterative reconstruction algorithm, made it possible to characterize the system performance and to compare it to the state of the art. A sensitivity of about 0.027% at the center of the field of view, combined with a tomographic spatial resolution of 0.87 mm, was obtained. The major drawbacks of Monte Carlo methods led us to develop an efficient, simplified model of the physical effects occurring in the subject. My approach, based on a system matrix decomposition associated with a pre-calculated scatter database, achieved a computation time compatible with daily follow-up imaging of a subject (1 h), enabling a personalized imaging approach. The inherent approximations of the pre-calculated scatter approach have a moderate impact on the recovery coefficient results; nevertheless, a correction of about 10% was achieved
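For context, the OS-EM update cited here has a compact multiplicative form: each subset of projections scales the current image by a back-projected measured-to-modeled ratio. A minimal sketch, where the dense matrix A stands in for the Monte Carlo-estimated system matrix and all sizes are illustrative:

```python
import numpy as np

def osem(A, y, n_iter=10, n_subsets=4, eps=1e-12):
    """Minimal OS-EM sketch. A: (bins x voxels) system matrix,
    y: measured projections. Returns the reconstructed activity."""
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)                        # uniform initial activity
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]
            ratio = y[s] / (As @ x + eps)     # measured / modeled projections
            x *= (As.T @ ratio) / (As.sum(axis=0) + eps)  # multiplicative update
    return x
```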
Alvarez, Areiza Diego. "Réflexions sur la reconstruction prothétique de l’Articulation Temporo-Mandibulaire (ATM) à travers une étude biomécanique comparative entre sujets asymptomatique et pathologique." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0335/document.
Full textThis thesis deals with a biomechanical study of the Temporo-Mandibular Joint (TMJ); one of the objectives of this work is the definition of a complete approach, using modern tools, allowing the design of a personalized TMJ prosthesis. First of all, a tribometer reproducing the physiological conditions of the TMJ was designed and built in order to study the interactions between porcine bone and a prosthetic material and to quantify their respective wear. With this device, the relationships between the contact parameters and bone wear were determined. Personalized prosthetic design first requires an assessment of the current state of the joint. We defined a non-invasive protocol for TMJ characterization, corresponding to the acquisition of its current geometry and of the elementary motions of the mandible. In a second step, numerical simulations using rigid bodies and/or finite elements were performed to obtain the mechanical quantities, such as stresses and strains, necessary for the prosthesis design. The entire protocol was conducted on two subjects: an asymptomatic one and one with condylar resorption. Personalized numerical models were built for each case. These models allowed us to study the joint function of each subject. Comparisons between the two subjects revealed significant differences. It was shown that the changes produced by bone resorption have an impact on muscle activity, as well as on the contact forces in the joints. This work enhances fundamental knowledge of TMJ operating conditions. It also validates tools for evaluating the functional state of the joint. The approach developed during this thesis is applied by the OBL company, specialized in the design of custom maxillofacial prostheses. It can also be used to evaluate existing prosthetic solutions, as well as future ones
Nguyen, Bang Giang. "Classification en espaces fonctionnels utilisant la norme BV avec applications aux images ophtalmologiques et à la complexité du trafic aérien." Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2473/.
Full textIn this thesis, we deal with two different problems using the Total Variation concept. The first problem concerns the classification of vasculitis in multiple sclerosis fundus angiography, aiming to help ophthalmologists diagnose such autoimmune diseases. It also aims at identifying potential angiographic details in intermediate uveitis in order to help diagnose multiple sclerosis. The second problem aims at developing a new airspace congestion metric, an important index used for improving Air Traffic Management (ATM) capacity. In the first part of this thesis, we provide the preliminary knowledge required to solve the above-mentioned problems. First, we present an overview of Total Variation and explain how it is used in our methods. Then, we present a tutorial on Support Vector Machines (SVMs), a learning algorithm used for classification and regression. In the second part of this thesis, we first review methods for the segmentation and measurement of blood vessels in retinal images, an important step in our method. Then, we present our proposed method for the classification of retinal images. First, we detect the diseased regions in the pathological images based on the computation of the BV norm at each point along the centerline of the blood vessels. Then, to classify the images, we introduce a feature extraction strategy to generate a set of feature vectors representing the input image set for the SVMs, and a standard SVM classifier is applied. Finally, in the third part of this thesis, we address two applications of TV in the ATM domain. In the first application, based on the ideas developed in the second part, we introduce a methodology to extract the main air traffic flows in the airspace, and we develop a new airspace complexity indicator which can be used to organize air traffic at a macroscopic level. This indicator is then compared to the usual density metric, computed simply by counting the number of aircraft in the airspace sector. The second application is based on a dynamical-system model of air traffic. We propose a method for building a new traffic complexity metric by computing the local vectorial total variation norm of the relative deviation vector field, with the aim of reducing complexity. Three different traffic situations are investigated to evaluate the fitness of the proposed method
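To make the pipeline concrete, here is a minimal sketch of the two steps the abstract names: a discrete BV (total variation) feature computed along vessel centerlines, followed by a standard SVM classifier. The feature layout and the synthetic data are assumptions for illustration, not the thesis' exact design:

```python
import numpy as np
from sklearn.svm import SVC

def tv_along_centerline(intensity):
    """Discrete 1-D total variation (BV seminorm) of an intensity
    profile sampled along a vessel centerline."""
    return float(np.abs(np.diff(intensity)).sum())

def image_features(profiles, n_feat=8):
    """One feature vector per image: the n_feat largest per-vessel TV
    values, zero-padded (an illustrative choice, not the thesis')."""
    tv = sorted((tv_along_centerline(p) for p in profiles), reverse=True)
    return np.array((tv + [0.0] * n_feat)[:n_feat])

# Synthetic stand-in data: lists of centerline profiles per image;
# "diseased" images get noisier (higher-TV) vessel profiles.
rng = np.random.default_rng(0)
healthy = [[rng.normal(0.5, 0.01, 100) for _ in range(10)] for _ in range(20)]
diseased = [[rng.normal(0.5, 0.05, 100) for _ in range(10)] for _ in range(20)]

X = np.array([image_features(img) for img in healthy + diseased])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf").fit(X, y)            # standard SVM classifier
print(clf.score(X, y))
```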
Zein, Sara. "Simulations Monte Carlo des effets des photons de 250 keV sur un fantôme 3D réaliste de mitochondrie et évaluation des effets des nanoparticules d'or sur les caractéristiques des irradiations." Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC036/document.
Full textIn the field of radiobiology, damage to nuclear DNA is extensively studied since DNA is considered a sensitive target inside cells. Mitochondria are starting to receive attention as sensitive targets as well, since they control many functions important to the cell's survival. They are double-membraned organelles mainly in charge of energy production, as well as reactive oxygen species regulation, cell signaling and apoptosis control. Some experiments have shown that, after exposure to ionizing radiation, the mitochondrial contents are altered and mitochondrial functions are affected. That is why we are interested in studying the effects of ionizing radiation on mitochondria. At the microscopic scale, Monte Carlo simulations are helpful for reproducing the tracks of ionizing particles for close study. Therefore, we produced 3D phantoms of mitochondria starting from microscopic images of fibroblast cells. These phantoms are easily loaded into Geant4 as tessellated and tetrahedral meshes filled with water, representing the realistic geometry of these organelles. A microdosimetric analysis of the energy deposited by 250 keV photons inside these phantoms is performed. The Geant4-DNA electromagnetic processes are used to simulate the tracking of the secondary electrons produced. Since clustered damage is harder for cells to repair, a clustering algorithm is used to study the spatial clustering of potential radiation damage. In radiotherapy, it is a challenge to deliver an efficient dose to the tumor sites without affecting healthy surrounding tissues. The use of gold nanoparticles as radio-sensitizers seems promising. Their high photon absorption coefficient compared to that of tissue leads to a larger deposited dose when they are preferentially absorbed in tumors. Since gold has a high atomic number, Auger electrons are produced abundantly. These electrons have a shorter range than photoelectrons, enabling them to deposit most of their energy near the nanoparticle and thus increase the local dose. We studied the radio-sensitizing effect of gold nanoparticles on the mitochondria phantom; the effectiveness of this method depends on the number, size and spatial distribution of the gold nanoparticles. After exposure to ionizing radiation, reactive oxygen species are produced in biological material, which contains an abundant amount of water. In this study, we simulate the chemical species produced inside the mitochondria phantom and estimate their clustering, taking advantage of the Geant4-DNA chemistry process libraries recently included in the Geant4.10.1 release to simulate the spatial distribution of the chemical species and their evolution in time
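The clustering step mentioned in this abstract can be illustrated with a density-based algorithm applied to deposit coordinates; the thesis' exact algorithm and parameters are not given here, so the radius below is an assumed, literature-typical value:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy clustering of energy-deposit positions from a track-structure run.
# Random points stand in for deposits scored by Geant4-DNA; eps (nm)
# is an assumption, not the thesis' actual damage criterion.
rng = np.random.default_rng(1)
deposits = rng.uniform(0.0, 100.0, size=(500, 3))   # x, y, z in nm

labels = DBSCAN(eps=3.2, min_samples=2).fit_predict(deposits)
n_clusters = labels.max() + 1
isolated = int(np.sum(labels == -1))
print(n_clusters, "clusters;", isolated, "isolated deposits")
```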
Belmajdoub, Fouad. "Développement d'une méthode de reconstruction 3D du tronc scoliotique par imagerie numérique stéréoscopique et modélisation des calculs par réseaux de Pétri à flux de données en vue d'une implémentation sur une architecture parallèle." Aix-Marseille 3, 1993. http://www.theses.fr/1993AIX30087.
Full text
Petitjean, Caroline. "Mesures in situ et simulations des flux de N2O émis par les sols : Cas du changement d’usage des terres en Guyane : déforestation par la méthode ‘chop-and-mulch’ suivie de la mise en valeur agricole." Thesis, Antilles-Guyane, 2013. http://www.theses.fr/2013AGUY0610/document.
Full textThis study investigates the effects of the conversion of tropical forest to cultivation on soil N2O emissions. The study was carried out over a complete crop cycle at the Combi experimental site (French Guianese coast). Nitrous oxide fluxes were obtained in the field and by conducting simulations with the NOE model. Undisturbed tropical rainforest was compared to rainforest that had been converted to agricultural land using the 'chop-and-mulch' method, a fire-free clearing method combining the mechanical felling of trees with the mulching of small vegetation. Agricultural land included either mowed grassland or a soybean/fertilised-maize crop rotation. For croplands, the two cultivation practices employed were conventional seeding (using an offset disc harrow, without cover plants) and direct seeding (no till, with cover plants). The main results of this study are: rainforest soil at Combi produced low N2O emissions; rainforest converted to mowed grassland using the 'chop-and-mulch' method did not lead to a significant increase in N2O emissions between the 19th and 31st months after conversion; the conversion of rainforest to croplands induced a significant increase in soil N2O emissions due to the application of fertiliser and the modification of soil parameters (bulk density, temperature, volumetric moisture); N2O emissions from no-till agricultural practices were no higher than those produced by conventional practices using an offset disc harrow; and the introduction of a hydric hysteresis into the NOE model constitutes a promising improvement for estimating in situ N2O emissions
Ducousso, Nicolas. "Coexistence et interactions de la circulation décennale moyenne et des ondes et tourbillons transitoires dans l'océan Atlantique Nord." Brest, 2011. http://www.theses.fr/2011BRES2055.
Full textObservations of the ocean circulation reveal that the large-scale general circulation coexists with transient waves and coherent vortices. This thesis aims at characterizing some aspects of the interaction mechanisms that link these kinds of motion. The objective encompasses two tasks: the first is a critical review of the theoretical frameworks previously published in the literature; the second consists of the analysis of a numerical simulation provided by the DRAKKAR team. The diagnostics are performed within the Temporal Residual Mean framework, as detailed by McDougall and McIntosh (2001). This framework provides a valuable perspective, as the constraints that the stratification and the quasi-adiabaticity of the ocean interior impose on the mean circulation and on the interaction mechanisms are made explicit. The analysis illustrates these constraints in two ways. First, the residual mean velocity is nearly aligned with the isopycnal surfaces and its diapycnal component is very weak. Second, the residual eddy fluxes of heat and salt are nearly aligned with the isopycnal surfaces and their intensities are significant in frontal areas only. Eddy diffusivities evaluated from the residual eddy fluxes are in partial accordance with values commonly found in the literature
Fakih, Hussein. "Étude mathématique et numérique de quelques généralisations de l'équation de Cahn-Hilliard : applications à la retouche d'images et à la biologie." Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2275/document.
Full textThis thesis is situated in the context of the theoretical and numerical analysis of some generalizations of the Cahn-Hilliard equation. We study the well-posedness of these models, as well as their asymptotic behavior in terms of the existence of finite-dimensional (in the sense of the fractal dimension) attractors. The first part of this thesis is devoted to models which have applications in image inpainting. We start with the study of the dynamics of the Bertozzi-Esedoglu-Gillette-Cahn-Hilliard equation with Neumann boundary conditions and a regular nonlinearity. We give numerical simulations with a fast numerical scheme with thresholding which is sufficient to obtain good inpainting results. Furthermore, we study this model with Neumann boundary conditions and a logarithmic nonlinearity, and we give numerical simulations which confirm that the results obtained with a logarithmic nonlinearity are better than those obtained with a polynomial nonlinearity. Finally, we propose a model based on the Cahn-Hilliard system which has applications in color image inpainting. The second part of this thesis is devoted to models which have applications in biology and chemistry. We study the convergence of the solution of a Cahn-Hilliard equation with a proliferation term, associated with Neumann boundary conditions and a regular nonlinearity. In that case, we prove that the solutions either blow up in finite time or exist globally in time. Furthermore, we give numerical simulations which confirm the theoretical results. We end with the study of the Cahn-Hilliard equation with a mass source and a regular nonlinearity, considering both Neumann and Dirichlet boundary conditions
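For reference, the inpainting model named in this abstract (Bertozzi-Esedoglu-Gillette-Cahn-Hilliard) augments the Cahn-Hilliard flow with a fidelity term that is active only outside the inpainting domain D; in its usual form,

\[
\partial_t u = \Delta\Bigl(-\varepsilon\,\Delta u + \tfrac{1}{\varepsilon}\,F'(u)\Bigr) + \lambda(x)\,(f - u),
\qquad
\lambda(x) = \begin{cases} \lambda_0, & x \in \Omega\setminus D,\\ 0, & x \in D, \end{cases}
\]

where f is the damaged image, \(\lambda_0 > 0\) the fidelity weight, and F a double-well potential, either polynomial or logarithmic as compared in the thesis.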
Audusse, Emmanuel. "Modelisation hyperbolique et analyse numerique pour les ecoulements en eaux peu profondes." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2004. http://tel.archives-ouvertes.fr/tel-00008047.
Full textWe first focus on the numerical analysis of the Saint-Venant system with source terms. We present a second-order, conservative and consistent two-dimensional finite volume scheme, built on a kinetic interpretation of the system and a hydrostatic reconstruction of the interface variables. This scheme preserves the positivity of the water height and the steady state associated with the lake at rest.
We then extend the kinetic interpretation to the coupling of the system with a transport equation. We build a two-time-step finite volume scheme that accounts for the different propagation speeds of the information present in the problem. This approach preserves the stability properties of the system and significantly reduces numerical diffusion and computation time.
We also propose a new multilayer Saint-Venant model, which recovers non-constant velocity profiles while preserving the invariant and two-dimensional character of the definition domain. We present its derivation from the Navier-Stokes equations and a stability study (energy, hyperbolicity). We also study its relations with other fluid models and its numerical implementation, again based on kinetic schemes.
Finally, we establish a uniqueness theorem for scalar conservation laws with discontinuous fluxes. The proof relies on a new family of entropies, which are a natural adaptation of the classical Kruzkov entropies to the discontinuous case. This method allows us to drop some classical assumptions on the flux (convexity, existence of BV bounds, finite number of discontinuities) and does not require an interface condition.
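The hydrostatic reconstruction mentioned above admits a compact statement (following the associated publication, Audusse et al. 2004): with bottom topography \(z_i\) and water height \(h_i\),

\[
z_{i+1/2} = \max\bigl(z_i,\, z_{i+1}\bigr), \qquad
h_{i+1/2}^{-} = \max\bigl(0,\; h_i + z_i - z_{i+1/2}\bigr), \qquad
h_{i+1/2}^{+} = \max\bigl(0,\; h_{i+1} + z_{i+1} - z_{i+1/2}\bigr),
\]

and the numerical flux at the interface is evaluated on the reconstructed states \((h_{i+1/2}^{-},\, h_{i+1/2}^{-} u_i)\) and \((h_{i+1/2}^{+},\, h_{i+1/2}^{+} u_{i+1})\). The max operations are what give the positivity of the water height and the exact preservation of the lake at rest (\(h + z\) constant, \(u = 0\)) claimed in the abstract.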
Nguyen, Thanh Don. "Impact de la résolution et de la précision de la topographie sur la modélisation de la dynamique d’invasion d’une crue en plaine inondable." Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0093/document.
Full textWe analyze in this thesis various aspects associated with the modeling of free surface flows in the shallow water approximation. We first study the system of Saint-Venant equations in two dimensions and its resolution with the finite volume numerical method, focusing in particular on hyperbolic and conservative aspects. These schemes can handle stationary equilibria and wet-dry interfaces, and can model subcritical, transcritical and supercritical flows. We then present the theory of variational data assimilation fitted to this kind of flow; its application through sensitivity studies is fully discussed in the context of free surface flows. After this theoretical part, we assess the numerical methods implemented in the code Dassflow, developed at the University of Toulouse, mainly at the IMT but also at the IMFT. This code solves the shallow water equations by a finite volume method and is validated by comparison with analytical solutions for standard test cases. These results are compared with another two-dimensional finite element code for free surface hydraulics: Telemac2D. A significant feature of the Dassflow code is that it allows variational data assimilation using the adjoint method for computing the gradient of the cost function. The adjoint code was obtained using the automatic differentiation tool Tapenade (INRIA). The code is then tested on a real, hydraulically complex case using Digital Elevation Models (DEM) and river bed bathymetry of different qualities, provided either by a conventional IGN-type database or by very high resolution LiDAR data. The respective influences of the bathymetry, the mesh size and the code used on the flooding dynamics are explored in detail. Finally, we perform sensitivity mapping studies on the parameters of the Dassflow model. These maps show the respective influence of the different parameters and of the location of virtual measurement points; an optimal location of these points is necessary for efficient data assimilation in the future
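As a toy illustration of the variational data assimilation loop described here, the sketch below identifies one parameter of a stand-in forward model by minimizing a quadratic misfit; Dassflow does the same at scale, but with the gradient supplied by the Tapenade-generated adjoint code rather than approximated numerically. The forward model and all values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Twin experiment: recover a scalar parameter k of a hypothetical
# forward model M(k) from synthetic observations y_obs.
def forward(k, t):
    return np.exp(-k * t)                # stand-in for the hydraulic model

t = np.linspace(0.0, 10.0, 50)
y_obs = forward(0.3, t)                  # synthetic "truth" (k = 0.3)

def cost(k):
    r = forward(k[0], t) - y_obs
    return 0.5 * np.dot(r, r)            # J(k) = 1/2 ||M(k) - y_obs||^2

res = minimize(cost, x0=[0.1])           # gradient approximated internally
print(res.x)                             # -> close to 0.3
```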
Peng, Yong. "In-depth accident investigation of pedestrian impact dynamics and development of head injury risk functions." Thesis, Strasbourg, 2012. http://www.theses.fr/2012STRAD024.
Full textPedestrians are regarded as an extremely vulnerable and high-risk group of road users, since they are unprotected in vehicle impacts. More than 1.17 million people throughout the world are killed in road traffic accidents each year, and about 65% of these deaths involve pedestrians. Head injuries in vehicle-pedestrian collisions account for about 30% of all reported injuries across body regions and often have fatal consequences. Such injuries can result in disabilities and long-term sequelae, which lead to significant social costs. It is therefore important to study the characteristics of pedestrian accidents and to understand the head injury mechanisms of pedestrians, so as to improve vehicle design for pedestrian protection. The aim of this study is to investigate pedestrian dynamic response and to develop head injury risk functions. In order to investigate the effect of pedestrian gait, vehicle front geometry and impact velocity on the dynamic response of the head, multi-body dynamics (MBD) models were used to simulate the head response in vehicle-to-pedestrian collisions with different vehicle types, in terms of head impact point measured by Wrap Around Distance (WAD), head relative velocity and impact angle. A simulation matrix was established using five vehicle types, two mathematical pedestrian models representing a 50th-percentile adult male and a 6-year-old child, and seven pedestrian gaits based on typical postures in pedestrian accidents. In order to simulate a large range of impact conditions, four vehicle velocities (30 km/h, 40 km/h, 50 km/h and 60 km/h) were considered for each pedestrian position and vehicle type. A total of 43 passenger-car-versus-pedestrian accidents were selected from the In-depth Investigation of Vehicle Accidents in Changsha, China (IVAC) and German In-Depth Accident Study (GIDAS) databases for the simulation study. Based on the real-world accident investigations, accident reconstructions were conducted using multi-body system (MBS) pedestrian and car models in the MADYMO simulation environment to calculate head impact conditions in terms of head impact velocity, head position and head orientation. In order to study the kinematics of adult pedestrians, relationship curves (head impact time, throw distance and head impact velocity versus vehicle impact velocity) were computed, and logistic regression models relating head impact velocity, resultant angular velocity, HIC value and head contact force to head injuries were developed from the reconstruction results. The automobile windshield, with which pedestrians frequently come into contact, has been identified as one of the main contact sources for pedestrian head injuries. In order to investigate the mechanical behavior of windshield laminated glass in the case of pedestrian head impact, windshield FE models were set up using different combinations of glass and PVB modeling approaches, various connection types and two mesh sizes (5 mm and 10 mm). Each windshield model was impacted with a standard adult headform impactor in an LS-DYNA simulation environment, and the results were compared with experimental data reported in the literature. In order to assess the head injury risks of adult pedestrians, accident reconstructions were carried out using a Hybrid III head model based on the real-world pedestrian accidents. The impact conditions, including head impact velocity, head position and head orientation, were obtained from the MBS simulations.
These conditions were used to set the initial state of a simulation in which a Hybrid III FE head model strikes a windshield FE model. Logistic regression models relating Skull Fracture Correlate (SFC), head linear acceleration, Head Impact Power (HIP), HIC value and resultant angular acceleration to head injuries were developed to study brain injury risk.[...]
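A hedged sketch of the kind of logistic injury-risk function described in this abstract, with made-up numbers standing in for the reconstruction results:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical reconstruction outcomes: HIC value per case and whether
# a serious head injury was observed (values are illustrative only).
hic = np.array([[150], [400], [700], [900], [1200], [1600], [2100], [2600]])
injury = np.array([0, 0, 0, 1, 0, 1, 1, 1])

model = LogisticRegression().fit(hic, injury)
# Fitted risk curve: P(injury | HIC) = 1 / (1 + exp(-(b0 + b1 * HIC)))
print(model.predict_proba([[1000]])[0, 1])   # estimated risk at HIC = 1000
```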
Bresson, Damien. "Étude de l’écoulement sanguin dans un anévrysme intracrânien avant et après traitement par stent flow diverter : quantification par traitement d’images de séquences angiographiques 2D." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2308/document.
Full textIntracranial aneurysm treatment based on intra-aneurysmal flow modification tends to replace traditional coiling in many cases, and not only for the complex aneurysms for which it was initially designed. Dedicated stents (low-porosity, high-pore-density stents) called "flow diverter" stents are deployed across the neck of the aneurysm to achieve this purpose. The combination of three different mechanisms tends to lead to the healing of the aneurysm: immediate flow alteration due to the mechanical screen effect of the stent, physiological triggering of acute or progressive thrombus formation inside the aneurysm's pouch, and a long-term biological response leading to neointima formation and arterial wall remodeling. This underlying sequence of processes is also expected to decrease the recanalization rate. Scientific data supporting the flow alteration theory are numerous, especially from computational fluid dynamics (CFD). These approaches are very helpful for improving biomechanical knowledge of the relations between blood flow and pathology, but they do not fit into real-time treatment. Neuroendovascular treatments are performed under a dynamic X-ray modality (digital subtraction angiography, DSA). In daily practice, however, FD stents are sized to the patient's 3D vascular anatomy and then deployed, and the flow modification is evaluated by the clinician in an intuitive manner: the decision to deploy or not another stent is based solely on a visual estimation. The lack of tools available in the angio room for quantifying blood flow hemodynamics in real time should be pointed out. It would make sense to take advantage of the functional data contained in the contrast bolus propagation, and not only of the anatomical data. We therefore proposed to create flow-analysis software based on angiographic image processing. This software was built using algorithms developed and validated on 2D DSA sequences obtained in a swine intracranial aneurysm model. This intracranial animal model was also optimized to obtain 3D vascular imaging and experimental hemodynamic data that could be used to perform realistic computational fluid dynamics simulations. In a third step, the software tool was used to analyze flow modification from angiographic sequences acquired in patients with unruptured IAs treated with an FD stent. Finally, flow changes were correlated with aneurysm occlusion at long-term follow-up, with the objective of identifying predictive markers of long-term occlusion
Simon, Nataline. "Développement des méthodes actives de mesures distribuées de température par fibre optique pour la quantification des écoulements souterrains : apports et limites pour la caractérisation des échanges nappe/rivière." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1B028.
Full textGroundwater/surface water interactions play a fundamental role in the functioning of aquatic ecosystems. However, their quantification is challenging because exchange processes vary both in time and space. Here, we propose an active distributed heat transport experiment in order to quantify the spatial and temporal variability of groundwater/surface water interactions. As a first step, we propose a new approach to evaluate the spatial resolution of temperature measurements. Then, two interpretation methods for active-DTS experiments were developed and fully validated to estimate the distribution of the thermal conductivity of the porous media and the groundwater fluxes in sediments. Based on numerical simulations and sandbox experiments, the results demonstrate the potential of these methods for quantifying distributed groundwater fluxes with high accuracy. The large range of groundwater fluxes that can be investigated makes the application of active experiments especially promising for many subsurface applications. Secondly, we conducted heat transport experiments within the streambed sediments of two different streams: first in a first-order stream, then in a large flow system located along an alluvial plain. These applications demonstrated the relevance of using active experiments to characterize the spatial complexity of stream exchanges. Finally, the comparison of the results obtained for each experimental site allowed us to discuss the capabilities and limitations of using active-DTS field experiments to characterize groundwater/surface water interactions in different hydrological contexts
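One classical interpretation route for such active heating experiments is the infinite line-source model, in which the late-time temperature rise grows linearly in ln(t) with a slope set by the thermal conductivity. Whether the thesis uses this exact model is not stated in the abstract, so the sketch below is an assumption-laden illustration with made-up measurements:

```python
import numpy as np

# Infinite line-source model for a heated cable section:
#   dT(t) ~ q / (4 * pi * lam) * ln(t) + c   (late times)
q = 10.0                                            # heating power (W per metre)
t = np.array([600.0, 1200.0, 2400.0, 4800.0, 9600.0])   # elapsed time (s)
dT = np.array([2.1, 2.6, 3.1, 3.6, 4.1])                # temperature rise (K)

slope, _ = np.polyfit(np.log(t), dT, 1)             # fit dT against ln(t)
lam = q / (4.0 * np.pi * slope)                     # effective conductivity (W/m/K)
print(lam)
```

In sediments with significant groundwater flow, the measured rise departs from this purely conductive behaviour, which is what allows the flux itself to be estimated.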
Gardes, Thomas. "Reconstruction temporelle des contaminations métalliques et organiques particulaires dans le bassin versant de l'Eure et devenir des sédiments suite à l'arasement d'un barrage. Reconstruction of anthropogenic activities in legacy sediments from the Eure River, a major tributary of the Seine Estuary (France) Flux estimation, temporal trends and source determination of trace metal contamination in a major tributary of the Seine estuary, France Temporal trends, sources, and relationships between sediment characteristics and polycyclic aromatic hydrocarbons (PAHs) and polychlorinated biphenyls (PCBs) in sediment cores from the major Seine estuary tributary, France Impacts à court-terme de l’arasement d’un barrage sur la morphologie du cours d’eau et la remobilisation de sédiments contaminés par les métaux traces Bioaccessibility of polycyclic aromatic compounds (PAHs, PCBs) and trace elements: Influencing factors and determination in a river sediment core." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMR038.
Full textThe anthropogenic impact on rivers has significantly increased following the industrial revolution initiated by Western countries. Changes in river geomorphology for water storage and navigation, and the conversion of land for agricultural, industrial and urbanization purposes, illustrate this environmental pressure, which results, among other things, in an increase in the discharge of various contaminants into environmental compartments, particularly rivers. Part of these discharges can end up in suspended particulate matter, which then acts as a storage well transiting in rivers. River development, particularly the construction of dams, encourages the sedimentation of these contaminated particles over time. These sediments of anthropogenic origin, also called legacy sediments, are therefore witnesses to human activities and make it possible to reconstruct the temporal trajectories of contamination within watersheds. The Eure River, a major tributary of the Seine estuary, has experienced significant anthropogenic pressure since the twentieth century. The temporal reconstruction of this pressure required the combination of different methodological approaches: (i) a diachronic analysis of the morphological modifications of the river, carried out in conjunction with (ii) an analysis of the sedimentary dynamics and of the nature of the sediment deposits, coupling geophysical, sedimentological and geochemical methods, and (iii) the setting up of a network for monitoring the hydro-sedimentary behaviour with continuous sampling of suspended particulate matter. Significant geomorphological changes occurred in the lower reaches of the watershed, the main consequences being an outlet moved some ten kilometres in the direction of a dam and the formation of hydraulic annexes favouring the accumulation of sediments as early as the 1940s. These deposits showed that the Eure River watershed had experienced significant contamination, the consequences of which are still being recorded despite the cessation of the activities or uses concerned. The temporal trends of trace metals and metalloids showed strong As contamination in the 1940s, contamination of industrial origin in Cr, Co, Ni, Cu, Zn, Ag and Cd in the 1960s and 1970s, and contamination in Sb and Pb in 1990-2000. The latter are still recorded despite the cessation of the activities responsible for the discharges, as evidenced by the suspended particulate matter currently collected in the river. Like most trace metals, organic contaminants such as PAHs showed significant contamination during the 1940s-1960s, with signatures indicating a predominantly pyrogenic origin. PCBs showed significant contamination during the period 1950-1970, in connection with the production and national use of mixtures composed mainly of lightly chlorinated congeners. Finally, the study of a third family of persistent organic contaminants, the organochlorine pesticides, showed the use of lindane and DDT, particularly during the 1940-1970 period, and highlighted the post-ban use of lindane and the presence of a DDT metabolite several decades after the cessation of its use, in connection with the increased erosion of cultivated soils
Marchand, Mathieu. "Flux financiers et endettement de l'État : simulations par modèle d'équilibre général calculable (MEGC) /." 2007. http://www.theses.ulaval.ca/2007/24520/24520.pdf.
Full text
Didorally, S. "Prévision des flux de chaleur turbulents et pariétaux par des simulations instationnaires pour des écoulements turbulents chauffés." Phd thesis, 2014. http://tel.archives-ouvertes.fr/tel-01055867.
Full textBeauchemin, Cynthia. "Turbulence de surface pour des simulations de fluides basées sur un système de particules." Thèse, 2016. http://hdl.handle.net/1866/18755.
Full textAccurately simulating the behaviour of fluids remains a difficult problem in computer graphics, and performing these simulations at a high level of detail is particularly challenging due to the complexity of the underlying dynamics of a fluid. A recent and significant body of work targets this problem by trying to augment the apparent resolution of an underlying, lower-resolution simulation, instead of performing a more costly simulation at full resolution. Adaptive and multi-scale methods in this area have proven successful for simulations of smoke and liquids, but no comprehensive solution exists. The goal of this thesis is to devise a new multi-scale detail-augmentation technique suitable for application atop existing particle-based fluid simulators. Particle simulations of fluid dynamics are a popular, heavily used alternative to grid-based simulations due to their ability to better preserve energy, yet no detail-augmentation techniques have been devised for this class of simulator. As such, our work permits digital artists to perform more efficient lower-resolution particle simulations of a liquid, and then layer on a detailed secondary simulation at negligible cost. To do so, we present a method for reconstructing the surface of a liquid, during the particle simulation, in a manner that is amenable to the injection of high-frequency detail from higher-resolution surface tension effects. Our technique detects potentially under-resolved regions in the initial simulation and synthesizes turbulent dynamics with novel multi-frequency oscillators. These dynamics drive a high-frequency wave simulation that is propagated over the (reconstructed) liquid surface. Our algorithm can be applied as a post-process, completely independent of the underlying simulation code, so it is trivial to integrate into an existing 3D digital content creation pipeline
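A guess at the flavour of the oscillator bank described above, using the standard deep-water capillary-gravity dispersion relation; all parameters are illustrative and the thesis' actual method may differ substantially:

```python
import numpy as np

# Band-limited surface-wave oscillators (illustrative sketch).
g, sigma, rho = 9.81, 0.0728, 1000.0     # gravity, surface tension, density

def omega(k):
    """Deep-water dispersion for capillary-gravity waves."""
    return np.sqrt(g * k + sigma * k**3 / rho)

k = np.linspace(50.0, 400.0, 8)          # under-resolved wavenumbers (1/m)
amp = np.full_like(k, 1e-4)              # amplitudes seeded from detected turbulence
phase = np.zeros_like(k)

dt = 1.0 / 240.0
for step in range(240):                  # advance the oscillators in time
    phase += omega(k) * dt
    height = (amp * np.cos(phase)).sum() # displacement added to the surface
print(height)
```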
Tremblay, Benoit. "Reconstruction des mouvements du plasma dans une région active solaire à l'aide de données d'observation et d'une minimisation Lagrangienne." Thèse, 2015. http://hdl.handle.net/1866/12531.
Full textTo this day, the various methods proposed for the reconstruction of plasma motions at the Sun’s surface are all based on ideal MHD (Welsch et al., 2007). However, Chae & Sakurai (2008) have shown the existence of an eddy magnetic diffusivity at the photosphere. We introduce a generalization of the Minimum Energy Fit (MEF; Longcope, 2004) for resistive plasmas. The Resistive Minimum Energy Fit (MEF-R; Tremblay & Vincent, 2014) infers velocity fields and an eddy magnetic diffusivity which solve the resistive magnetic induction equation and minimize an energy-like functional. A sequence of magnetograms and Dopplergrams documenting the active regions AR 9077 and AR 12158 are used as input in MEF-R to reconstruct plasma motions at the Sun’s surface. Time series of the inferred velocities and eddy magnetic diffusivities are compared to the soft X-ray flux observed by GOES-15. We find a positive correlation between significant eddy magnetic diffusivities and microturbulent velocities for weak magnetic fields in AR 12158.
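The constraint at the heart of MEF-R is the resistive induction equation; in standard notation,

\[
\frac{\partial \mathbf{B}}{\partial t} \;=\; \nabla \times \bigl(\mathbf{v} \times \mathbf{B}\bigr) \;-\; \nabla \times \bigl(\eta\, \nabla \times \mathbf{B}\bigr),
\]

where \(\eta\) is the eddy magnetic diffusivity. Among all pairs \((\mathbf{v}, \eta)\) consistent with the observed field evolution, MEF-R selects the one minimizing an energy-like functional, as stated in the abstract.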
Benrhaiem, Rania. "Méthodes d’analyse de mouvement en vision 3D : invariance aux délais temporels entre des caméras non synchronisées et flux optique par isocontours." Thèse, 2016. http://hdl.handle.net/1866/18469.
Full textIn this thesis we focus on two computer vision subjects, both concerning motion analysis in a dynamic scene seen by one or more cameras. The first subject concerns motion capture using unsynchronised cameras, a situation that causes many correspondence errors and 3D reconstruction errors. In contrast with existing hardware solutions that try to minimize the temporal delay between the cameras, we propose a software solution ensuring invariance to the existing temporal delay. We developed a method that finds the correct correspondence between points regardless of the temporal delay; it resolves the resulting spatial shift and finds the correct position of the shifted points. In the second subject, we focus on the optical flow problem using an approach different from those in the state of the art. In most applications, optical flow is used for real-time motion analysis, so it is important that it can be computed quickly. In general, existing optical flow methods fall into two main categories: either precise and dense but computationally intensive, or fast but less precise and less dense. In this work, we propose an alternative solution that is both fast and precise. To do this, we extract intensity isocontours to find corresponding points representing the related optical flow. By addressing these problems we make two major contributions
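The isocontour idea rests on a classical identity: differentiating the constant-brightness condition \(I(\mathbf{x}(t), t) = C\) along a contour gives \(\nabla I \cdot \dot{\mathbf{x}} + \partial I/\partial t = 0\), so the motion component normal to the isocontour is fully determined:

\[
v_{\perp} \;=\; \dot{\mathbf{x}} \cdot \frac{\nabla I}{\lVert \nabla I \rVert} \;=\; -\,\frac{\partial I / \partial t}{\lVert \nabla I \rVert}.
\]

Matching points between isocontours extracted in successive frames is then one way of recovering the tangential component that this equation leaves undetermined (the aperture problem).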
Pirot, Dorian. "Reconstruction des structures magnéto-convectives solaires sous une région active, par l’utilisation conjointe d’un modèle de convection anélastique et d’une méthode d’assimilation de données." Thèse, 2012. http://hdl.handle.net/1866/8662.
Full textWe use a data assimilation technique, together with an anelastic convection model, in order to reconstruct the convective patterns below a solar active region. Our results yield information about the emergence of the magnetic field through the convective zone and the mechanisms of active region formation. The solar data we use are taken from the MDI instrument on board the SOHO space observatory on 14 July 2000, for the so-called Bastille Day event. This specific event led to a solar flare followed by a coronal mass ejection. The assimilated data (magnetograms, temperature maps and vertical velocity maps) cover an area of 175 Mm × 175 Mm at the photospheric level. The data assimilation technique we use, Nudging Back and Forth (NBF), is a Newtonian relaxation technique similar to the quasi-linear inverse 3D method. Such a technique does not require the computation of adjoint equations, so its simplicity is a numerical advantage. Our study shows, with a simple test case, the applicability of this method to a convection model treated in the anelastic approximation. We show the efficiency of the NBF technique and we detail its potential for solar data assimilation. In addition, to ensure the mathematical uniqueness of the obtained solution, a regularization is imposed over the whole simulation domain, which is a new approach. Finally, we show that the interest of such a technique is not limited to the reconstruction of convective patterns: it also allows optimal interpolation of photospheric magnetograms and predictions
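A toy sketch of the Back-and-Forth Nudging idea on a scalar ODE, densely observed; the thesis applies the same relaxation mechanism to a full anelastic convection model, and the model, gain and step sizes below are illustrative assumptions:

```python
import numpy as np

# Back-and-Forth Nudging (BFN) on dx/dt = -x with dense observations.
f = lambda x: -x
T, n, K = 4.0, 400, 5.0                   # horizon, steps, nudging gain
dt = T / n
t = np.linspace(0.0, T, n + 1)
y = 2.0 * np.exp(-t)                      # observations of the true trajectory

x0 = 0.5                                  # poor guess of the initial state
for sweep in range(10):
    x = x0
    for i in range(n):                    # forward pass, relax towards y
        x += dt * (f(x) + K * (y[i] - x))
    for i in range(n, 0, -1):             # backward pass, opposite-sign nudging
        x -= dt * (f(x) - K * (y[i] - x))
    x0 = x                                # refined estimate of the initial state
print(x0)                                 # -> close to the true value 2.0
```

The appeal, as the abstract notes, is that the backward pass reuses the forward model with a relaxation term instead of requiring an adjoint code.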