Dissertations / Theses on the topic 'Analyse à grande échelle'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Analyse à grande échelle.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Vargas-Magaña, Mariana. "Analyse des structures à grande échelle avec SDSS-III/BOSS." Phd thesis, Université Paris-Diderot - Paris VII, 2012. http://tel.archives-ouvertes.fr/tel-00726113.
Benmerzoug, Fateh. "Analyse, modélisation et visualisation de données sismiques à grande échelle." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30077.
The main goal of the oil and gas industry is to locate and extract hydrocarbon resources, mainly petroleum and natural gas. To do this efficiently, numerous seismic measurements are conducted to gather as much data as possible on the terrain or marine surface area of interest. Using a multitude of sensors, seismic data are acquired and processed, resulting in large cube-shaped data volumes. These volumes are then used to compute additional attributes that help in understanding the inner geological and geophysical structure of the earth. The visualization and exploration of these volumes, called surveys, are crucial to understand the structure of the underground and to localize natural reservoirs where oil or gas are trapped. Recent advancements in both processing and imaging technologies enable engineers and geoscientists to perform larger seismic surveys. Modern seismic measurements yield data volumes of many hundreds of gigabytes. The size of the acquired volumes presents a real challenge, both for processing such large volumes and for their storage and distribution. Thus, data compression is a much-desired feature that helps answer the data size challenge. Another challenging aspect is the visualization of such large volumes. Traditionally, a volume is sliced both vertically and horizontally and visualized by means of 2-dimensional planes. This method requires the user to manually scroll back and forth between successive slices in order to locate and track interesting geological features. Even though slicing provides a detailed visualization with a clear and concise representation of the physical space, it lacks the depth aspect that can be crucial in the understanding of certain structures. Additionally, the larger the volume gets, the more tedious and repetitive this task becomes. A more intuitive approach for visualization is volume rendering. Rendering the seismic data as a volume presents an intuitive and hands-on approach. By defining the appropriate color and opacity filters, the user can extract and visualize entire geo-bodies as individual continuous objects in a 3-dimensional space. In this thesis, we present a solution for both the data size and large data visualization challenges. We give an overview of the seismic data and attributes that are present in a typical seismic survey. We present an overview of data compression as a whole, discussing the necessary tools and methods that are used in the industry. A seismic data compression algorithm is then proposed, based on the concept of extended transforms. By employing the GenLOT (Generalized Lapped Orthogonal Transforms), we derive an appropriate transform filter that decorrelates the seismic data so they can be further quantized and encoded using P-SPECK, our proposed compression algorithm based on block coding of bit planes. Furthermore, we propose a ray-casting out-of-core volume rendering framework that enables the visualization of arbitrarily large seismic cubes. Data are streamed on demand and rendered using the user-provided opacity and color filters, resulting in a fairly easy-to-use software package.
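As an illustration of the volume-rendering idea described in this abstract, the following minimal Python sketch performs front-to-back alpha compositing along one axis of a toy seismic cube with user-defined color and opacity filters. It is not the thesis's out-of-core ray-casting framework; the transfer functions and array sizes are illustrative assumptions.

```python
# Minimal sketch of front-to-back alpha compositing through a seismic cube.
# NOT the thesis's framework: transfer functions and sizes are illustrative.
import numpy as np

def render_along_z(volume, colormap, opacity):
    """volume: (nx, ny, nz) amplitudes in [0, 1];
    colormap: scalar array -> (nx, ny, 3); opacity: scalar array -> alpha in [0, 1]."""
    nx, ny, nz = volume.shape
    image = np.zeros((nx, ny, 3))
    transmittance = np.ones((nx, ny))
    for k in range(nz):                        # march front to back through the cube
        a = opacity(volume[:, :, k])           # per-sample opacity
        rgb = colormap(volume[:, :, k])        # per-sample color
        image += (transmittance * a)[..., None] * rgb
        transmittance *= (1.0 - a)             # light remaining after this slice
    return image

vol = np.random.rand(64, 64, 128)              # stand-in for a seismic amplitude cube
img = render_along_z(
    vol,
    colormap=lambda v: np.stack([v, v**2, 1 - v], axis=-1),
    opacity=lambda v: np.where(v > 0.8, 0.3, 0.0),  # keep only strong amplitudes
)
```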
Guedj, Mickaël. "Méthodes Statistiques pour l’analyse de données génétiques d’association à grande échelle." Evry-Val d'Essonne, 2007. http://www.biblio.univ-evry.fr/theses/2007/2007EVRY0015.pdf.
The increasing availability of dense Single Nucleotide Polymorphism (SNP) maps, due to rapid improvements in molecular biology and genotyping technologies, has recently led geneticists towards genome-wide association studies, with hopes of encouraging results concerning our understanding of the genetic basis of complex diseases. The analysis of such high-throughput data raises new statistical and computational problems, which constitute the main topic of this thesis. After a brief description of the main questions raised by genome-wide association studies, we deal with single-marker approaches through a power study of the main association tests. We then consider the use of multi-marker approaches by focusing on the method we developed, which relies on the local score. Finally, this thesis also deals with the multiple-testing problem: our local score-based approach circumvents this problem by reducing the number of tests; in parallel, we present an estimation of the local False Discovery Rate using a simple Gaussian mixture model.
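The local False Discovery Rate estimation mentioned at the end of this abstract can be illustrated with a short sketch: fit a two-component Gaussian mixture to association z-scores and read the posterior probability of the null component as the local FDR. The simulated data and the use of scikit-learn are illustrative assumptions, not the thesis's exact estimator.

```python
# Illustrative local FDR estimate from a two-component Gaussian mixture
# fitted on association z-scores; data and library choice are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 9000),    # null markers
                    rng.normal(3, 1, 1000)])   # truly associated markers

gm = GaussianMixture(n_components=2, random_state=0).fit(z.reshape(-1, 1))
null = int(np.argmin(np.abs(gm.means_.ravel())))   # component closest to 0 ~ null
local_fdr = gm.predict_proba(z.reshape(-1, 1))[:, null]
print("markers with local FDR < 0.05:", int((local_fdr < 0.05).sum()))
```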
Virazel, Arnaud. "Test intégré des circuits digitaux : analyse et génération de séquences aléatoires adjacentes." Montpellier 2, 2001. http://www.theses.fr/2001MON20094.
Albeau, Karine. "Analyse à grande échelle des textures des séquences protéiques via l'approche Hydrophobic Cluster Analysis (HCA)." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2005. http://tel.archives-ouvertes.fr/tel-00011139.
Bland, Céline. "Innovations pour l'annotation protéogénomique à grande échelle du vivant." Thesis, Montpellier 1, 2013. http://www.theses.fr/2013MON13508.
Proteogenomics is a recent field at the junction of genomics and proteomics which consists of refining the annotation of the genome of model organisms with the help of high-throughput proteomic data. Structural and functional errors are still frequent and have been reported on several occasions. Innovative methodologies to prevent such errors are therefore essential. N-terminomics enables experimental validation of initiation codons and certification of the annotation data. With this objective in mind, two innovative strategies have been developed combining: i) selective N-terminal labeling of proteins, ii) multienzymatic digestion in parallel, and iii) specific enrichment of the most N-terminal labeled peptides using either successive liquid chromatography steps or immunocapture directed towards the N-terminal label. The efficiency of these methodologies has been demonstrated using Roseobacter denitrificans as a bacterial model organism. After enrichment by chromatography, 480 proteins were validated and 46 re-annotated. Several translation initiation start sites were detected, and homology-driven annotation was challenged in some cases. After immunocapture, 269 proteins were characterized, of which 40% were identified specifically after enrichment. Three novel genes were also annotated for the first time. Complementary results obtained after tandem mass spectrometry analysis allow easier data interpretation, revealing the real translation initiation start sites of proteins and identifying novel expressed products. In this way, the re-annotation process may become automatic and systematic, improving protein databases.
Flaounas, Emmanouil. "Analyse de la mise en place de la mousson Africaine : dynamique régionale ou forçage de grande échelle ?" Phd thesis, Paris 6, 2010. http://www.theses.fr/2010PA066625.
Merroun, Omar. "Traitement à grand échelle des données symboliques." Paris 9, 2011. http://www.theses.fr/2011PA090027.
Symbolic Data Analysis (SDA) proposes a generalization of classical Data Analysis (DA) methods to complex data (intervals, sets, histograms). These methods define high-level, complex operators for symbolic data manipulation. However, recent implementations of the SDA model are not able to process large data volumes. Following the classical design of massive data computation, we define a new data model to represent and process symbolic data using algebraic operators that are minimal and closed under composition. We give some sample queries to emphasize the expressiveness of our model. We implement this algebraic model, called LS-SODAS, and we define the language XSDQL to express queries for symbolic data manipulation. Two case studies are provided in order to show the expressiveness of the XSDQL language and the scalability of the data processing.
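To give a concrete flavour of the interval-valued (symbolic) data such an algebra operates on, here is a toy Python sketch of two typical symbolic operators; the operator names are illustrative and are not LS-SODAS or XSDQL constructs.

```python
# Toy sketch of symbolic (interval-valued) data manipulation; the operators
# below are illustrative, not the thesis's algebra.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

def hull(intervals):
    """Aggregate a set of intervals into their enclosing interval."""
    return Interval(min(i.lo for i in intervals), max(i.hi for i in intervals))

def mean(intervals):
    """Component-wise mean, a typical symbolic descriptive statistic."""
    n = len(intervals)
    return Interval(sum(i.lo for i in intervals) / n,
                    sum(i.hi for i in intervals) / n)

# e.g., daily temperature ranges of three cities, summarized symbolically
data = [Interval(12, 19), Interval(9, 16), Interval(14, 23)]
print(hull(data), mean(data))
```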
Bolze, Raphaël. "Analyse et déploiement de solutions algorithmiques et logicielles pour des applications bioinformatiques à grande échelle sur la grille." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2008. http://tel.archives-ouvertes.fr/tel-00344249.
Aljr, Hasan. "Influence de la tranchée sur les chaussées en milieu urbain : Analyse des données d'une expérimentation à grande échelle." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10168/document.
The trench is a major cause of deterioration of urban pavements. It considerably reduces the service life of these pavements. The present work aims to study, through a large-scale experimentation, the influence of the trench on urban pavements. It was conducted within a collaboration between Lille Metropolis, the Civil and geo-Environmental Engineering laboratory and Eurovia. It includes four parts. The first part contains a literature review concerning the influence of the trench on urban pavements as well as its economic impact. The second part concerns the analysis of the performances of the instrumentation system used to monitor the behavior of the pavement and the trench built at the entrance of the Lille1 campus. The third part consists in the analysis of the structural behavior of the pavement and the trench using deflection tests. The last part presents an analysis of the responses of the pavement and the trench under traffic, and of the influence of temperature on these responses.
Van, Waerbeke Ludovic. "Analyse de la distribution de matière à grande échelle par les effets de lentille gravitationnelle et les quasars." Paris 11, 1997. http://www.theses.fr/1997PA112168.
Chevalier, Cyril. "Contribution au test intégré : générateurs de vecteurs de test mixtes déterministes et pseudo-aléatoires." Montpellier 2, 1994. http://www.theses.fr/1994MON20141.
Vrard, Mathieu. "Analyse sismique des géantes rouges : une vision détaillée à grande échelle de la structure du coeur et de l'enveloppe." Observatoire de Paris, 2015. https://theses.hal.science/tel-02167421.
Red giant stars correspond to one of the last evolutionary stages in the life of solar-like stars. They undergo severe changes in their internal structure as they evolve. These changes remain poorly constrained due to the difficulty of modeling the internal structure of red giants, especially during the last stages. A recent way to observationally constrain the internal structure of these stars derives from the analysis of their pulsations. This method is called asteroseismology. During three years, I used ultra-precise photometric data obtained by the Kepler satellite (NASA), which observed 15000 red giants during more than four years (from 2009 to 2013). The study of these continuous and extremely precise light curves allows the precise characterization of their stellar structure. The first study I conducted corresponds to the characterization of the properties of the region of second helium ionization. The location of this region was measured, and different properties between the different evolutionary states of the stars were brought to light. Secondly, I created an automated method to measure a seismic parameter directly linked to the size of the star's radiative core. I applied this method to the 15000 red giant stars present in the Kepler public data. This study brings new information on the way stars climb the red giant branch and evolve towards later evolutionary stages depending on their mass and metallicity. The results allowed me to reveal in several stars the signature of structure discontinuities in their radiative core.
Sridhar, Srivatsan. "Analyse statistique de la distribution des amas de galaxies à partir des grands relevés de la nouvelle génération." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4152/document.
I aim to study to which accuracy it is actually possible to recover the real-space two-point correlation function from cluster catalogues based on photometric redshifts. I make use of cluster sub-samples selected from a light-cone simulated catalogue. Photometric redshifts are assigned to each cluster by randomly drawing from a Gaussian distribution with a dispersion varied in the range σ(z=0) = 0.005 to 0.050. The correlation function in real space is computed through a deprojection method. Four mass ranges and six redshift slices covering the redshift range 0
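The two-point correlation function at the heart of this work is conventionally estimated from data and random pair counts with the Landy-Szalay estimator; the sketch below shows that standard estimator on toy 3D positions (the thesis's photometric-redshift deprojection is not reproduced, and the box and binning are illustrative).

```python
# Standard Landy-Szalay estimator of the two-point correlation function on
# toy positions; box size and bins are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def landy_szalay(data, randoms, edges):
    """data, randoms: (N, 3) positions; edges: separation bin edges."""
    td, tr = cKDTree(data), cKDTree(randoms)
    nd, nr = len(data), len(randoms)
    # cumulative ordered-pair counts up to each edge, differenced into bins
    dd = np.diff(td.count_neighbors(td, edges)) / (nd * (nd - 1))
    rr = np.diff(tr.count_neighbors(tr, edges)) / (nr * (nr - 1))
    dr = np.diff(td.count_neighbors(tr, edges)) / (nd * nr)
    return (dd - 2 * dr + rr) / rr

rng = np.random.default_rng(1)
data = rng.uniform(0, 100, (2000, 3))      # toy "cluster" positions
randoms = rng.uniform(0, 100, (8000, 3))   # unclustered reference catalogue
xi = landy_szalay(data, randoms, edges=np.linspace(1, 30, 11))
print(xi)                                  # ~0 everywhere for unclustered data
```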
Fernandez-Abrevaya, Victoria. "Apprentissage à grande échelle de modèles de formes et de mouvements pour le visage 3D." Electronic Thesis or Diss., Université Grenoble Alpes, 2020. https://theses.hal.science/tel-03151303.
Data-driven models of the 3D face are a promising direction for capturing the subtle complexities of the human face, and a central component of numerous applications thanks to their ability to simplify complex tasks. Most data-driven approaches to date were built from either a relatively limited number of samples or by synthetic data augmentation, mainly because of the difficulty in obtaining large-scale and accurate 3D scans of the face. Yet, there is a substantial amount of information that can be gathered when considering publicly available sources that have been captured over the last decade, whose combination can potentially bring forward more powerful models. This thesis proposes novel methods for building data-driven models of the 3D face geometry, and investigates whether improved performances can be obtained by learning from large and varied datasets of 3D facial scans. In order to make efficient use of a large number of training samples we develop novel deep learning techniques designed to effectively handle three-dimensional face data. We focus on several aspects that influence the geometry of the face: its shape components including fine details, its motion components such as expression, and the interaction between these two subspaces. We develop in particular two approaches for building generative models that decouple the latent space according to natural sources of variation, e.g. identity and expression. The first approach considers a novel deep autoencoder architecture that allows learning a multilinear model without requiring the training data to be assembled as a complete tensor. We next propose a novel non-linear model based on adversarial training that further improves the decoupling capacity. This is enabled by a new 3D-2D architecture combining a 3D generator with a 2D discriminator, where both domains are bridged by a geometry mapping layer. As a necessary prerequisite for building data-driven models, we also address the problem of registering a large number of 3D facial scans in motion. We propose an approach that can efficiently and automatically handle a variety of sequences while making minimal assumptions on the input data. This is achieved by the use of a spatiotemporal model as well as a regression-based initialization, and we show that we can obtain accurate registrations in an efficient and scalable manner. Finally, we address the problem of recovering surface normals from natural images, with the goal of enriching existing coarse 3D reconstructions. We propose a method that can leverage all available image and normal data, whether paired or not, thanks to a new cross-modal learning architecture. Core to our approach is a novel module that we call deactivable skip connections, which allows transferring the local details from the image to the output surface without hurting the performance when autoencoding modalities, achieving state-of-the-art results for the task.
Veber, Philippe. "Modélisation grande échelle de réseaux biologiques : vérification par contraintes booléennes de la cohérence des données." Phd thesis, Université Rennes 1, 2007. http://tel.archives-ouvertes.fr/tel-00185895.
Hamdi-Larbi, Olfa. "Etude de la distribution, sur système à grande échelle, de calcul numérique traitant des matrices creuses compressées." Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0018.
Several scientific applications often use kernels performing computations on large sparse matrices. For reasons of efficiency in time and space, specific compression formats are used for storing such matrices. Most sparse scientific computations address sparse linear algebra problems, where two fundamental problems are often considered, i.e., linear system resolution (LSR) and matrix eigenvalue/eigenvector computation (EVC). In this thesis, we address the problem of distributing, onto a Large Scale Distributed System (LSDS), the computations performed in iterative methods for both LSR and EVC. The sparse matrix-vector product (SMVP) constitutes a basic kernel in such iterative methods. Thus, our problem reduces to the study of the SMVP distribution on an LSDS. In principle, three phases are required for achieving this kind of application, namely pre-processing, processing and post-processing. In phase 1, we first proceed to the optimization of four versions of the SMVP algorithm corresponding to four specific matrix compression formats, then study their performances on sequential target machines. In addition, we focus on the study of load balancing in the procedure of distributing the data (i.e., the sparse matrix rows) on an LSDS. Concerning the processing phase, it consists in validating the previous study by a series of experiments achieved on a volunteer distributed system that we installed using the XtremWeb-CH middleware. As for the post-processing phase, it consists in interpreting the experimental results previously obtained in order to deduce adequate conclusions.
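A minimal sketch of the SMVP kernel in one common compression format, CSR (Compressed Sparse Row), follows; rows are the natural units of work that such a study distributes over the nodes of a large-scale system. The format choice and example matrix are illustrative, not taken from the thesis.

```python
# Plain-Python sketch of the sparse matrix-vector product (SMVP) in CSR
# format; the example matrix is illustrative.
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """Compute y = A @ x with A stored as CSR arrays."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):                        # each row = one unit of work
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[10, 0, 2],
#      [ 0, 3, 0]]
values, col_idx, row_ptr = [10, 2, 3], [0, 2, 1], [0, 2, 3]
print(csr_spmv(values, col_idx, row_ptr, np.ones(3)))  # -> [12. 3.]
```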
Leduc, Mélissa. "Analyse de l’apprentissage de formateurs et d’entraîneurs participant au Programme national de certification des entraîneurs." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/20667.
Brault, Florence. "Recherche et analyse de lentilles gravitationnelles fortes à échelle galactique dans les grands relevés." Paris 6, 2012. http://www.theses.fr/2012PA066567.
This thesis focuses on the development of a novel detector of strong galaxy-galaxy lenses based on the massive modeling of candidates in wide-field ground-based imaging data. Indeed, not only are these events rare in the Universe, but they are at the same time very valuable for understanding galaxy formation and evolution in a cosmological context. We use parametric models, which are optimized by MCMC in a Bayesian framework, so that we know the distribution of errors. We first generate several training samples: a hundred lenses simulated in HST and CFHT conditions, along with 325 observed lens candidates resulting from a series of preselections on the CFHTLS-Wide galaxies, which we classify according to their credibility. The whole challenge in designing this detector lies in a subtle balance between the quality of the models and the execution time. We massively run the modeling on our samples, beginning with ideal application conditions that we make more complex by stages so as to get closer to the observation conditions and save time. We show that a 7-parameter model assuming a spherical source can recover the Einstein radius from the CFHT simulations with a precision of ~7%. We apply a mask to the input data that noticeably enhances the robustness of the models against environment problems, with a median convergence time of 4 minutes that could easily be reduced by a factor of 10 with more direct optimization techniques. From our results, we define selection contours in the parameter space, resulting in a completeness of ~38% and a purity of ~55% for the sample of 51 candidates accepted by our robot among the 325 preselected systems.
Emery, Charlotte. "Contribution de la future mission altimétrique à large fauchée SWOT pour la modélisation hydrologique à grande échelle." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30034/document.
The scientific objective of this PhD work is to improve the estimation of water fluxes on continental surfaces, at interannual and interseasonal scales (from a few years to decennial time periods). More specifically, it studies the contribution of remotely-sensed measurements to the improvement of hydrology models. Notably, this work focuses on the upcoming SWOT mission (Surface Water and Ocean Topography, launch scheduled for 2021) for the study of the continental water cycle at global scale, using the land surface model ISBA-TRIP. In this PhD work, I explore the potential of satellite data to correct both the input parameters of the river routing scheme TRIP and its state variables. To do so, a data assimilation platform has been set up to assimilate SWOT virtual observations as well as discharge estimated from real nadir altimetry data. Beforehand, it was necessary to carry out a sensitivity analysis of the TRIP model to its parameters. The aim of this study was to highlight the parameters with the most impact on SWOT-observed variables and therefore select the ones to correct via data assimilation. The sensitivity analysis (ANOVA) was conducted on TRIP's main parameters, over the Amazon basin. The results showed that the simulated water levels are sensitive to local geomorphological parameters exclusively. On the other hand, the simulated discharges are sensitive to upstream parameters (according to the TRIP river routing network) and more particularly to the groundwater time constant. Finally, water anomalies present sensitivities similar to those of the water levels but with more pronounced temporal variations. These results also led me to make certain choices in the implementation of the assimilation scheme, and have been published. In the second part of my PhD, I therefore focused on developing a data assimilation platform based on an Ensemble Kalman Filter (EnKF). It can either correct the model input parameters or directly its state. A series of twin experiments is used to test and validate the parameter estimation module of the platform. SWOT virtual observations of water heights and anomalies along SWOT tracks are assimilated to correct the river Manning coefficient, with the possibility of easily extending to other parameters. First results show that the platform is able to recover the "true" Manning distribution by assimilating SWOT-like water heights and anomalies. In state estimation mode, daily assimilation cycles are realized to correct the initial state of the TRIP river water storage by assimilating ENVISAT-based discharge. Those observations are derived from ENVISAT water elevation measurements, using rating curves from the MGB-IPH hydrological model (calibrated over the Amazon using in situ gauge discharges). Using this kind of observation allows going beyond idealized twin experiments and also testing the contribution of a remotely-sensed discharge product, which could prefigure the SWOT discharge product. The results show that discharges after assimilation are globally improved: the root-mean-square error between the analysis discharge ensemble mean and in situ discharges is reduced by 28% compared to the root-mean-square error between the free run and in situ discharges (the RMSEs are respectively equal to 2.79 × 10³ m³/s and 1.98 × 10³ m³/s).
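A single stochastic EnKF analysis step of the kind used by such a platform can be sketched in a few lines; here a scalar Manning-like parameter is updated from one water-height observation through a toy observation operator. The operator, numbers and ensemble size are illustrative assumptions, far simpler than the real ISBA-TRIP/SWOT chain.

```python
# One stochastic EnKF analysis step for a scalar parameter; the observation
# operator h is a toy stand-in, not ISBA-TRIP.
import numpy as np

rng = np.random.default_rng(2)
ens = rng.normal(0.05, 0.01, 50)                # prior parameter ensemble

def h(p):
    return 120.0 * p ** 0.6                     # toy operator: water height(param)

y_obs, r = 21.0, 0.5 ** 2                       # observed height, error variance
hx = h(ens)
gain = np.cov(ens, hx)[0, 1] / (np.var(hx, ddof=1) + r)    # Kalman gain
perturbed = y_obs + rng.normal(0.0, np.sqrt(r), ens.size)  # perturbed observations
analysis = ens + gain * (perturbed - hx)        # updated (analysis) ensemble
print(f"prior mean {ens.mean():.4f} -> analysis mean {analysis.mean():.4f}")
```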
Daher, Petra. "Analyse spatio-temporelle des structures à grande échelle dans les écoulements confinés : cas de l'aérodynamique interne dans un moteur à allumage commandé." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMR100/document.
The unsteady evolution of three-dimensional large-scale flow structures can often lead to a decrease in the performance of energetic systems. This is the case for the cycle-to-cycle variations occurring in the internal combustion engine. Despite the substantial advancement made by numerical simulations in fluid mechanics, experimental measurements remain a requirement to validate any numerical model of a physical process. In this thesis, two types of particle image velocimetry (PIV) were applied and adapted to the optical engine test bench of the Coria laboratory in order to study the in-cylinder flow under six operating conditions. First, Time-Resolved PIV (2D2C) allowed temporal tracking of the in-cylinder flow and the identification of cyclic variabilities. Then, tomographic PIV (3D3C) allowed extending the measured data to the three-dimensional domain. The Tomo-PIV setup consisted of 4 cameras in angular positioning, visualizing a confined environment with restricted optical access and important optical deformations. This required particular attention to the 3D calibration process of the camera models. 2D and 3D conditional analyses of the flow were performed using proper orthogonal decomposition (POD), which allows the different scales of flow structures to be separated, and the Γ criterion, which allows the identification of vortex centres.
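The POD used for these conditional analyses can be sketched with the classical snapshot method via an SVD of the fluctuating fields; the random data below merely stand in for flattened PIV velocity snapshots.

```python
# Snapshot POD via SVD: rows of vt are spatial modes, s**2 ranks their energy.
# Random data stand in for flattened PIV velocity fields.
import numpy as np

rng = np.random.default_rng(7)
snapshots = rng.random((200, 4096))            # 200 snapshots, 4096 grid values each
mean_flow = snapshots.mean(axis=0)
fluct = snapshots - mean_flow                  # POD is applied to the fluctuations

u, s, vt = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by first 5 modes:", energy[:5].sum())

# low-order reconstruction of snapshot 0 from the 5 most energetic modes
recon = mean_flow + (u[0, :5] * s[:5]) @ vt[:5]
```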
Gasser, Jean-Luc. "Analyse de signature des circuits intégrés complexes par test aléatoire utilisant les méthodes de traitement du signal : application à un microprocesseur." Toulouse, INPT, 1986. http://www.theses.fr/1986INPT079H.
Vanhauwaert, Pierre. "Analyse de sûreté par injection de fautes dans un environnement de prototypage à base de FPGA." Grenoble INPG, 2008. http://www.theses.fr/2008INPG0039.
Technology downscaling increases the sensitivity of integrated circuits to perturbations (particle strikes, loss of signal integrity…). The erroneous behaviour of a circuit can be unacceptable, and a dependability analysis at a high abstraction level enables the selection of the most efficient protections and limits the timing overhead induced by a possible rework. This PhD aims at developing a methodology and an environment that improve the dependability analysis of digital integrated circuits. The proposed approach uses a hardware prototype of an instrumented version of the design to be analyzed. The environment includes three levels of execution, including an embedded software level, which speeds up the experiments while keeping an important flexibility: the user can obtain the best trade-off between the complexity of the analysis and the duration of the experiments. We also propose new techniques for the instrumentation and for the injection control in order to improve the performance of the environment. A predictive evaluation of the performance informs the designer of the most influential parameters and of the analysis duration for a given design and a given implementation of the environment. Finally, the methodology is applied to the analysis of two significant systems, including a hardware/software system built around a SparcV8 processor.
Aouad, Lamine. "Contribution à l'algorithmique matricielle et évaluation de performances sur les grilles de calcul, vers un modèle de programmation à grande échelle." Lille 1, 2005. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2005/50376-2005-Aouad.pdf.
Rombouts, Isabelle. "Variation géographique à grande échelle de la diversité des copépodes par rapport à la variabilité environnementale." Paris 6, 2009. http://www.theses.fr/2009PA066607.
Limoge, Claire. "Méthode de diagnostic à grande échelle de la vulnérabilité sismique des Monuments Historiques : Chapelles et églises baroques des hautes vallées de Savoie." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN014/document.
The aim of this thesis is to propose a seismic vulnerability assessment method well suited to the study of a complete historical heritage, regardless of the prestige of each building. Indeed, the great seismic vulnerability of the historical heritage, often in masonry, requires preventive action in order to avoid irreparable damage. Our approach must meet three main requirements: develop large-scale tools to prioritize the needs, provide a relevant analysis of the seismic behavior at the structural scale even in a first study, and manage the large number of uncertainties characterizing the assessment of old buildings. To this aim, we study the baroque churches and chapels in the high valleys of the French Savoie. They bear witness to a particularly prosperous period in the history of Savoy and to a unique artistic movement adapted to a harsh environment. In this context we have therefore developed or adapted different tools in order to handle the peculiarities of old buildings. This way we can apply the techniques proposed today for modern buildings to these ancient buildings in rustic masonry: non-linear dynamic numerical modeling in the time domain, in situ vibratory measurements, and non-linear multi-modal analysis.
Richard, Angélique. "Analyse de la variabilité de l’expression génique et du métabolisme glycolytique au cours du processus de différenciation érythrocytaire : de l’analyse à grande échelle aux questions mécanistiques." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1058/document.
Cell decision making refers to the capacity of every living cell to integrate environmental information and to transform it into a coherent biological response. Nowadays it is increasingly demonstrated that cell populations present a significant quantitative and qualitative heterogeneity that could be involved in the functioning of living organisms. Thus, the first part of my thesis consisted in studying gene expression variability at the single-cell level during the differentiation process of primary avian erythroid progenitor cells. The expression of 92 genes was analyzed using RT-qPCR in cells isolated at different differentiation time-points. The main results of this study showed that gene expression variability, as measured by Shannon entropy, reached a maximal level, simultaneously with a drop in the number of correlated genes, at 8-24h of differentiation. This increase of gene expression variability preceded the irreversible commitment of cells into differentiation, identified between 24h and 48h. This analysis also highlighted the potential importance of LDHA (lactate dehydrogenase A), encoding a glycolytic enzyme, in erythroid progenitor self-renewal and at the critical 8-24h differentiation time-point. The second part of my thesis therefore consisted in analyzing the role of LDHA in erythroid progenitor self-renewal and the variations of glucose metabolism during the differentiation process. Our first results suggested that erythroid differentiation might be accompanied by a metabolic change, corresponding to a switch from anaerobic glycolysis, depending upon LDHA, toward aerobic energy production relying upon oxidative phosphorylation.
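The Shannon-entropy measure of expression variability used in this work can be sketched simply: expression values of one gene across single cells are binned and the entropy of the resulting distribution is computed. The bin count and simulated data are illustrative assumptions.

```python
# Shannon entropy of one gene's expression across single cells: discretize
# the values, then compute the entropy of the histogram (in bits).
import numpy as np

def expression_entropy(values, bins=10):
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(3)
homogeneous = rng.normal(5.0, 0.2, 90)         # cells agree on one expression level
heterogeneous = rng.uniform(0.0, 10.0, 90)     # cells spread over many levels
print(expression_entropy(homogeneous), "<", expression_entropy(heterogeneous))
```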
Ammari, Abdelaziz. "Analyse de sûreté des circuits complexes décrits en langage de haut niveau." Grenoble INPG, 2006. https://tel.archives-ouvertes.fr/tel-00101622.
The probability of transient faults increases with the evolution of the technologies. Several approaches have been proposed to analyze the impact of these faults on a digital circuit early in the design flow. In particular, it is possible to use an approach based on the injection of faults in an RT-Level VHDL description. In this thesis, we make several contributions to this type of analysis. A first aspect considered is taking the digital circuit's environment into account during the injection campaigns. An approach based on multi-level dependability analysis has thus been developed and applied to an example. The injections are performed in the digital circuit described at the RT-Level while the rest of the system is described at a higher level of abstraction. The analysis of the results shows that failures appearing at the circuit level may in fact have no impact on the system. We then present the advantages of combining two types of analyses: classification of faults with respect to their effects, and a more detailed analysis of the error configurations activated in the circuit. An injection campaign of SEU-like faults was performed on an 8051 microcontroller described at RT-Level. The results show that the combination of the two types of analyses allows a designer to localize the critical points, facilitating the hardening stage. They also show that, in the case of a general-purpose processor, the error configurations can depend on the executed program. This study also demonstrates that injecting a very small percentage of the possible faults gives useful information to the designer. The same methodology has been used to validate the robustness obtained with software hardening. The results show that some faults are not detected by the implemented mechanisms, although those had previously been validated by fault injections based on an instruction set simulator. The last aspect of this thesis concerns fault injection in analog blocks, a subject covered by very few works. We thus propose a global analysis flow for digital, analog or mixed circuits described at behavioral level, and discuss the possibility of injecting faults in analog blocks. The results obtained on a PLL, chosen as a case study, have been analyzed and show the feasibility of fault injection in analog blocks. To validate this flow, fault injections were also performed at transistor level and compared to those performed at high level, showing a good correlation between the results obtained at the two levels.
Al, Shaer Ali. "Analyse des déformations permanentes des voies ferrées : approche dynamique." Marne-la-vallée, ENPC, 2005. https://pastel.archives-ouvertes.fr/pastel-00001592.
Faucher, Florian. "Contributions à l'imagerie sismique par inversion des formes d'onde pour les équations d'onde harmoniques : Estimation de stabilité, analyse de convergence, expériences numériques avec algorithmes d'optimisation à grande échelle." Thesis, Pau, 2017. http://www.theses.fr/2017PAUU3024/document.
In this project, we investigate the recovery of subsurface Earth parameters. We consider seismic imaging as a large-scale iterative minimization problem, and deploy the Full Waveform Inversion (FWI) method, for which several aspects must be treated. The reconstruction is based on the wave equations because the characteristics of the measurements indicate the nature of the medium in which the waves propagate. First, the natural heterogeneity and anisotropy of the Earth require numerical methods that are adapted and efficient to solve the wave propagation problem. In this study, we have decided to work with the harmonic formulation, i.e., in the frequency domain. Therefore, we detail the mathematical equations involved and the numerical discretization used to solve the wave equations in large-scale situations. The inverse problem is then established in order to frame the seismic imaging. It is a nonlinear and ill-posed inverse problem by nature, due to the limited available data and the complexity of the subsurface characterization. However, we obtain a conditional Lipschitz-type stability in the case of piecewise constant model representation. We derive the lower and upper bounds for the underlying stability constant, which allows us to quantify the stability with frequency and scale. This is of great use for the underlying optimization algorithm involved in solving the seismic problem. We review the foundations of iterative optimization techniques and present the different methods that we have used in this project. The Newton method, due to the numerical cost of inverting the Hessian, may not always be accessible. We propose some comparisons to identify the benefits of using the Hessian, in order to study what would be an appropriate procedure regarding accuracy and time. We study the convergence of the iterative minimization method, depending on different aspects such as the geometry of the subsurface, the frequency, and the parametrization. In particular, we quantify the frequency progression, from the point of view of optimization, by showing how the size of the basin of attraction evolves with frequency. Following the convergence and stability analysis of the problem, the iterative minimization algorithm is conducted via a multi-level scheme where frequency and scale progress simultaneously. We perform a collection of experiments, including acoustic and elastic media, in two and three dimensions. The perspectives of attenuation and anisotropic reconstructions are also introduced. Finally, we study the case of Cauchy data, motivated by the dual-sensor devices that are developed in the geophysical industry. We derive a novel cost function, which arises from the stability analysis of the problem. It allows elegant perspectives where no prior information on the acquisition set is required.
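The multi-level frequency progression described here can be sketched structurally: an outer loop over increasing frequencies, each level running an inner iterative minimization of the data misfit. The forward map below is a toy placeholder, not a Helmholtz solver, and the step size is an illustrative choice.

```python
# Structural sketch of multi-level FWI: low-to-high frequency continuation,
# each level minimizing the misfit iteratively. Toy forward map, not a
# Helmholtz solver; step size is an illustrative assumption.
import numpy as np

def misfit_and_gradient(model, freq, observed):
    pred = np.sin(freq * model)                  # toy frequency-domain forward map
    resid = pred - observed
    grad = freq * np.cos(freq * model) * resid   # gradient of 0.5*||resid||^2
    return 0.5 * np.sum(resid**2), grad

true_model = np.linspace(0.5, 1.5, 50)
model = np.full(50, 1.0)                         # smooth initial guess

for freq in [1.0, 2.0, 4.0, 8.0]:                # frequency progression
    observed = np.sin(freq * true_model)         # "recorded" data at this frequency
    for _ in range(200):                         # inner descent iterations
        j, g = misfit_and_gradient(model, freq, observed)
        model -= (0.5 / freq**2) * g             # step scaled to curvature ~ freq^2
print("final misfit:", j)
```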
Gehan, Charlotte. "Évolution de la rotation du cœur des étoiles sur la branche des géantes rouges : des mesures à grande échelle vers une caractérisation du transport de moment cinétique." Electronic Thesis or Diss., Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEO021.
Asteroseismology consists in probing stellar interiors through the detection of seismic waves. Red giants are evolved low-mass stars that have exhausted hydrogen in their core. These stars are solar-type pulsators presenting mixed modes that give us direct access to the physical properties of their core. The available seismic measurements indicate that one or several mechanisms that remain poorly understood counterbalance the acceleration of the core rotation, resulting from its contraction, by transporting angular momentum. The greatest part of this PhD thesis was devoted to the development of a method allowing a measurement, as automated as possible, of the mean core rotation of stars on the red giant branch that were observed by the Kepler satellite (NASA). The measurements derived for almost 900 stars highlight that the core rotation is almost constant along the red giant branch, with values largely independent of the stellar mass. The second part of this PhD thesis is devoted to the interpretation of these results based on stellar modelling. The challenge consists in using the large-scale measurements obtained in the first part to characterise the quantity of angular momentum that has to be extracted from each layer of the core, at different timesteps on the red giant branch, for different stellar masses.
Peloton, Julien. "Data analysis and scientific exploitation of the CMB B-modes experiment, POLARBEAR." Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCC154.
Over the last two decades cosmology has been transformed from a data-starved to a data-driven, high-precision science. This transformation happened thanks to improved observational techniques, allowing progressively bigger and more powerful data sets to be collected. Studies of the Cosmic Microwave Background (CMB) anisotropies have played, and continue to play, a particularly important and impactful role in this process. The huge data sets produced by recent CMB experiments pose new challenges for the field due to their volume and complexity. Their successful resolution requires combining mathematical, statistical and computational methods, all of which form a keystone of modern CMB data analysis. In this thesis, I describe the data analysis of the first data set produced by one of the most advanced current CMB experiments, POLARBEAR, and the major results it produced. The POLARBEAR experiment is a leading CMB B-mode polarization experiment aiming at the detection and characterization of the so-called B-mode signature of the CMB polarization. This is one of the most exciting topics in current CMB research, which has only just started yielding new insights into cosmology, in part thanks to the results discussed hereafter. In this thesis I first describe the modern cosmological model, focusing on the physics of the CMB, in particular its polarization properties, and providing an overview of past experiments and results. Subsequently, I present the POLARBEAR instrument, the data analysis of its first-year data set and the scientific results drawn from it, emphasizing my major contributions to the overall effort. In the last chapter, and in the context of the next generation of CMB B-mode experiments, I present a more systematic study of the impact of the presence of the so-called E-to-B leakage on the performance forecasts of CMB B-mode experiments, by comparing several methods including the pure pseudospectrum method and the minimum-variance quadratic estimator. In particular, I detail how the minimum-variance quadratic estimator in the case of azimuthally symmetric patches can be used to estimate parameters efficiently, and I present an efficient implementation based on existing parallel algorithms for computing spherical harmonic transforms.
Beck, Dominic. "Challenges in CMB Lensing Data Analysis and Scientific Exploitation of Current and Future CMB Polarization Experiments." Thesis, Université de Paris (2019-....), 2019. https://wo.app.u-paris.fr/cgi-bin/WebObjects/TheseWeb.woa/wa/show?t=3973&f=25502.
Next-generation cosmic microwave background (CMB) measurements will further establish the field of cosmology as a high-precision science and continue opening new frontiers of fundamental physics. Cosmic-variance-limited measurements not only of the CMB temperature but also of its polarization down to arcminute scales will allow for precise measurements of our cosmological model, which is sensitive to the elusive physics of dark matter, dark energy and neutrinos. Furthermore, a large-scale measurement of B-mode CMB polarization permits a determination of the power of primordial gravitational waves, generated by processes potentially happening in the very early universe at energies close to the scale of the Grand Unified Theory. Entering a new sensitivity regime entails the necessity to improve our physical understanding and analysis methods of astronomical and instrumental systematics. This thesis presents within this context several analyses of potential astronomical and instrumental systematics, primarily focusing on CMB measurements related to weak gravitational lensing. The latter distorts the path of the primary CMB's photons, such that the statistical properties of the measured signal deviate from those of the primary signal and hence have to be accounted for. This thesis describes the underlying physics, analysis methods and applications to current data sets of the POLARBEAR CMB experiment in the context of CMB lensing science. This thesis shows that future high-precision measurements of CMB lensing have to account for the high complexity of this effect, primarily caused by multiple deflections within an evolving, non-linear large-scale structure distribution. Furthermore, the impact of higher-order correlations introduced by galactic foregrounds and CMB lensing when jointly analyzing CMB data sets on both large and small scales is investigated, showing the need for small-scale multi-frequency observations and foreground removal techniques to obtain an unbiased estimate of the tensor-to-scalar ratio.
Al, shaer Ali. "Analyse des déformations permanentes des voies ferrées ballastées - Approche dynamique." Phd thesis, Ecole des Ponts ParisTech, 2005. http://pastel.archives-ouvertes.fr/pastel-00001592.
Gehan, Charlotte. "Évolution de la rotation du cœur des étoiles sur la branche des géantes rouges : des mesures à grande échelle vers une caractérisation du transport de moment cinétique." Thesis, Paris Sciences et Lettres, 2018. http://www.theses.fr/2018PSLEO021/document.
Asteroseismology consists in probing stellar interiors through the detection of seismic waves. Red giants are evolved low-mass stars that have exhausted hydrogen in their core. These stars are solar-type pulsators presenting mixed modes that give us direct access to the physical properties of their core. The available seismic measurements indicate that one or several mechanisms that remain poorly understood counterbalance the acceleration of the core rotation, resulting from its contraction, by transporting angular momentum. The greatest part of this PhD thesis was devoted to the development of a method allowing a measurement, as automated as possible, of the mean core rotation of stars on the red giant branch that were observed by the Kepler satellite (NASA). The measurements derived for almost 900 stars highlight that the core rotation is almost constant along the red giant branch, with values largely independent of the stellar mass. The second part of this PhD thesis is devoted to the interpretation of these results based on stellar modelling. The challenge consists in using the large-scale measurements obtained in the first part to characterise the quantity of angular momentum that has to be extracted from each layer of the core, at different timesteps on the red giant branch, for different stellar masses.
Lamarche-Perrin, Robin. "Analyse macroscopique des grands systèmes : émergence épistémique et agrégation spatio-temporelle." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00933186.
Full textLoko, Houdété Odilon. "Analyse de l'impact d'une intervention à grande échelle avec le modèle de risques proportionnels de Cox avec surplus de zéros : application au projet Avahan de lutte contre le VIH/SIDA en Inde." Thesis, Université Laval, 2014. http://www.theses.ulaval.ca/2014/30546/30546.pdf.
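The Cox proportional hazards model named in this title can be sketched through its partial likelihood, which the thesis extends with an excess-of-zeros component; the bare-bones version below, on toy data, omits ties handling and the zero-inflation part entirely.

```python
# Bare-bones Cox proportional hazards partial likelihood on toy data (no ties
# handling; censoring flags drawn at random for simplicity). The thesis's
# excess-of-zeros extension is NOT implemented here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = rng.normal(size=(200, 2))                               # covariates
beta_true = np.array([0.8, -0.5])
t = rng.exponential(1.0 / np.exp(x @ beta_true))            # event times
event = rng.random(200) < 0.7                               # ~30% censored

def neg_partial_loglik(beta):
    order = np.argsort(t)                                   # ascending event times
    eta, ev = (x @ beta)[order], event[order]
    # log of risk-set sums: log sum_{j >= i} exp(eta_j), via a reversed scan
    log_risk = np.logaddexp.accumulate(eta[::-1])[::-1]
    return -np.sum((eta - log_risk)[ev])

beta_hat = minimize(neg_partial_loglik, np.zeros(2)).x
print("estimated log hazard ratios:", beta_hat)             # ~ [0.8, -0.5]
```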
Van, Den Beek Marius. "Piwi-dependent transcriptional silencing and Dicer-2-dependent post-transcriptional silencing limit transposon expression in adult heads of Drosophila Melanogaster." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066153/document.
Transposable elements (TEs) are major components of eukaryotic genomes and have been proposed as important drivers of gene network evolution, as they can move or "transpose" in their host genome, creating gene duplications, gene inactivations, or altogether altering gene function. Nevertheless, uncontrolled high-rate transposition leads to DNA damage and genomic instabilities, and therefore needs to be kept at a low level. In the fruit fly Drosophila melanogaster, transposition is counteracted by multiple mechanisms, amongst which the generation of small interfering RNAs (siRNAs) and Piwi-interacting RNAs (piRNAs). siRNAs and piRNAs belong to the category of small RNAs, which are involved in the negative regulation of the abundance of complementary target RNAs, but siRNAs and piRNAs have distinct mechanisms of biogenesis, target recognition and target regulation. Notably, piRNAs are only abundant in gonads and are transmitted to the embryo. By sequencing small RNAs and normal transcripts in adult heads, I conclude that, while piRNAs are likely absent in adult heads, they induce a repressive state on TEs. If this repressive state is lost, the siRNA pathway can compensate and limit transposable element levels. If siRNAs are lost, the repressive state induced by piRNAs suffices to limit transposable element levels. If both piRNAs and siRNAs are lost, the expression level of transposable elements increases, and flies have a shorter life span. The requirement to analyse large-scale sequencing data led to the development of multiple tools for the reproducible research platform Galaxy.
Conard, Didier. "Traitement d'images en analyse de défaillances de circuits intégrés par faisceau d'électrons." Grenoble INPG, 1991. http://tel.archives-ouvertes.fr/tel-00339510.
Ferrigno, Julie. "Caractérisation de circuits intégrés par émission de lumière statique et dynamique." Thesis, Bordeaux 1, 2008. http://www.theses.fr/2008BOR13719/document.
VLSI ("Very Large Scale Integration") and ULSI ("Ultra Large Scale Integration") circuits take the most important place in the semiconductor domain. Their complexification keeps growing, driven by ever-increasing demand from manufacturers in domains such as automotive or space applications. However, this complexity generates a lot of defects inside the components. We need to predict, or to detect and analyze, these defects in order to stop these phenomena. Many failure analysis techniques have been developed in laboratories and are still used. Nevertheless, we developed a new approach to the failure analysis process: fault simulation for CMOS integrated circuits. This particular kind of approach allows us to carry out the analysis in a more effective and easier way than usual. The simulations also play a predictive role for structures of MOS transistors.
Calandriello, Daniele. "Efficient sequential learning in structured and constrained environments." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10216/document.
The main advantage of non-parametric models is that the accuracy of the model (degrees of freedom) adapts to the number of samples. The main drawback is the so-called "curse of kernelization": to learn the model we must first compute a similarity matrix among all samples, which requires quadratic space and time and is unfeasible for large datasets. Nonetheless, the underlying effective dimension (effective d.o.f.) of the dataset is often much smaller than its size, and we can replace the dataset with a subset (dictionary) of highly informative samples. Unfortunately, fast data-oblivious selection methods (e.g., uniform sampling) almost always discard useful information, while data-adaptive methods that provably construct an accurate dictionary, such as ridge leverage score (RLS) sampling, have a quadratic time/space cost. In this thesis we introduce a new single-pass streaming RLS sampling approach that sequentially constructs the dictionary, where each step compares a new sample only with the current intermediate dictionary and not with all past samples. We prove that the size of all intermediate dictionaries scales only with the effective dimension of the dataset, and therefore guarantee a per-step time and space complexity independent of the number of samples. This reduces the overall time required to construct provably accurate dictionaries from quadratic to near-linear, or even logarithmic when parallelized. Finally, for many non-parametric learning problems (e.g., K-PCA, graph SSL, online kernel learning) we show that we can use the generated dictionaries to compute approximate solutions in near-linear time that are both provably accurate and empirically competitive.
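Exact ridge leverage score (RLS) sampling, the data-adaptive baseline this thesis accelerates, can be sketched directly; note that the O(n³) matrix inverse below is precisely the cost the streaming approach avoids. The kernel and data are illustrative.

```python
# Exact ridge leverage score (RLS) sampling of a dictionary. The O(n^3)
# inverse is the cost the thesis's streaming method avoids; shown here
# only to make the definition concrete.
import numpy as np

def ridge_leverage_scores(K, lam):
    """tau_i = [K (K + lam I)^{-1}]_ii for kernel matrix K."""
    n = K.shape[0]
    return np.diag(K @ np.linalg.inv(K + lam * np.eye(n)))

rng = np.random.default_rng(5)
pts = rng.normal(size=(300, 2))
sq = ((pts[:, None] - pts[None]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)                        # Gaussian kernel matrix

tau = ridge_leverage_scores(K, lam=1.0)
p = tau / tau.sum()                          # sample proportionally to the RLS
dict_idx = rng.choice(len(pts), size=30, replace=False, p=p)
print("effective dimension ~", tau.sum())
```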
Van, Den Beek Marius. "Piwi-dependent transcriptional silencing and Dicer-2-dependent post-transcriptional silencing limit transposon expression in adult heads of Drosophila Melanogaster." Electronic Thesis or Diss., Paris 6, 2015. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2015PA066153.pdf.
Transposable elements (TEs) are major components of eukaryotic genomes and have been proposed as important drivers of gene network evolution, as they can move or "transpose" in their host genome, creating gene duplications, gene inactivations, or altogether altering gene function. Nevertheless, uncontrolled high-rate transposition leads to DNA damage and genomic instabilities, and therefore needs to be kept at a low level. In the fruit fly Drosophila melanogaster, transposition is counteracted by multiple mechanisms, amongst which the generation of small interfering RNAs (siRNAs) and Piwi-interacting RNAs (piRNAs). siRNAs and piRNAs belong to the category of small RNAs, which are involved in the negative regulation of the abundance of complementary target RNAs, but siRNAs and piRNAs have distinct mechanisms of biogenesis, target recognition and target regulation. Notably, piRNAs are only abundant in gonads and are transmitted to the embryo. By sequencing small RNAs and normal transcripts in adult heads, I conclude that, while piRNAs are likely absent in adult heads, they induce a repressive state on TEs. If this repressive state is lost, the siRNA pathway can compensate and limit transposable element levels. If siRNAs are lost, the repressive state induced by piRNAs suffices to limit transposable element levels. If both piRNAs and siRNAs are lost, the expression level of transposable elements increases, and flies have a shorter life span. The requirement to analyse large-scale sequencing data led to the development of multiple tools for the reproducible research platform Galaxy.
Ahmad, Ahmad. "Caractérisation globale et locale de l'écoulement à surface libre et en charge de systèmes liquide-liquide : application au procédé d’injection pariétale pour le transport des pétroles bruts." Brest, 2010. http://www.theses.fr/2010BRES2013.
The present dissertation reports on investigations of open-channel flows and Poiseuille flows of liquid/liquid systems. The first part of the dissertation considers the propagation of a gravity current over a denser ambient miscible liquid. A controlled flow rate of fresh water and of polymer solutions was released upon the free surface of ambient salty water at rest in a basin, in order to characterize, with the help of a method based on image analysis and the exploitation of spatio-temporal diagrams, the effect of the polymer's shear-thinning property on the temporal evolution of the front progress and spreading of the gravity current in the ambient liquid, as well as of the mixing layer depth. A local study, consisting in the development of a large-scale PIV aiming to describe the hydrodynamic fields existing in both fluids, completed the previous global study. The second part of the dissertation considers a co-current water/oil flow in a duct, in order to simulate the lubricated pipelining of heavy crude oils, which were represented by oils with high viscosity and a viscoplastic rheological behaviour. The effect of bed slope and flow-rate ratio on the global pressure drop was characterized in order to define the conditions of optimal process efficiency. A local characterization of the interfacial instabilities completed the previous global investigation.
Marteau, Julie. "Caractérisation multi-échelle et analyse par essai d'indentation instrumentée de matériaux à gradient générés par procédés mécaniques et thermochimiques de traitement de surface." Phd thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-00937956.
Voon, Lew Yan, and Lew Fock Chong. "Contribution au test intégré déterministe : structures de génération de vecteurs de test." Montpellier 2, 1992. http://www.theses.fr/1992MON20035.
Savart, Denis. "Analyse de défaillances de circuits VLSI par testeur à faisceau d'électrons." Phd thesis, Grenoble INPG, 1990. http://tel.archives-ouvertes.fr/tel-00337865.
Full textBeaussier, Thomas. "Évaluation économique et environnementale du développement régional d’une filière en interaction multi-secteur et multi-échelle : le cas de la filière forêt-bois du Grand Est." Thesis, Université de Lorraine, 2020. http://www.theses.fr/2020LORR0138.
The objective of this thesis is to develop a method for quantitative assessment of the economic and environmental performance of regional development strategies, applied to the forestry sector in the region Grand Est. To this end, we adopt an approach based on the coupling of modelling tools from economics and environmental sciences. In chapter 1, we analyse couplings between 5 economic models and 3 environmental assessment tools from the existing literature. A dedicated criteria grid allows their relevance for providing integrative assessments at the meso scale to be compared. Couplings between equilibrium models on the one hand and Life Cycle Assessment (LCA) on the other best meet the defined objectives. Chapter 2 details the methodological framework of the coupling between a partial equilibrium model of the French forest sector and LCA. The homogenisation of material flows between the two models makes it possible to produce economic and environmental indicators with a coherent perimeter, whose ratio provides two eco-efficiency indicators. The first combines the economic surplus of the forest-based sector with its potential environmental impacts (Partial Eco-Efficiency, PEE); the second adds the environmental impacts avoided by substitution between wood energy and fossil fuels, compared to a reference scenario (Full Eco-Efficiency, FEE). In Chapter 3, we use this framework to analyse different wood-energy-oriented bio-economy development strategies at the national level and at the regional level in the region Grand Est. For this purpose, we compare the FEE of scenarios constructed by combinations of different policies: subsidising wood energy demand, local supply, forest protection, energy crisis. Strategies integrating a stimulation of wood energy demand are the most eco-efficient, at regional and national level. This is based in particular on the benefits of avoided impacts through the substitution of wood energy for fossil fuels. The combination of the subsidy with protection measures and/or local procurement slightly increases or decreases its eco-efficiency depending on the scale of implementation. In addition, we have identified the other factors that most determine the eco-efficiency of a policy, such as the characteristics of the forest resource, the importance of the local wood sector, and the characteristics of neighbouring regions.
Chen, Yuguang. "Modélisation du comportement mécanique des grands CFRD : Identification des caractéristiques des enrochements et comportement du masque d'étanchéité amont." Phd thesis, Ecole Centrale de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00855689.
Cui, Yanwei. "Kernel-based learning on hierarchical image representations : applications to remote sensing data classification." Thesis, Lorient, 2017. http://www.theses.fr/2017LORIS448/document.
Hierarchical image representations have been widely used in the image classification context. Such representations are capable of modeling the content of an image through a tree structure. In this thesis, we investigate kernel-based strategies that make it possible to take input data in a structured form and to capture the topological patterns inside each structure through the design of structured kernels. We develop a structured kernel dedicated to unordered tree and path (sequence of nodes) structures equipped with numerical features, called the Bag of Subpaths Kernel (BoSK). It is formed by summing up kernels computed on the subpaths (a bag of all paths and single nodes) between two bags. The direct computation of BoSK yields a quadratic complexity w.r.t. both structure size (number of nodes) and amount of data (training size). We also propose a scalable version of BoSK (SBoSK for short), which uses the Random Fourier Features technique to map the structured data into a randomized finite-dimensional Euclidean space, where the inner product of the transformed feature vectors approximates BoSK. This brings the complexity down from quadratic to linear w.r.t. structure size and amount of data, making the kernel compliant with the large-scale machine-learning context. Thanks to (S)BoSK, we are able to learn from cross-scale patterns in hierarchical image representations. (S)BoSK operates on paths, thus allowing the context of a pixel (leaf of the hierarchical representation) to be modeled through its ancestor regions at multiple scales. Such a model is used within pixel-based image classification. (S)BoSK also works on trees, making the kernel able to capture the composition of an object (top of the hierarchical representation) and the topological relationships among its subparts. This strategy allows tile/sub-image classification. Further relying on (S)BoSK, we introduce a novel multi-source classification approach that performs classification directly from a hierarchical image representation built from two images of the same scene taken at different resolutions, possibly with different modalities. Evaluations on several publicly available remote sensing datasets illustrate the superiority of (S)BoSK compared to state-of-the-art methods in terms of classification accuracy, and experiments on an urban classification task show the effectiveness of the proposed multi-source classification approach.
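The Random Fourier Features mechanism behind SBoSK can be sketched on plain vectors: random features whose inner products approximate a shift-invariant kernel (here a Gaussian kernel, as an illustrative assumption, not the structured bag-of-subpaths kernel itself).

```python
# Random Fourier Features approximating a Gaussian kernel
# k(x, y) = exp(-gamma * ||x - y||^2); illustrative of the RFF idea SBoSK uses.
import numpy as np

rng = np.random.default_rng(6)
d, D, gamma = 5, 2000, 0.5                     # input dim, feature dim, kernel width

w = rng.normal(0, np.sqrt(2 * gamma), (D, d))  # spectral samples of the kernel
b = rng.uniform(0, 2 * np.pi, D)

def phi(x):
    """Map x to D random features whose inner products approximate the kernel."""
    return np.sqrt(2.0 / D) * np.cos(x @ w.T + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
approx = phi(x) @ phi(y)
print(f"exact {exact:.4f} vs RFF approximation {approx:.4f}")
```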
Tauvel, Claire. "Optimisation stochastique à grande échelle." Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00364777.