Dissertations / Theses on the topic "Analyse de la trace d'exécution"
Consult the top 50 doctoral dissertations for your research on the topic "Analyse de la trace d'exécution".
Vigouroux, Xavier. "Analyse distribuée de traces d'exécution de programmes parallèles". Lyon, École normale supérieure (sciences), 1996. http://www.theses.fr/1996ENSL0016.
Amiar, Azzeddine. "Aide à l'Analyse de Traces d'Exécution dans le Contexte des Microcontrôleurs". Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00978227.
Dosimont, Damien. "Agrégation spatiotemporelle pour la visualisation de traces d'exécution". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM075/document.
Trace visualization techniques are commonly used by developers to understand, debug, and optimize their applications. Most analysis tools provide spatiotemporal representations, composed of a time line and the resources involved in the application execution. These techniques make it possible to link the dynamics of the application to its structure or topology. However, they suffer from scalability issues and cannot provide overviews for the analysis of huge traces that weigh several gigabytes and contain over a million events. This is caused by screen-size constraints, the performance required for efficient interaction, and the analyst's perceptive and cognitive limitations. Overviews are nevertheless necessary as an entry point to the analysis, as recommended by Shneiderman's mantra ("Overview first, zoom and filter, then details-on-demand"), a guideline that helps design visual analysis methods. To face this situation, this thesis elaborates several scalable visualization-based analysis methods. They represent the application behavior over both the temporal and spatiotemporal dimensions, and integrate all the steps of Shneiderman's mantra, in particular by providing the analyst with a synthetic view of the trace. These methods rely on an aggregation technique that reduces the representation complexity while keeping as much information as possible. Both quantities are expressed using information-theoretic measures, and the parts of the system to aggregate are determined by satisfying a trade-off between them; their respective weights are adjusted by the user in order to choose a level of detail. Solving this trade-off brings out the behavioral heterogeneity of the entities that compose the analyzed system.
This helps to find anomalies in embedded multimedia applications and in parallel applications running on a computing grid. These techniques are implemented in Ocelotl, an analysis tool developed during this thesis and designed to analyze traces containing up to several billion events. Ocelotl also offers effective interactions suited to a top-down analysis strategy, such as synchronizing the aggregated view with more detailed representations, in order to find the sources of the anomalies.
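The gain/loss trade-off described in this abstract can be illustrated with a minimal sketch. This is not Ocelotl's actual algorithm: the function names, the uniform-aggregate loss measure, and the linear weighting by a user parameter p are assumptions made for the example.

```python
import math

def information_loss(values):
    """Information lost by merging parts into one uniform aggregate,
    measured as KL divergence: 0 when all parts behave identically."""
    total = sum(values)
    if total == 0:
        return 0.0
    n = len(values)
    loss = 0.0
    for v in values:
        if v > 0:
            loss += (v / total) * math.log2((v / total) * n)
    return loss

def complexity_gain(values):
    """Merging n parts into one saves n - 1 representation units."""
    return len(values) - 1

def should_aggregate(values, p):
    """Aggregate when the weighted gain outweighs the weighted loss.
    p in [0, 1] is the user-chosen level of detail: high p favors
    aggregation, low p favors preserving heterogeneity."""
    return p * complexity_gain(values) >= (1 - p) * information_loss(values)

# Homogeneous resources aggregate even at low p; heterogeneous ones resist.
print(should_aggregate([10, 10, 10, 10], 0.1))  # True: no information lost
print(should_aggregate([100, 0, 0, 0], 0.1))    # False: heterogeneity kept
```

Varying p then yields the multi-resolution views the abstract mentions: p near 1 gives the synthetic overview, p near 0 the detailed one.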
Emteu, Tchagou Serge Vladimir. "Réduction à la volée du volume des traces d'exécution pour l'analyse d'applications multimédia de systèmes embarqués". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM051/document.
The consumer electronics market is dominated by embedded systems, thanks to their ever-increasing processing power and the large number of functionalities they offer. To provide such features, embedded system architectures have grown in complexity: they rely on several heterogeneous processing units and allow concurrent task execution. This complexity degrades their programmability and makes application execution on such systems difficult to understand. The most widely used approach for analyzing application execution on embedded systems consists in capturing execution traces (sequences of events, such as system call invocations or context switches, generated during application execution). This approach is used in application testing, debugging, and profiling. In some use cases, however, the generated execution traces can be very large, up to several hundred gigabytes; an example is endurance testing, which traces the execution of an application on an embedded system over long periods, from several hours to several days. Current tools and methods for analyzing execution traces are not designed to handle such amounts of data. We propose an approach for monitoring an application execution by analyzing traces on the fly, in order to reduce the volume of recorded trace. Our approach targets multimedia applications, whose features contribute the most to the success of popular devices such as set-top boxes or smartphones. It consists in automatically identifying the suspicious periods of an application execution, in order to record only the parts of the traces that correspond to these periods. The approach comprises two steps: a learning step, which discovers the regular behaviors of an application from its execution trace, and an anomaly detection step, which identifies behaviors deviating from the regular ones. Numerous experiments, performed on synthetic and real-life datasets, show that our approach reduces the trace size by an order of magnitude while maintaining good performance in detecting suspicious behaviors.
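The learn-then-detect scheme summarized above can be sketched as follows. This is an illustrative reconstruction, not the thesis's method: modeling "regular behavior" as the set of fixed-size event windows seen in a reference run is an assumption made for the example.

```python
from collections import Counter

def learn_model(trace, window=4):
    """Learning step: record the event patterns seen in a reference run."""
    return Counter(tuple(trace[i:i + window])
                   for i in range(len(trace) - window + 1))

def suspicious_windows(trace, model, window=4):
    """Detection step: keep only windows whose pattern never occurred
    during learning -- the parts of the trace worth recording."""
    kept = []
    for i in range(len(trace) - window + 1):
        if tuple(trace[i:i + window]) not in model:
            kept.append((i, trace[i:i + window]))
    return kept

# A regular multimedia loop, then a run containing one glitch.
regular = ["read", "decode", "render", "sync"] * 50
model = learn_model(regular)
run = ["read", "decode", "render", "sync"] * 20 + ["read", "stall", "render", "sync"]
print(suspicious_windows(run, model))  # only windows around the "stall" event
```

Only the flagged windows would be written to the trace file, which is how an order-of-magnitude reduction can be obtained on mostly-regular executions.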
Zoor, Maysam. "Latency verification in execution traces of HW/SW partitioning model". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT037.
While many research works aim at defining new (formal) verification techniques to check requirements in a model, understanding the root cause of a requirement violation is still an open issue for complex platforms built around software and hardware components. For instance, is the violation of a latency requirement due to unfavorable real-time scheduling, to contentions on buses, or to the characteristics of functional algorithms or hardware components? This thesis introduces a Precise Latency ANalysis approach called PLAN. PLAN takes as input an instance of a HW/SW partitioning model, an execution trace, and a time constraint expressed in the following format: the latency between operator A and operator B should be less than a maximum latency value. PLAN first checks whether the latency requirement is satisfied. If it is not, the main interest of PLAN is to provide the root cause of the violation by classifying execution transactions according to their impact on latency: obligatory transactions, transactions inducing a contention, transactions having no impact, etc. A first version of PLAN assumes an execution with a unique execution of operator A and a unique execution of operator B. A second version can compute, for each executed operator A, the corresponding operator B; for this, our approach relies on tainting techniques. The thesis formalizes the two versions of PLAN and illustrates them with toy examples. We then show how PLAN was integrated into a model-driven framework (TTool). The two versions of PLAN are illustrated with two case studies taken from the H2020 AQUAS project. In particular, we show how tainting can efficiently handle multiple and concurrent occurrences of the same operator.
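The "first version" constraint check described above (unique executions of A and B) can be sketched in a few lines. The (timestamp, operator) trace format and the function name are assumptions; PLAN's transaction classification into root causes is not shown.

```python
def check_latency(trace, op_a, op_b, max_latency):
    """Sketch of a first-version PLAN-style check: assumes a single
    execution of op_a and a single later execution of op_b.
    trace is a list of (timestamp, operator) transactions."""
    t_a = next(t for t, op in trace if op == op_a)
    t_b = next(t for t, op in trace if op == op_b and t >= t_a)
    latency = t_b - t_a
    return latency, latency <= max_latency

# Toy trace: a sense-to-actuate chain crossing a shared bus.
trace = [(0, "sense"), (3, "filter"), (9, "bus"), (14, "actuate")]
print(check_latency(trace, "sense", "actuate", max_latency=10))  # (14, False)
```

When the check fails, PLAN's contribution is precisely what this sketch omits: explaining which transactions between A and B (contentions, obligatory work, unrelated activity) account for the 14 time units.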
Lesage, Benjamin. "Architecture multi-coeurs et temps d'exécution au pire cas". Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00870971.
Lopez Cueva, Patricia. "Debugging Embedded Multimedia Application Execution Traces through Periodic Pattern Mining". Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-01006213.
Touzeau, Valentin. "Analyse statique de caches LRU : complexité, analyse optimale, et applications au calcul de pire temps d'exécution et à la sécurité". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM041.
The certification of safety-critical real-time programs requires bounding their execution time. Due to the high impact of cache memories on memory access latency, modern worst-case execution time (WCET) estimation tools include a cache analysis, whose aim is to statically predict whether memory accesses result in cache hits or cache misses. This problem is undecidable in general, so usual cache analyses perform abstractions that lead to precision loss. One common assumption made to remove the source of undecidability is that all execution paths in the program are feasible. This hypothesis is reasonable because the safety of the analysis is preserved when spurious paths are added to the program model. However, classifying memory accesses as hits or misses remains hard in practice under this assumption, and efficient cache analyses usually involve additional approximations, again at the cost of precision. This thesis investigates the possibility of performing an optimally precise cache analysis under the assumption that all execution paths in the program are feasible. We formally define the problems of classifying accesses as hits and misses, and prove that they are NP-hard or PSPACE-hard for common replacement policies (LRU, FIFO, NRU and PLRU). While these complexity results legitimate the use of additional abstraction, they do not preclude the existence of algorithms that are efficient in practice on industrial workloads. Because of the abstractions performed for efficiency, cache analyses can usually classify accesses as Unknown in addition to Always-Hit (Must analysis) or Always-Miss (May analysis). An access classified as Unknown can lead to either a hit or a miss depending on the execution path followed, but it may also belong to the Always-Hit or Always-Miss category and have been misclassified because of a coarse approximation. We therefore
designed a new analysis for LRU instruction caches that soundly classifies some accesses into a new category, Definitely Unknown, gathering accesses that can genuinely lead to both a hit and a miss. For those accesses, one knows for sure that the classification does not result from a coarse approximation but is a consequence of the program structure and cache configuration. This also reduces the set of accesses that are candidates for a refined classification using more powerful and more costly analyses. Our main contribution is an optimally precise analysis of LRU instruction caches. It uses a method called block focusing, which lets the analysis scale by considering one cache block at a time, thus taking advantage of the low number of refinement candidates left by the Definitely Unknown analysis. This analysis produces an optimal classification of memory accesses at a reasonable cost (a few times the cost of the usual May and Must analyses). We evaluate the impact of our precise cache analysis on pipeline analysis: when the cache analysis cannot classify an access as Always-Hit or Always-Miss, the pipeline analysis must consider both cases, so a more precise classification reduces the state space explored by the pipeline analysis and hence the WCET analysis time. Aside from this application to WCET estimation, we investigate the use of the Definitely Unknown analysis in the domain of security: caches can be used as side channels to extract sensitive data from a program execution, and we propose a variation of our Definitely Unknown analysis to help a developer find the source of an information leak.
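The Must analysis this abstract builds on can be sketched for a single LRU set. This is the textbook age-bound abstraction, not the thesis's Definitely Unknown or block-focusing analysis; the associativity, names, and toy control-flow merge are assumptions.

```python
ASSOC = 2  # one set of a 2-way LRU instruction cache (simplified)

def must_access(ages, block):
    """Must-analysis update: 'ages' maps blocks to an upper bound on
    their LRU age. Returns (classification, new abstract state)."""
    old = ages.get(block, ASSOC)
    hit = old < ASSOC                 # bound below associativity => Always-Hit
    new = {}
    for b, a in ages.items():
        if b == block:
            continue
        na = a + 1 if a < old else a  # younger blocks age by one
        if na < ASSOC:
            new[b] = na               # otherwise the block may be evicted
    new[block] = 0
    return ("Always-Hit" if hit else "Unknown"), new

def must_join(x, y):
    """At a control-flow merge, keep only blocks cached on both paths,
    with the larger (more pessimistic) age bound."""
    return {b: max(x[b], y[b]) for b in x.keys() & y.keys()}

# Two paths reaching a merge: block 'f' is fetched on both, 'g' on one only.
_, path1 = must_access({}, "f")
_, tmp = must_access({}, "g")
_, path2 = must_access(tmp, "f")
merged = must_join(path1, path2)
print(must_access(merged, "f")[0])  # 'f' survives the join: Always-Hit
print(must_access(merged, "g")[0])  # 'g' does not: Unknown
```

The thesis's point is that an access like 'g' here, left Unknown by Must/May, may or may not be a genuine both-hit-and-miss case; the Definitely Unknown analysis settles that question soundly.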
Jahier, Erwan. "Analyse dynamique de programme : Mise en oeuvre automatisée d'analyseurs performants et spécifications de modèles d'exécution". Rennes, INSA, 2000. http://www.theses.fr/2000ISAR0009.
Several studies show that most of the software production cost is spent during the maintenance phase. During that phase, in order to locate bugs, optimize programs, or add new functionalities, it is essential to understand programs, and in particular their runtime behavior. Dynamic analysis tools such as debuggers, profilers, or monitors are very useful in that respect. However, such tools are expensive to implement because: (1) they generally require modifying the compiling system, which is tedious and not always possible; (2) the needs in dynamic analysis tools vary from one user to another, depending on their competence, their experience with the programming system, and their knowledge of the code to maintain; (3) such tools are generally difficult to reuse. It is therefore desirable for each user to be able to easily specify the dynamic analyses they need. Hence, we propose an architecture that eases the implementation of dynamic analysis tools. It is based on: (1) a systematic instrumentation of the program that gives a detailed image of the execution, the trace; (2) a set of trace-processing primitives that let one analyze the trace efficiently. The resulting analyzers have performance of the same order of magnitude as their equivalents implemented "by hand" by modifying the compiling system. They can be written by programmers without any knowledge of the compiling system, who can thus implement the tools they need, adapted to their level of comprehension of the code they maintain. Furthermore, the modular structure of the proposed architecture should ease the reuse of analyzers. This work was carried out in the context of the logic and functional programming language Mercury; however, the concepts involved do not depend on the programming paradigm.
The trace on which we base the implementation of our dynamic analysis tools should reflect the runtime behavior of programs as faithfully as possible. Therefore, we also propose a framework to specify execution traces, based on an operational semantics of the language to analyze. Such formal specifications of the trace let us experimentally validate tracers and prove their correctness. This part of the work was carried out in the context of the logic programming language Prolog.
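The idea of building several analyzers on one generic trace-processing primitive can be sketched as follows. The event fields ("port", "pred") loosely mimic logic-programming trace events, but the primitive, handler protocol, and names are assumptions for the example, not the Mercury tracer's actual API.

```python
def process_trace(trace, handlers, state):
    """Generic trace-processing primitive: fold every execution event
    through user-supplied handlers accumulating an analysis state."""
    for event in trace:
        for handler in handlers:
            state = handler(state, event)
    return state

def profiler(state, event):
    """Count calls per predicate -- a tiny profiler built on the primitive."""
    if event["port"] == "call":
        counts = state.setdefault("calls", {})
        counts[event["pred"]] = counts.get(event["pred"], 0) + 1
    return state

def depth_monitor(state, event):
    """Track the maximum call depth -- a second analyzer, same trace."""
    d = state.get("depth", 0) + (1 if event["port"] == "call" else -1)
    state["depth"] = d
    state["max_depth"] = max(state.get("max_depth", 0), d)
    return state

trace = [
    {"port": "call", "pred": "main"},
    {"port": "call", "pred": "solve"},
    {"port": "exit", "pred": "solve"},
    {"port": "call", "pred": "solve"},
    {"port": "exit", "pred": "solve"},
    {"port": "exit", "pred": "main"},
]
result = process_trace(trace, [profiler, depth_monitor], {})
print(result["calls"], result["max_depth"])
```

Adding a new analysis means writing one more handler, with no change to the instrumentation, which is the reuse argument the abstract makes.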
Bourgade, Roman. "Analyse du temps d'exécution pire-cas de tâches temps-réel exécutées sur une architecture multi-cœurs". Phd thesis, Université Paul Sabatier - Toulouse III, 2012. http://tel.archives-ouvertes.fr/tel-00746073.
Bourgade, Roman. "Analyse du temps d'exécution pire-cas de tâches temps-réel exécutées sur une architecture multi-coeurs". Toulouse 3, 2012. http://thesesups.ups-tlse.fr/1740/.
Software failures in hard real-time systems may have hazardous effects (industrial disasters, endangering of human lives). The verification of timing constraints in a hard real-time system depends on the knowledge of the worst-case execution times (WCET) of the tasks composing the embedded program. Using multicore processors is a means of improving embedded system performance. However, determining WCET estimates on these architectures is made difficult by the sharing of some resources among cores, especially the interconnection bus that enables access to the shared memory. This document proposes a two-level arbitration scheme that improves the performance of the executed tasks while complying with timing constraints. The described methods assign an optimal bus-access priority level to each task. They also make it possible to find an optimal allocation of tasks to cores when the tasks to execute outnumber the available cores. Experimental results show a significant drop in WCET estimates and processor utilization.
Colas, Damien. "Les annotations de chanteurs dans les matériels d'exécution des opéras de Rossini à Paris (1820-1860) : contribution à l'étude de la grammaire mélodique Rossinienne". Tours, 1997. http://www.theses.fr/1997TOUR2015.
Contrary to the idea expressed by Duprez (L'Art du chant, 1847) and commonly accepted until the beginning of the 20th century, Rossini did not abolish the singers' freedom of improvisation. On the contrary, his vocal music was conceived, in its phraseology as much as in its use of symmetries on a large scale, to be varied and ornamented. The role parts of Rossini's operas preserved in Paris contain many manuscript annotations in singers' hands, which had not been studied before. The aim of the thesis was twofold: first, to gather this fragmentary data and make it available to the public (transcription of the puntature of recitatives, of cadenzas and interpolations in the cantabili, and of variations in the re-entries, vol. IV); second, to extract from this material new elements of reflection on the workings of the Rossinian melodic language. This second phase consisted in identifying, classifying and comparing the formulaic elements of the ornamental language (vol. III). It appeared that sentence-segmentation signals are privileged positions for ornamentation, characterized by specific modalities of melodic variation. Beside the cadenza, whose construction processes are partially described in the treatises, I singled out the "desinence", an ending signal on the scale of the melodic verse, and its principles of variation. As far as I know, these aspects remained unnoticed and unexplored in the didactic literature of that time (chapters 1 & 2). For the main body of the melodic sentence, I singled out three basic ornamentation techniques (the "local" ornamentation, the interpolation and the substitution) as well as the different archetypes of formula on which they are built (chap. 3). This paradigmatic study of ornamentation will be completed in the near future by a syntagmatic approach.
Bousse, Erwan. "Execution trace management to support dynamic V&V for executable DSMLs". Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S082/document.
Dynamic verification and validation (V&V) techniques are required to ensure the correctness of executable models. Most of these techniques rely on the concept of execution trace, a sequence containing information about an execution. Therefore, to enable dynamic V&V of executable models conforming to any executable domain-specific modeling language (xDSML), it is crucial to provide efficient facilities to construct and manipulate all kinds of execution traces. To that effect, we first propose a scalable model cloning approach to conveniently construct generic execution traces using model clones. Using a random metamodel generator, we show that this approach is scalable in memory with little manipulation overhead. We then present a generative approach to define multidimensional and domain-specific execution trace metamodels, which consists in creating the execution trace data structure specific to an xDSML. Thereby, execution traces of models conforming to this xDSML can be efficiently captured and manipulated in a domain-specific way. We apply this approach to two existing dynamic V&V techniques, namely semantic differencing and omniscient debugging, and show that such a generated execution trace metamodel provides good usability and scalability for early dynamic V&V support for any xDSML. Our work has been implemented and integrated within the GEMOC Studio, a language and modeling workbench resulting from the eponymous international initiative.
Mourou, Pascal. "Planification et contrôle d'exécution dans un monde multi-agent : copilote pour véhicule en circulation autoroutière". Toulouse 3, 1994. http://www.theses.fr/1994TOU30135.
Ruiz, Jordy. "Détermination de propriétés de flot de données pour améliorer les estimations de temps d'exécution pire-cas". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30285/document.
The search for an upper bound on the execution time of a program is an essential part of the verification of real-time critical systems. The execution times of the programs of such systems generally vary a lot, and it is difficult, or impossible, to predict the range of possible times. Instead, it is better to look for an approximation of the worst-case execution time (WCET). A crucial requirement is that this estimate be safe, that is, guaranteed to lie above the real WCET. Because we seek to prove that the system in question terminates reasonably quickly, an over-approximation is the only acceptable form of approximation. Guaranteeing such a safety property cannot sensibly be done without static analysis, as a result based on a battery of tests could not be safe without exhaustive coverage of test cases. Furthermore, in the absence of a certified compiler (and of a technique for the safe transfer of properties to binaries), properties must be extracted directly from the binary code to warrant their soundness. This approximation comes at a cost, however: substantial pessimism, i.e. a large gap between the estimated WCET and the real WCET, leads to superfluous hardware costs incurred so that the system respects its timing requirements. It is therefore important to improve the precision of the WCET by reducing this gap, while maintaining the safety property, so that it is low enough not to lead to immoderate costs. A major cause of overestimation is the inclusion of semantically impossible paths, called infeasible paths, in the WCET computation. This is due to the use of the Implicit Path Enumeration Technique (IPET), which works on a superset of the possible execution paths. When the worst-case execution path (WCEP), corresponding to the estimated WCET, is infeasible, the precision of that estimation suffers.
To deal with this loss of precision, this thesis proposes an infeasible-path detection technique that improves the precision of static analyses (namely WCET estimation) by notifying them of the infeasibility of some paths of the program. This information is then passed as data-flow properties, formatted in the portable FFX annotation language, which allows the results of our infeasible-path analysis to be communicated to other analyses.
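The effect of excluding an infeasible path from a WCET bound can be shown on a toy control-flow graph. Real IPET solves an integer linear program over basic-block counts; this sketch brute-forces the four structural paths instead, and the block costs and the exclusivity of the two expensive branches are invented for the example.

```python
from itertools import product

# Two successive conditionals; per-block costs in cycles (invented).
COST = {"A": 5, "B1": 40, "B2": 10, "C": 5, "D1": 35, "D2": 10, "E": 5}

def path(b, d):
    """Structural path through the CFG for branch outcomes b and d."""
    return ["A", "B1" if b else "B2", "C", "D1" if d else "D2", "E"]

def wcet(feasible):
    """Bound = cost of the worst path among those declared feasible."""
    return max(sum(COST[n] for n in path(b, d))
               for b, d in product([True, False], repeat=2) if feasible(b, d))

# IPET-style over-approximation: every structural path assumed feasible.
print(wcet(lambda b, d: True))           # 90: takes both expensive branches
# Data-flow analysis proves B1 and D1 mutually exclusive: tighter, still safe.
print(wcet(lambda b, d: not (b and d)))  # 65
```

The 90-cycle worst-case path is exactly an infeasible WCEP as described above; communicating its infeasibility (in FFX, in the thesis's toolchain) tightens the bound to 65 without compromising safety.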
Brocchini, Ilaria. "Trace et disparition dans l'oeuvre de Walter Benjamin". Paris 1, 2005. http://www.theses.fr/2005PA010656.
Ouedraogo, Marie-Françoise. "Extension of the canonical trace and associated determinants". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2009. http://tel.archives-ouvertes.fr/tel-00725230.
Jalbert, Emmanuelle. "Le signifiant Zaïmph, trace évanescente en quête d'imaginaire(s) : analyse sémiotique de Salammbô de Flaubert". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ40551.pdf.
Jalbert, Emmanuelle. "Le signifiant Zaïmph : trace évanescente en quête d'imaginaire(s) : analyse sémiotique de Salammbô de Flaubert /". Thèse, Chicoutimi : Université du Québec à Chicoutimi, 1998. http://theses.uqac.ca.
This master's thesis was completed at Chicoutimi as part of the master's program in literary studies of the Université du Québec à Trois-Rivières, extended to the Université du Québec à Chicoutimi. Bibliography: p. [128-130]. Electronic document also available in PDF format.
Pietrek, Artur. "TIREX : une représentation textuelle intermédiaire pour un environnement d'exécution virtuel, échanger des informations du compilateur et d'analyse du programme". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00780232.
Ruch, Claude. "Analyse d'elements traces par fluorescence x : nouveaux developpements". Université Louis Pasteur (Strasbourg) (1971-2008), 1986. http://www.theses.fr/1986STR13107.
Bonneton, Nathalie. "Le développement des actes moteurs du jeune enfant : analyse comparée des gestes d'atteinte et de trace". Rouen, 2002. http://www.theses.fr/2002ROUEL420.
This study looks at the motor organization of pointing and drawing movements in the third year of life. Thirty-seven children between the ages of 26 and 38 months performed a pointing task and a drawing task on the horizontal plane. In both tasks, the child had to join two separate points. The location of the target was defined by its distance from the child (13 or 20 cm) and its direction with respect to the child's mid-body line (5 positions, ranging from 90° to the right to 90° to the left). A kinematic analysis can account for the organization of pointing and drawing movements during the third year of life. First, these two tasks lead to the conclusion that similar mechanisms exist for integrating spatial information in a motor task: mean speed and peak-velocity amplitude increase when the distance to cover is longer (isochrony). Second, our results show a differentiation between drawing and pointing concerning the planning of trajectories. A third experiment enables us to conclude that this differentiation is not attributable to the pen trace.
Kamali, Yousef. "Filament-induced nonlinear fluorescence spectroscopy of trace gaseous pollutants in air". Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27545/27545.pdf.
Geneves, Sylvain. "Etude de performances sur processeurs multicoeur : environnement d'exécution événementiel efficace et étude comparative de modèles de programmation". Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00842012.
Pinto, Marcos Cunha. "Définition et utilisation de traces issues de plateformes virtuelles pour le débogage des MPSoCs". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM003/document.
The increasing complexity of multiprocessor systems on chip (MPSoC) makes engineers' lives harder, as bugs and inefficiencies can have a very broad range of sources. Hardware/software interactions can be one of these sources, and their early identification and resolution is a priority for rapid system integration. Due to the huge number of possible execution interleavings, however, reproducing the conditions of occurrence of a given error or performance issue is very difficult. One solution to this problem consists in tracing an execution for later analysis. Obtaining traces from real platforms goes against the recent development processes, now broadly adopted by industry and academia, which rely on simulation to anticipate hardware/software integration. Multi/many-core systems on chip tend to have specific memory hierarchies that make the hardware simpler and more predictable, at the cost of having hardware concerns percolate up to the high levels of the software stack. Despite developers' efforts, it is hard to make sure all preventive measures are taken to ensure a given property, such as the absence of race conditions or the coherency of data. In this context, the debugging process is particularly tedious, as it involves analyzing parallel execution flows. Executing a program many times is an integral part of conventional debugging, but the non-determinism due to parallel execution often leads to different execution paths and different behaviors. This thesis details the challenges and issues behind the production and exploitation of "well-formed" traces in a transaction-accurate virtual prototyping environment that uses dynamic binary translation as its processor simulation technology. These traces contain causality relations among events, which allow us, first, to simplify the analysis and, second, to avoid relying on timestamps. We propose a formalism to define the traces and detail an implementation that produces them in a non-intrusive manner.
We use these traces to help identify and correct bugs on multi/many-core platforms. We first introduce a method to identify potential cache-coherence violations in non-cache-coherent platforms: by analyzing the traces, it identifies violations that may occur during a given execution under write-through and write-back cache policies. We then focus on easing the debugging of parallel software running on an MPSoC using traces: we propose a debugging process that replays a faulty execution from the traces, and we detail a strategy for providing forward and reverse execution features so as to avoid long simulation times during a debug session. We conducted experiments on an MPSoC using parallel applications to quantify our proposal, and overall we show that complex analysis and debug strategies can be implemented over traces, leading to deterministic results in a shorter time than simulation alone.
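A much-simplified version of the write-through violation check can be sketched over an event trace. This is an illustrative reconstruction, not the thesis's algorithm: the trace format, the per-line version counter, and the explicit "invalidate" events are assumptions.

```python
def stale_reads(trace):
    """Flag potential coherence violations on a non-cache-coherent
    platform with write-through caches: a core reads a line it cached
    before another core's write, with no invalidation in between."""
    cached = {}      # (core, line) -> memory version seen when cached
    version = {}     # line -> number of writes so far
    violations = []
    for step, (core, op, line) in enumerate(trace):
        if op == "write":
            version[line] = version.get(line, 0) + 1
            cached[(core, line)] = version[line]  # write-through: own copy fresh
        elif op == "read":
            seen = cached.setdefault((core, line), version.get(line, 0))
            if seen < version.get(line, 0):
                violations.append((step, core, line))
        elif op == "invalidate":
            cached.pop((core, line), None)
    return violations

trace = [
    (0, "read", "x"),        # core 0 caches x
    (1, "write", "x"),       # core 1 updates memory behind core 0's back
    (0, "read", "x"),        # core 0 may still see its stale copy -> flagged
    (0, "invalidate", "x"),  # software-managed invalidation
    (0, "read", "x"),        # fresh copy after invalidation -> ok
]
print(stale_reads(trace))
```

Because the traces carry causality relations rather than timestamps, the real analysis can report such findings deterministically across replays of the same execution.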
Louche, Laurence. "La phlorine, molécule trace pour le contrôle de l'authenticité des jus d'agrume". Aix-Marseille 3, 2002. http://www.theses.fr/2002AIX30092.
A phenolic compound, 3,5-dihydroxyphenyl-β-D-glucopyranoside, known as phlorin, was isolated from aqueous extracts of orange peel. Its structural formula was confirmed by 1H and 13C NMR spectroscopy and mass spectrometry. Aqueous extracts of orange peel were monitored for more than two days at two temperatures (25 and 50 °C), showing that the phlorin content of the water increases over time. The phlorin content was determined in various parts of two orange varieties, Navel Late and Valencia. Phlorin was searched for in 45 Citrus species and varieties, and determined in juices and aqueous peel extracts. Multivariate analyses of several variables (phlorin, sugars, flavonoids) were used to determine analytical profiles for each orange species and concentrate supplier.
Vorapalawut, Nopparat. "Advances in trace element analysis of petroleum samples : insight into the speciation". Pau, 2011. http://www.theses.fr/2011PAUU3001.
Pełny tekst źródłaThe advent of inductively coupled plasma quadrupole mass spectrometry (ICP MS) largely contributed to reliable and fast multielement trace analysis of petroleum samples but the inherent problems related to plasma instability, carbon deposition on the sampler and skimmer cones and carbon-related polyatomic interferences are omnipresent. The goal of this work was to address several analytical tasks impossible to be successfully handled by ICP quadrupole ICP MS. They include the determination of non-metals, such as sulphur and silicon, simultaneous multielement trace analysis at the low ng g-1 levels, and insight into the molecular binding of the trace elements present. The method developed for the sulphur determination in gasoline was based on the formation of microemulsion introduced directly into the ICP. Quantification was carried out by ICP AES using external calibration which allowed high throughput analysis. The developed method was applied for total sulphur analysis in diesel samples from various gas stations in Thailand. Problems related to spectral interferences were alleviated by the use of a double-focusing sector field ICP MS optimized for the direct simultaneous determination of Ag, Al, Ba, Ca, Cd, Co, Cr, Cu, Fe, Mg, Mn, Mo, Ni, Pb, Sn, Ti, and V. Polyatomic interferences originating from the carbon-rich matrix were completely eliminated at a resolution of 4000 allowing the detection limits at the low pg g−1 level to be obtained (typically one order of magnitude lower than using a quadrupole ICP MS). A method for the routine comprehensive trace element analysis of xylene solutions of oil samples using external calibration was developed, validated by the analysis of CRMs and applied to the analysis of gas condensate and oil samples. Another method based on the micro-flow injection total consumption sample introduction was developed for the silicon determination allowing the detection limits down to 1 ng g-1. 
The effects of the sample matrix and of the chemical form of silicon on the sensitivity were investigated and, where necessary, alleviated by heating the spray chamber and by sample dilution. Laser ablation ICP-SF MS was developed for direct multielement analysis of crude oils and asphaltenes. A silica gel plate was impregnated for 30 min with a sample solution and analyzed by laser ablation ICP MS. Carbon-related polyatomic interferences and matrix suppression effects were absent, enabling quantitation by external calibration. The detection limits were in the low ng g−1 range. The method was validated by the analysis of NIST 1084a and 1085b certified reference materials (wear metals in lubricating oils) and applied to the analysis of crude oil and asphaltene samples. Insight into the chemical forms of Co, Cr, Fe, Ni, S, Si, V and Zn present in crude oil and an oil vacuum distillation residue was gained by coupling microchromatography, using permeation through gels with increasing exclusion limits (5000, 400 000 and 20 000 000 Da), with high-resolution (R = 4000) ICP MS. The method allowed the acquisition of chromatograms with high sensitivity, competitive with existing methods, showing element- and sample-origin-dependent morphology. Normal-phase HPLC-ICP MS and size-exclusion ICP MS were proposed to evaluate the purity of the silicon standard compounds, their reactivity with different petroleum-related matrices, and the speciation of silicon.
Branchu, Colette. "Archéo-analyse de l'oeuvre : Le Petit Prince : l'écriture d'un secret ou la trace secrète d'une écriture hiéroglyphique". Phd thesis, Université Paul Valéry - Montpellier III, 2011. http://tel.archives-ouvertes.fr/tel-00804970.
Pełny tekst źródłaKnüpfer, Andreas. "Advanced Memory Data Structures for Scalable Event Trace Analysis". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1239979718089-56362.
Pełny tekst źródłaThis dissertation presents a novel approach to the analysis and visualization of computational performance that is based on event tracing and is tailored in particular to parallel programs and high performance computing (HPC). Event traces contain detailed information about specified events during the runtime of a program and allow a very precise examination of the dynamic behavior, of various performance metrics, and of potential performance problems. Due to long-running, highly parallel applications and the high level of detail, event tracing can produce very large amounts of data. These, in turn, pose a challenge for interactive and automatic analysis and visualization tools. This work presents a method that exploits redundancies in event traces in order to reduce both the memory requirements and the runtime complexity of trace analysis. The sources of redundancy are repeatedly executed program sections, either through iterative or recursive algorithms or through SPMD parallelization, which produce identical or similar event sequences. The data reduction is based on the novel Complete Call Graph (CCG) data structure and allows a combination of lossless and lossy data compression, where constant bounds can be specified for all deviations introduced by lossy compression. The compression is integrated into the construction of the data structure, so that no extensive uncompressed parts need to be kept in main memory prior to compression. The enormous compression capability of the new approach is demonstrated with a series of examples from real application scenarios.
The results achieved range from compression factors of 3 to 5 with only minimal deviations due to lossy compression, up to factors > 100 for high-grade compression. Based on the CCG data structure, new evaluation and analysis methods for event traces are also presented that require no explicit decompression. In this way, the runtime complexity of the analysis can be reduced to the same extent as the main memory requirement, since compressed event sequences are no longer analyzed multiple times. This dissertation includes a comprehensive presentation of the state of the art and of related work in this field, a detailed derivation of the newly introduced data structures and of the construction, compression and analysis algorithms, as well as an extensive experimental evaluation and validation of all components.
Colin, Alexis. "De la collecte de trace à la prédiction du comportement d'applications parallèles". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS020.
Pełny tekst źródłaRuntime systems are commonly used by parallel applications to efficiently exploit the underlying hardware resources. A runtime system hides the complexity of managing the hardware and exposes a high-level interface to application developers. To this end, it makes decisions by relying on heuristics that estimate the future behavior of the application. We propose Pythia, a library that serves as an oracle capable of predicting the future behavior of an application, so that the runtime system can make more informed decisions. Pythia builds on the deterministic nature of many HPC applications: by recording an execution trace, Pythia captures the application's main behavior. The trace can be provided for future executions of the application, and a runtime system can ask for predictions of future program behavior. We evaluate Pythia on 13 MPI applications and show that Pythia can accurately predict the future of most of these applications, even when the problem size varies. We demonstrate how Pythia's predictions can guide a runtime system optimization by implementing an adaptive thread parallelism strategy in the GNU OpenMP runtime system. The evaluation shows that, thanks to Pythia's predictions, the adaptive strategy reduces the execution time of an application by up to 38%.
Knüpfer, Andreas. "Advanced Memory Data Structures for Scalable Event Trace Analysis". Doctoral thesis, Technische Universität Dresden, 2008. https://tud.qucosa.de/id/qucosa%3A23611.
Pełny tekst źródła
Ahumada, Bustamante Guido. "Analyse harmonique sur l'espace des chemins d'un arbre". Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb37611090m.
Pełny tekst źródłaBerger, Gilles. "Etude expérimentale des premiers stades de l'altération hydrothermale de verres basaltiques et d'olivines : comportement des éléments en trace". Toulouse 3, 1987. http://www.theses.fr/1987TOU30201.
Pełny tekst źródłaNeami, Abdulwahid al. "Analyse quantitative des impuretes presentes a l'etat de traces dans cdte". Strasbourg 1, 1988. http://www.theses.fr/1988STR13206.
Pełny tekst źródłaAssal, Marouane. "Analyse spectrale des systèmes d'opérateurs h-pseudodifférentiels". Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0586/document.
Pełny tekst źródłaIn this work, we are interested in the spectral analysis of systems of semiclassical pseudodifferential operators. In the first part, we study the extension of the long-time semiclassical Egorov theorem to the case where the quantum Hamiltonian which generates the time evolution and the initial quantum observable are two semiclassical pseudodifferential operators with matrix-valued symbols. Under a hyperbolicity condition on the principal symbol of the Hamiltonian which ensures the existence of the semiclassical projections, and for a class of observables that are "semi-classically" block-diagonal with respect to these projections, we prove an Egorov theorem valid in a large time interval of order log(h-1), known as the Ehrenfest time. Here h → 0 is the semiclassical parameter. In the second part, we are interested in the spectral and scattering theories for self-adjoint systems of pseudodifferential operators. We develop a stationary approach for the study of the spectral shift function (SSF) associated with a pair of self-adjoint semiclassical Schrödinger operators with matrix-valued potentials. We prove a Weyl-type asymptotics with sharp remainder estimate for the SSF and, under the existence of a scalar escape function, a pointwise complete asymptotic expansion of its derivative. This last result is a generalisation to the matrix-valued case of a result of Robert and Tamura established in the scalar case near non-trapping energies. Our time-independent method allows us to treat certain potentials with energy-level crossings.
Savignan, Lionel. "Distribution d’éléments trace dans les sols de Nouvelle-Aquitaine et suivi de contaminants émergents (Ag, Pd, Pt, Rh)". Thesis, Pau, 2020. http://www.theses.fr/2020PAUU3039.
Pełny tekst źródłaThe aim of this thesis is to assess the spatial distribution and origins of legacy and emerging trace elements in soils of the Nouvelle-Aquitaine region, based on the French soil monitoring network (RMQS). First, six legacy trace elements were targeted (As, Cd, Cu, Cr, Ni, Pb); their spatial distribution was estimated from the analysis of 356 samples from the first RMQS campaign. The median regional concentrations found are close to the national values. The comparison of regional and national whisker values, used as anomaly thresholds, revealed anomalous regional concentrations. With the help of geostatistics and geographical information systems, the origins of the trace elements studied in the soils could be identified. Arsenic has mixed geogenic and anthropogenic origins, mainly related to mining activities. Cd, Cr and Ni are mainly of geogenic origin at the regional scale. Cu has a mainly anthropogenic origin due to its use as a fungicide in viticulture. Pb also has anthropogenic origins related to mining activities, leaded gasoline and hunting. Second, four emerging elements (Ag, Pd, Pt, Rh) were studied in 35 soil samples collected during the second RMQS campaign. The concentrations found indicate that these soils are slightly contaminated by these elements. Statistical analyses show that Ag, Pb and Rh on the one hand, and Pd and Pt on the other, are correlated. The analysis of the spatial distribution, cross-referenced with geographical, geological and agricultural information, showed that automobile emissions are not a major source of PPGE in forest and agricultural soils. Rather, the distribution of Pd and Pt would be of natural origin, with possible anthropogenic contributions coming from: i) the long-distance dispersion of Pd and Pt by particles suspended in the atmosphere; ii) inputs, notably mineral fertilisers, to agricultural soils.
The origin of Ag and Rh would likewise be mainly natural, with the highest values found in the vicinity of Ag and Pb deposits and mining sites.
Fopa, Léon Constantin. "Mise en contexte des traces pour une analyse en niveaux d'abstraction". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM077/document.
Pełny tekst źródłaApplication analysis and debugging are increasingly challenging tasks in modern systems, especially systems based on embedded multiprocessor components (MPSoCs), which make up the majority of our everyday devices. The use of execution traces is unavoidable for the detailed analysis of such systems and the identification of unexpected behaviors. Although a trace offers the developer a rich corpus of information, the information relevant to the analysis is hidden in the trace and is unusable without a high level of expertise. Tools dedicated to trace analysis therefore become necessary. However, existing tools take little or no account of the aspects specific to an application, or of the developer's domain knowledge, to optimize the analysis task. In this thesis, we propose an approach that allows the developer to represent, manipulate and query an execution trace using concepts drawn from her own domain knowledge. Our approach uses an ontology to model and query domain concepts in a trace, and an inference engine to reason about these concepts. Specifically, we propose VIDECOM, a domain ontology for the analysis of execution traces of multimedia applications embedded on MPSoCs. We then focus on scaling the use of this ontology to the analysis of huge traces. To this end, we carry out a comparative study of different ontology management systems (triplestores) to determine the architecture most appropriate for handling very large traces with our VIDECOM ontology. We also propose an inference engine that addresses the challenges of reasoning about domain concepts, namely inferring the temporal order between domain concepts in the trace and ensuring the termination of the process of generating new knowledge from domain knowledge. Finally, we illustrate the practical use of VIDECOM in the SoC-Trace project for the analysis of real execution traces on MPSoCs.
Camus, Brice. "Formules des traces semi-classique au niveau d'une énergie critique". Reims, 2001. http://www.theses.fr/2001REIMS015.
Pełny tekst źródłaWe study the semi-classical trace formula at the level of a critical energy for an h-pseudo-differential operator on R^n with a unique non-degenerate critical point of the principal symbol. In particular, we consider the case where the Hessian matrix of the principal symbol at the critical point is non-definite. This leads to the study of Hamiltonian systems near equilibrium and near the non-zero periods of the linearized flow. The contributions of these periods lead to a reformulation of the problem in terms of degenerate oscillatory integrals. The new contributions are interpreted in terms of the local geometry of the energy surface and the classical dynamics on this energy surface.
Dumas, Chloé. "Impact of extreme events on particulate trace metal transfer from the continent to the deep sea". Perpignan, 2014. https://tel.archives-ouvertes.fr/tel-01164554.
Pełny tekst źródłaMoy, Eloïse. "Étude de l’écriture chez l’enfant et l’adulte porteurs de trisomie 21 : analyse de la trace écrite et de sa dynamique". Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM3040/document.
Pełny tekst źródłaDespite technological advances, handwriting continues to be a heavily solicited skill in the school setting and partly underpins successful integration and participation in community life. To this day, handwriting in people with Down syndrome (DS) remains a topic little addressed by the scientific community. The objective of the current research is therefore to study writing skills in children and adults with DS compared with mental-age-matched and chronological-age-matched typically developing populations. A task of copying text and writing single cursive letters on a graphics tablet reveals similar handwriting capacities between the DS group and the group of typical children of the same developmental age. Various individual factors influencing writing are also highlighted in the DS population, including fine motor skills and visuo-motor control. Finally, presenting letters in different modalities shows that the fluidity and trajectory of letter writing can be improved through visualization of the tracing and through verbal instructions. Overall, the results are consistent with the hypothesis of a developmental delay and do not point to a DS-specific deficit. Our results encourage further investigation into the impact of tasks involving fine motor skills and visuo-motor control, as well as training of the writing trajectory with the aid of visual and verbal cues, in order to improve the quality and speed of writing and of the movement execution involved in writing in persons with DS.
Cachia, Maxime. "Caractérisation des transferts d’éléments trace métalliques dans une matrice gaz/eau/roche représentative d'un stockage subsurface de gaz naturel". Thesis, Pau, 2017. http://www.theses.fr/2017PAUU3006/document.
Pełny tekst źródłaNatural gas represents 20% of energy consumption worldwide. This share is expected to increase in the coming years due to the energy transition. For economic and strategic reasons, and in order to balance energy demand between summer and winter, natural gas can be stored in underground facilities such as aquifers. Injection and withdrawal operations consequently favour contact between gaseous, liquid and solid species and allow transfer of chemical species from one matrix to another. Moreover, although natural gases are composed essentially of methane (70-90 vol%), they can also exhibit various concentrations of metallic trace elements (mercury, arsenic, tin…). Given the harmful effects of these compounds on industrial infrastructure and on the environment, understanding the impact of natural gas composition on aquifer storage is crucial. The work of this thesis fits within this context, with the objective of characterizing gas, water and rock matrices and their potential interactions, focusing on metallic trace elements. We therefore devoted part of this thesis to optimizing the conditions of use (i) of a sampling device certified for EX Zone 0, operating on the bubbling principle, and (ii) of the trapping methodology and the associated analytical methods. This unique device allows metal sampling from natural gases at pressures up to 100 bar. Its use on industrial sites has made it possible to measure and monitor over several years the metallic trace element composition of a natural gas, along with more limited analyses of a biogas and a biomethane. These latter two gases are indeed intended to reduce fossil fuel consumption, particularly of natural gas. Biomethane will use the same transport network and be temporarily stored in the same way as natural gas.
In addition to the gaseous phase, we studied the water and mineral phases to characterize the evolution of their chemical composition over time, without identifying specific transfer mechanisms linked to gas storage activity.
Aysola, Prasad. "Pulse microwave-mediated sample clean-up method to analyse trace metals, PCBs and pesticides, and for the treatment of organic wastes". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ39070.pdf.
Pełny tekst źródłaJourdain, Anne. "Étude et développement d'un spectromètre photoacoustique intègre sur silicium pour analyse de gaz". Université Joseph Fourier (Grenoble), 1998. http://www.theses.fr/1998GRE10165.
Pełny tekst źródłaOttogalli, François-Gaël. "Observations et analyses quantitatives multi-niveaux d'applications à objets réparties". Phd thesis, Université Joseph Fourier (Grenoble), 2001. http://tel.archives-ouvertes.fr/tel-00004697.
Pełny tekst źródłaBorchert, Manuela. "Interactions between aqueous fluids and silicate melts : equilibration, partitioning and complexation of trace elements". Phd thesis, Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4208/.
Pełny tekst źródłaThe formation and evolution of granites has been the focus of many geological studies for decades, since the Earth's crust is largely composed of granitoid rocks. Besides temperature, the water content of the melt is of particular importance for the formation of granitic melts, because this parameter can decisively alter the chemical composition of the melt. The exsolution of aqueous fluids from melts leads to a redistribution of elements between these phases. Owing to the lower density of the aqueous fluid compared with the melt and the host rock, the fluid begins to ascend from deeper levels. This entails not only a spatial separation of melt and fluid but also alteration of the host rock. This process is particularly important in the formation of magmatic-hydrothermal ore deposits and in the late evolutionary stages of magmatic complexes. A detailed understanding of these processes requires experimental investigation of element behavior in such systems as a function of parameters such as temperature, pressure and the chemical composition of the system, and the determination of element partition coefficients as a function of these variables. Trace elements are particularly suitable for such investigations because, unlike major elements, they are not essential for the stability of the other phases present, yet can respond very sensitively to changes in intensive variables. Moreover, in geochemical analyses of minerals and rocks, many trace elements, trace element ratios and trace element isotopes are used as petrogenetic indicators, i.e. these data provide information on when, at what depth and under which chemical conditions a rock formed, and which further processes it experienced on its way to the Earth's surface.
However, for many trace elements the dependence of fluid-melt partitioning on intensive variables has not been experimentally investigated, or only insufficiently so. In addition, the majority of experimentally determined partition coefficients and their interpretation, particularly with regard to element complexation in the fluid, are based on the analysis of rapidly quenched phases. It has not yet been established whether such analyses are representative of the compositions of the phases at high pressures and temperatures. The aim of this study was to compile an experimental data set on trace element partitioning between granitic melts and aqueous fluids as a function of melt composition, fluid salinity, pressure and temperature. A major concern of the work was the further development of an experimental method in which the trace element content of the fluid is determined in situ, i.e. at high pressures and temperatures and in equilibrium with a silicate melt. The data obtained in this way can then be compared with the results of quench experiments in order to critically assess those results as well as literature data. The data for all trace elements investigated in this work (Rb, Sr, Ba, La, Y and Yb) show: 1. under the conditions investigated, a preference for the melt, independent of the chemical composition of melt and fluid, of pressure or of temperature; 2. the use of chloride-bearing fluids can raise the partition coefficients by 1 to 2 orders of magnitude; and 3. for the partition coefficients of Sr, Ba, La, Y and Yb, a strong dependence on melt composition in the chloridic system. The comparison of the data from the different methods shows that, particularly for chloride-free fluids, large discrepancies exist between the in-situ data and analyses of quenched samples.
This result clearly demonstrates that back-reactions occur during quenching of the samples, and that data based on analyses of quenched fluids should be used only with caution. The variation of the partition coefficients of Sr, Ba, La, Yb and Y as a function of melt composition is attributable either to a change in complexation in the fluid and/or to a modified incorporation of these elements into the melt. Therefore, within the scope of this work, a first attempt was made to determine element complexation in silicate-bearing fluids directly at high temperatures and pressures. The data for Sr show that, depending on the melt composition, different complexes must be stable.
Thomas, Gaël. "Applications actives : construction dynamique d'environnements d'exécutions flexibles homogènes". Paris 6, 2005. http://www.theses.fr/2005PA066170.
Pełny tekst źródłaGoncalves, Ilidio. "Contrôle de l'atmosphère des lampes à incandescence par une méthode non destructive". Toulouse 3, 1991. http://www.theses.fr/1991TOU30176.
Pełny tekst źródłaLouise, Stéphane. "Calcul de majorants sûrs de temps d'exécution au pire pour des tâches d'applications temps-réels critiques, pour des systèmes disposants de caches mémoire". Phd thesis, Université Paris Sud - Paris XI, 2002. http://tel.archives-ouvertes.fr/tel-00695930.
Pełny tekst źródłaFrança, José Ricardo de Almeida. "Télédétection satellitaire des feux de végétation en région intertropicale. Application à l'estimation des flux des composés en trace émis dans l'atmosphère". Toulouse 3, 1994. http://www.theses.fr/1994TOU30086.
Pełny tekst źródłaMahieuxe, Bruno. "Capteurs à fibre optique pour le dosage des nitrates". Vandoeuvre-les-Nancy, INPL, 1995. http://www.theses.fr/1995INPL046N.
Pełny tekst źródła