Dissertations / Theses on the topic 'Phase analysis'

To see the other types of publications on this topic, follow the link: Phase analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Phase analysis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Khalifa, K. M. "Two-phase slug flow measurement using ultrasonic techniques in combination with T-Y junctions." Thesis, Cranfield University, 2010. http://dspace.lib.cranfield.ac.uk/handle/1826/10195.

Full text
Abstract:
The accurate measurement of multiphase flows of oil/water/gas is a critical element of oil exploration and production. Thus, over the last three decades, the development and deployment of in-line multiphase flow metering systems has been a major focus worldwide. Accurate measurement of multiphase flow in the oil and gas industry is difficult because there is a wide range of flow regimes, and multiphase meters do not generally perform well under the intermittent slug flow conditions which commonly occur in oil production. This thesis investigates the use of Doppler and cross-correlation ultrasonic measurements made in different high gas void fraction flows (partially separated liquid and gas flows, homogeneous flow and raw slug flow) to assess the accuracy of measurement in these regimes. This approach has been tested on water/air flows in a 50 mm diameter pipe facility. The system employs partial gas/liquid separation and homogenisation using a T-Y junction configuration. A combination of ultrasonic measurement techniques was used to measure flow velocities, and conductivity rings to measure the gas fraction. In the partially separated regime, ultrasonic cross-correlation and conductivity rings are used to measure the liquid flow-rate. In the homogeneous flow, a clamp-on ultrasonic Doppler meter is used to measure the homogeneous velocity, combined with conductivity ring measurements to provide measurement of the liquid and gas flow-rates. The slug flow regime measurements employ the raw Doppler shift data from the ultrasonic Doppler flowmeter, together with the slug flow closure equation and the gas fraction obtained by conductivity rings, to determine the liquid and gas flow-rates. Measurements were made with liquid velocities from 1.0 m/s to 2.0 m/s and gas void fractions up to 60%. Using these techniques, the accuracies of the liquid flow-rate measurement in the partially separated, homogeneous and slug regimes were 10%, 10% and 15% respectively. The accuracy of the gas flow-rate in both the homogeneous and raw slug regimes was 10%. The method offers the possibility of further improvement in accuracy by combining measurements from different regimes.
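The cross-correlation technique used in the partially separated regime infers velocity from the transit time of flow disturbances between two axially spaced sensors: the lag at the peak of the cross-correlation of the two signals gives the transit time, and velocity is spacing divided by that time. A minimal sketch of the idea; the sensor spacing, sampling rate and pulse shape below are invented for illustration and are not the thesis's experimental parameters:

```python
import numpy as np

def cross_correlation_velocity(upstream, downstream, sensor_spacing, fs):
    """Estimate flow velocity from the transit-time delay between two
    sensor signals, taken at the peak of their cross-correlation."""
    up = upstream - np.mean(upstream)
    down = downstream - np.mean(downstream)
    corr = np.correlate(down, up, mode="full")
    lag = np.argmax(corr) - (len(up) - 1)  # samples by which 'down' trails 'up'
    if lag <= 0:
        raise ValueError("no positive transit-time delay found")
    transit_time = lag / fs
    return sensor_spacing / transit_time

# Synthetic check: a disturbance passing two sensors 0.1 m apart at 2.0 m/s
fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)
upstream = np.exp(-((t - 0.3) ** 2) / 1e-4)        # pulse at upstream sensor
delay = 0.1 / 2.0                                  # 0.05 s transit time
downstream = np.exp(-((t - 0.3 - delay) ** 2) / 1e-4)
v = cross_correlation_velocity(upstream, downstream, 0.1, fs)
```

With the delay an exact multiple of the sampling interval, the recovered velocity matches the imposed 2.0 m/s.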
APA, Harvard, Vancouver, ISO, and other styles
3

CAIRO, BEATRICE. "ESTIMATING CARDIORESPIRATORY COUPLING FROM SPONTANEOUS VARIABILITY IN HEALTH AND PATHOLOGY." Doctoral thesis, Università degli Studi di Milano, 2021. http://hdl.handle.net/2434/816612.

Full text
Abstract:
Several mechanisms are responsible for cardiorespiratory interactions observed in humans. The action of these mechanisms results in specific patterns in heart rate variability (HRV) and affects the interaction between heart and respiratory activities. The four main types of phenomena resulting from the interactions between heart and respiratory system are: i) respiratory sinus arrhythmia (RSA); ii) cardioventilatory coupling; iii) cardiorespiratory phase synchronization; iv) cardiorespiratory frequency synchronization. The aim of this thesis is to describe and quantify different aspects of cardiorespiratory interactions employing a variety of methods from literature, adapted and optimized for the usual experimental settings in which HRV and respiratory signal are commonly acquired. Six analytical methods were exploited for this purpose assessing transfer entropy (TE), cross-conditional entropy via normalized corrected cross-conditional entropy (NCCCE), squared coherence (K2), cardioventilatory coupling via normalized Shannon entropy (NSE) of the time interval between QRS complex and inspiratory, or expiratory, onsets, phase synchronization via a synchronization index (SYNC%) and pulse-respiration quotient (PRQ). These approaches were employed with the goal of testing the effects of a sympathetic challenge, namely postural stimuli like head-up tilt (TILT) and active standing (STAND), on cardiorespiratory interactions. The proposed approaches were tested on three protocols: i) amateur athletes undergoing an inspiratory muscle training (IMT) during supine rest (REST) and STAND; ii) healthy volunteers undergoing a prolonged bed rest deconditioning (HDBR), during REST and TILT; iii) patients suffering from postural orthostatic tachycardia syndrome (POTS), during REST and TILT, at baseline and at one-year follow-up. The most important findings of the present doctoral thesis concern the effect of postural stimuli on cardiorespiratory interactions in health and disease. 
Indeed, all proposed indexes gave a coherent view of cardiorespiratory interaction strength in response to the orthostatic challenge, as it decreased in all protocols. However, the statistical power of the indexes differed. TE and K2 appeared to be particularly weak in detecting the effect of the postural challenge on cardiorespiratory interactions. NCCCE, NSE and SYNC% exhibited a much stronger ability in this regard, while PRQ seemed too closely related to heart rate in the absence of any significant modification of the respiratory rate. Conversely, all indexes appeared to be weak in detecting the chronic effects of IMT and HDBR in a healthy population and the long-term consequences of clinical management in POTS patients. The thesis concludes that different aspects of cardiorespiratory interactions can be modified acutely, but the chronic effects of a long-term treatment or intervention on the magnitude of cardiorespiratory interactions are negligible and/or could be confounded with the intrinsic variability of the markers. Considerations about the methodological dissimilarities and differences in effectiveness of the proposed indexes suggest that the simultaneous exploitation of multiple bivariate methodologies in cardiorespiratory studies is advantageous, as different aspects of cardiorespiratory interactions can be evaluated concurrently. This simultaneous evaluation can be carried out at a relatively negligible computational cost and in applicative contexts where only an ECG signal is available.
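Of the indexes above, the pulse-respiration quotient (PRQ) has the simplest standard definition: mean heart rate divided by mean respiratory rate. A minimal sketch of that ratio computed from beat-to-beat and breath-to-breath intervals; the interval values are fabricated for illustration and do not come from the thesis's recordings:

```python
import numpy as np

def pulse_respiration_quotient(rr_intervals_s, breath_intervals_s):
    """Pulse-respiration quotient: mean heart rate divided by mean
    respiratory rate, both derived from interval series in seconds."""
    heart_rate = 60.0 / np.mean(rr_intervals_s)      # beats per minute
    resp_rate = 60.0 / np.mean(breath_intervals_s)   # breaths per minute
    return heart_rate / resp_rate

# Hypothetical resting recording: ~75 bpm heart rate, ~15 breaths/min
rr = np.full(75, 0.8)        # 0.8 s RR intervals  -> 75 bpm
breaths = np.full(15, 4.0)   # 4 s breath cycles   -> 15 breaths/min
prq = pulse_respiration_quotient(rr, breaths)
```

The thesis's observation that PRQ tracks heart rate when respiratory rate is unchanged follows directly from this definition: with the denominator fixed, PRQ is heart rate up to a constant.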
APA, Harvard, Vancouver, ISO, and other styles
4

Degeiter, Matthieu. "Étude numérique de la dynamique des défauts d’alignement des précipités γ’ dans les superalliages monocristallins à base de nickel." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0038/document.

Full text
Abstract:
In multiphase alloys, internal elastic fields often arise as a result of a coherently adjusted misfit between the lattices of coexisting phases. Given their long-range and usually anisotropic nature, the interaction of these fields is known to significantly alter the kinetics of diffusion-controlled phase transformations, as well as influence the shapes and spatial arrangement of the misfitting precipitates. In the microstructure of single-crystal nickel-base superalloys, obtained by precipitation of the L12-ordered γ’ phase in the FCC γ matrix, elasticity leads to the formation of nearly periodic alignments of the cuboidal γ’ precipitates. However, the γ/γ’ microstructure systematically displays defects in the precipitate alignment: branches, macro-dislocations and chevron patterns. We first address the question of the origin of these alignment defects. Stability analyses of the periodic arrangement of elastically interacting precipitates are carried out. Contrary to the expected stability, the semi-analytical calculations revealed the periodic distribution of cubic γ‘ precipitates to be unstable against specific perturbation modes. The main instabilities are the [100] longitudinal mode and the [110] transverse mode, and their instability range is analyzed with respect to the elastic anisotropy. The consequences of these unstable modes are investigated using a classic phase field method, by modeling the evolution of periodic microstructures undergoing small initial perturbations. We show the expression of the instabilities mainly proceeds by the evolution of the precipitate shapes, and leads to the formation of patterns which were related to experimental microstructures. Specifically, the [110] transverse instability is responsible for the formation of chevron patterns. 
The effects of the volume fraction and of an inhomogeneity of the C' shear modulus on the stability of the arrangement are studied, and we show the role they play in the partial stabilization of the periodic distribution, though the [100] longitudinal mode always remains unstable. In phase field calculations carried out in previous studies, the dynamics of alignment defects are analyzed by means of topological parameters derived from pattern formation theory. During annealing, branches and macro-dislocations were observed to migrate in the microstructure according to climbing and gliding mechanisms. We then use a new, intrinsically discrete formulation of phase field models, in which the interfaces are resolved with essentially one grid point, with no pinning on the grid and an accurate rotational invariance. This approach, known as the Sharp Phase Field Method (S-PFM), is implemented on an FCC grid and accounts for the four translational variants of the γ' precipitates. We show that the S-PFM allows for the modeling of large-scale microstructures, with several thousand precipitates in both two and three dimensions, and provides access to statistical information on the microstructure evolution and on the dynamics of alignment defects. We finally discuss the perspective of modeling the evolution of the γ/γ' microstructure at the macroscale by means of a description of the defect dynamics in the precipitate alignments.
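The phase field approach used throughout this work evolves a smooth order parameter whose level set tracks the interface, driven by a gradient term and a double-well potential. A minimal 1D Allen-Cahn sketch (the simplest non-conserved phase field model, not the thesis's elastically coupled model); grid size, mobility and gradient coefficient are illustrative choices:

```python
import numpy as np

def allen_cahn_step(phi, dx, dt, mobility=1.0, kappa=1.0):
    """One explicit Euler step of the 1D Allen-Cahn equation
    d(phi)/dt = M * (kappa * phi_xx - W'(phi)),
    with double-well potential W(phi) = (phi^2 - 1)^2 / 4."""
    lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2  # periodic Laplacian
    dW = phi**3 - phi                                             # W'(phi)
    return phi + dt * mobility * (kappa * lap - dW)

# Relax a sharp step into a smooth, tanh-like diffuse interface
n, dx, dt = 200, 0.1, 0.002     # dt below the explicit stability limit dx^2/2
x = np.arange(n) * dx
phi = np.where((x > 5) & (x < 15), 1.0, -1.0)  # two domains in a periodic box
for _ in range(2000):
    phi = allen_cahn_step(phi, dx, dt)
```

After relaxation the field stays near ±1 in the bulk of each domain while the jumps spread into diffuse interfaces, which is the mechanism by which precipitate shapes evolve in the full model.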
APA, Harvard, Vancouver, ISO, and other styles
5

Lindgren, Erik. "Regularity properties of two-phase free boundary problems." Doctoral thesis, KTH, Matematik (Inst.), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10336.

Full text
Abstract:
This thesis consists of four papers which are all related to the regularity properties of free boundary problems. The problems considered have in common that they have some sort of two-phase behaviour. In papers I-III we study the interior regularity of different two-phase free boundary problems. Paper I is mainly concerned with the regularity properties of the free boundary, while in papers II and III we devote our study to the regularity of the function, but as a by-product we obtain some partial regularity of the free boundary. The problem considered in paper IV has a somewhat different nature. Here we are interested in certain approximations of the obstacle problem. Two major differences are that we study regularity properties close to the fixed boundary and that the problem converges to a one-phase free boundary problem.
APA, Harvard, Vancouver, ISO, and other styles
6

Watson, Richard Charles. "Studies of reversed phase high performance liquid chromatography (RP-HPLC) stationary phases." Thesis, University of Nottingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

von, Schwerin Erik. "Adaptivity for Stochastic and Partial Differential Equations with Applications to Phase Transformations." Doctoral thesis, KTH, Numerisk Analys och Datalogi, NADA, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4477.

Full text
Abstract:
This work is concentrated on efforts to efficiently compute properties of systems, modelled by differential equations, involving multiple scales. Goal oriented adaptivity is the common approach to all the treated problems. Here the goal of a numerical computation is to approximate a functional of the solution to the differential equation, and the numerical method is adapted to this task. The thesis consists of four papers. The first three papers concern the convergence of adaptive algorithms for numerical solution of differential equations; based on a posteriori expansions of global errors in the sought functional, the discretisations used in a numerical solution of the differential equation are adaptively refined. The fourth paper uses expansion of the adaptive modelling error to compute a stochastic differential equation for a phase-field by coarse-graining molecular dynamics. An adaptive algorithm aims to minimise the number of degrees of freedom needed to make the error in the functional less than a given tolerance. The number of degrees of freedom provides the convergence rate of the adaptive algorithm as the tolerance tends to zero. Provided that the computational work is proportional to the degrees of freedom, this gives an estimate of the efficiency of the algorithm. The first paper treats approximation of functionals of solutions to second order elliptic partial differential equations in bounded domains of ℝ^d, using isoparametric d-linear quadrilateral finite elements. For an adaptive algorithm, an error expansion with computable leading order term is derived and used in a computable error density, which is proved to converge uniformly as the mesh size tends to zero. For each element an error indicator is defined by the computed error density multiplied by the local mesh size to the power of 2+d. The adaptive algorithm is based on successive subdivisions of elements, where it uses the error indicators. 
It is proved, using the uniform convergence of the error density, that the algorithm either reduces the maximal error indicator by a factor or stops; if it stops, then the error is asymptotically bounded by the tolerance using the optimal number of elements for an adaptive isotropic mesh, up to a problem independent factor. Here the optimal number of elements is proportional to the d/2 power of the L^{d/(d+2)} quasi-norm of the error density, whereas a uniform mesh requires a number of elements proportional to the d/2 power of the larger L^1 norm of the same error density to obtain the same accuracy. For problems with multiple scales, in particular, these convergence rates may differ greatly, even though the convergence order may be the same. The second paper presents an adaptive algorithm for Monte Carlo Euler approximation of the expected value E[g(X(τ), τ)] of a given function g depending on the solution X of an Itô stochastic differential equation and on the first exit time τ from a given domain. An error expansion with computable leading order term for the approximation of E[g(X(T))] with a fixed final time T>0 was given in [Szepessy, Tempone, and Zouraris, Comm. Pure and Appl. Math., 54, 1169-1214, 2001]. This error expansion is now extended to the case with stopped diffusion. In the extension, conditional probabilities are used to estimate the first exit time error, and difference quotients are used to approximate the initial data of the dual solutions. For the stopped diffusion problem the time discretisation error is of order N^{-1/2} for a method with N uniform time steps. Numerical results show that the adaptive algorithm improves the time discretisation error to the order N^{-1}, with N adaptive time steps. The third paper gives an overview of the application of the adaptive algorithms in the first two papers to ordinary, stochastic, and partial differential equations. 
The fourth paper investigates the possibility of computing some of the model functions in an Allen-Cahn type phase-field equation from a microscale model, where the material is described by stochastic, Smoluchowski, molecular dynamics. A local average of contributions to the potential energy in the micro model is used to determine the local phase, and a stochastic phase-field model is computed by coarse-graining the molecular dynamics. Molecular dynamics simulations of a two-phase system at the melting point are used to compute a double-well reaction term in the Allen-Cahn equation and a diffusion matrix describing the noise in the coarse-grained phase-field.
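The Monte Carlo Euler approximation underlying the second paper is easy to illustrate on a case with a known answer. The sketch below uses uniform (non-adaptive) time steps and geometric Brownian motion with a fixed final time, both chosen here for simplicity rather than taken from the thesis, so the estimate can be checked against the exact mean E[X(T)] = x0·exp(μT):

```python
import numpy as np

def monte_carlo_euler(x0, mu, sigma, T, n_steps, n_paths, g, rng):
    """Plain Monte Carlo Euler-Maruyama estimate of E[g(X(T))] for
    dX = mu*X dt + sigma*X dW (geometric Brownian motion)."""
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
        x = x + mu * x * dt + sigma * x * dw         # Euler-Maruyama update
    return float(np.mean(g(x)))

rng = np.random.default_rng(0)
est = monte_carlo_euler(1.0, 0.05, 0.2, 1.0, 200, 100_000, lambda x: x, rng)
exact = float(np.exp(0.05))   # E[X(1)] = x0 * exp(mu*T) for GBM
```

The adaptive algorithms of the thesis refine exactly these two discretisations, the time step and the number of samples, guided by computable error expansions instead of uniform choices.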
APA, Harvard, Vancouver, ISO, and other styles
8

Grahn, Micael J. "Wirelessly networked digital phased array analysis and development of a phase synchronization concept." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion-image.exe/07Sep%5FGrahn.pdf.

Full text
Abstract:
Thesis (M.S. in Electronic Warfare Systems Engineering)--Naval Postgraduate School, September 2007.
Thesis Advisor(s): Jenn, David. "September 2007." Description based on title screen as viewed on October 23, 2007. Includes bibliographical references (p.67-69). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
9

Tsoupidi, Rodothea Myrsini. "Two-phase WCET analysis for cache-based symmetric multiprocessor systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-222362.

Full text
Abstract:
The estimation of the worst-case execution time (WCET) of a task is a problem that concerns the field of embedded systems and, especially, real-time systems. Estimating a safe WCET for single-core architectures without speculative mechanisms is a challenging task and an active research topic. However, the advent of advanced hardware mechanisms, which often lack predictability, complicates the current WCET analysis methods. The field of embedded systems has high safety considerations and is, therefore, conservative with speculative mechanisms. However, nowadays even safety-critical applications move in the direction of multiprocessor systems. In a multiprocessor system, each task that runs on a processing unit might affect the execution time of the tasks running on different processing units. In shared-memory symmetric multiprocessor systems, this interference occurs through the shared memory and the common bus. The presence of private caches introduces cache-coherence issues that result in further dependencies between the tasks. The purpose of this thesis is twofold: (1) to evaluate the feasibility of an existing one-pass WCET analysis method with an integrated cache analysis and (2) to design and implement a cache-based multiprocessor WCET analysis by extending the single-core method. The single-core analysis is part of KTH's Timing Analysis (KTA) tool. The WCET analysis of KTA uses Abstract Search-based WCET Analysis, a one-pass technique that is based on abstract interpretation. The evaluation of the feasibility of this analysis includes the integration of microarchitecture features, such as cache and pipeline, into KTA. These features are necessary for extending the analysis to hardware models of modern embedded systems. The multiprocessor analysis of this work uses the single-core analysis in two stages to estimate the WCET of a task running in the presence of temporally and spatially interfering tasks.
The first phase records the memory accesses of all the temporally interfering tasks, and the second phase uses this information to perform the multiprocessor WCET analysis. The multiprocessor analysis assumes the presence of private caches and a shared communication bus and implements the MESI protocol to maintain cache coherence.
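The MESI protocol used in the analysis tracks each cache line as Modified, Exclusive, Shared or Invalid, changing state on local accesses and on bus traffic snooped from other processors. A simplified sketch of the per-line state machine; the event names below are chosen here for illustration (real bus protocols distinguish more transaction types), and write-backs are noted only as comments:

```python
def mesi_next_state(state, event):
    """Next state of one cache line under a simplified MESI protocol.
    Local events: 'read_hit', 'read_miss_shared' (another cache holds
    the line), 'read_miss_exclusive' (no other holder), 'write'.
    Snooped events from other processors: 'bus_read', 'bus_write'."""
    table = {
        ("I", "read_miss_shared"): "S",
        ("I", "read_miss_exclusive"): "E",
        ("I", "write"): "M",          # read-for-ownership, then modify
        ("S", "read_hit"): "S",
        ("S", "write"): "M",          # invalidate other sharers via bus
        ("S", "bus_write"): "I",
        ("E", "read_hit"): "E",
        ("E", "write"): "M",          # silent upgrade, no bus traffic
        ("E", "bus_read"): "S",
        ("E", "bus_write"): "I",
        ("M", "read_hit"): "M",
        ("M", "write"): "M",
        ("M", "bus_read"): "S",       # write back, downgrade to Shared
        ("M", "bus_write"): "I",      # write back, then invalidate
    }
    return table[(state, event)]

# A line read exclusively, written locally, then snooped by another core:
s = mesi_next_state("I", "read_miss_exclusive")
s = mesi_next_state(s, "write")
s = mesi_next_state(s, "bus_read")
```

Transitions like the final one, where a local write on one core later forces a write-back when another core reads the line, are precisely the cross-core timing dependencies the second analysis phase has to account for.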
APA, Harvard, Vancouver, ISO, and other styles
10

CARUSO, VALENTINA. "DEGRADATION OF ORGANIC AND MINERAL PHASES IN BURIED HUMAN REMAINS: THE EARTH SCIENCES ANALYTICAL CHARACTERIZATION." Doctoral thesis, Università degli Studi di Milano, 2017. http://hdl.handle.net/2434/478504.

Full text
Abstract:
The thesis focuses on the characterization of the alteration of the mineral and organic phases of human bone tissue, investigated with different approaches, from different burial contexts with ages spanning from the Late Roman period to the present day. This topic is very important in paleontological, archaeo-anthropological and forensic contexts in order to understand the taphonomic agents and then to provide biological data, so as possibly to discern human behavior in ancient funerary as well as in recent forensic contexts. It is well known that peri- and post-mortem events may leave marks that have to be interpreted in the light of the state of conservation or degradation of the skeletal remains. In fact, physical anthropologists are frequently required to date human bone remains in order to recognize whether osteological samples are of archaeological, historic or forensic interest. The post-mortem interval (PMI), the time elapsed between death and the discovery of the corpse or skeletal remains, is extremely difficult to evaluate in the absence of direct chronometric dating (e.g. 14C), since bones may undergo several alterations, both structural and chemical, depending on the environment in which they were deposited. Because bone tissue is an intimate association of mineral (carbonate-hydroxyapatite) and organic components (collagen) arranged in an ordered structure, different levels of degradation are possible. Over time, post-mortem degradation is dominated by the loss of structural collagen to collagenolytic enzymes, which cause rapid swelling and hydrolysis of the protein fibers. Collagen dissolution is generally accompanied by the alteration of mineral crystals, which are vulnerable to diagenetic changes due to their small size.
During diagenesis, the protein can be totally or partially removed and can be replaced by inorganic precipitates, the most common being hydroxyapatite, which in the process is subjected to recrystallization, ion exchange and substitution. As a consequence, when depositional conditions are favorable for bone preservation, the mineral crystallinity increases and the porosity and chemical composition change. The quality and state of the organic and inorganic phases can act positively or negatively both on bone mechanical properties in life and on the decomposition process after death, slowing or accelerating it. Several studies have been performed to better understand the taphonomy of bone material during burial. It appears that bone degradation depends on a wide range of environmental interactions, including biological, chemical and physical factors: average temperature and humidity, microbiological composition and activity, soil chemistry (mineralogy and pH) and permeability, mechanical pressure, and numerous other factors. Different types of bone degradation are observable at different scales of observation; in this study in particular, bone preservation was investigated at the macroscopic, biomolecular, microscopic, ultramicroscopic and chemical scales. The aim of this research is thus to further describe the impact of environmental conditions on bone preservation, and the effect of time, by applying and comparing the results of different analytical techniques. For this study, 40 human skeletons of adult individuals from four dated burial locations in the Milan area were analyzed. The first is a necropolis dated to the Late Roman age (3rd-4th century AD), the second is a 17th-century AD mass grave, the third is an ossuary containing bones dated between the 15th and 18th century AD, and the last is a modern cemetery.
The macroscopic analysis evaluated the general appearance of the remains and their state of preservation through the observation of specific macroscopic parameters and morphological characteristics. The Luminol test, a fast and inexpensive method developed to detect blood traces, was performed to investigate the presence of haemoglobin preserved in bone. The histological analysis, conducted on calcified thin sections, considered the presence or absence of tunneling and bioerosion, in accordance with the Oxford Histological Index (OHI). Also, to evaluate the state of preservation of the organic component, primarily collagen, the samples were decalcified and stained with Hematoxylin and Eosin. Because of the lack of literature in this field, we created a new Decalcified Histological Index (DHI). Both calcified and decalcified bone thin sections were observed under transmitted and polarized light microscopy in order to test the optical properties of the structural components. Scanning electron microscopy coupled with energy-dispersive X-ray spectroscopy (SEM-EDS) was used to evaluate exogenous chemical elements and minerals adsorbed from the burial environment, as well as histological changes such as recrystallization, tunnelling and fractures due to fungal or bacterial action. X-ray micro-computed tomography of bone sections was performed at the SYRMEP beamline of the third-generation Synchrotron Light Laboratory (ELETTRA) located in Trieste (Italy), with the purpose of evaluating and quantifying the preservation of bone structures, such as canals and lacunae, and the porosity changes due to diagenetic processes. Fourier transform infrared spectrometry (FT-IR) and micro-spectrometry (mFTIR) were performed at Simon Fraser University (Burnaby) in Canada to investigate the preservation of both the mineral and organic phases. Finally, 23 skeletons from the archaeological site of Travo (PC), dating from the 7th-8th century AD, and their burial ground sediments were sampled and analyzed.
Macroscopic, microscopic and chemical analyses were performed on the bones to evaluate the tissue preservation state at different scales; the soil samples collected from the graves were characterized for color, particle size distribution, pH, organic carbon and calcium carbonate concentration. This study shows that macroscopic, biomolecular, microscopic, ultramicroscopic and chemical alterations follow independent paths that affect bone preservation at different scales of observation. Therefore, the estimation of the diagenetic process cannot be limited to the macroscopic aspect of the bone tissue but must take into account biomolecular, microscopic and chemical alterations, since these may have affected the bone tissue differently at different scales. Bone degradation can be employed to estimate the post-mortem interval, or to reconstruct the burial environment of human remains, as long as the evaluation of taphonomic alterations is performed at different scales with different ad hoc methodologies. In fact, age and environment can play an equal role in the degradation of the organic and mineral phases, producing different effects on bone conservation at different levels.
APA, Harvard, Vancouver, ISO, and other styles
11

Guðmundsson, Reynir Leví. "A numerical study of two-fluid models for dispersed two-phase flow." Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-132.

Full text
Abstract:

In this thesis the two-fluid (Eulerian/Eulerian) formulation for dispersed two-phase flow is considered. Closure laws are needed for this type of model. We investigate both empirically based relations, which we refer to as a non-granular model, and relations obtained from the kinetic theory of dense gases, which we refer to as a granular model. For the granular model, a granular temperature is introduced, similar to the thermodynamic temperature. It is often assumed that the granular energy is in a steady state, such that an algebraic granular model is obtained.

The inviscid non-granular model in one space dimension is known to be conditionally well-posed. On the other hand, the viscous formulation is locally in time well-posed for smooth initial data, but with a medium- to high-wave-number instability. Linearizing the algebraic granular model around constant data gives similar results. In this study we consider a couple of issues.

First, we study the long-time behavior of the viscous model in one space dimension, where we rely on numerical experiments, both for the non-granular and the algebraic granular model. We try to regularize the problem by adding second-order artificial dissipation. The simulations suggest that it is not possible to obtain point-wise convergence using this regularization. Introducing a new measure, a concept of 1-D bubbles, gives hope for convergence in a sense other than point-wise.

Secondly, we analyse the non-granular formulation in two space dimensions. Similar results concerning well-posedness and instability are obtained as for the non-granular formulation in one space dimension. Investigation of the time scales of the formulation in two space dimensions suggests a severe restriction on the time step, such that explicit schemes are impractical.

Finally, our simulations in one space dimension show that peaks or spikes form in finite time and that the solution is highly oscillatory. We introduce a model problem to study the formation and smoothness of these peaks.
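The second-order artificial dissipation used as a regularization in this abstract can be illustrated on a scalar 1-D analogue (a sketch only, not the two-fluid system itself): a centrally discretized advection equation with an added dissipation term eps*u_xx on a periodic grid.

```python
import numpy as np

def step(u, a, dx, dt, eps):
    """One explicit step of u_t + a*u_x = eps*u_xx on a periodic grid.
    Central differences in space; the eps term is the second-order
    artificial dissipation (eps = 0 recovers the unstabilized scheme)."""
    up = np.roll(u, -1)  # u_{j+1}
    um = np.roll(u, 1)   # u_{j-1}
    return (u
            - a * dt / (2 * dx) * (up - um)          # central advection
            + eps * dt / dx**2 * (up - 2*u + um))    # artificial dissipation
```

Note that explicit stability of the dissipation term requires dt on the order of dx**2/eps, which echoes the severe time-step restriction reported above for the two-dimensional formulation.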

APA, Harvard, Vancouver, ISO, and other styles
12

Gudmundsson, Reynir Levi. "A numerical study of two-fluid models for dispersed two-phase flow." Doctoral thesis, KTH, Numerisk Analys och Datalogi, NADA, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-132.

Full text
Abstract:
In this thesis the two-fluid (Eulerian/Eulerian) formulation for dispersed two-phase flow is considered. Closure laws are needed for this type of model. We investigate both empirically based relations, which we refer to as a non-granular model, and relations obtained from the kinetic theory of dense gases, which we refer to as a granular model. For the granular model, a granular temperature is introduced, similar to the thermodynamic temperature. It is often assumed that the granular energy is in a steady state, such that an algebraic granular model is obtained. The inviscid non-granular model in one space dimension is known to be conditionally well-posed. On the other hand, the viscous formulation is locally in time well-posed for smooth initial data, but with a medium- to high-wave-number instability. Linearizing the algebraic granular model around constant data gives similar results. In this study we consider a couple of issues. First, we study the long-time behavior of the viscous model in one space dimension, where we rely on numerical experiments, both for the non-granular and the algebraic granular model. We try to regularize the problem by adding second-order artificial dissipation. The simulations suggest that it is not possible to obtain point-wise convergence using this regularization. Introducing a new measure, a concept of 1-D bubbles, gives hope for convergence in a sense other than point-wise. Secondly, we analyse the non-granular formulation in two space dimensions. Similar results concerning well-posedness and instability are obtained as for the non-granular formulation in one space dimension. Investigation of the time scales of the formulation in two space dimensions suggests a severe restriction on the time step, such that explicit schemes are impractical. Finally, our simulations in one space dimension show that peaks or spikes form in finite time and that the solution is highly oscillatory.
We introduce a model problem to study the formation and smoothness of these peaks.
APA, Harvard, Vancouver, ISO, and other styles
13

Maraun, Douglas. "What can we learn from climate data? : Methods for fluctuation, time/scale and phase analysis." Phd thesis, [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=981698980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Olsson, Elin. "Mass Conserving Simulations of Two Phase Flow." Licentiate thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Aouni, Jihane. "Utility-based optimization of phase II / phase III clinical development." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS032/document.

Full text
Abstract:
The main development of the thesis was devoted to the problem of dose-choice optimization in phase II dose-finding trials. We have considered this problem from the perspective of utility functions. We have allocated a utility value to the doses themselves, the sponsor's problem then being to find the best dose, that is to say, the one with the highest utility. We have limited ourselves to a single utility function integrating two components: an efficacy-related component (the PoS, i.e. the power of a phase III trial of 1000 patients of this dose versus placebo) and a safety-related component. For the latter, we chose to characterize it by the predictive probability of observing a toxicity rate lower than or equal to a given threshold (which we set to 0.15) in phase III (again for a trial of 1000 patients in total). This approach has the advantage of being similar to the concepts used in phase I trials in Oncology, which particularly aim to find the dose related to a limiting toxicity (notion of "Dose Limiting Toxicity"). We have adopted a Bayesian approach for the analysis of phase II data. Apart from the known theoretical advantages of the Bayesian approach compared with the frequentist approach (respect of the likelihood principle, less dependency on asymptotic results, robustness), we chose this approach for several reasons:
• It provides a more flexible framework for the sponsor's decision-making because it offers the possibility to combine (by definition of the Bayesian approach) a priori information with the available data: in particular, it makes it possible to integrate, more or less explicitly, the information available outside the phase II trial.
• The Bayesian approach allows greater flexibility in the formalization of the decision rules.
We studied the properties of the decision rules by simulating phase II trials of different sizes: 250, 500 and 1000 patients. For the last two designs (500 and 1000 patients in phase II), we also evaluated the interest of performing an interim analysis when half of the patients are enrolled (i.e. with the first 250 and the first 500 patients included, respectively). The purpose was then to evaluate whether or not, for larger phase II trials, allowing the possibility of choosing the dose in the middle of the study, and continuing the study to the end if the interim analysis is not conclusive, could reduce the size of the phase II trial while preserving the relevance of the final dose choice.
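A dose utility of the kind described here combines an efficacy component (the PoS) with a safety component (a predictive probability of acceptable toxicity). The sketch below shows how such a score could be computed from posterior draws; the multiplicative combination, the normal-approximation power formula, and all names and defaults are illustrative assumptions, not the thesis' exact decision rule.

```python
import math

def phase3_power(delta, sigma, n_per_arm, z_alpha=1.96):
    """Normal-approximation power of a two-arm phase III z-test (dose vs placebo)."""
    se = sigma * math.sqrt(2.0 / n_per_arm)
    # power = 1 - Phi(z_alpha - delta/se), with Phi the standard normal CDF
    return 1.0 - 0.5 * (1.0 + math.erf((z_alpha - delta / se) / math.sqrt(2.0)))

def dose_utility(effect_draws, tox_draws, sigma=1.0, n_per_arm=500, tox_cap=0.15):
    """Utility of a dose from posterior draws of its effect and toxicity rate:
    PoS (power averaged over the posterior) times the predictive probability
    that the phase III toxicity rate does not exceed tox_cap."""
    pos = sum(phase3_power(d, sigma, n_per_arm) for d in effect_draws) / len(effect_draws)
    p_safe = sum(t <= tox_cap for t in tox_draws) / len(tox_draws)
    return pos * p_safe
```

Ranking candidate doses by `dose_utility` then picks the dose trading off phase III success probability against the toxicity constraint.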
APA, Harvard, Vancouver, ISO, and other styles
16

Motamed, Mohammad. "Phase space methods for computing creeping rays." Licentiate thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Abdullah, Saman. "Analysis of individual feminine cycle hormone profiles for assessment of luteal defect." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1144.

Full text
Abstract:
Even in normally cycling women, hormone levels vary widely between cycles and between women. Beyond day-by-day levels, hormone profiles display a great variety of heights, durations, locations, and shapes. These observations have renewed the interest in the assessment of individual rather than general hormone profiles. Indeed, the cycle hormone profiles reported in the literature are averages of many individual profiles, and individual profiles may be far from matching these averages. This raises the need for sharper descriptions. In this thesis, we explore the diversity of hormonal profiles observed during the luteal phase of the menstrual cycle and present an original concept to characterize most hormone waves using only four parameters. This was obtained via a beta-binomial distribution. Moreover, we propose a new regression model that considers the hormonal profile as the dependent variable and a variety of binary or continuous variables as predictors. We applied the method to describe hormone profiles during the luteal phase and obtained interesting results. Instead of a binary classification (normal/abnormal), it would be more appropriate to consider a continuum from normal luteal phase to luteal deficiency. In the analyzed dataset, a small follicle had a negative impact on the quality of the luteal phase, and a high periovulatory PDG level (i.e., a premature luteinization) seemed detrimental to the luteal phase. The occurrence of a normal then low luteal PDG level is a probable sign of luteal phase abnormality. Furthermore, distinct progesterone metabolite profiles during the luteal phase were found to be correlated with several characteristics of the women and their cycles.
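To make the idea of a four-parameter hormone wave concrete, here is a toy sketch in which a wave over an (n+1)-day window is an amplitude A times a beta-binomial(n, a, b) profile, the shape parameters a and b controlling the wave's location and skew. This parameterization (A, n, a, b) is an illustrative guess, not necessarily the thesis' exact model.

```python
from math import comb, lgamma, exp

def log_beta(x, y):
    """Log of the Beta function via log-gamma (avoids overflow)."""
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial probability mass: C(n,k) * B(k+a, n-k+b) / B(a, b)."""
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

def hormone_wave(A, n, a, b):
    """Daily wave values: height A times a beta-binomial profile."""
    return [A * betabinom_pmf(k, n, a, b) for k in range(n + 1)]
```

Fitting (A, a, b) per cycle then summarizes each woman's wave by a handful of interpretable numbers instead of the full daily series.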
APA, Harvard, Vancouver, ISO, and other styles
18

Goff, Matthew. "Multivariate discrete phase-type distributions." Online access for everyone, 2005. http://www.dissertations.wsu.edu/Dissertations/Spring2005/m%5Fgoff%5F032805.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Hautphenne, Sophie. "An algorithmic look at phase-controlled branching processes." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210255.

Full text
Abstract:
Branching processes are stochastic processes describing the evolution of populations of individuals which reproduce and die independently of each other according to specific probability laws. We consider a particular class of branching processes, called Markovian binary trees, where the lifetime and birth epochs of individuals are controlled by a Markovian arrival process.

Our objective is to develop numerical methods to answer several questions about Markovian binary trees. The issue of the extinction probability is the main question addressed in the thesis. We first assume independence between individuals. In this case, the extinction probability is the minimal nonnegative solution of a matrix fixed-point equation which generally cannot be solved analytically. In order to solve this equation, we develop a linear algorithm based on functional iterations and a quadratic algorithm based on Newton's method, and we give their probabilistic interpretation in terms of the tree.
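A scalar Galton-Watson analogue of this matrix fixed-point equation makes both algorithms concrete: the extinction probability q is the minimal nonnegative solution of q = f(q), with f the offspring probability generating function. Starting from 0, functional iteration converges linearly to the minimal solution, while Newton's method on g(q) = f(q) - q converges quadratically. This is a sketch of the scalar case only, not the tree-structured matrix version developed in the thesis.

```python
P = (0.2, 0.3, 0.5)  # offspring distribution p_k for k = 0, 1, 2 children

def f(s, p=P):
    """Offspring probability generating function f(s) = sum_k p_k s^k."""
    return sum(pk * s**k for k, pk in enumerate(p))

def df(s, p=P):
    """Derivative f'(s)."""
    return sum(k * pk * s**(k - 1) for k, pk in enumerate(p) if k > 0)

def extinction_functional(tol=1e-12):
    """Linear algorithm: iterate q <- f(q) from q = 0."""
    q = 0.0
    while abs(f(q) - q) > tol:
        q = f(q)
    return q

def extinction_newton(tol=1e-12):
    """Quadratic algorithm: Newton's method on g(q) = f(q) - q from q = 0."""
    q = 0.0
    while abs(f(q) - q) > tol:
        q = q - (f(q) - q) / (df(q) - 1.0)
    return q
```

For this offspring law, q = f(q) reduces to 0.5q² - 0.7q + 0.2 = 0, whose minimal root is q = 0.4; both iterations started from 0 converge to it, Newton in far fewer steps.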

Next, we look at some transient features for a Markovian binary tree: the distribution of the population size at any given time, of the time until extinction and of the total progeny. These distributions are obtained using the Kolmogorov and the renewal approaches.

We illustrate the results mentioned above through an example where the Markovian binary tree serves as a model for female families in different countries, for which we use real data provided by the World Health Organization website.

Finally, we analyze the case where Markovian binary trees evolve under the external influence of a random environment or a catastrophe process. In this case, individuals no longer behave independently of each other, and the extinction probability may no longer be expressed as the solution of a fixed-point equation, which makes the analysis more complicated. We approach the extinction probability through the study of the population size distribution, both by purely numerical methods for solving partial differential equations and by probabilistic methods that impose constraints on the external process or on the maximal population size.

Doctorate in Sciences

APA, Harvard, Vancouver, ISO, and other styles
20

Ormerod, O. J. M. "Phase analysis in radionuclide angiography." Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Allefeld, Carsten. "Phase synchronization analysis of event-related brain potentials in language processing." Phd thesis, [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=974114480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

SOAVE, NICOLA. "Variational and geometric methods for nonlinear differential equations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/49889.

Full text
Abstract:
This thesis is devoted to the study of several problems arising in the field of nonlinear analysis. The work is divided into two parts: the first concerns the existence of oscillating solutions, in a suitable sense, for some nonlinear ODEs and PDEs, while the second regards the study of qualitative properties, such as monotonicity and symmetry, of solutions to some elliptic problems in unbounded domains. Although the topics faced in this work can appear far apart from one another, the techniques employed in the different chapters share several common features. In the first part, the variational structure of the considered problems plays an essential role, and in particular we obtain the existence of oscillating solutions by means of non-standard versions of Nehari's method and of Seifert's broken-geodesics argument. In the second part, classical tools of geometric analysis, such as the moving planes method and the application of Liouville-type theorems, are used to prove one-dimensional symmetry of solutions in different situations.
APA, Harvard, Vancouver, ISO, and other styles
23

Hantke, Maren [Verfasser]. "Two-phase flows with phase transitions : modelling, analysis, numerics / Maren Hantke." Magdeburg : Universitätsbibliothek, 2018. http://d-nb.info/1159954860/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Fleury, Joachim. "Développement de phases stationnaires monolithiques pour la chromatographie en phase gazeuse miniaturisée ultra-rapide." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066652.

Full text
Abstract:
The miniaturization of conventional gas chromatography (GC) systems is of major interest for many applications. The aim is to achieve improvements over existing systems in terms of portability and autonomy, but also in terms of analysis time and cost. Ultimately, these miniaturized GC systems could be field-portable for near real-time continuous monitoring. In this context, this PhD project consisted in developing silica-based monolithic stationary phases in order to obtain ultra-fast separations of very volatile compounds such as C1-nC5 light alkanes. First of all, the in situ synthesis of a silica monolith in capillaries of 75 μm i.d. was optimized via a sol-gel approach in order to adapt the permeability, and therefore the macroporous structure of the materials, to gas flows. For the first time, fast C1-nC5 separations were obtained at conventional column inlet pressures (Pin < 4 bar). The second part of this PhD project consisted in optimizing and controlling the surface state of the monoliths through the development of two different post-synthesis treatments aimed at eliminating the residual organic porogen. Ultra-fast C1-nC5 separations (on the scale of a few seconds) at high temperature under isothermal conditions were achieved thanks to the high retention and efficiency of the materials. Finally, the yield, repeatability and reproducibility of the silica monolith synthesis were studied in order to evaluate its potential large-scale production.
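The column efficiency mentioned here is conventionally quantified by the chromatographic plate number. As a generic illustration using the standard half-height formula (not the thesis' own data or figures):

```python
def plate_number(t_r, w_half):
    """Plate count N = 5.54 * (t_R / w_1/2)^2 from a peak's retention time
    and its width at half height (same time units for both)."""
    return 5.54 * (t_r / w_half) ** 2

def plates_per_meter(n_plates, column_length_m):
    """Efficiency normalized by column length, the usual figure of merit
    when comparing short monolithic capillaries to long open-tubular columns."""
    return n_plates / column_length_m
```

For example, a peak eluting at 10 s with a 1 s half-height width corresponds to 554 plates, and the shorter the column that delivers them, the higher the plates-per-meter efficiency.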
APA, Harvard, Vancouver, ISO, and other styles
25

Fürstberger, Silke. "Quantum billiards in reduced phase space." Ulm : Universität Ulm, Fakultät für Naturwissenschaften, 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10976231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Digulescu, Angela. "Caractérisation des phénomènes dynamiques à l’aide de l’analyse du signal dans les diagrammes des phases." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT011/document.

Full text
Abstract:
La déformation des signaux le long de leur trajet de propagation est un des facteurs les plus importants à considérer à la réception. Ces effets sont dus à des phénomènes comme l’atténuation, la réflexion, la dispersion et le bruit. Alors que les deux premiers phénomènes sont assez faciles à surveiller, parce qu’ils affectent respectivement l’amplitude et le retard des signaux, les deux derniers sont plus difficiles à contrôler, parce qu’ils changent les paramètres du signal (amplitude, fréquence et phase) de manière totalement dépendante de l’environnement. Dans cette thèse, l’objectif principal est de contribuer à l’analyse des signaux liés aux différents phénomènes physiques, en visant une meilleure compréhension de ces phénomènes, ainsi que l’estimation de leurs paramètres intéressants du point de vue applicatif. Plusieurs contextes applicatifs ont été étudiés dans deux configurations : active et passive. Pour la configuration active, le premier contexte applicatif consiste en l’étude du phénomène de cavitation dans le domaine de la surveillance de systèmes hydrauliques. La deuxième application de la configuration active est la détection et le suivi des objets immergés sans synchronisation entre les capteurs d’émission et de réception. Pour la configuration passive, nous nous concentrons sur l’analyse des transitoires de pression dans les conduites d’eau en utilisant une méthode non intrusive, ainsi que sur la surveillance des réseaux d’énergie électrique en présence de phénomènes transitoires comme les arcs électriques. Malgré les différences entre les considérations physiques spécifiques à ces applications, nous proposons un modèle mathématique unique pour les signaux issus des deux types de configurations. Le modèle est basé sur l’analyse des récurrences. Avec ce concept, nous proposons une nouvelle approche de conception des formes d’ondes basée sur l’espace des phases.
Cette technique de construction des formes d’ondes présente l’intérêt de conduire à des méthodes d’investigation active à haute cadence, très utiles pour la surveillance des phénomènes dynamiques. De plus, nous proposons des approches nouvelles pour l’investigation des caractéristiques des signaux. La première est la mesure TDR* (Time Distributed Recurrences), qui quantifie la matrice des récurrences/distances et qui est utilisée pour la détection des signaux transitoires. La deuxième approche est l’analyse des phases à plusieurs retards ; elle est utilisée pour la discrimination entre des signaux aux paramètres très proches. Enfin, la quantification des lignes diagonales de la matrice des récurrences est proposée comme alternative pour l’analyse des signaux modulés en fréquence. Les travaux présentent les résultats expérimentaux obtenus en utilisant les méthodes théoriques proposées dans cette thèse. Les résultats sont comparés avec des techniques classiques. Des perspectives de ces travaux, tant dans le domaine théorique qu’applicatif, sont discutées à la fin du mémoire.
The deformation of signals along their propagation path is among the most important aspects that must be taken into account at reception. These effects are caused by phenomena such as attenuation, reflection, dispersion and noise. Whereas the first two are rather easy to monitor, because they affect the amplitude and the delay respectively, the latter two are more difficult to control, because they change the signal parameters (amplitude, frequency and phase) in an environment-dependent manner. In this thesis, the main objective is to contribute to the analysis of signals related to different physical phenomena, aiming to better understand them as well as to estimate their parameters of interest from an application point of view. Different applicative contexts have been investigated in active and passive sensing configurations. For the active part, the first application is the monitoring and characterization of cavitation phenomena for hydraulic system surveillance. The second active-sensing application is underwater object detection and tracking without synchronization between sensors. For the passive configuration, we focus on the analysis of pressure transients in water pipes using a non-intrusive method, and on the surveillance of electrical power systems in the presence of transient phenomena such as electrical arcs. Despite the differences between the physical considerations, we propose a single mathematical model of the signals issued from the active/passive sensing systems used to analyze the considered phenomena. This model is based on the Recurrence Plot Analysis (RPA) method. Building on this concept, we propose a phase-space-based waveform design. This design technique leads to high-speed sensing methods, very useful for monitoring dynamic phenomena. Moreover, we propose new tools for investigating signal characteristics.
The first is the TDR* (Time Distributed Recurrences) measure, which quantifies the recurrence/distance matrix and is used for the detection of transient signals. The second is multi-lag phase analysis, successfully used to discriminate between signals with close parameters. Finally, quantification of the diagonal lines of the RPA matrix is proposed as an alternative for the analysis of frequency-modulated signals. Our work presents experimental results obtained with the theoretical methods introduced in this thesis, compared against classical techniques. Perspectives of this work, both theoretical and applied, are discussed at the end of the thesis.
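Recurrence Plot Analysis, on which the abstract's signal model rests, starts from a thresholded distance matrix between delay-embedded phase-space points. The sketch below is only a minimal illustration of that construction, not the thesis's TDR* measure; the embedding dimension, lag and threshold are arbitrary choices for demonstration.

```python
import numpy as np

def recurrence_matrix(x, dim=3, lag=2, eps=0.3):
    """Binary recurrence matrix of a 1-D signal after Takens delay embedding."""
    n = len(x) - (dim - 1) * lag
    # Each row is one phase-space point (x[i], x[i+lag], x[i+2*lag], ...)
    emb = np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])
    # Pairwise Euclidean distances between phase-space points
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    # A recurrence occurs when two points fall within the threshold eps
    return (dist < eps).astype(int)

t = np.linspace(0, 4 * np.pi, 300)
R = recurrence_matrix(np.sin(t))
print(R.shape)  # square matrix; diagonal lines reveal deterministic structure
```

Quantifiers such as the diagonal-line statistics mentioned in the abstract would then be computed from `R`, or from the underlying distance matrix.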
APA, Harvard, Vancouver, ISO, and other styles
27

Yang, Jiang. "Numerical analysis and simulations for phase-field equations." HKBU Institutional Repository, 2014. https://repository.hkbu.edu.hk/etd_oa/29.

Full text
Abstract:
Research on interfacial phenomena has a long history and has attracted tremendous interest in recent years. One of the most successful tools is the phase-field approach. As phase-field models usually involve very complex dynamics and analytical solutions are rarely available, numerical methods have played an important role in simulations. This thesis is mainly devoted to developing accurate, efficient and robust numerical methods, and the related numerical analysis, for three representative phase-field models, namely the Allen-Cahn equation, the Cahn-Hilliard equation and thin film models. The first part of this thesis concentrates on stability analysis for these three models, with particular attention to the Allen-Cahn equation. We establish three stability criteria: nonlinear energy stability, L∞-stability and L2-stability. As shared by most phase-field models, one of the intrinsic properties of the Allen-Cahn and Cahn-Hilliard equations is that they satisfy a nonlinear stability relationship, usually expressed as a time-decreasing free energy functional. We study several stabilized temporal discretizations for both the Allen-Cahn and Cahn-Hilliard equations so that the relevant nonlinear energy stability is preserved. The corresponding temporal discretization schemes are linear and of second-order accuracy. We also apply multi-step implicit-explicit methods to approximate the Allen-Cahn equation, and demonstrate that by suitably choosing the parameters in these methods the nonlinear energy stability can be preserved. Apart from studying energy stability for the Allen-Cahn equation, we also establish the numerical maximum principle for some fully discretized schemes, and further extend our analysis technique to the fractional-in-space Allen-Cahn equation.
A more general Allen-Cahn-type equation with a nonlinear degenerate mobility and a logarithmic free energy is also considered. The third stability under investigation is L2-stability. We prove that the continuum Allen-Cahn equation satisfies a uniform Lp-stability. Furthermore, we show that both semi-discretized Fourier Galerkin and Fourier collocation methods can inherit this stability for p = 2, i.e., L2-stability. Based on the established L2-stability, we obtain a spectral convergence estimate for the Fourier Galerkin methods. We adopt second-order Strang splitting schemes in the temporal direction with Fourier collocation methods to demonstrate uniform L2-stability in the fully discretized scheme. Another contribution of this thesis is a p-adaptive spectral deferred correction method for long-time simulations of all three models. We develop a high-order accurate and energy-stable scheme to simulate the phase-field models by combining the semi-implicit spectral deferred correction method with first-order stabilized semi-implicit schemes. It is found that the accuracy improvement may affect the overall energy stability. To balance accuracy and stability, a local p-adaptive strategy is proposed that enhances accuracy by sacrificing some local energy stability to an acceptable level. Numerical results demonstrate the effectiveness of the proposed strategy. Keywords: phase-field models, Allen-Cahn equations, Cahn-Hilliard equations, thin film models, nonlinear energy stability, maximum principle, L2-stability, adaptive simulations, stabilized semi-implicit schemes, finite difference, Fourier spectral methods, spectral deferred correction methods, convex splitting
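The first-order stabilized semi-implicit idea described in this abstract (implicit diffusion, explicit nonlinearity, plus a stabilizing term S(u^{n+1} - u^n)) can be sketched for the 1-D Allen-Cahn equation u_t = ε²u_xx - (u³ - u) with a Fourier spectral discretization. This is an illustrative simplification under assumed parameter values, not the thesis's own scheme or code.

```python
import numpy as np

def allen_cahn_step(u, dt, eps=0.2, S=2.0):
    """One first-order stabilized semi-implicit step for the 1-D Allen-Cahn
    equation u_t = eps^2 u_xx - (u^3 - u) on the periodic domain [0, 2*pi)."""
    N = len(u)
    k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)  # integer wavenumbers
    f_hat = np.fft.fft(u**3 - u)                        # explicit nonlinearity
    # Diffusion and the stabilization term S*u^{n+1} are treated implicitly:
    # (1 + dt*S + dt*eps^2*k^2) u_hat^{n+1} = (1 + dt*S) u_hat^n - dt*f_hat
    u_hat = ((1 + dt * S) * np.fft.fft(u) - dt * f_hat) \
            / (1 + dt * S + dt * eps**2 * k**2)
    return np.real(np.fft.ifft(u_hat))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = 0.1 * np.cos(x)
for _ in range(200):
    u = allen_cahn_step(u, dt=0.1)
# The solution should stay (approximately) within the bound [-1, 1]
# while sharpening toward a two-interface tanh-like profile.
```

The stabilization constant S is chosen here so that the explicit nonlinear term does not destroy stability at this large time step; the thesis analyzes exactly which choices preserve energy stability and the maximum principle.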
APA, Harvard, Vancouver, ISO, and other styles
28

Thomas, David Paul Foley Joe Preston. "Efficiency enhancements in micellar liquid chromatography through selection of stationary phase and mobile phase organic modifier /." Philadelphia, Pa. : Drexel University, 2009. http://hdl.handle.net/1860/3039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Rodriguez, Moreno Paulina del Carmen. "Intégration de considérations environnementales dans la phase conceptuelle du processus de conception de nouveaux produits." Thesis, Troyes, 2016. http://www.theses.fr/2016TROY0016/document.

Full text
Abstract:
La thèse contribue à une meilleure connaissance du processus d’éco-conception de produits, notamment dans la prise en compte de considérations environnementales en amont du processus de conception. Centrée sur la phase conceptuelle, la motivation principale de ce travail est d’intervenir là où les décisions du concepteur ont une grande influence environnementale. En effet, un grand nombre d’auteurs convient que les premières étapes du processus de conception peuvent efficacement prévenir jusqu’à 80% des impacts environnementaux. Cependant, il existe un grand nombre d’obstacles qui empêchent l’intégration de considérations environnementales, surtout s’il s’agit d’un nouveau produit. Notre problématique a mis en évidence ces obstacles à travers deux types de verrous. En premier lieu, des verrous méthodologiques rencontrés principalement lors de la phase conceptuelle. En deuxième lieu, des verrous opérationnels liés au manque de connaissances environnementales du concepteur. En réponse à ces verrous nous proposons la création de liens entre l’analyse de cycle de vie (ACV) et l’analyse fonctionnelle (AF). Ces liens donnent lieu à un processus collaboratif d’éco-conception qui est en partie supporté par la méthode EcoAF spécialement développée à cet effet. La méthode intègre le concept de cycle de vie de l’ACV lors de la réalisation de l’AF, permettant ainsi de guider le concepteur vers l’intégration de considérations environnementales pour la création d’un produit avec un rendement environnemental équilibré sur l’ensemble du cycle de vie
The thesis contributes to a better understanding of the eco-design process, especially the integration of environmental considerations in the early phases of the design process. Focused on the conceptual phase, the main motivation of this work is to intervene where the decisions of the designer have the greatest environmental influence. Indeed, many authors agree that the early stages of the design process can prevent up to 80% of environmental impacts. However, there are also many obstacles to the integration of environmental considerations, especially for a new product. These obstacles are highlighted through two types of locks: first, methodological locks encountered mainly at the conceptual stage; secondly, operational locks related to the designer's lack of environmental knowledge. To address these, we propose creating links between life-cycle assessment (LCA), a method that carries environmental knowledge, and functional analysis (FA), a method well known to the designer early in the design process. These links result in a collaborative eco-design process that is partly supported by the EcoAF method, created for this purpose. EcoAF integrates the life-cycle concept of LCA when performing FA, making it possible to guide the designer in integrating environmental considerations and creating a product with a balanced environmental performance throughout the life cycle.
APA, Harvard, Vancouver, ISO, and other styles
30

Hiles, James F. "Multi-phase source selection strategy analysis." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA386722.

Full text
Abstract:
Thesis (M.S. in Management), Naval Postgraduate School, Dec. 2000.
"December 2000." Thesis advisor(s): Jeffrey Cuskey, Keith Snider. Includes bibliographical references (p. 111-114). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
31

Lee, Chae-Joon. "Quantitative analysis of oral phase dysphagia." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/29995.

Full text
Abstract:
Swallowing disorders (dysphagia) are becoming increasingly prevalent in the health care field. Oral phase dysphagia specifically refers to swallowing difficulties encountered during the initial, voluntary segment of the swallowing process, where the ingested food material is prepared into a manageable food mass (bolus) and then propelled from the mouth to the pharynx. Proper management (evaluation and treatment) of oral phase dysphagia poses a difficult task. The swallowing process is a complex sequence of highly coordinated events, while oral phase dysphagia can originate from a wide range of underlying causes. From a review of the literature, it is apparent that many current methods of managing oral phase dysphagia rely heavily on the subjective skills of the clinician in charge. In order to improve upon these methods as well as to gain a better understanding of the swallowing mechanism, the development of new, quantitative clinical methods is proposed. A computer-aided tongue force measurement system has been developed that is capable of quantifying individual tongue strength and coordination. This strain-gauge-based transducer system measures upward and lateral tongue thrust levels during a "non-swallowing" test condition. Clinical testing has been performed on 15 able-bodied control subjects with no prior history of swallowing difficulty and several patient groups (25 post-polio, 8 stroke, 2 other neuromuscular disorders) with or without swallowing difficulties. The results have demonstrated the system's potential as a reliable clinical tool (repeatability: t-test, p < 0.01) to quantify variation in tongue strength and coordination between and within the control and patient groups. In order to investigate the possible existence and role of a suction mechanism in transporting the bolus during the oral phase, a pharyngeal manometry setup has been adapted for oral phase application.
This novel approach (oral phase manometry) would determine whether the widely accepted tongue pressure mechanism is solely responsible for oral phase bolus transport. Based on extensive manometric data obtained from an able-bodied subject without any swallowing abnormality, no significant negative pressure or pressure gradients (suction) were found during the oral phase. The pressure generated by the tongue acting against the palate in propelling the bolus during the oral phase has been utilized in the development of an oral cavity pressure transducer system. This computer-aided measurement system employs a network of ten force sensing resistors (FSRs) attached to a custom-made mouthguard transducer unit to record the wave-like palatal pressure pattern during an actual swallow. Clinical testing has been performed on 3 able-bodied subjects with no swallowing abnormality, and their pressure data support the reliability of the device (repeatability: t-test, p < 0.01). The system has also been applied to study the effects of bolus size and consistency on the palatal pressure pattern during the oral phase. The results of this investigation have confirmed the importance of the tongue during the oral phase of swallowing. It has been shown that deficiency in tongue strength or coordination is a primary factor in oral phase dysphagia. The new, quantitative clinical methods developed for this research utilize these parameters to provide improved methods of evaluation and treatment for oral phase dysphagia.
Applied Science, Faculty of
Mechanical Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
32

Ratcliff, Marcus Dai Foster. "Phase locked loop analysis and design." Auburn, Ala, 2008. http://hdl.handle.net/10415/1452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Schubert, Roman. "Semiclassical localization in phase space." Ulm : Universität Ulm, Fakultät für Naturwissenschaften, 2001. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10028611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Mourier, Pierre. "La Chromatographie en phase supercritique avec le dioxyde de carbone." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb37599852p.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ndamase-Nzuzo, Pumla Patricia. "Numerical error analysis in foundation phase (Grade 3) mathematics." Thesis, University of Fort Hare, 2014. http://hdl.handle.net/10353/5893.

Full text
Abstract:
The focus of the research was on numerical errors committed in foundation phase mathematics. It therefore explored: (1) the numerical errors learners in foundation phase mathematics encounter, (2) the relationships underlying numerical errors, and (3) implementable strategies suitable for understanding numerical error analysis in foundation phase mathematics (Grade 3). From the 375 learners who formed the population of the study in the 16 primary schools, the researcher selected 80 learners by means of a simple random sampling technique as the sample size, which constituted 10% of the population as response rate. On the basis of the research questions, and informed by a positivist paradigm, a quantitative approach was used, with tables, graphs and percentages to address the research questions. A Likert scale was used with four categories of responses: (A) Agree, (SA) Strongly Agree, (D) Disagree and (SD) Strongly Disagree. The results revealed that: (1) the underlying numerical errors that learners encounter include the inability to count backwards and forwards, number sequencing, mathematical signs, problem solving and word sums; (2) there was a relationship between committing errors and (a) copying numbers, (b) confusion of mathematical or operational signs, and (c) reading numbers which contained more than one digit; (3) teachers need frequent professional training for development, topics need to change, and government needs to involve teachers at grassroots level prior to policy changes on how to implement strategies with regard to numerical errors in the foundation phase. It is recommended that attention be paid to the use of language and word sums in order to improve cognition processes in foundation phase mathematics. Moreover, it is recommended that learners be assisted time and again when reading or copying their work, so that they make fewer errors in foundation phase mathematics.
Additionally, it is recommended that teachers be trained on how to implement strategies of numerical error analysis in foundation phase mathematics. Furthermore, teachers can use tests to identify learners who could be at risk of developing mathematical difficulties in the foundation phase.
APA, Harvard, Vancouver, ISO, and other styles
36

Green, Roger James. "The use of Fourier transform methods in automatic fringe pattern analysis." Thesis, King's College London (University of London), 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Mogemark, Mickael. "Solid-phase glycoconjugate synthesis : on-resin analysis with gel-phase 19F NMR spectroscopy." Doctoral thesis, Umeå University, Chemistry, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-438.

Full text
Abstract:

An efficient and versatile non-destructive method to analyze the progress of solid-phase glycoconjugate synthesis with gel-phase 19F NMR spectroscopy is described. The method relies on the use of fluorinated linkers and building blocks carrying fluorinated protective groups. Commercially available fluorinated reagents were utilized to attach the protective groups.

The influence of the resin structure of seven commercial resins on the resolution of gel-phase 19F NMR spectra was investigated. Two different linkers for oligosaccharide synthesis were also developed and successfully employed in the preparation of α-Gal trisaccharides and an n-pentenyl glycoside. Finally, reaction conditions for solid-phase peptide glycosylations were established.

APA, Harvard, Vancouver, ISO, and other styles
38

Gospodinova, Maya. "Contribution à l'étude thermodynamique du système ternaire Fe-Ti-B du côté riche en Fe." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENI008.

Full text
Abstract:
Dans le cadre de la nouvelle réglementation sur l'environnement, un objectif d'allègement de 20% des véhicules automobiles doit être atteint afin de répondre aux impératifs de réduction des émissions de CO2. Le développement d'une nouvelle génération d'aciers sous forme de composites à matrice acier Fe-TiB2 est d'un grand intérêt industriel et pourrait être une réponse prometteuse car il permet d'améliorer la rigidité de l'acier tout en diminuant sa densité. L'élaboration de tels produits nécessite une très bonne connaissance de la thermodynamique des équilibres de phases dans les systèmes concernés. Ce mémoire est consacré à l'étude thermodynamique de cette nouvelle famille de composite à matrice acier Fe-TiB2 et plus particulièrement à l'établissement des équilibres de phases liquide/solide et solide/solide dans le système ternaire Fe-Ti-B du côté riche en fer ainsi qu'à la définition du domaine de stabilité du diborure de titane dans les solutions Fe-Ti-B. La démarche employée est une approche couplée d'expériences ciblées (séparation électromagnétique de phases, analyse thermique différentielle, équilibre de phases) et modélisation thermodynamique des équilibres de phases
To meet the imperative of reducing CO2 emissions, a 20% weight reduction of automotive vehicles must be achieved. Developing innovative Fe-TiB2 steel-matrix composites, of great industrial interest, could be a promising answer because it improves the rigidity of the steel while decreasing its density. The development of such products requires a good knowledge of the thermodynamics of phase equilibria in the systems concerned. This thesis is devoted to the thermodynamic study of this new family of Fe-TiB2 steel-matrix composites, and particularly to the establishment of liquid/solid and solid/solid phase equilibria on the iron-rich side of the ternary Fe-Ti-B system, as well as to the definition of the stability domain of titanium diboride in Fe-Ti-B solutions. This task was performed by a coupled approach combining targeted experiments (electromagnetic separation of phases, differential thermal analysis, phase equilibria) and thermodynamic modeling of phase equilibria
APA, Harvard, Vancouver, ISO, and other styles
39

Ahmad, Saeed. "Semifluxons in long Josephson junctions with phase shifts." Thesis, University of Nottingham, 2012. http://eprints.nottingham.ac.uk/12729/.

Full text
Abstract:
A Josephson junction is formed by sandwiching a non-superconducting material between two superconductors. If the phase difference across the superconductors is zero, the junction is called a conventional junction; otherwise it is an unconventional junction. Unconventional Josephson junctions are widely used in information processing and storage. First, we investigate long Josephson junctions having two π-discontinuity points characterized by a shift of π in phase, that is, a 0-π-0 long Josephson junction, on both infinite and finite domains. The system is described by a modified sine-Gordon equation with an additional shift θ(x) in the nonlinearity. Using a perturbation technique, we investigate an instability region where semifluxons are spontaneously generated. We study the dependence of semifluxons on the facet length and the applied bias current. We then consider a disk-shaped two-dimensional Josephson junction with concentric regions of 0- and π-phase shifts and investigate the ground state of the system in both finite and infinite domains. This system is described by a (2+1)-dimensional sine-Gordon equation, which becomes effectively one-dimensional in polar coordinates when one considers radially symmetric static solutions. We show that there is a parameter region in which the ground state corresponds to a spontaneously created ring-shaped semifluxon. We use a Hamiltonian energy characterization to describe analytically the dependence of the semifluxon-like ground state on the length of the junction and the applied bias current. The existence and stability of excited states bifurcating from the uniform case are discussed as well. Finally, we consider 0-κ infinitely long Josephson junctions, i.e., junctions having a periodic κ-jump in the Josephson phase. We discuss the existence and stability of ground states about the periodic solutions and investigate the band-gap structure of the plasma band and its dependence on an applied bias current.
We derive an equation governing gap breathers bifurcating from the edge of the transitional curves.
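The static limit of the 0-π-0 model above solves φ_xx = sin(φ + θ(x)) with θ = π on the central facet, and a gradient-flow relaxation can find the semifluxon-bearing ground state numerically. The finite-difference sketch below is only an illustration under assumed parameters (facet half-length a, domain size, iteration count); it is not the thesis's perturbation analysis.

```python
import numpy as np

# Gradient-flow relaxation toward a static solution of the 0-pi-0
# sine-Gordon equation phi_xx = sin(phi + theta(x)).
L, N, a = 20.0, 401, 1.0                     # domain [-10, 10], facet |x| < a
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
theta = np.where(np.abs(x) < a, np.pi, 0.0)  # pi phase shift on the facet
phi = 0.1 * np.exp(-x**2)                    # small seed: phi = 0 is an equilibrium
dt = 0.2 * dx**2                             # explicit-diffusion stability limit
for _ in range(100000):
    lap = np.zeros(N)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    phi += dt * (lap - np.sin(phi + theta))  # relax toward a static solution
# For a facet longer than the critical length, phi = 0 is unstable and the
# field relaxes to a nonuniform state carrying fractional (semifluxon) flux.
print(round(float(phi[N // 2]), 3))
```

Varying the facet half-length `a` below and above the instability threshold reproduces qualitatively the transition the abstract describes between the uniform and the spontaneously generated semifluxon ground state.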
APA, Harvard, Vancouver, ISO, and other styles
40

Bastard, Audrey. "Analyse théorique et physique de nouveaux matériaux à base de chalcogénures convenant aux Mémoires à Changements de Phases." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00771412.

Full text
Abstract:
Les mémoires à changement de phase (PCRAM) sont l'un des candidats les plus prometteurs pour la prochaine génération de mémoires non-volatiles du fait de leurs excellentes vitesses de fonctionnement et endurance. Cependant, deux inconvénients majeurs nécessitent une amélioration afin de permettre leur percée sur le marché des mémoires, à savoir un temps de rétention court à hautes températures et une consommation électrique trop importante. Cette thèse s'intéresse au développement de nouveaux matériaux à changement de phase afin de remplacer le matériau standard Ge2Sb2Te5, inadapté aux applications mémoires embarquées fonctionnant à hautes températures. Le comportement des matériaux binaires GeTe et GeSb a ainsi été évalué et comparé au matériau référence lors de la cristallisation de l'amorphe 'tel que déposé' mais aussi de l'amorphe 'fondu trempé'. En effet, il est important d'étudier le matériau dans son état amorphe 'fondu trempé' pour être au plus près de l'état du matériau cyclé dans les dispositifs. Ainsi, le mécanisme de cristallisation du GeTe déterminé par l'étude de la cristallisation de l'amorphe 'fondu trempé' par recuit laser est en accord avec l'observation MET in situ (recuit thermique) de la cristallisation. L'incorporation d'éléments 'dopants' dans ces matériaux binaires a également été évaluée afin d'augmenter à nouveau la stabilité thermique des matériaux non dopés. Certains éléments 'dopants' permettent une diminution du courant de reset, ou un retard à la formation de 'voids' au cours des cycles.
Phase-change memories (PCRAM) are among the most promising candidates for the next generation of non-volatile memories, owing to their excellent operating speed and endurance. However, two major drawbacks must be overcome to enable their breakthrough in the memory market, namely a short retention time at high temperatures and an excessive power consumption. This thesis addresses the development of new phase-change materials to replace the standard material Ge2Sb2Te5, which is unsuitable for embedded memory applications operating at high temperatures. The behaviour of the binary materials GeTe and GeSb was evaluated and compared with the reference material during crystallization of both the as-deposited and the melt-quenched amorphous states. Indeed, it is important to study the material in its melt-quenched amorphous state in order to be as close as possible to the state of the cycled material in devices. The crystallization mechanism of GeTe determined from laser-annealing studies of the melt-quenched amorphous state agrees with in situ TEM observation (thermal annealing) of the crystallization. The incorporation of 'doping' elements into these binary materials was also evaluated in order to further increase the thermal stability of the undoped materials. Some 'doping' elements allow a reduction of the reset current, or delay the formation of voids during cycling.
APA, Harvard, Vancouver, ISO, and other styles
41

Junius, Niels. "Développements instrumentaux pour le contrôle de la cristallisation par la dialyse : approche microfluidique et analyse aux rayons X." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAY032/document.

Full text
Abstract:
La cristallisation des protéines est une étape cruciale dans l’élucidation de la structure tridimensionnelle des protéines. C’est un processus très délicat qui dépend de nombreuses variables environnementales dont le contrôle précis est difficile, voire impossible, dans les installations typiquement utilisées. Des approches basées sur la parallélisation massive des expériences et la réduction du volume d’échantillon par expérience, pour trouver les conditions initiales de cristallisation, montrent leur relative efficacité dans la recherche de ces conditions initiales mais aussi leurs limites quant à l’optimisation des cristaux obtenus.La méthode présentée dans cette thèse diffère de ce paradigme. La démarche proposée est une approche séquentielle plutôt que la parallélisation d’expériences et est basée sur la connaissance des diagrammes de phase. Cette thèse repose sur une suite de développements d’instruments mis en oeuvre pour maîtriser et rationaliser la cristallisation par la méthode de la dialyse, permettant ainsi l’exploration des diagrammes de phases sans consommer l’échantillon de protéine.Il en résulte un dispositif microfluidique permettant la cristallisation de protéines par la méthode de la dialyse, l’utilisation d’un flux continu d’agent de cristallisation et par conséquent l’échange continu de conditions de cristallisation ainsi que le contrôle de la température au cours de l’expérience. Il est compatible avec le rayonnement X pour la collecte de données de diffraction in situ sur les cristaux ayant poussés dans la puce microfluidique. 
Ce système microfluidique est basé sur la miniaturisation du banc de cristallisation qui a été amélioré d’un point de vue : de l’électronique pour l’automatisation, du transport de fluides pour le fonctionnement en flux continu, du développement logiciel pour le contrôle des paramètres de cristallisation, de la mécanique pour améliorer la cellule de dialyse et la thermorégulation, et enfin par l’intégration d’un système UV permettant de réaliser des mesures d’absorbance in situ qui offre pour l’avenir la possibilité de mesurer la solubilité de protéine au cours d’une expérience de cristallisation par la dialyse.Finalement les développements instrumentaux et méthodologiques ont été validés par la cristallisation de plusieurs protéines modèles dont les cristaux ont diffractés aux rayons X avec succès. En outre le transport d’espèces en solution par la dialyse a été étudié par une approche combinée expérimentale et théorique
Protein crystallization is a key step in elucidating three-dimensional structure of proteins. This very sensitive process depends on many variables that are difficult to control precisely or simultaneously in the existing facilities. Instrumentation developments have concentrated on massive parallel experiments and sample volume reduction used by experiment. With this approach it is relatively easy to find initial crystallization conditions but their optimization to yield well diffracting crystals often proves to be more difficult.The method presented herein differs from the current paradigm, since we propose serial instead of parallel experiments based on the knowledge of phase diagrams. This project is based on a series of developments of instruments used to control and rationalize crystallisation using dialysis method, thus allowing phase diagrams exploration without consuming large quantity of protein sample.This results in a microfluidic device that allows crystallization of proteins by dialysis method, use of a continuous flow of crystallization agent and therefore continuous exchange of crystallization conditions as well as temperature control during experiment. It provides X-rays compatibility for in situ diffraction data collection of crystals grown in the microfluidic chip. 
This microfluidic system is based on a miniaturization of the crystallization bench, which has been improved in several respects: electronics for automation, fluid transport for continuous-flow operation, software for the control of crystallization parameters, mechanics to improve both the dialysis cell and the thermoregulation, and finally the integration of a UV system to perform in situ absorbance measurements, which opens the future possibility of measuring protein solubility during a dialysis crystallization experiment. Finally, both the instrumental and the methodological developments were validated by the crystallization of several model proteins whose crystals successfully diffracted X-rays. Furthermore, the transport of species in solution by dialysis was investigated through combined experimental and theoretical approaches.
APA, Harvard, Vancouver, ISO, and other styles
42

Zhang, Weiwei. "Multimodal Cardiovascular Image Analysis Using Phase Information." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.491680.

Full text
Abstract:
Cardiovascular disease (CVD) is one of the world's leading causes of death. Among existing imaging techniques, cardiovascular magnetic resonance (CMR) and real-time three-dimensional echocardiography (RT3DUS) are currently receiving a great deal of attention. Because of the 3D nature of the heart and its complex motion in 3D space, RT3DUS is well suited to 3D analysis of cardiac disease. However, RT3DUS has lower specificity and sensitivity than high-spatial-resolution CMR, which makes it difficult to interpret. This motivates research on automatically fusing information from multiple imaging modalities to assist clinicians in the early diagnosis and therapy of cardiac disease. This thesis establishes a framework for multimodal cardiovascular image analysis. First, we develop a (static) nonrigid registration of an RT3DUS volume slice and a CMR image. The local phase representation of both images is utilized as an image descriptor of 'featureness'. The local deformation of the ventricles is modeled by a polyaffine transformation. The anchor points (or control points) used in the polyaffine transformation are automatically detected and refined by calculating a local misalignment measure based on phase mutual information. The registration process is built in an adaptive multi-scale framework that maximizes the phase-based similarity measure by optimizing the parameters of the polyaffine transformation. Next, we explore a spatio-temporal alignment of RT3DUS and CMR sequences. The deformation field between the two sequences is decoupled into spatial and temporal components. Temporal alignment is performed by re-slicing both sequences to contain the same number of frames and to correspond to the same temporal positions, using a differential registration. Spatial alignment is then carried out by extending the static nonrigid registration in a frame-by-frame manner. 
Landmark-based validation shows that this new registration algorithm gives accurate results. Finally, we propose a registration-guided segmentation of the left ventricle in RT3DUS datasets. The image phase gradient is used as the edge indicator function. Incorporating local phase into the variational level-set method without re-initialization enables flexible initialization. This allows the co-registration of multimodal cardiovascular sequences to provide strong prior knowledge about the shape of the left ventricle. We develop a registration-guided segmentation algorithm that converges efficiently to the object boundary of interest.
APA, Harvard, Vancouver, ISO, and other styles
43

Craik, Graham N. "Analysis of phase retrieval from multiple images." Thesis, Heriot-Watt University, 2010. http://hdl.handle.net/10399/2420.

Full text
Abstract:
This thesis considers the calculation of phase from sets of phase contrast and defocused images. An improvement to phase contrast imaging is developed that combines three phase contrast images. This method reduces the phase error by a factor of up to 20 in comparison with using a single image. Additionally, the method offers the potential for optimisation and extension to an arbitrary number of images. Phase diversity using defocused images is considered in more depth, where the intensity transport equation is used to calculate the phase. First, a Green's function approach to solving this equation was considered. One of the Green's functions stated in the literature is shown to be incorrect; the other two are shown to be correct, both giving equivalent phase estimates. A further improvement is made to this method by removing the singularities in the phase calculation process. As an alternative to the Green's function solution, a Fourier transform approach is also considered. A complete solution to the intensity transport equation is derived, including the boundary conditions, completing a method only partially described in the literature. Through simulation, generic key factors are identified for the potential optimisation of the experimental and numerical processes to improve the estimated phase. Determining 3D structural information about an object from the phase calculated in a single plane is considered using an iterative process. It is shown that this process is limited but can, in some cases, be used to generate an approximate representation of the object.
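Craik's Fourier-transform solution incorporates the boundary conditions; the sketch below illustrates only the core spectral inversion of the intensity transport equation, under the simplifying assumptions of uniform intensity and periodic boundaries (not the thesis's complete method), with all grid sizes and constants invented for the demo:

```python
import numpy as np

def tie_solve(dIdz, I0, k, dx):
    """Invert I0 * lap(phi) = -k * dI/dz spectrally, assuming uniform
    intensity I0 and periodic boundaries (a simplification of the
    boundary-aware solution treated in the thesis)."""
    ny, nx = dIdz.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    f2 = fx[None, :] ** 2 + fy[:, None] ** 2
    f2[0, 0] = 1.0                      # avoid division by zero at DC
    phi_hat = k * np.fft.fft2(dIdz) / (I0 * 4 * np.pi ** 2 * f2)
    phi_hat[0, 0] = 0.0                 # phase is only defined up to a constant
    return np.fft.ifft2(phi_hat).real

# Self-consistency check: build dI/dz from a known zero-mean phase using
# the same spectral Laplacian, then recover the phase.
n, dx, I0, k = 64, 1.0, 1.0, 2 * np.pi
u = np.arange(n)
phi_true = (np.cos(2 * np.pi * 3 * u[None, :] / n)
            + np.sin(2 * np.pi * 5 * u[:, None] / n))
f = np.fft.fftfreq(n, d=dx)
lap = np.fft.ifft2(-4 * np.pi ** 2 * (f[None, :] ** 2 + f[:, None] ** 2)
                   * np.fft.fft2(phi_true)).real
phi_rec = tie_solve(-(I0 / k) * lap, I0, k, dx)
err = float(np.max(np.abs(phi_rec - phi_true)))
```

With real data the derivative dI/dz would come from finite differences of defocused images, and the boundary terms the thesis derives become essential.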
APA, Harvard, Vancouver, ISO, and other styles
44

Tcheslavski, Gleb V. "Coherence and Phase Synchrony Analysis of Electroencephalogram." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/30186.

Full text
Abstract:
Phase Synchrony (PS) and coherence analyses of stochastic time series - tools to discover brain tissue pathways traveled by electrical signals - are considered for the specific purpose of processing the electroencephalogram (EEG). We propose the Phase Synchrony Processor (PSP) as a tool for implementing phase synchrony analysis, and examine its properties on the basis of known signals. Long observation times and wide filter bandwidths can decrease bias in PS estimates. The value of PS is affected by the difference in frequency of the sequences being analyzed and can be related to that frequency difference by the periodic sinc function. PS analysis of the EEG shows that the average PS is higher - for a number of electrode pairs - for non-ADHD than for ADHD participants. The difference is more pronounced in the δ rhythm (0-3 Hz) and the γ rhythm (30-50 Hz). The Euclidean classifier with electrode masking yields 66% correct classification on average for ADHD and non-ADHD subjects using the δ and γ1 rhythms. We observed that the average γ1 rhythm PS is higher for the eyes-closed condition than for the eyes-open condition; the latter may potentially be used for vigilance monitoring. The Euclidean discriminator with electrode masking shows an average percentage of correct classification of 78% between the eyes-open and eyes-closed subject conditions. We develop a model for a pair of EEG electrodes and a model-based MS coherence estimator aimed at processing short (i.e., 20-sample) EEG frames. We verify that EEG sequences can be modeled as AR(3) processes degraded by additive white noise with an average SNR of approximately 11-12 dB. Application of the MS coherence estimator to the EEG suggests that MS coherence is generally higher for non-ADHD individuals than for ADHD participants when evaluated for the θ rhythm of the EEG. 
Also, MS coherence is consistently higher for ADHD subjects than for the majority of non-ADHD individuals when computed for the low end of the δ rhythm (i.e., below 1 Hz). ADHD produces more measurable effects in the frontal lobe EEG and for participants performing attention-intensive tasks.
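The thesis builds its own Phase Synchrony Processor; as a generic illustration of one common phase-synchrony measure (not Tcheslavski's implementation), the phase-locking value between two signals can be sketched with NumPy and SciPy, with all signal parameters invented for the demo:

```python
import numpy as np
from scipy.signal import hilbert

def phase_synchrony(x, y):
    """Phase-locking value (PLV): magnitude of the mean phase-difference
    phasor, with instantaneous phase taken from the analytic signal.
    1 = perfect locking, near 0 = no locking."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# Two sinusoids with a fixed phase lag are strongly locked.
t = np.linspace(0, 1, 1000, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.7)
locked = phase_synchrony(a, b)

# A sinusoid and white noise show much weaker locking.
rng = np.random.default_rng(0)
unlocked = phase_synchrony(a, rng.standard_normal(1000))
```

In practice the signals would first be band-pass filtered to the rhythm of interest (δ, θ, γ, ...), which is where the thesis's observations about filter bandwidth and observation time come into play.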
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Yajuan. "Cluster-Based Profile Monitoring in Phase I Analysis." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/46810.

Full text
Abstract:
Profile monitoring is a well-known approach used in statistical process control where the quality of the product or process is characterized by a profile, or a relationship between a response variable and one or more explanatory variables. Profile monitoring is conducted over two phases, labeled Phase I and Phase II. In Phase I profile monitoring, regression methods are used to model each profile and to detect the possible presence of out-of-control profiles in the historical data set (HDS). The out-of-control profiles can be detected by using the statistic. However, previous methods of calculating the statistic are based on using all the data in the HDS, including the data from the out-of-control process. Consequently, the performance of this method can be degraded if the HDS contains data from the out-of-control process. This work provides a new profile monitoring methodology for Phase I analysis. The proposed method, referred to as the cluster-based profile monitoring method, incorporates a cluster analysis phase before calculating the statistic. Before introducing our proposed cluster-based method in profile monitoring, this cluster-based method is demonstrated to work efficiently in robust regression, where it is referred to as cluster-based bounded influence regression, or CBI. It is demonstrated that the CBI method provides a robust, efficient and high-breakdown regression parameter estimator. The CBI method first represents the data space via a special set of points, referred to as anchor points. Then a collection of single-point-added ordinary least squares regression estimators forms the basis of a metric used in defining the similarity between any two observations. Cluster analysis then yields a main cluster containing at least half the observations, with the remaining observations comprising one or more minor clusters. 
An initial regression estimator arises from the main cluster, with a group-additive DFFITS argument used to carefully activate the minor clusters through a bounded influence regression framework. CBI achieves a 50% breakdown point; it is regression, scale and affine equivariant; and it is asymptotically normal. Case studies and Monte Carlo results demonstrate the performance advantage of CBI over other popular robust regression procedures regarding coefficient stability, scale estimation and standard errors. The cluster-based method in Phase I profile monitoring first replaces the data from each sampled unit with an estimated profile, using an appropriate regression method. The estimated parameters for the parametric profiles are obtained from parametric models, while the estimated parameters for the nonparametric profiles are obtained from the p-spline model. The cluster phase clusters the profiles based on their estimated parameters, yielding an initial main cluster which contains at least half the profiles. The initial estimated parameters for the population average (PA) profile are obtained by fitting a mixed model (parametric or nonparametric) to the profiles in the main cluster. Profiles not contained in the initial main cluster are iteratively added to the main cluster provided their statistics are "small", and the mixed model (parametric or nonparametric) is used to update the estimated parameters for the PA profile. Profiles contained in the final main cluster are considered to result from the in-control process, while those not included are considered to result from an out-of-control process. This cluster-based method has been applied to monitor both parametric and nonparametric profiles. 
A simulated example, a Monte Carlo study and an application to a real data set demonstrate the details of the algorithm, and the performance advantage of the proposed method over a non-cluster-based method is shown with respect to more accurate estimates of the PA parameters and improved classification performance criteria. When the profiles can be represented by vectors, the profile monitoring process is equivalent to the detection of multivariate outliers. For this reason, we also compared our proposed method to a popular method used to identify outliers in a multivariate response. Our study demonstrated that when the out-of-control process corresponds to a sustained shift, the cluster-based method using the successive difference estimator is clearly the superior method, among those we considered, on all performance criteria. In addition, the influence of accurate Phase I estimates on the performance of Phase II control charts is presented to show a further advantage of the proposed method. A simple example and Monte Carlo results show that more accurate estimates from Phase I provide more efficient Phase II control charts.
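Chen's algorithm itself is considerably more involved (anchor points, DFFITS activation, mixed models); as a loose illustration of the underlying idea - estimating in-control parameters only from a trusted majority of profiles - a minimal robust-screening sketch (not the thesis method; all data and thresholds invented) might look like:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 20)
X = np.column_stack([np.ones_like(x), x])

# 18 in-control linear profiles plus 2 with a shifted intercept.
profiles = [2.0 + 3.0 * x + 0.05 * rng.standard_normal(20) for _ in range(18)]
profiles += [5.0 + 3.0 * x + 0.05 * rng.standard_normal(20) for _ in range(2)]

# Per-profile least-squares coefficients (intercept, slope).
betas = np.array([np.linalg.lstsq(X, y, rcond=None)[0] for y in profiles])

# Robust center/scale of the coefficient cloud; profiles far from the
# majority are excluded from the "main cluster" of in-control candidates.
center = np.median(betas, axis=0)
scale = 1.4826 * np.median(np.abs(betas - center), axis=0)  # MAD scale
z = np.max(np.abs(betas - center) / scale, axis=1)
main_cluster = np.where(z < 6.0)[0]
flagged = np.where(z >= 6.0)[0]
```

The Phase I in-control estimates would then be refit from `main_cluster` only, analogous in spirit to the thesis's iterative reabsorption of profiles with small statistics.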
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
46

Parshall, Elaine Ruth 1962. "Phase-conjugate interferometry for thin film analysis." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/291358.

Full text
Abstract:
A phase-conjugate interferometric method of thin film analysis obtains three independent parameters with which to determine a film's refractive index n, absorption coefficient κ, and thickness d. Because dimensionless intensity ratios are used, this method is self-calibrating except for light source polarization and incident angle. The use of self-pumped phase-conjugate reflectors makes the interferometer self-aligning and results in infinite spacing of fringes of equal thickness. A single-layer thin film sample was analyzed by this technique, and the results were compared to those of ellipsometry.
APA, Harvard, Vancouver, ISO, and other styles
47

Agic, Adnan. "Analysis of entry phase in intermittent machining." Licentiate thesis, Högskolan Väst, Avdelningen för avverkande och additativa tillverkningsprocesser (AAT), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-12255.

Full text
Abstract:
Cutting forces and vibrations are essential parameters in the assessment of a cutting process. As the energy consumption in the machining process is directly affected by the magnitude of the cutting forces, it is of vital importance to design cutting edges and select process conditions that will maintain high tool performance through reduced energy consumption. Vibrations are often the cause of poor results in terms of accuracy, low reliability due to sudden failures, and poor environmental conditions caused by noise. The goal of this work is to find out how the cutting edge and the cutting conditions affect the entry conditions of the machining operation. This is done utilizing experimental methods and appropriate theoretical approaches applied to the cutting forces and vibrations. The research was carried out through three main studies, beginning with a force build-up analysis of the cutting edge entry into the workpiece in intermittent turning. This was followed by a second study concentrated on modelling of the entry phase, which was explored through the experiments and theory developed in the first study. The third part focused on the influence of the radial depth of cut upon the entry of the cutting edge into the workpiece in a face milling application. The methodology for the identification of unfavourable cutting conditions is also explained herein. Important insights into the force build-up process help to address the correlation between the cutting geometries and the rise time of the cutting force. The influence of the nose radius for a given cutting tool and workpiece configuration during the initial entry is revealed. The critical angle, i.e. the position of the face milling cutter that results in unfavourable entry conditions, is explained, emphasizing the importance of the selection of cutting conditions. Finally, the theoretical methods utilized for evaluating the role of cutting edge geometry within entry phase dynamics have been explored. 
This has revealed trends that are of interest for the selection of cutting conditions and for cutting edge design.
APA, Harvard, Vancouver, ISO, and other styles
48

Ichikawa, Hiroyuki. "Optical beam array generation with phase gratings." Thesis, Heriot-Watt University, 1991. http://hdl.handle.net/10399/807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Levy, Donna Elise. "Association of Adaptive Early Phase Study Design and Late Phase Study Results in Oncology." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/7029.

Full text
Abstract:
This quantitative study assessed the association between the design methods used for early phase oncology studies (adaptive versus traditional) and the outcome of late stage clinical trials. Differences by cancer type and by drug classification were also assessed. The theoretical and conceptual frameworks used were the general systems theory and the design and evaluation of complex interventions, respectively. Units of analysis were individual oncology studies in the ClinicalTrials.gov database, and Bayesian logistic modeling was applied to a random sample of 381 studies initiated from November 1999 to December 2016. When assessing study design and outcome, there were lower odds of a positive outcome when adaptive methods were used, though this association was not statistically significant (OR [95% highest posterior density (HPD)]: 0.66 [0.20, 1.21]). Among the different drug types, using adaptive compared to traditional methods was associated with significantly higher odds of a positive outcome for taxanes (OR: 2.75; 95% HPD: 1.01, 5.16) and 'other' (OR: 3.23; 95% HPD: 1.58, 5.46), but with no association among studies of monoclonal antibodies or protein kinase inhibitors. Also, there were no significant associations between early phase study design and outcome in late phase studies by cancer type (lung, breast, other). Further research should be conducted using all completed oncology clinical trials in the database to determine more precisely the relationship between adaptive study design in early phase oncology studies and outcomes in late stage studies. Social change can occur through increased uptake of adaptive design methods, which may lead to more efficacious cancer treatment options.
APA, Harvard, Vancouver, ISO, and other styles
50

Shen, Jue. "Quantization Effects Analysis on Phase Noise and Implementation of ALL Digital Phase Locked-Loop." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37212.

Full text
Abstract:
With the advancement of CMOS processes and fabrication, the trend in mixed-signal system design has been to maximize the digital portion while minimizing its analog counterpart. This is also the case for the PLL, which has traditionally been a mixed-signal system limited by the performance of its analog part. Around 2000, ADPLLs emerged in which all blocks other than the oscillator are implemented as digital circuits. There have been successful examples in Bluetooth applications, and work is moving toward improved results for WiMax and ad-hoc frequency-hopping communication links. Based on the theoretical and measurement results in the existing literature, the ADPLL has shown advantages such as fast time-to-market, low area, low cost and better system integration, but also disadvantages in frequency resolution, phase noise, and other areas. This new topic also leaves open questions on many research points important to the PLL, such as tracking behavior and quantization effects. In this thesis, a non-linear phase-domain model for the all-digital phase-locked loop (ADPLL) was established and validated. Based on that model, we show that ADPLL phase noise predictions derived from the traditional linear quantization model become inaccurate in non-linear cases, because the probability density of the quantization error no longer meets the premise assumptions of the linear model. The phenomena of bandwidth expansion and in-band phase noise reduction peculiar to the integer-N ADPLL were demonstrated and explained using MATLAB and Verilog behavioral-level simulation test benches. An expression for the threshold quantization step was defined and derived as a way to distinguish whether an integer-N ADPLL is in a non-linear case, and the results conformed to those of MATLAB simulation. A simplified approximation model for the non-linear integer-N ADPLL with noise sources was established to predict in-band phase noise, and the trends of its results conformed to those of MATLAB simulation. 
Other supporting analyses covered ADPLL loop dynamics, the traditional linear theory and its quantitative limitations, and numerical analysis of random numbers. Finally, a measurement setup was demonstrated and the results were analyzed for future work.
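The premise of the linear quantization model - that quantization error is roughly uniform with variance q²/12 - can be checked numerically. The sketch below (illustrative numbers only, not from the thesis) shows the premise holding when the quantizer step is small and failing when the step is comparable to the signal, which is the regime the thesis treats as non-linear:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.uniform(-1.0, 1.0, 100_000)   # stand-in for a phase-error signal

# Small step: quantization error behaves like uniform noise, var = q^2 / 12.
q_small = 0.01
err_small = np.round(signal / q_small) * q_small - signal
var_measured = err_small.var()
var_model = q_small ** 2 / 12              # linear-model prediction

# Large step (comparable to the signal range): the uniform-error premise
# fails, so the linear model's variance prediction is off.
q_large = 1.5
err_large = np.round(signal / q_large) * q_large - signal
ratio = err_large.var() / (q_large ** 2 / 12)
```

For the small step the measured variance tracks q²/12 closely, while for the large step the ratio departs noticeably from 1, mirroring the thesis's distinction between linear and non-linear quantization regimes.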
APA, Harvard, Vancouver, ISO, and other styles