Dissertations / Theses on the topic 'Quantitative'

Consult the top 50 dissertations / theses for your research on the topic 'Quantitative.'


1

Wengert, Christian. "Quantitative endoscopy." Konstanz: Hartung-Gorre Verlag, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17686.

2

Faustino, Rui Alexandre Rodrigues Veloso. "Quantitative easing." Master's thesis, NSBE - UNL, 2012. http://hdl.handle.net/10362/9552.

Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Economics from the NOVA – School of Business and Economics
Since November 2008, the Federal Reserve of the United States has pursued a series of large-scale asset purchases, known as Quantitative Easing. In this Work Project, I describe the context, the objectives and the implementation of Quantitative Easing. Additionally, I discuss its expected effects. Finally, I present empirical evidence of the effects on interest rates, output and inflation. I conclude that the first round of purchases was effective in preventing deflation and depression, while the second had only a small impact on the economy.
3

Cleary, Maryanne Viola. "Quantitative HPTLC." Thesis, Virginia Tech, 1995. http://scholar.lib.vt.edu/theses/available/etd-07112009-040558/.

4

Luu, Philippe. "La subjectivité dans les méthodes quantitatives. Une étude des pratiques en Sciences de Gestion." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR0028.

Abstract:
In management science, quantitative methods carry two preconceived ideas. First, they refer almost exclusively to causal statistical methods. Second, the use of these methods is perceived as an indisputable guarantee of objectivity. Our work seeks to qualify both points, and more particularly the question of objectivity. Quantitative methods generally fall within the post-positivist paradigm, in which striving for objectivity means controlling as well as possible the conditions under which the research is carried out. Scientific objectivity has two dimensions: an individual dimension specific to the researcher and a collective dimension based on the scrutiny of the community. The objective of our research is to show how subjectivity intervenes in each step of a quantitative research design. Our methodology is based on an exploratory case study conducted in a management-science laboratory and on the participant observation of a statistical engineer over a ten-year period; the units of analysis are 24 papers co-authored by the participant observer during that period. Our results indicate, on the one hand, that the definition of quantitative methods can potentially be broadened: computer simulations or numerical optimisation procedures, for example, can be included even though they are neither causal nor even statistical techniques. On the other hand, our results illustrate the omnipresence of subjectivity in many quantitative techniques, including statistical ones. When processing data, the researcher faces multiple options at each of the following steps: the operationalisation of concepts, data collection, sample preparation, and throughout the configuration of the analysis. The presence of numerous judgement calls implies great variability in the results of a study. The value of our work is to improve the prospect of achieving maximal, collective objectivity by identifying the points that the researcher must document with care. This call for transparency echoes the recommendations in the literature for responding to the reproducibility crisis that currently affects all scientific disciplines.
5

Youle, Ian. "Quantitative tritium imaging." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0015/NQ45641.pdf.

6

Elder, A. D. "Quantitative fluorescence microscopy." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598801.

Abstract:
The work presented here improves the level of quantification achievable with fluorescence microscopy by integrating novel technologies and developing new experimental and theoretical methodologies. Initial work focused on the use of fluorescence microscopy for the quantification of molecular interactions in living cells. This resulted in the development of an analysis routine for the quantification of Förster resonance energy transfer (FRET) by intensity-based sensitised acceptor emission measurements. The developed technique enabled quantification of the strength of interaction as well as the relative stoichiometry of free and bound fluorophores. The work culminated in the dynamic measurement of the cyclin–cyclin-dependent kinase interaction through the course of the cell cycle. To improve the flexibility of microscopy techniques, a confocal microscopy system was designed and built which used a novel fibre-based supercontinuum illumination source and a prism-based spectrometer to provide wavelength-resolved measurements. The multiparametric imaging approach which this system enabled was shown to aid in the quantification of complex systems. The remainder of this thesis considers the development of new frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) techniques. The advantages of lifetime imaging techniques were illustrated through their application to quantitative chemical analysis in microfluidic devices. Novel illumination technology was integrated into FD-FLIM systems, both in the form of inexpensive light-emitting diodes and fibre-based supercontinuum technology. An in-depth theoretical analysis permitted the development of systems with much improved photon economy. Using extensions of the AB analysis technique, multicomponent lifetime data could be accurately quantified. Finally, a new experimental technique, termed ø2FLIM, was implemented, which enabled the rapid acquisition of alias-free fluorescence lifetime data.
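For readers unfamiliar with sensitised acceptor emission measurements, the sketch below shows the standard three-filter spectral-crosstalk correction that such FRET quantification builds on (an illustration of mine, not the analysis routine developed in the thesis; the coefficient values are invented and would in practice be measured from donor-only and acceptor-only control samples).

```python
def corrected_fret_signal(i_da, i_dd, i_aa, d=0.6, a=0.1):
    """Standard three-filter sensitised-emission correction (illustrative only).

    i_da: intensity in the FRET channel     (donor excitation, acceptor emission)
    i_dd: intensity in the donor channel    (donor excitation, donor emission)
    i_aa: intensity in the acceptor channel (acceptor excitation, acceptor emission)
    d, a: donor bleed-through and acceptor cross-excitation coefficients
          (made-up values here; measured from single-labelled controls in practice)
    """
    return i_da - d * i_dd - a * i_aa

# toy pixel values in arbitrary units
print(corrected_fret_signal(i_da=950.0, i_dd=1200.0, i_aa=800.0))  # -> 150.0
```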
7

梁永雄 and Wing-hung Leung. "Quantitative coronary arteriography." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31981483.

8

Waszkiewicz, Pawel. "Quantitative continuous domains." Thesis, University of Birmingham, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269779.

9

Schlachter, Simon Christopher. "Quantitative multidimensional microscopy." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609221.

10

Martins, Carlos Jose Amaro Parente. "Quantitative string evolution." Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.627371.

11

Sharma, Arvind Kumar. "Quantitative Stratigraphic Inversion." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/30172.

Abstract:
We develop a methodology for systematic inversion of quantitative stratigraphic models. Quantitative stratigraphic modeling predicts stratigraphy using numerical simulations of geologic processes. Stratigraphic inversion methodically searches the parameter space in order to detect models which best represent the observed stratigraphy. Model parameters include sea-level change, tectonic subsidence, sediment input rate, and transport coefficients. We successfully performed a fully automated process-based stratigraphic inversion of a geologically complex synthetic model. Several one- and two-parameter inversions were used to investigate the coupling of process parameters. Source location and the transport coefficient below base level indicated significant coupling, while the rest of the parameters showed only minimal coupling. The influence of different observable data on the inversion was also tested. Inversion results using a misfit based on sparse but time-dependent sample points proved to be better than those using a misfit based on the final stratigraphy only, even when it was sampled densely. We tested several inversion schemes on the topography dataset obtained from the eXperimental EarthScape facility simulation. The clustering of model parameters in most of the inversion experiments showed the likelihood of obtaining a reasonable number of compatible models. We also observed the need for several different diffusion-coefficient parameterizations to emulate different erosional and depositional processes. The excellent result of the piecewise inversion, which used different parameterizations for different time intervals, demonstrates the need to develop or incorporate time-variant parameterizations of the diffusion coefficients. We also present a new method for applying boundary conditions in simulations of diffusion processes using the finite-difference method. It is based on the straightforward idea that the solution is smooth at the boundaries. The new scheme achieves high accuracy when the initial conditions are non-vanishing at the boundaries, a case which is poorly handled by previous methods. Along with being easy to implement, the new method does not require any additional computation or memory.
Ph. D.
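For readers who want a concrete picture of what a diffusion-type forward model and its misfit look like, here is a minimal sketch (my own illustration, not the simulator or datasets used in the thesis; all parameter values are invented). It evolves a 1-D topographic profile by linear diffusion and recovers a transport coefficient by a brute-force one-parameter search.

```python
import numpy as np

def diffuse_topography(z0, kappa, dt, dx, steps):
    """Evolve a 1-D profile by dz/dt = kappa * d2z/dx2 (explicit FTCS scheme,
    fixed-elevation boundaries). Illustrative diffusion-type transport model."""
    z = np.array(z0, dtype=float)
    r = kappa * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for kappa*dt/dx^2 > 0.5"
    for _ in range(steps):
        z[1:-1] += r * (z[2:] - 2.0 * z[1:-1] + z[:-2])
    return z

def misfit(kappa, observed, z0, dt, dx, steps):
    """Sum-of-squares misfit between simulated and observed final topography."""
    return float(np.sum((diffuse_topography(z0, kappa, dt, dx, steps) - observed) ** 2))

x = np.linspace(0.0, 1000.0, 101)        # metres
z0 = np.where(x < 500.0, 100.0, 0.0)     # initial scarp
observed = diffuse_topography(z0, kappa=0.4, dt=10.0, dx=10.0, steps=500)

candidates = np.linspace(0.1, 1.0, 19)
best = min(candidates, key=lambda k: misfit(k, observed, z0, 10.0, 10.0, 500))
print(best)  # recovers ~0.4
```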
12

Peace, Richard Aidan. "Quantitative cardiac SPECT." Thesis, University of Aberdeen, 2001. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU602292.

Abstract:
Myocardial perfusion SPECT imaging is a sensitive and specific indicator of coronary artery disease (Fleischman et al. 1998). The clinical value of coronary scintigraphy is now established, with a utilisation rate of eight procedures per 1000 population per year in the USA and two per 1000 in the EU (Pennell et al. 1998). While myocardial perfusion SPECT images are routinely interpreted by expert observers, the classification is inevitably subject to inter-observer and intra-observer variability. An optimised and validated quantitative index of the presence or absence of coronary artery disease (CAD) could improve reproducibility, accuracy and diagnostic confidence. There are segmental techniques to automatically detect CAD from myocardial perfusion SPECT studies, such as the CEqual quantitative analysis software (Van Train et al. 1994). However, they have not been shown to be significantly better than expert observers (Berman et al. 1998). The overall aim of this thesis was to develop, optimise and evaluate quantitative techniques for the detection of CAD in myocardial perfusion SPECT studies. This task was divided into three areas: quantification of transient ischaemic dilation (TID); quantitative detection and localisation of CAD; and count normalisation of patient studies. Transient ischaemic dilation is the transient dilation of the left ventricle on immediate post-stress images compared with resting technetium-99m imaging. Stolzenberg (1980) first noted TID as a specific marker for severe CAD. There are few published studies of fully quantitative evaluations of TID. The first aim of this thesis was to compare the performance of methods for quantifying TID in myocardial perfusion SPECT. The second aim of this thesis was to investigate the use of image registration in myocardial perfusion SPECT for quantitative detection and localisation of CAD. This thesis describes two studies comparing six count normalisation techniques: normalise to the maximum value; to the mean voxel value; to the mean of the top 10% or 20% of counts; minimise the sum of squares between studies; or minimise the sum of absolute differences. Ten normal myocardial perfusion SPECT studies, each with 300 different simulated perfusion defects, were count normalised to the original studies. The fractional count normalisation error was consistently lower when the sum of absolute differences was minimised. However, a more clinically applicable measure of count normalisation performance is the effect on quantitative CAD detection. The Z-score method of automatic detection of CAD was repeated using each count normalisation technique. There was no statistically significant difference between the methods, although the power of the ROC analysis was poor due to low patient numbers. The balance of evidence suggested that count normalisation by minimisation of the sum of absolute differences produced the best performance.
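To make the count-normalisation comparison concrete, the following minimal sketch (my own illustration, not the thesis code; the toy data and grid-search range are invented) scales one study onto a reference by a single factor chosen from the maxima, from the means, or by minimising the sum of absolute differences.

```python
import numpy as np

def normalise(study, reference, method="abs_diff"):
    """Scale `study` onto `reference` with a single factor k (illustrative only).

    method = "max"      : match maximum voxel values
             "mean"     : match mean voxel values
             "abs_diff" : pick k minimising sum(|k*study - reference|)
    """
    study = np.asarray(study, dtype=float)
    reference = np.asarray(reference, dtype=float)
    if method == "max":
        k = reference.max() / study.max()
    elif method == "mean":
        k = reference.mean() / study.mean()
    elif method == "abs_diff":
        # coarse grid search keeps the sketch simple; a weighted median of the
        # voxel-wise ratios would solve the L1 problem exactly
        grid = np.linspace(0.5, 2.0, 301)
        costs = [np.abs(k * study - reference).sum() for k in grid]
        k = grid[int(np.argmin(costs))]
    else:
        raise ValueError(f"unknown method: {method}")
    return k * study

# toy data: a "patient" study that is globally ~20% hotter than the reference
rng = np.random.default_rng(0)
reference = rng.poisson(100, size=1000).astype(float)
patient = 1.2 * reference + rng.normal(0.0, 5.0, size=1000)
for m in ("max", "mean", "abs_diff"):
    print(m, np.abs(normalise(patient, reference, m) - reference).mean())
```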
13

Grjasnow, Alexej. "Teilkohärente quantitative Phasenkontrastmikroskopie." Berlin mbv, 2009. http://d-nb.info/99459576X/04.

14

Tamski, Mika. "Quantitative electrochemical EPR." Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/79963/.

Abstract:
Electron paramagnetic resonance (EPR) is a spectroscopic technique sensitive to unpaired electrons present in paramagnetic species such as free radicals and organometallic complexes. Electrochemistry (EC) is an interfacial science in which reduction and oxidation processes are studied. A single-electron reduction or oxidation generates a paramagnetic species with an unpaired electron, making EPR a valuable tool in the study of electrochemical systems. In this work a novel electrochemical cell was designed and developed to be used with a specific type of EPR resonator called a loop-gap resonator (LGR). After the EC-EPR setup was built and its performance characterised, it was adapted for quantitative measurements in electrochemical EPR (QEC-EPR). Thus, for the first time, the technique of EC-EPR has been fully characterised for analytical work, opening possibilities to study electrode reactions quantitatively with an accuracy and precision not obtained before, as demonstrated in Chapter 8 of this thesis.
15

Bordas, Alexandre. "Homogénéisation stochastique quantitative." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN053/document.

Abstract:
This thesis deals with quantitative stochastic homogenization of parabolic partial differential equations and of discrete elliptic problems. The introduction shows how such problems arise from random models even when the coefficients are deterministic, and then presents the idea of homogenization: what happens when the coefficients themselves are random? Can an environment exhibiting inhomogeneities at very small scales be considered to behave, at large scale, like a fictitious homogeneous environment? This question is then interpreted in terms of random walks among random conductances, and the tools used in the proofs of the two following chapters are sketched. In Chapter II, we prove a quantitative homogenization result for parabolic equations, such as the heat equation, in environments with time- and space-dependent random coefficients. The method consists in viewing the solutions of such problems as minimizers of suitably defined functionals, and then exploiting the crucial subadditivity property of these quantities to deduce convergence and a concentration result, from which a rate of convergence of the actual solutions to the solution of the homogenized problem follows. In Chapter III, we adapt these methods to a discrete elliptic problem on the lattice Zd.
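For orientation only, the generic shape of such a quantitative parabolic homogenization statement can be written as follows (an illustrative sketch; the thesis' precise assumptions, norms and exponents differ):

```latex
% Generic parabolic homogenization statement (illustration, not the thesis' theorem):
% the rapidly oscillating random coefficients a(x/eps, t/eps^2) are replaced,
% as eps -> 0, by a constant effective matrix \bar{a}.
\[
  \partial_t u^\varepsilon
    - \nabla\cdot\!\Big(a\big(\tfrac{x}{\varepsilon},\tfrac{t}{\varepsilon^{2}}\big)\nabla u^\varepsilon\Big) = 0
  \qquad\longrightarrow\qquad
  \partial_t \bar{u} - \nabla\cdot\big(\bar{a}\,\nabla \bar{u}\big) = 0,
\]
\[
  \|u^\varepsilon - \bar{u}\|_{L^2} \;\le\; C\,\varepsilon^{\alpha}
  \quad\text{for some exponent } \alpha>0 \text{ (a quantitative rate).}
\]
```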
16

Chew, Serena Janine. "Comparison of quantitative precipitation forecast, a precipitation-based quantitative precipitation estimate and a radar-derived quantitative precipitation estimate." University of Nevada, Reno, 2006. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1432997.

17

Hinoda, Takuya. "Quantitative assessment of gadolinium deposition in dentate nucleus using quantitative susceptibility mapping." Kyoto University, 2018. http://hdl.handle.net/2433/232091.

18

Goldhahn, Dirk. "Quantitative Methoden in der Sprachtypologie: Nutzung korpusbasierter Statistiken." Doctoral thesis, Universitätsbibliothek Leipzig, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-130550.

Abstract:
This thesis addresses various aspects of the use of corpus-based statistics in quantitative typological studies. The individual parts of the work can be viewed as stages of a language-independent processing pipeline, which thus permits comprehensive investigations across the languages of the world. The steps considered range from the automated creation of the underlying resources, through mathematically grounded methods, to the finished results of the various typological analyses. The main focus of the investigations lies first on the text corpora underlying the analysis, in particular on their acquisition and processing from a technical point of view. This is followed by studies on the use of the corpora in the field of lexical language comparison, in which linguistic relationships are quantified by empirical means. Beyond that, the corpora are used as the basis for automated measurements of linguistic parameters. Such measurable properties are presented and systematically examined with regard to their usefulness for typological investigations. Finally, the relationships of these measurements to one another and to typological parameters are examined using quantitative methods.
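As a toy illustration of the kind of corpus-based measurement such a pipeline automates (the statistics chosen here are illustrative and are not the parameter set studied in the thesis):

```python
from collections import Counter

def corpus_statistics(tokens):
    """A few simple corpus-based measurements of the kind used in
    quantitative typology (illustrative choices only)."""
    counts = Counter(tokens)
    n = len(tokens)
    return {
        "mean_word_length": sum(len(t) for t in tokens) / n,
        "type_token_ratio": len(counts) / n,
        "top10_coverage": sum(c for _, c in counts.most_common(10)) / n,
    }

print(corpus_statistics("the quick brown fox jumps over the lazy dog".split()))
```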
19

Przybilla, Norbert. "Quantitative spectroscopy of supergiants." Diss., LMU München, 2002. http://nbn-resolving.de/urn:nbn:de:bvb:19-820.

20

Uebleis, Christopher. "Die quantitative 'real-time'." Diss., LMU München, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-77470.

21

Krenning, Boudewijn Juriaan. "Quantitative Three-dimensional Echocardiography." [S.l.] : Rotterdam : [The Author] ; Erasmus University [Host], 2007. http://hdl.handle.net/1765/10695.

22

Brinca, Pedro Soares. "Essays in Quantitative Macroeconomics." Doctoral thesis, Stockholms universitet, Nationalekonomiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-92861.

Abstract:
In the first essay, Distortions in the Neoclassical Growth Model: A Cross Country Analysis, I show that shocks that express themselves as total factor productivity and labor income taxes are comparably more synchronized than shocks that resemble distortions to the ability to allocate resources across time and states of the world. These two shocks are also the most important to model. Lastly, I document the importance of international channels of transmission for the shocks, given that these are spatially correlated and that international trade variables, such as trade openness, correlate particularly well with them. The second essay is called Monetary Business Cycle Accounting for Sweden. Given that the analysis is focused on one country, I can extend the prototype economy to include a nominal interest rate setting rule and government bonds. As in the previous essay, distortions to the labor-leisure condition and total factor productivity are the most relevant margins to be modeled, now joined by deviations from the nominal interest rate setting rule. Also, distortions do not share a structural break during the Great Recession, but they do during the 1990s. Researchers aiming to model Swedish business cycles must take into account the structural changes the Swedish economy went through in the 1990s, though not so for the last recession. In the third essay, Consumer Confidence and Consumption Spending: Evidence for the United States and the Euro Area, we show that the consumer confidence index can, in certain circumstances, be a good predictor of consumption. In particular, out-of-sample evidence shows that the contribution of confidence in explaining consumption expenditures increases when household survey indicators feature large changes, so that confidence indicators can have some increasing predictive power during such episodes. Moreover, there is some evidence of a confidence channel in the international transmission of shocks, as U.S. confidence indices help predict consumer sentiment in the euro area.
23

Datta, Neil Anirvan Sagomisa. "A quantitative combinatory logic." Thesis, Imperial College London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502442.

24

Enright, S. A. "Towards quantitative computed tomography." Thesis, University of Canterbury. Electrical and Electronic Engineering, 1992. http://hdl.handle.net/10092/6886.

Abstract:
Computed tomography is introduced, along with an overview of its diverse applications in many scientific endeavours. A unified approach for the treatment of scattering from linear scalar wave motion is introduced. The assumptions under which wave motion within a medium can be characterised by concourses of rays are presented, along with comment on the validity of these assumptions. Early and conventional theory applied to modelling the behaviour of rays, within media for which the ray assumptions are valid, is reviewed. A new computerised method is described for reconstruction of a refractive index distribution from time-of-flight measurements of radiation/waves passing through the distribution, taken on a known boundary surrounding it. The reconstruction method, aimed at solving the bent-ray computed tomography (CT) problem, is based on a novel ray description which does not require the ray paths to be known. This allows the refractive index to be found by iterative solution of a set of linear equations, rather than through the computationally intensive procedure of ray tracing, which normally accompanies iterative solutions to problems of this type. The preliminary results show that this method is capable of handling appreciable spatial refractive index variations in large bodies. A review of theory and techniques for image reconstruction from projections is presented, along with their historical development. The mathematical derivation of a recently developed reconstruction technique, the method of linograms, is considered. An idea, termed the plethora-of-views idea, which aims to improve quantitative CT image reconstruction, is introduced. Its theoretical foundation is that, when presented with a plethora of projections (meaning a number greater than that required to reconstruct the known region of support of an image, so that the permissible reconstruction region can be extended), the intensity of the reconstructed distribution should be negligible throughout the extended region. Any reconstruction within the extended region that departs from what would be termed negligible is deduced to have been caused by imperfections in the projections. The implicit expectation of the novel schemes presented for improving CT image reconstruction is that contributions within the extended region can be utilised to ameliorate the effects of the imperfections on the reconstruction where the distribution is known to be contained. Preliminary experimental results are reported for an iterative algorithm proposed to correct a plethora of X-ray CT projection data containing imperfections. An extended definition of the consistency of projections, termed spatial consistency, is presented, which incorporates the region with which the projection data is consistent. Using this definition and an associated definition, spatial inconsistency, an original technique is proposed and reported on for the recovery of inconsistencies contained in the projection data over a narrow range of angles.
25

Williams, Geoffrey Alan. "Studies in quantitative macroeconomics." Thesis, University of East Anglia, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267257.

26

Bradley, Michael Ian. "Quantitative bioprocess containment validation." Thesis, University College London (University of London), 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395529.

27

Heusser, Jonathan. "Automating quantitative information flow." Thesis, Queen Mary, University of London, 2011. http://qmro.qmul.ac.uk/xmlui/handle/123456789/1260.

Abstract:
Unprecedented quantities of personal and business data are collected, stored, shared, and processed by countless institutions all over the world. Prominent examples include sharing personal data on social networking sites, storing credit card details in every store, tracking customer preferences of supermarket chains, and storing key personal data on biometric passports. Confidentiality issues naturally arise from this global data growth. There are continual reports of private data being leaked from confidential sources, where the implications of the leaks range from embarrassment to serious personal privacy and business damages. This dissertation addresses the problem of automatically quantifying the amount of leaked information in programs. It presents multiple program analysis techniques of different degrees of automation and scalability. The contributions of this thesis are twofold: a theoretical result and two different methods for inferring and checking quantitative information flows are presented. The theoretical result relates the amount of possible leakage under any probability distribution back to the order relation in Landauer and Redmond's lattice of partitions [35]. The practical results are split into two analyses: the first analysis precisely infers the information leakage using SAT solving and model counting; the second analysis defines quantitative policies which are reduced to checking a k-safety problem. A novel feature allows reasoning independent of the secret space. The presented tools are applied to real, existing leakage vulnerabilities in operating system code. This has to be understood and weighed within the context of the information flow literature, which suffers from an apparent lack of practical examples and applications. This thesis studies such "real leaks", which could influence future strategies for finding information leaks.
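The connection between leakage and partitions can be made concrete with a small sketch of my own (not the dissertation's tooling): for a deterministic program under a uniform prior, the Shannon-entropy leakage equals the entropy of the partition that the observable outputs induce on the secret space.

```python
import math
from collections import Counter

def shannon_leakage(program, secrets):
    """Shannon leakage of a deterministic `program` under a uniform prior:
    the mutual information between secret and output, i.e. the entropy of
    the partition of `secrets` induced by the observable outputs."""
    n = len(secrets)
    blocks = Counter(program(s) for s in secrets).values()
    return -sum((b / n) * math.log2(b / n) for b in blocks)

secrets = list(range(64))                           # a 6-bit secret
print(shannon_leakage(lambda s: s % 2, secrets))    # parity check leaks 1 bit
print(shannon_leakage(lambda s: s == 42, secrets))  # password check leaks ~0.12 bits
print(shannon_leakage(lambda s: s, secrets))        # identity leaks all 6 bits
```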
28

Yang, Y. "Essays in quantitative investments." Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3021457/.

Abstract:
This thesis studies the characteristics of Chinese futures markets and quantitative investment strategies. The main objective of this thesis is to provide a comprehensive analysis of the performance of quantitative investment strategies in the Chinese market. Furthermore, with an econometric analysis, the stylised facts of the Chinese futures markets are documented. Extensive backtesting results on the performance of momentum, reversal and pairs-trading type strategies are provided. In the case of pairs-trading type strategies, the risk and return relationship is characterised by the length of the maximum holding period and is thus reflected in the maximum drawdown risk. In line with the increasing holding periods, the profitability of pairs trading increases over longer holding periods. Therefore, the abnormal returns from pairs trading in the Chinese futures market do not necessarily reflect market inefficiency. Momentum and reversal strategies are compared by employing both high- and low-frequency time series with precise estimation of transaction costs. The comparison of momentum and reversal investment strategies at the intra- and inter-day scales shows that the portfolio rebalancing frequency significantly impacts the profitability of such strategies. Complementarily, the excess returns of inter-day momentum trading with the inclusion of precise estimates of transaction costs show that quantitative investment strategies consistently produce abnormal profits in the Chinese commodity futures markets. However, from a risk-adjusted view, the returns are obtained only by bearing additional drawdown risks. Finally, this thesis suggests that investors should choose quantitative trading strategies according to their investment horizon, tolerance for maximum drawdown and portfolio rebalancing costs.
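A stripped-down example of the kind of rule being backtested is sketched below (my own illustration with synthetic prices and an invented cost figure, not the thesis' data or methodology).

```python
import numpy as np

def momentum_pnl(prices, lookback=20, cost=0.0005):
    """Daily P&L of a simple time-series momentum rule on one futures series:
    hold +1 (-1) unit when the trailing `lookback`-day return is positive
    (negative), paying `cost` per unit of turnover. Illustrative only."""
    prices = np.asarray(prices, dtype=float)
    daily_ret = np.diff(prices) / prices[:-1]
    position = np.sign(prices[lookback:-1] - prices[:-lookback - 1])  # held over the next day
    gross = position * daily_ret[lookback:]
    turnover = np.abs(np.diff(np.concatenate(([0.0], position))))
    return gross - cost * turnover

rng = np.random.default_rng(1)
prices = 100.0 * np.cumprod(1.0 + rng.normal(0.0002, 0.01, 750))  # synthetic price path
pnl = momentum_pnl(prices)
print(pnl.sum(), pnl.mean() / pnl.std())  # cumulative return and a crude daily Sharpe-type ratio
```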
29

Louth, Richard James. "Essays in quantitative analytics." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608849.

30

Rushworth, Philip John. "Quantitative asymmetric reaction kinetics." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/45827/.

Abstract:
The comparison of catalysts for producing chiral materials is of vital importance in the improvement of reaction scope and efficacy. Here we describe a new method of analysing the kinetics of the stereodetermining steps in asymmetric reactions by performing ligand/catalyst competition experiments against an internal standard and measuring the enantiomeric excess obtained at a variety of ratios of ligand/catalyst to internal standard. From these enantiomeric excess measurements, we can establish the relative rate of reaction between the ligand/catalyst systems and the internal standard, allowing us to make indirect comparisons of the rates at which the ligands/catalysts perform the reaction. We then apply this method to three common synthetic procedures: the Sharpless asymmetric dihydroxylation, the asymmetric Michael addition of malonates to nitroalkenes, and the palladium-catalysed asymmetric allylation reaction.
31

Fredriksson, Ingemar. "Quantitative Laser Doppler Flowmetry." Doctoral thesis, Linköpings universitet, Biomedicinsk instrumentteknik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-19947.

Abstract:
Laser Doppler flowmetry (LDF) is virtually the only non-invasive technique, except for other laser speckle based techniques, that enables estimation of the microcirculatory blood flow. The technique was introduced into the field of biomedical engineering in the 1970s, and a rapid evolvement followed during the 1980s with fiber-based systems and improved signal analysis. The first imaging systems were presented in the beginning of the 1990s. Conventional LDF, although unique in many aspects and elegant as a method, is accompanied by a number of limitations that may have reduced the clinical impact of the technique. The analysis model published by Bonner and Nossal in 1981, which is the basis for conventional LDF, is limited to measurements given in arbitrary and relative units, an unknown and non-constant measurement volume, non-linearities at increased blood tissue fractions, and a relative average velocity estimate. In this thesis a new LDF analysis method, quantitative LDF, is presented. The method is based on recent models for light-tissue interaction, comprising the current knowledge of tissue structure and optical properties, making it fundamentally different from the Bonner and Nossal model. Furthermore and most importantly, the method eliminates or greatly reduces the limitations mentioned above. Central to quantitative LDF are Monte Carlo (MC) simulations of light transport in tissue models, including multiple Doppler shifts by red blood cells (RBCs). MC was used in the first proof-of-concept study, where the principles of quantitative LDF were tested using plastic flow phantoms. An optically and physiologically relevant skin model suitable for MC was then developed. MC simulations of that model, as well as of homogeneous tissue-relevant models, were used to evaluate the measurement depth and volume of conventional LDF systems. Moreover, a variance reduction technique enabling the reduction of simulation times by orders of magnitude for imaging-based MC setups was presented. The principle of the quantitative LDF method is to solve the reverse-engineering problem of matching measured and calculated Doppler power spectra at two different source-detector separations. The forward problem of calculating the Doppler power spectra from a model is solved by mixing optical Doppler spectra, based on the scattering phase functions and the velocity distribution of the RBCs, from various layers in the model and for various amounts of Doppler shifts. The Doppler shift distribution is calculated based on the scattering coefficient of the RBCs and the path length distribution of the photons in the model, where the latter is given by a few basal MC simulations. When a proper spectral matching is found, via iterative model parameter updates, the absolute measurement data are given directly from the model. The concentration is given in g RBC/100 g tissue, velocities in mm/s, and perfusion in g RBC/100 g tissue × mm/s. The RBC perfusion is separated into three velocity regions: below 1 mm/s, between 1 and 10 mm/s, and above 10 mm/s. Furthermore, the measures are given for a constant output volume of a 3 mm³ half-sphere, i.e. within 1.13 mm from the light-emitting fiber of the measurement probe. The quantitative LDF method was used in a study on microcirculatory changes in type 2 diabetes.
It was concluded that the perfusion response to a local increase in skin temperature, a response that is reduced in diabetes, is a process involving only intermediate and high flow velocities and thus relatively large vessels in the microcirculation. The increased flow at higher velocities was expected, but could not previously be demonstrated with conventional LDF. The lack of increase in low-velocity flow indicates a normal metabolic demand during heating. Furthermore, a correlation between the perfusion at low and intermediate flow velocities and diabetes duration was found. Interestingly, these correlations were opposite in sign (negative for the low-velocity region and positive for the intermediate-velocity region). This finding is well in line with the increased shunt flow and reduced nutritive capillary flow that have previously been observed in diabetes.
32

Graczyk, Alicja. "Development of quantitative dSTORM." Thesis, Heriot-Watt University, 2017. http://hdl.handle.net/10399/3334.

Abstract:
Direct stochastic optical reconstruction microscopy (dSTORM) is a single-molecule imaging technique which involves tagging molecular targets with fluorescently labelled antibodies. In this method, only a subset of fluorophores emit photons at the same time, while the majority of fluorescent tags are pushed into an optically inactive state. This powerful technique, in which a resolution of 20 nm can be achieved, suffers from two major drawbacks which prevent quantitative analysis. The first problem lies with the labelling of proteins of interest, where a single protein is typically labelled by multiple secondary antibodies tagged with a variable number of fluorophores. To count the number of proteins, only one fluorophore per protein of interest must be assured. To solve this problem, I aimed to develop a novel linker molecule which, together with Fab', an antigen-binding fragment, would produce a detection agent for 1:1 fluorophore-to-protein labelling. An alternative approach was also employed, in which an anti-EGFP nanobody was homogeneously mono-labelled with Alexa Fluor 647. Binding to EGFP was analysed both qualitatively and quantitatively and an excellent nanomolar affinity was demonstrated. The degree-of-labelling investigation revealed a 1:1 nanobody-to-fluorophore ratio. The analysis of the nanobody was also performed using dSTORM, both on glass and in cells. The mono-labelled nanobody produced significantly fewer localisations per single target as compared to the commercially available F(ab')2 fragment and showed excellent colocalisation with EGFP in EGFP-SNAP-25 and EGFP-Lifeact transfected cells. The second problem in dSTORM is connected with the photophysical process itself: the same fluorophore in dSTORM can enter light and dark cycles multiple times, so it is impossible to establish whether closely neighbouring signals originate from one or multiple sources. A polarisation-based method was developed allowing measurement of the polarisation of each fluorophore's dipole. My strategy involved a change in the microscope pathway, employing a polarisation splitter to separate the light coming from each fluorophore into two components with orthogonal polarisations. Finally, the single labelling was combined with the polarisation experiments to achieve quantitative dSTORM, where neighbouring signals could be assigned to the same or different targets, based on the polarisation value of each signal.
33

Von, Essen Christian. "Quantitative Verification and Synthesis." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM090/document.

Abstract:
This thesis contributes to the theoretical study and application of quantitative verification and synthesis. We first study strategies that optimize the ratio of two rewards in MDPs, with the goal of synthesizing efficient controllers in probabilistic environments. We prove that deterministic and memoryless strategies are sufficient. Based on these results we propose three algorithms for explicitly encoded models; our evaluation shows that one of them is clearly faster than the others. To extend its scope, we propose and implement a symbolic variant based on binary decision diagrams and show that it copes with millions of states. Second, we study the problem of program repair from a quantitative perspective. This leads to a reformulation of program repair with the requirement that only faulty runs of the program be changed. We study the limitations of this approach, show how the new requirement can be relaxed, and devise and implement an algorithm to find repairs automatically, showing that it improves the changes made to programs. Third, we study a novel approach to a quantitative verification and synthesis framework in which verification and synthesis work in tandem to analyze the quality of a controller with respect to, e.g., robustness against modeling errors. We also include the possibility of approximating the Pareto curve that emerges from combining the model with multiple rewards, which allows us both to study the trade-offs inherent in the system and to choose a suitable configuration. We apply our framework to several case studies, the largest of which concerns the currently proposed next-generation airborne collision avoidance system (ACAS X). We use our framework to help analyze the design space of the system and to validate the controller currently under investigation by the FAA; in particular, we contribute analysis via PCTL and stochastic model checking to add to the confidence in the controller.
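One common way to formalise a ratio objective is the long-run ratio of average rewards; the sketch below (my own illustration, not code from the thesis) evaluates a fixed memoryless strategy by computing that ratio from the stationary distribution of the induced Markov chain.

```python
import numpy as np

def long_run_reward_ratio(P, reward_num, reward_den):
    """Ratio of long-run average rewards under a fixed memoryless strategy.

    P          : transition matrix of the Markov chain induced by the strategy
    reward_num : per-state reward in the numerator
    reward_den : per-state reward in the denominator (assumed positive on average)
    Assumes a single recurrent class, so the stationary distribution is unique.
    """
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # pi P = pi  and  sum(pi) = 1
    b = np.concatenate([np.zeros(n), [1.0]])
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return (pi @ reward_num) / (pi @ reward_den)

# toy 2-state chain: reward 1 per visit to state 0, cost 1 per step
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(long_run_reward_ratio(P, np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # ~0.833
```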
34

Yum, Minchul. "Essays in Quantitative Macroeconomics." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429444230.

35

Kattenbelt, Mark Alex. "Automated quantitative software verification." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:62430df4-7fdf-4c4f-b3cd-97ba8912c9f5.

Abstract:
Many software systems exhibit probabilistic behaviour, either added explicitly, to improve performance or to break symmetry, or implicitly, through interaction with unreliable networks or faulty hardware. When employed in safety-critical applications, it is important to rigorously analyse the behaviour of these systems. This can be done with a formal verification technique called model checking, which establishes properties of systems by algorithmically considering all execution scenarios. In the presence of probabilistic behaviour, we consider quantitative properties such as "the worst-case probability that the airbag fails to deploy within 10ms", instead of qualitative properties such as "the airbag eventually deploys". Although many model checking techniques exist to verify qualitative properties of software, quantitative model checking techniques typically focus on manually derived models of systems and cannot directly verify software. In this thesis, we present two quantitative model checking techniques for probabilistic software. The first is a quantitative adaptation of a successful model checking technique called counter-example guided abstraction refinement which uses stochastic two-player games as abstractions of probabilistic software. We show how to achieve abstraction and refinement in a probabilistic setting and investigate theoretical extensions of stochastic two-player game abstractions. Our second technique instruments probabilistic software in such a way that existing, non-probabilistic software verification methods can be used to compute bounds on quantitative properties of the original, uninstrumented software. Our techniques are the first to target real, compilable software in a probabilistic setting. We present an experimental evaluation of both approaches on a large range of case studies and evaluate several extensions and heuristics. We demonstrate that, with our methods, we can successfully compute quantitative properties of real network clients comprising approximately 1,000 lines of complex ANSI-C code — the verification of such software is far beyond the capabilities of existing quantitative model checking techniques.
36

KASE, Hanno. "Essays in quantitative macroeconomics." Doctoral thesis, European University Institute, 2021. https://hdl.handle.net/1814/73515.

Abstract:
Defence Date: 21 December 2021
Examining Board: Prof. David Levine (EUI); Prof. Jesus Bueren (EUI); Prof. Aldo Rustichini (University of Minnesota); Prof. Galo Nuño (Banco de España)
This thesis consists of three essays in quantitative macroeconomics. In Chapter 1, joint with Leonardo Melosi and Matthias Rottner, we leverage recent developments in machine learning to develop methods to solve and estimate large and complex nonlinear macroeconomic models, e.g. HANK models. Our method relies on neural networks because of their appealing feature that even models with hundreds of state variables can be solved. Likelihood estimation requires solving the model repeatedly, which is infeasible for highly complex models; we overcome this problem by exploiting the scalability of neural networks. Including the parameters of the model as quasi-state variables in the neural network, we solve this extended neural network and apply it directly in the estimation. To show the potential of our approach, we estimate a quantitative HANK model that features nonlinearities at the individual (borrowing limit) and aggregate (zero lower bound) level using simulated data. The model also shows that there is an important economic interaction between the impact of the zero lower bound and the degree of household heterogeneity. Chapter 2 studies the impact of macroprudential limits on mortgage lending in a heterogeneous-agent life-cycle model with incomplete markets, long-term mortgages, and default. The model is calibrated to the German economy using Household Finance and Consumption Survey data. I consider the effects of four policy instruments: a loan-to-value limit, a debt-to-income limit, a payment-to-income limit, and a maximum maturity. I find that their effect on the homeownership rate is fairly modest. Only the loan-to-value limit significantly reduces the homeownership rate among young households. At the same time, it has the largest positive welfare effect. Chapter 3 explores applications of the backpropagation algorithm to heterogeneous-agent models. In addition, I clarify the connection between deep learning and dynamic structural models by showing how a standard value function iteration algorithm can be viewed as a recurrent convolutional neural network. As a result, many advances in the field of machine learning can carry over to economics. This in turn makes the solution and estimation of more complex models feasible.
1. Solving and Estimating Macroeconomic Models of the Future 2. Limits on Mortgage Lending 3. Backpropagating Through Heterogeneous Agent Models
37

Podlipská, J. (Jana). "Non-invasive semi-quantitative and quantitative ultrasound imaging for diagnostics of knee osteoarthritis." Doctoral thesis, Oulun yliopisto, 2016. http://urn.fi/urn:isbn:9789526214351.

Abstract:
Osteoarthritis (OA) is a common degenerative disease of synovial joints that becomes more frequent with age. The pain, stiffness and functional disability caused by OA negatively affect quality of life, so early diagnosis is essential in order to prevent the manifestation of symptoms and further progression of OA. Ultrasonography has the potential to detect various abnormalities in the knee joint, but its place in clinical practice remains uncertain. The present study aimed to determine the diagnostic performance of semi-quantitative wide-area ultrasound (US) scanning of knee femoral cartilage degeneration, osteophytes and meniscal extrusion, using magnetic resonance imaging as the reference tool. The diagnostic ability of conventional radiography (CR) was also determined and the performance of the two modalities compared. Subsequently, the association of structural US findings with knee pain and function was investigated. Finally, a quantitative US image analysis focusing on the detection and evaluation of subchondral bone integrity in early OA was developed, and the quantitative US outcomes were compared with CR and arthroscopy. Tibio-femoral osteophytes, medial meniscal extrusion and medial femoral cartilage morphological degeneration were identified by US with accuracy better than, or at least comparable to, that of CR, in which joint space narrowing was used as a composite measure of cartilage damage and meniscal extrusion. The global femoral cartilage grade was strongly associated with increased pain and disability; site-specifically, medial cartilage degeneration and lateral femoral osteophytes in particular were associated with increased pain and disability. Regarding the quantitative outcomes, a significant increase in US intensity at femoral subchondral bone depths of 0.35–0.7 mm and a decrease in the intensity slope down to 0.7 mm depth were observed with radiographic or arthroscopic OA progression. Novel wide-area US scanning provides relevant additional diagnostic information on tissue-specific OA pathology not depicted by CR, and US-detected changes of the femoral cartilage and osteophytes are associated with clinical symptoms. Consequently, the use of US as a complementary imaging tool along with CR may enable more accurate diagnosis of knee OA, and the quantitative US analysis developed here is a promising tool for the detection of femoral subchondral bone changes in knee OA.
38

Albert, Elise. "Déterminants génétiques et génomiques de la réponse au déficit hydrique chez la tomate (Solanum lycopersicum) et impact sur la qualité des fruits." Thesis, Avignon, 2017. http://www.theses.fr/2017AVIG0688/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Globally, the decline in water resources has become one of the main limiting factors for agricultural production. To date, high-throughput genomic approaches in model species have identified hundreds of genes potentially involved in plant survival under drought conditions, but very few of them have beneficial effects on crop quality and yield. Nevertheless, applying a well-controlled water deficit can improve the quality of fleshy fruits through dilution and/or accumulation of major taste compounds. In this context, the first part of the thesis aimed to decipher the genetic determinants of the response to water deficit in tomato by exploring 'genotype x irrigation level' (G x I) and 'QTL x irrigation level' (QTL x I) interactions in two populations. The first population consisted of a set of recombinant inbred lines (RILs) derived from a cross between two cultivated accessions, while the second was composed of diverse small-fruited accessions mostly originating from South America. The plants were phenotyped for a set of agronomic traits (plant vigour and fruit quality) and genotyped for thousands of SNPs. The data were analysed using linkage and association mapping methodologies, allowing the identification of QTLs and putative candidate genes for the response of tomato to water deficit. The second part of the thesis aimed to explore gene regulation in tomato fruits and leaves under water deficit. For this purpose, transcriptome sequencing data were collected on the two parental genotypes of the RIL population and their F1 hybrid. The data were analysed to identify differentially expressed genes and alleles. Then, the expression of 200 genes was measured in the fruits and leaves of all the lines of the RIL population by high-throughput microfluidic qPCR. eQTLs and 'eQTL x irrigation level' interactions were identified for these genes by linkage mapping, and co-localisations between the phenotypic QTLs and the expression QTLs were analysed. The knowledge produced during this thesis contributes to a better understanding of the interactions of tomato plants with their environment and provides a basis for improving fruit quality under limited irrigation.
Water scarcity will constitute a crucial constraint for agricultural productivity in the near future. High-throughput approaches in model species have identified hundreds of genes potentially involved in survival under drought conditions, but very few have beneficial effects on quality and yield in crop plants. Nonetheless, controlled water deficits may improve fleshy fruit quality through weaker dilution and/or accumulation of nutritional compounds. In this context, the first part of the PhD was aimed at deciphering the genetic determinants of the phenotypic response to water deficit in tomato by exploring the genotype by watering regime (G x W) and QTL by watering regime (QTL x W) interactions in two populations. The first population consisted of recombinant inbred lines (RILs) from a cross between two cultivated accessions, and the second was composed of diverse small-fruited tomato accessions mostly native to South America. Plants were phenotyped for major plant and fruit quality traits and genotyped for thousands of SNPs. Data were analyzed within the linkage and association mapping frameworks, allowing the identification of QTLs and putative candidate genes for the response to water deficit in tomato. The second part of the PhD had the objective of exploring gene regulation in green fruit and leaves of tomato plants stressed by water deficit. For this purpose, RNA-Seq data were collected on the two parental genotypes of the RIL population and their F1 hybrid. Data were analyzed to identify differentially expressed genes and allele-specific expression (ASE). Then, the expression of 200 genes was measured in leaves and fruits of the whole RIL population by high-throughput microfluidic qPCR. eQTLs and eQTL by watering regime interactions were mapped for those genes using linkage mapping. Colocalisations with the phenotypic QTLs were analyzed. The knowledge produced during this PhD will contribute to a better understanding of the interaction of tomato plants with their environment and provide a basis for improvement of fruit quality under limited water supply.
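One common way to test the 'QTL x watering regime' (QTL x W) interactions described above is a single-marker linear model with a genotype-by-regime term, compared against the purely additive model by an F-test. The sketch below illustrates that idea on invented data; the trait name, marker coding and effect sizes are assumptions, and this is not the mapping pipeline actually used in the thesis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# toy data: one SNP marker scored in a RIL population grown under two watering regimes
n = 200
df = pd.DataFrame({
    "marker": rng.choice(["AA", "BB"], n),            # RIL genotype at the marker
    "regime": rng.choice(["control", "deficit"], n),  # watering regime
})
effect = (df["marker"] == "BB") * 2.0 + (df["regime"] == "deficit") * -3.0
effect += ((df["marker"] == "BB") & (df["regime"] == "deficit")) * 1.5   # QTL x W term
df["fruit_sugar"] = 10.0 + effect + rng.normal(0.0, 1.0, n)

# additive model vs. model with the QTL-by-watering-regime interaction
additive = smf.ols("fruit_sugar ~ C(marker) + C(regime)", data=df).fit()
interaction = smf.ols("fruit_sugar ~ C(marker) * C(regime)", data=df).fit()
print(sm.stats.anova_lm(additive, interaction))   # F-test of the interaction term
```

Scanning such a model marker by marker and recording where the interaction term is significant is a standard way to localise QTL-by-environment effects; the thesis pursued the same goal within linkage and association mapping frameworks.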
39

Bode, Oliver. "Quantitative Analyse dynamischer nichtlinearer Panelmodelle." [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=967138493.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Mak, Hoi-yan Jennifer. "Quantitative measures of supraglottic activity." Click to view the E-thesis via HKUTO, 2002. http://sunzi.lib.hku.hk/hkuto/record/B36208401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (B.Sc.)--University of Hong Kong, 2002.
"A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, May 10, 2002." Also available in print.
41

Rohde, Gustavo Kunde. "Registration methods for quantitative imaging." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Applied Mathematics and Scientific Computation. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
42

Pesavento, Andreas. "Quantitative Ultraschallabbildungsverfahren für die Muskeldiagnostik." [S.l.] : [s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=959563660.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Braun, Christelle. "Quantitative Approaches to Information Hiding." Phd thesis, Ecole Polytechnique X, 2010. http://tel.archives-ouvertes.fr/tel-00527367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kühnle, Tim. "Quantitative Analysis of Human Chronotypes." Diss., lmu, 2006. http://nbn-resolving.de/urn:nbn:de:bvb:19-51686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Hergum, Torbjørn. "3D Ultrasound for Quantitative Echocardiography." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for sirkulasjon og bildediagnostikk, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-5937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Medical ultrasound imaging is widely used to diagnose cardiac diseases. The recent availability of real time 3D ultrasound poses several interesting challenges and opportunities, and the work of this thesis is devoted to both challenges and opportunities. One of the key benefits of ultrasound imaging is that its images are real time. This has been challenged with the recent introduction of 3D images, where the number of ultrasound beams is squared compared to traditional 2D images. One common way to alleviate this is by receiving several closely spaced ultrasound beams from each pulse transmission, which increases acquisition speed but affects the image quality. Specifically, B-mode images are irregularly sampled and lose spatial shift invariance, while a bias in the Doppler velocity estimates causes a discontinuity in the velocity estimates in color flow images. We have found that these artifacts can be reduced significantly by interpolation of the beamformed data from overlapping beams, with the limitation of requiring at least twice the number of beamformers. We have also found that valvular regurgitation is one of the cardiac diseases that can benefit greatly from quantification of severity using 3D ultrasound. We have devised a modality that uses high pulse repetition frequency 3D Doppler to isolate the backscattered signal power from the vena contracta of a regurgitant jet. This measure is calibrated with a narrow reference beam inside the jet to estimate the cross-sectional area of the vena contracta. We have validated this method with computer simulations, with an in vitro study and finally in vivo with 27 patients who had mitral regurgitation. We found that the cross-sectional area and regurgitant volume of the vena contracta could be quantified without bias as long as the orifice was sufficiently large for a calibration beam to fit inside it. The severity of smaller regurgitations will be overestimated, but this does not pose a clinical problem, as these patients can easily be identified by standard 2D Doppler examination and do not typically need further quantification. Finally, we have developed a new, fast 3D ultrasound simulation method that can incorporate anisotropic scattering from cardiac muscle cells. This approach is three orders of magnitude faster than the most commonly used simulation methods, making it well suited for the simulation of dynamic 3D images for development and testing of quantitative diagnostic methods such as 3D speckle tracking and volumetric measurements.
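The jet-power calibration described above reduces, in its simplest form, to scaling the known cross-sectional area of the narrow reference beam by the ratio of backscattered Doppler power in the broad measurement beam to that in the reference beam, assuming the reference beam lies entirely inside the vena contracta. The snippet below shows only that simplified relation with made-up numbers; the function name and units are assumptions, not the thesis's implementation.

```python
def vena_contracta_area(power_wide_beam, power_reference_beam, reference_area_cm2):
    """Cross-sectional area of the vena contracta estimated from backscattered
    Doppler power, calibrated by a narrow reference beam assumed to lie
    entirely inside the regurgitant jet."""
    return reference_area_cm2 * power_wide_beam / power_reference_beam

# toy numbers: the wide beam collects 4x the power of a 0.1 cm^2 reference beam
print(vena_contracta_area(4.0e-3, 1.0e-3, 0.1))   # -> 0.4 cm^2
```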
46

Dahlström, Christina. "Quantitative microscopy of coating uniformity." Doctoral thesis, Mittuniversitetet, Institutionen för tillämpad naturvetenskap och design, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16454.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Print quality demands for coated papers are steadily growing, and achieving coating uniformity is crucial for high image sharpness, colour fidelity, and print uniformity. Coating uniformity may be divided into two scales: coating thickness uniformity and coating microstructure uniformity, the latter of which includes pigment, pore and binder distributions within the coating layer. This thesis concerns the investigation of both types of coating uniformity using an approach of quantitative microscopy. First, coating thickness uniformity was analysed using scanning electron microscope (SEM) images of paper cross sections, and the relationships between local coating thickness variations and the variations of underlying base sheet structures were determined. Special attention was given to the effect of length scales on the coating thickness vs. base sheet structure relationships. The experimental results showed that coating thickness had a strong correlation with the surface height (profile) of the base sheet at a small length scale. However, at a large length scale, it was the mass density of the base sheet (formation) that had the strongest correlation with coating thickness. This result explains well the discrepancies found in the literature for the relationship between coating thickness variation and base sheet structure variations. The total variance of coating thickness, however, was dominated by the surface height variation at the small scale, which explained around 50% of the variation. Autocorrelation analyses were further performed for the same data set. The autocorrelation functions showed a close resemblance to the one for a random shot process, with a correlation length in the order of the fibre width. All these results suggest that coating thickness variations are the result of random deposition of particles, with the correlation length determined by the base sheet surface textures, such as fibre width. In order to obtain a fundamental understanding of random deposition processes on a rough surface, such as in paper, a generic particle deposition model was developed, and systematic analyses were performed for the effects of particle size, coat weight (average number of particles), levelling, and system size on coating thickness variation. The results showed that coating thickness variation grows with coat weight, but beyond a certain coat weight it reaches a plateau value. A scaling analysis yielded a universal relationship between coating thickness variation and the above-mentioned variables. The correlation length of coating thickness was found to be determined by the average coat weight and the state of the underlying surface. For a rough surface at relatively low coat weight, the correlation length was typically in the range of the fibre width, as was also observed experimentally. Non-uniformities within the coating layer, such as porosity variations and binder distributions, were investigated using a newly developed method: field emission scanning electron microscopy (FESEM) in combination with an argon ion beam milling technique. The combination of these two techniques produced extremely high quality images with very few artefacts, which are particularly suited for quantitative analyses of coating structures.
A new evaluation method was also developed using marker-controlled watershed segmentation (MCWS) of the secondary electron images (SEI). The high-resolution imaging revealed that binder enrichment, a long-disputed subject in the area, is present in a thin layer of 500 nm thickness both at the coating surface and at the base sheet/coating interface. It was also found that the binders almost exclusively fill up the small pores, whereas the larger pores are mainly empty or depleted of binder.
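The generic particle deposition model mentioned above can be illustrated, in its most stripped-down form, as pure random deposition onto a row of columns. The sketch below keeps only that core mechanism; particle size, levelling and coupling to the base-sheet surface, the ingredients that produce the reported plateau and correlation length, are deliberately left out, and all names and numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_deposition(n_columns, mean_coat_weight, particle_height=1.0):
    """Pure random deposition: each particle lands in a uniformly random
    column and stacks, so the local coating thickness is the local
    particle count times the particle height."""
    n_particles = mean_coat_weight * n_columns
    counts = np.bincount(rng.integers(0, n_columns, n_particles),
                         minlength=n_columns)
    return counts * particle_height

for coat_weight in (1, 4, 16, 64):
    thickness = random_deposition(5000, coat_weight)
    # for a Poisson-like deposit the standard deviation grows as sqrt(coat weight)
    print(coat_weight, round(float(thickness.std()), 2))
```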
47

McGinnity, Colm Joseph. "Quantitative imaging in epilepsy (PET)." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/40095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Introduction: Epilepsy is a heterogeneous collection of neurological diseases characterised clinically by recurrent seizures. Pre-clinical models implicate derangements in ligand-gated receptor-mediated neurotransmission in seizure generation and termination. In this thesis, the author quantified activated N-methyl-D-aspartate (NMDA) and opioid peptide receptor availability in adults with focal epilepsy. Methods: This thesis consists of three positron emission tomography (PET) studies of adults with focal epilepsy, using the [18F]GE-179 (activated NMDA receptors) and [11C]diprenorphine (DPN; opioid receptors) radioligands. A novel resolution-recovery technique, Structural Functional Synergistic-Resolution Recovery (SFS-RR), was applied to pre-existing paired [11C]DPN PET datasets acquired from adults with temporal lobe epilepsy (TLE). Activated NMDA receptor availability was quantified in adults with frequent interictal epileptiform discharges (IEDs) by regional compartmental modelling and model-free voxelwise analyses. Statistical parametric mapping was used to identify significant differences in volumes of distribution (VT) between populations. Results: [18F]GE-179 had good brain extraction, with a relatively homogeneous distribution and moderately paced kinetics in grey matter. A model with two brain compartments and four rate constants best described the radioligand's kinetics in grey matter. Global increases in [18F]GE-179 VT were seen for seven of 11 participants with frequent IEDs. Focal increases in [18F]GE-179 VT of up to nearly 24% were also identified for three of the 11 participants. A post-ictal increase in [11C]DPN VT was identified in the ipsilateral parahippocampal gyrus. Discussion: This first-in-man evaluation of [18F]GE-179 evidenced several properties that are desirable in PET radioligands, but the specificity of binding requires further characterisation. The results suggest focal increases in activated NMDA receptor availability in participants with refractory focal epilepsy, and also post-ictal increases in opioid peptide availability in the parahippocampal gyrus in TLE. Both findings may have pathophysiological relevance, and illustrate the potential of quantitative ligand PET with advanced post-processing to investigate changes in inhibitory and excitatory receptor systems in the epilepsies in vivo.
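For the model with two brain compartments and four rate constants referred to above, the total volume of distribution follows the standard relation VT = (K1/k2)(1 + k3/k4). The snippet below merely encodes that textbook formula; the rate-constant values are illustrative and are not results from the study.

```python
def total_volume_of_distribution(K1, k2, k3, k4):
    """V_T for the standard two-tissue-compartment, four-rate-constant model:
    V_T = (K1 / k2) * (1 + k3 / k4)."""
    return (K1 / k2) * (1.0 + k3 / k4)

# illustrative rate constants (k2-k4 in min^-1, K1 in mL/cm^3/min), not study values
print(total_volume_of_distribution(K1=0.12, k2=0.18, k3=0.05, k4=0.03))
```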
48

Chen, Lin. "Causal modeling in quantitative genomics /." Thesis, Connect to this title online; UW restricted, 2008. http://hdl.handle.net/1773/9577.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hampel, Uwe. "Quantitative und bildgebende Nahinfrarot-Gewebediagnostik /." [S.l. : s.n.], 2005. http://www.gbv.de/dms/ilmenau/toc/513691421.PDF.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Whitcomb, Richard W. "Quantitative ultrasonic evaluation of concrete." Thesis, Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/19004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
