Dissertations / Theses on the topic 'Quantitative methods'

Consult the top 50 dissertations / theses for your research on the topic 'Quantitative methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Rohde, Gustavo Kunde. "Registration methods for quantitative imaging." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2938.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Applied Mathematics and Scientific Computation. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
2

REN, XIAOHUI. "COMPARING QUANTITATIVE ASSOCIATION RULE METHODS." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1089133333.

3

Yong, Florence Hiu-Ling. "Quantitative Methods for Stratified Medicine." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17463130.

Abstract:
Stratified medicine has tremendous potential to deliver more effective therapeutic intervention to improve public health. For practical implementation, reliable prediction models and clinically meaningful categorization of some comprehensible summary measures of individual treatment effect are vital elements to aid the decision-making process and bring stratified medicine to fruitful realization. We tackle the quantitative issues involved on three fronts: 1) prediction model building and selection; 2) reproducibility assessment; and 3) stratification. First, we propose a systematic model development strategy that integrates cross-validation and predictive accuracy measures into the prediction model building and selection process. Valid inference is made possible via internal holdout samples or external data evaluation to enhance the generalizability of the selected prediction model. Second, we employ parametric or semi-parametric modeling to derive individual treatment effect scoring systems. We introduce a stratification algorithm with constrained optimization, utilizing dynamic programming and supervised-learning techniques to group patients into different actionable categories. We integrate the stratification and a newly proposed prediction performance metric into the model development process. The methodologies are first presented for the single-treatment case and then extended to two-treatment cases. Finally, adapting the concept of uplift modeling, we provide a framework to identify the subgroup(s) with the most beneficial prospect, as well as wasteful, harmful, and futile subgroups, to save resources and reduce unnecessary exposure to treatment adverse effects. The proposals are illustrated with AIDS clinical study data and cardiology studies for non-censored and censored outcomes. The contribution of this dissertation is to provide an operational framework that bridges predictive modeling and decision making for more practical applications in stratified medicine.
Biostatistics
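As a rough illustration of the kind of cross-validated model building and selection referred to above, the sketch below compares candidate models by cross-validated predictive accuracy and keeps the best one for holdout evaluation; the estimator, metric and data are placeholders, not those of the dissertation.

```python
# Generic cross-validated model-selection step (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))                  # baseline covariates
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))     # treatment-response indicator

candidates = {c: LogisticRegression(C=c, max_iter=1000) for c in (0.01, 0.1, 1.0)}
scores = {c: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
          for c, m in candidates.items()}
best = max(scores, key=scores.get)                  # model retained for holdout evaluation
print(scores, best)
```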
4

Ramya, Sravanam Ramya. "Empirical Study on Quantitative Measurement Methods for Big Image Data : An Experiment using five quantitative methods." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13466.

Abstract:
Context. With the increasing demand for image processing in multimedia applications, research on image quality assessment has received great interest. The goal of Image Quality Assessment is to find efficient Image Quality Metrics that correlate closely with human visual perception, and over the last three decades much effort has been put into the field, producing numerous papers and emerging Image Quality Assessment techniques. In this regard, emphasis is given to Full-Reference Image Quality Assessment research, where quality measurement algorithms are analysed against the original reference image, as this is much closer to perceptual visual quality. Objectives. In this thesis we investigate five widely used Image Quality Metrics (Peak Signal to Noise Ratio (PSNR), Structural SIMilarity Index (SSIM), Feature SIMilarity Index (FSIM), Visual Saliency Index (VSI), and Universal Quality Index (UQI)), perform an experiment on a chosen image dataset (images with different types of distortions arising from different image processing applications), and find the most efficient metric with respect to the dataset used. This analysis could be helpful to researchers working on big image data projects, where selection of an appropriate Image Quality Metric is of major significance. Our study details the dataset used and the experimental results, where the image set highly influences the results. Methods. The goal of this study is achieved by conducting a literature review to investigate existing Image Quality Assessment research and Image Quality Metrics, and by performing an experiment. The image dataset used in the experiment was prepared from the LIVE Image Quality Assessment database. Matlab was used to run the image processing experiment. Descriptive analysis (including statistical analysis) was employed to analyse the results obtained from the experiment. Results. For the distortion types involved (JPEG 2000 compression, JPEG compression, white Gaussian noise, and Gaussian blur), SSIM was the most efficient at measuring image quality after distortion for JPEG 2000 compressed and white Gaussian noise images, and PSNR was the most efficient for JPEG compressed and Gaussian blurred images, with respect to the original image. Conclusions. From this study it is evident that SSIM and PSNR are efficient for Image Quality Assessment on the dataset used, and that the level of distortion in the image dataset highly influences the results; in our case SSIM and PSNR perform efficiently for the database used.
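As a hedged illustration of two of the metrics compared above, PSNR and SSIM can be computed for a reference/distorted image pair as follows; the synthetic images stand in for the LIVE database images used in the experiment, and the thesis itself used Matlab rather than Python.

```python
# Illustrative full-reference quality metrics (not the thesis code).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(reference: np.ndarray, distorted: np.ndarray) -> dict:
    """Return PSNR and SSIM of a distorted image against its reference."""
    data_range = reference.max() - reference.min()
    return {
        "psnr": peak_signal_noise_ratio(reference, distorted, data_range=data_range),
        "ssim": structural_similarity(reference, distorted, data_range=data_range),
    }

rng = np.random.default_rng(0)
ref = rng.random((128, 128))                                        # stand-in reference image
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)  # white-noise distortion
print(compare(ref, noisy))
```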
5

Yelchuru, Ramprasad. "Quantitative methods for controlled variables selection." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for kjemisk prosessteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-17539.

Abstract:
Optimal operation is important to improve productivity, to be more competitive, and therefore to increase profitability. Optimal operation can be viewed as constituting the control layer (supervisory layer plus regulatory layer) and the optimization layer in the hierarchical decomposition of plantwide control. The task of the control layer is to keep controlled variables at given set points, and the task of the optimization layer is to provide optimal set points. For simple implementation, we want to update the set points less frequently while incurring an acceptable loss in the presence of disturbances. This can be achieved by appropriate controlled variables selection and keeping them at constant set points. This approach is termed “self-optimizing control”, as it automatically leads the operation close to optimal operation. Physically, in self-optimizing control, the selected controlled variables can be seen as the set of variables whose optimal values are insensitive to disturbances, so controlling these (at constant set points) reduces the need for frequent set point updates. The selected controlled variables obtained in self-optimizing control link the optimization layer and the control layer. Self-optimizing control provides a mathematical framework, and we use this framework to select the controlled variables c as linear combinations of measurements y, c = Hy, with the aim of minimizing the steady-state loss from optimal operation. In self-optimizing control, we keep the controlled variables c at constant set points using feedback, and this feedback introduces implementation errors. The focus of this thesis is to devise systematic and good methods to arrive at controlled variables by finding the optimal H that minimizes the steady-state loss of optimality in the presence of both disturbances and implementation errors. There are three main contributions in this thesis. The first contribution is to provide (i) a convex formulation to find the optimal combination matrix H for a given measurement set, and (ii) a Mixed-Integer Quadratic Programming (MIQP) methodology to select optimal measurement subsets that result in minimal steady-state loss in the presence of disturbances. The methods provided in this thesis are exact for quadratic problems with linear measurement relations. The MIQP methods can handle additional structural constraints compared to the Branch and Bound (BAB) methods reported in the literature for these problems. The MIQP methods are evaluated on a toy example, an evaporator example, a binary distillation column example with 41 stages and a Kaibel column example with 71 stages. The second contribution is to develop convex approximation methods that incorporate structural constraints to improve dynamic controllability properties, such as fast response, control loop localization and reduced time delays between the manipulated variables (u) and the controlled variables (c). For these cases, H is structured, for example decentralized H or triangular H. A decentralized H obtains c as a combination of measurements from an individual process unit. These structured H cases in self-optimizing control are non-convex. Hence, we propose a few new ideas and convex approximation methods to obtain good upper bounds for these structured H problems. The proposed methods are evaluated on random cases, an evaporator case study and a binary distillation column case study with 41 stages.
The third contribution is to extend the self-optimizing control ideas to find optimal controlled variables in the regulatory layer. The regulatory layer is designed to facilitate stable operation, to regulate, and to keep the operation in the linear operating range. The regulatory layer performance is quantified using the state drift criterion. A quantitative method for regulatory layer selection with one, two or more closed loops is proposed to minimize the drift in states. The proposed quantitative methods are evaluated on case studies of a distillation column with 41 stages and a Kaibel column with 71 stages. To summarize, in self-optimizing control, for selecting the controlled variables c as linear combinations of measurements y, c = Hy, (a) we developed MIQP methods that belong to a convex subclass to find the globally optimal H and optimal measurement subsets; (b) we developed convex approximation methods to find good upper bounds and optimal decentralized/triangular H and optimal measurement subsets; (c) we extended the self-optimizing control concepts to find c in the regulatory layer and proposed a quantitative method that minimizes the state drift to arrive at an optimal regulatory layer with 1, 2 or more closed loops. In conclusion, we developed quantitative methods for controlled variables selection in both the supervisory layer and the regulatory control layer. We demonstrated the developed methods on a few representative case studies.
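For orientation, the steady-state loss minimised over H above is often written in the following exact local (quadratic) form; the notation is the one commonly used in the self-optimizing control literature, given here as a sketch of the problem class rather than the thesis' precise statement.

```latex
% Exact local formulation for selecting c = Hy in self-optimizing control
% (standard literature notation; a sketch, not the thesis' exact statement).
\min_{H}\; \bigl\| J_{uu}^{1/2}\,(H G^{y})^{-1} H\, Y \bigr\|_{F},
\qquad
Y = \begin{bmatrix} F W_{d} & W_{n} \end{bmatrix},
\qquad
c = H y .
```

Here G^y is the steady-state gain from the inputs u to the measurements y, F the optimal sensitivity of the measurements to disturbances, W_d and W_n the magnitudes of the disturbances and implementation errors, and J_uu the Hessian of the cost with respect to u.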
6

Hall, Emma Louise. "Quantitative methods to assess cerebral haemodynamics." Thesis, University of Nottingham, 2012. http://eprints.nottingham.ac.uk/12673/.

Abstract:
In this thesis, methods for the assessment of cerebral haemodynamics using 7 T Magnetic Resonance Imaging (MRI) are described. The measurement of haemodynamic parameters, such as cerebral blood flow (CBF), is an important clinical tool. Arterial Spin Labelling (ASL) is a non-invasive technique for CBF measurement using MRI. ASL methodology for ultra-high field (7 T) MRI was developed, including investigation of the optimal readout strategy. Look-Locker 3D-EPI is demonstrated to give large volume coverage, improving on previous studies. Applications of the methods developed to monitor functional activity, through flow or arterial blood volume, in healthy volunteers and in patients with low-grade gliomas using Look-Locker ASL are described. The effect of an increased level of carbon dioxide in the blood (hypercapnia) was studied using ASL and functional MRI; hypercapnia is a potent vasodilator and has a large impact on haemodynamics. These measures were used to estimate the increase in oxygen metabolism associated with a simple motor task. To study the physiology behind the hypercapnic response, magnetoencephalography was used to measure the impact of hypercapnia on neuronal activity. It was shown that hypercapnia induces widespread desynchronisation over a wide frequency range, up to ~50 Hz, with peaks in the sensory-motor areas. This suggests that hypercapnia is not iso-metabolic, which is an assumption of calibrated BOLD. A Look-Locker gradient echo sequence is described for the quantitative monitoring of gadolinium contrast agent uptake through the change in longitudinal relaxation rate. This sequence was used to measure cerebral blood volume in Multiple Sclerosis patients. Further development of the sequence yielded a high-resolution anatomical scan with reduced artefacts due to the field inhomogeneities associated with ultra-high field imaging. This allows whole-head images to be acquired at sub-millimetre resolution in a short scan time, for application in patient studies.
7

Mougin, Olivier. "Quantitative methods in high field MRI." Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11608/.

Abstract:
The increased signal-to-noise ratio available at high magnetic field makes it possible to acquire clinically useful MR images at higher resolution or to apply quantitative methods. The work in this thesis is focused on the development of quantitative imaging methods that overcome difficulties specific to high field MRI systems (> 3T). The protocols developed and presented here have been tested in various studies aiming at discriminating tissues based on their NMR properties. The quantities of interest in this thesis are the longitudinal relaxation time T1, as well as the magnetization transfer process, particularly the chemical exchange phenomenon involving amide protons, which is highlighted particularly well at 7T under specific conditions. Both quantities (T1 and amide proton transfer) are related to the underlying structure of tissues in vivo, especially inside the white matter of the brain. While a standard weighted image at high resolution can provide indices of the extent of a pathology, a robust measure of the NMR properties of brain tissues can detect earlier abnormalities. A method based on a 3D Turbo FLASH readout that reliably measures T1 in vivo for clinical studies at 7T is presented first. The other major part of this thesis presents magnetization transfer and chemical exchange phenomena. First, a quantitative method is investigated at 7T, leading to a new model for exchange as well as possibilities for contrast optimization in imaging. Results using these methods are presented and applied in a clinical setting, the main focus being to reliably image the brains of both healthy subjects and Multiple Sclerosis patients to examine myelin structures.
8

Suwignjo, Patdono. "Quantitative methods for performance measurement systems." Thesis, University of Strathclyde, 1999. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=21437.

Abstract:
The business environment has changed dramatically since the 1980s. Many researchers have shown that traditional financially-based performance measurement systems have failed to cope with the current dynamic business environment. Even though new performance measurement systems have been proposed, such as Activity-Based Costing, the Balanced Scorecard, the SMART system, the Performance Measurement Questionnaires and the Cambridge model, the problem of quantifying the interaction of the factors affecting business performance still remains. The objectives of this thesis are: 1. To develop a performance measurement system model that can be used to quantify the effects of factors on performance and consolidate them into a single performance indicator. 2. To develop a model for reducing the number of performance reports. 3. To carry out experiments to test the validity, applicability and stability of the models developed. To achieve these objectives this thesis reviews the research methodology literature, studies traditional and new performance measurement systems, identifies the current problems of performance measurement systems, reviews existing methods for identifying, structuring and prioritising performance measures, reviews multicriteria methods, studies the analytic hierarchy process (AHP) and its controversies, develops quantitative methods for performance measurement systems, and carries out experiments to test the validity, stability and applicability of the methods developed. To quantify the effect of factors on performance and consolidate them into a single performance indicator, a quantitative method for performance measurement systems (QMPMS) was developed. The method uses cognitive maps for identifying factors affecting performance and their relationships, structured diagrams for structuring the factors hierarchically, and the analytic hierarchy process for quantifying the effects of factors on performance. The method was then extended to reduce the number of performance reports. The QMPMS and its extension were implemented in three case studies to test their theoretical and application validity. The first case study applied the models to 'J&B Scotland Ltd.' to identify whether the models can produce the intended outputs. The second case study applied the QMPMS to 'Seagate Distribution (UK) Ltd.' to test the validity (accuracy) and stability of the QMPMS. Finally, the third case implemented the QMPMS to quantify and consolidate Inland Revenue, Cumbernauld's performance measures. It was found from the experiments that the QMPMS is quite accurate (the mean percentage of deviation is less than 4 percent), stable for a reasonable period of time, and can be applied comfortably to real cases. The QMPMS is now being used by the Inland Revenue - Cumbernauld for producing a single performance indicator of their business processes and overall office.
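Since the QMPMS consolidates factor effects with the analytic hierarchy process, a minimal, generic AHP prioritisation step is sketched below; the factors, comparison values and code are illustrative assumptions, not material from the thesis.

```python
# Generic AHP step: priority weights and consistency ratio from a pairwise
# comparison matrix (illustrative only, not the QMPMS implementation).
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> tuple[np.ndarray, float]:
    """Return priority weights and Saaty's consistency ratio (n >= 3)."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)                 # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    return w, ci / ri

# Three hypothetical factors affecting performance, compared pairwise.
A = np.array([[1, 3, 5],
              [1 / 3, 1, 2],
              [1 / 5, 1 / 2, 1]])
weights, cr = ahp_weights(A)
print(weights, cr)    # weights sum to 1; CR < 0.1 is conventionally acceptable
```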
9

Fredriksen, Tonje Dobrowen. "Quantitative Doppler Methods in Cardiovascular Imaging." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for sirkulasjon og bildediagnostikk, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-27300.

Abstract:
Ultrasound imaging of blood flow in the heart and blood vessels has become an essential part of diagnosing diseases related to the circulatory system. By using different Doppler methods, the blood flow may be visualized or quantified. In this work we take advantage of the opportunities given by the introduction of parallel processing of ultrasound data to develop new quantitative Doppler methods. Pulsed wave (PW) Doppler is a technique for measuring blood velocities, providing the full velocity spectrum in a specific region of interest. The maximum velocities may be found by delineation of the spectral envelope, and may be used to estimate the severity of stenoses or valve leakages. However, PW Doppler suffers from several challenges, which makes quantitative analysis problematic. To limit spectral broadening, we created a new method called 2-D tracking Doppler, which incorporates information from several parallel receive beams. Spectra with improved resolution and signal-to-noise ratio were produced for a large span of beam-to-flow angles. The new method was tested using in vitro and in vivo recordings. A signal model was derived and the expected Doppler power spectra were calculated, showing good agreement with experimental data. Experiments were performed to investigate how the 2-D tracking Doppler method depends on the tracking angle. It was shown that the spectra have lowest bandwidth and maximum power when the tracking angle is equal to the beam-to-flow angle. This may facilitate new techniques for velocity calibration. It was shown that the velocity calibration errors may be lower for the 2-D tracking Doppler method than for a conventional PW Doppler approach, and especially for large beam-to-flow angles. In heart disease, the quantification of valve regurgitation is a remaining challenge. In this thesis, we have investigated a new technique to estimate the size of regurgitant jets using spectral Doppler and parallel beamforming. A modality that uses high pulse repetition frequency 3-D Doppler was devised, to isolate the backscattered signal power from the vena contracta, that is the narrowest flow region of a regurgitant jet. A simulation study was performed to test and optimize the new method, suggesting a feasible setup for the transmit- and receive beams. Cross-sectional power Doppler images of simulated regurgitations of various sizes were generated, and the regurgitant volumes were accurately estimated. Since the velocity-time integral and the orifice area are extracted from a single recording, the proposed method may give more robust volume estimates than methods where the velocities and the area are measured from separate recordings.
10

Herling, Therese Windelborg. "Microfluidic methods for quantitative protein studies." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709392.

11

Duncan, Jonathan A. L. "Quantitative methods of cutaneous scar assessment." Thesis, University of Manchester, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.682240.

Abstract:
Scars and scarring are fundamental issues for every surgical, burns and trauma patient. As such, it is a phenomenon that we should be able to deal with in a simple and systematic fashion. However, at present there is no general agreement as to the most appropriate method, or combination of methods, for scar assessment which will take account of all characteristics. This is important, as there is a need for an accurate, objective and quantitative technique for scar assessment which will be able to assist in the diagnosis, monitoring and evaluation of the various scar management regimes available, as well as in the development of newer anti-scarring therapies. As part of two separate clinical trials investigating the scar-improving efficacy of transforming growth factor beta 3 (TGFBeta3), the first section of this thesis investigates the use of Visual Analogue Scale scoring and Scar Ranking as a global scar assessment tool. Four thousand two hundred and ninety-six photographic scar images were assessed by an external lay panel, using a newly devised computerized scar assessment system. The results obtained have shown the two methods to be consistent, reliable, feasible and truly valid in their assessments. Additionally, they were found to be highly sensitive and capable of measuring differences in scar quality. By comparing the Visual Analogue Scale scores of scar images with the associated clinical scar assessments, the Visual Analogue Scale scoring methodology was also found not only to give a true representation of the clinical or 'biological' reality of the scars, but to be superior to the more traditional forms of categorical scar scoring assessment. The second section of this thesis investigates instruments which objectively assess individual scar characteristics. Scars were assessed using: a SIAscope (for scar colour); a Visco-Elastic Skin Analyser (VESA), Reviscometer, Cutometer and Ballistometer (for scar mechanical properties); a Dermascan C ultrasound (for scar volume); and a Phase-shift Rapid In-vivo Measurement of Skin system (PRIMOS) (for scar surface characteristics). Each instrument was used to assess clinical characteristics of scars within the clinical trial. These scars were also assessed clinically and by the External Panel Visual Analogue Scale. The results obtained have shown each of the instruments to have a high level of sensitivity in the examination of the parameter for which it was intended, but that they do not, as individual tools, have the same general discriminatory capacity as the Visual Analogue Scale as an overall scar assessment tool. In the search for an overall objective scar assessment methodology, Visual Analogue Scale scar assessment by an external panel has been identified as today's premier method. The assessor's ability to integrate each characteristic of the scar, its severity, and its relation to the scar type results in a global integrated assessment of scar quality and appearance. The use of a panel of assessors adds to the objectivity of the assessment, making the External Panel Visual Analogue Scale scoring methodology a robust, sensitive tool for the assessment of cutaneous scars.
12

Caruso, Matteo. "On logical quantitative methods in politics." Thesis, IMT Alti Studi Lucca, 2021. http://e-theses.imtlucca.it/337/1/Caruso_phdthesis.pdf.

Abstract:
The first chapter introduces the methodology of logical quantitative models and its applications to political science. The second chapter explains the conversion of votes to seats. I use the law of minority attrition, expanding its form into a final model which is applicable from single-member districts to several electoral systems. The third chapter introduces the estimation of party seats from previous elections using a weighted regression with the following independent variables jointly: 1) the product of the assembly size and the district magnitude, 2) the past values of the biggest party's shares, and 3) the number of parties, both effective and simply counted. Chapter four develops a probability density function with five inflection points which describes any party system; it better captures the asymmetries in the distribution of party seats at the nationwide level. Chapter five implements the Downsian (or positional) competition model, which describes the left-right space occupied by each party through Beta functions, tested on the Italian elections from 1992 to 2018. Chapter six presents an in-depth qualitative analysis of the hypothesis that the more proportional an electoral system, the more parties tend towards centripetal competition, thus connecting ideological terms, the effective number of parties and the electoral system. In chapter seven, I suggest an alternative logical method to aggregate electoral flows, which resolves Goodman's problems and provides a simpler solution than that of G. King. Chapter eight provides tools to calculate more accurately the optimal value of S (Taagepera and Shugart, 1989, p. 175) and, for the first time, the optimal values of other institutional variables such as the district magnitude, Gallagher's index of dis-representation, and the dis-representation index attributable to an electoral system (De), originally developed in this thesis. Chapter nine seeks to determine an equilibrium between parties' and voters' "electoral utility", which is the quantity of dis-representation which benefits a group of parties and voters in the system while producing disutility for the others; this chapter enriches the law of minority attrition by including thresholds, majority premiums (MJPs) and strategic voting, using a basic game-theory approach and Rawls's "Maximin" theory (1971) as a benchmark for equality. Chapter ten provides an overview of the links among the new tools and knowledge developed in this thesis, with the final aim of the normative building of an optimal electoral system, which can warrant both logical coherence and social equity as categorized by Arrow (1951).
13

Luu, Philippe. "La subjectivité dans les méthodes quantitatives. Une étude des pratiques en Sciences de Gestion." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR0028.

Abstract:
In management sciences, quantitative methods carry two preconceived ideas. First, they refer almost exclusively to causal statistical methods. Second, the use of these methods is perceived as an indisputable guarantee of objectivity. Our work seeks to qualify these two points, and more particularly the question of objectivity. Quantitative methods generally fall within the post-positivist paradigm, where striving for objectivity consists in controlling as well as possible the conditions under which research is carried out. Scientific objectivity has a twofold dimension: an individual dimension specific to the researcher and a collective dimension based on the scrutiny of the community. The objective of our research is to show how subjectivity intervenes in each step of a quantitative research design. Our methodology is based on an exploratory case study conducted in a management science laboratory and the participant observation of a statistical engineer over a 10-year period. The units of study are 24 papers co-written by the participant observer during this period. Our results indicate, on the one hand, that the definition of quantitative methods can potentially be broadened: computer simulations or numerical optimization procedures can, for example, be included, even though they are neither causal nor even statistical techniques. On the other hand, our results illustrate the omnipresence of subjectivity in many quantitative techniques, including statistical ones. When processing data, the researcher faces multiple options at each of the following steps: the operationalization of concepts, data collection, sample preparation, and throughout the setup of the analysis. The presence of numerous trade-offs implies great variability in the results of a study. The value of our work is to increase the hope of achieving maximum and collective objectivity by presenting the points that the researcher must document carefully. Our call for transparency echoes the recommendations in the literature in response to the reproducibility crisis in scientific work, which currently affects all disciplines.
14

Eljarrat, Ascunce Alberto. "Quantitative methods for electron energy loss spectroscopy." Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/349214.

Abstract:
This thesis explores the analytical capabilities of low-loss electron energy loss spectroscopy (EELS), applied to disentangle the intimate configuration of advanced semiconductor heterostructures. Modern aberration corrected scanning transmission electron microscopy (STEM) allows extracting spectroscopic information from extremely constrained areas, down to atomic resolution. Because of this, EELS is becoming increasingly popular for the examination of novel semiconductor devices, as the characteristic size of their constituent structures shrinks. Energy-loss spectra contain a high amount of information, and since the electron beam undergoes well-known inelastic scattering processes, we can trace the features in these spectra down to elementary excitations in the atomic electronic configuration. In Chapter 1, the general theoretical framework for low-loss EELS is described. This formulation, the dielectric model of inelastic scattering, takes into account the electrodynamic properties of the fast electron beam and the quantum mechanical description of the materials. Low-loss EELS features originate both from collective modes (plasmons) and single electron excitations (e.g. band gap), which contain relevant chemical and structural information. The nature of these excitations and the inelastic processes involved has to be taken into account in order to analyze experimental data or to perform simulations. The computational tools required to perform these tasks are presented in Chapter 2. Among them, calibration, deconvolution and Kramers-Kronig analysis (KKA) of the spectrum constitute the most relevant procedures, which ultimately help obtain the dielectric information in the form of a complex dielectric function (CDF). This information may then be compared with that obtained by optical techniques or with the results from simulations. Additional techniques are explained, focusing first on multivariate analysis (MVA) algorithms that exploit the hyperspectral acquisition of EELS, i.e. spectrum imaging (SI) modes. Finally, an introduction to the density functional theory (DFT) simulations of the energy-loss spectrum is given. In Chapter 3, DFT simulations concerning (Al, Ga, In)N binary and ternary compounds are introduced. The prediction of properties observed in low-loss EELS of these semiconductor materials, such as the band gap energy, is improved in these calculations. Moreover, a super-cell approach allows obtaining the composition dependence of both band gap and plasmon energies from the theoretical dielectric response coefficients of ternary alloys. These results are exploited in the two following chapters, in which we experimentally probe structures based on group-III nitride binary and ternary compounds. In Chapter 4, two distributed Bragg reflector structures are examined (based upon AlN/GaN and InAlN/GaN multilayers, respectively) through different strategies for the characterization of composition from the plasmon energy shift. Moreover, HAADF image simulation is used to corroborate the obtained results; plasmon width, band gap energy and other features are measured; and KKA is performed to obtain the CDF of GaN. In Chapter 5, a multiple InGaN quantum well (QW) structure is examined. In these QWs (indium-rich layers a few nanometers in width), we carry out an analysis of the energy-loss spectrum taking into account delocalization and quantum confinement effects. We propose useful alternatives complementary to the study of plasmon energy, using KKA of the spectrum.
Chapters 6 and 7 deal with the analysis of structures that present pure silicon-nanocrystals (Si-NCs) embedded in silicon-based dielectric matrices. Our aim is to study the properties of these nanoparticles individually, but the measured low-loss spectrum always contains mixed signatures from the embedding matrix as well. In this scenario, Chapter 6 proposes the most straightforward solution: using a model-based fit that contains two peaks. Using this strategy, the Si-NCs embedded in an Er-doped SiO2 layer are characterized. Another strategy, presented in Chapter 7, uses computer-vision tools and MVA algorithms in low-loss EELS-SIs to separate the signature spectra of the Si-NCs. The advantages and drawbacks of this technique are revealed through its application to three different matrices (SiO2, Si3N4 and SiC). Moreover, the application of KKA to the MVA results is demonstrated, which allows the extraction of CDFs for the Si-NCs and surrounding matrices.
15

Svegemo, Malin, and Anna Asplund. "Quantitative thermal perception thresholds, comparison between methods." Thesis, Uppsala University, Department of Medical Biochemistry and Microbiology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7377.

Abstract:

Skin temperature is detected through signals in unmyelinated C-fibers and thin myelinated Aδ-fibers in the peripheral and central nervous system. Disorders of thin nerve fibres are important and not rare, but difficult to diagnose with the most common neurophysiological methods. In this pilot study, different methods for quantitative sensory testing (QST) were compared to give some indication of which method could be the most efficient for detecting injuries of the sensory system in clinical practice. The comparison was made between Békésy (separate warm and cold thresholds) and the Marstock test (combined warm and cold thresholds). The study also included the test persons' ratings of how difficult the tests were to perform.

The study showed that there was no practical difference between the tests, and the test persons' ratings gave no indication that the methods differed in difficulty. Our study did not give reason to stop measuring warm and cold detection thresholds separately, which is the international standard and has some theoretical advantages. We also compared detection thresholds for hand and foot, for warmth and cold, and for both slow and fast temperature changes, to highlight factors that could affect our measurement data.
16

Moran, Jodi. "Quantitative Testing of Probabilistic Phase Unwrapping Methods." Thesis, University of Waterloo, 2001. http://hdl.handle.net/10012/1107.

Abstract:
The reconstruction of a phase surface from the observed principal values is required for a number of applications, including synthetic aperture radar (SAR) and magnetic resonance imaging (MRI). However, the process of reconstruction, called phase unwrapping, is non-trivial; this thesis quantitatively tests probabilistic phase unwrapping methods.
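As a toy illustration of the problem described above (not code from the thesis), the measured phase is only known modulo 2π — its principal value — and unwrapping must recover the underlying surface:

```python
# Wrapping a smooth phase ramp to its principal values and unwrapping it again.
import numpy as np

true_phase = np.linspace(0, 12 * np.pi, 200)      # a smooth 1-D phase ramp
wrapped = np.angle(np.exp(1j * true_phase))       # principal values in (-pi, pi]
unwrapped = np.unwrap(wrapped)                    # simple 1-D unwrapping
print(np.allclose(unwrapped, true_phase))         # True for this noise-free ramp
```

In two dimensions and in the presence of noise, as in SAR and MRI data, such simple unwrapping is no longer sufficient, which is what motivates the probabilistic methods the thesis evaluates.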
17

Carlborg, Örjan. "New methods for mapping quantitative trait loci /." Uppsala : Dept. of Animal Breeding and Genetics, Swedish Univ. of Agricultural Sciences ([Institutionen för husdjurens genetik], Sveriges lantbruksuniv.), 2002. http://projkat.slu.se/SafariDokument/210.htm.

18

Wealands, Stephen Russell. "Quantitative methods for hydrological spatial field comparison /." Connect to thesis, 2006. http://eprints.unimelb.edu.au/archive/00002722.

19

Liu, Ting. "METHODS DEVELOPMENT IN QUALITATIVE AND QUANTITATIVE PROTEOMICS." UKnowledge, 2008. http://uknowledge.uky.edu/gradschool_diss/838.

Abstract:
Proteomics based on liquid chromatography coupled to mass spectrometry has developed rapidly in the last decade and become a powerful tool for the analysis of protein mixtures. LC-MS based proteomics involves four steps: sample preparation, liquid chromatography, mass spectrometry and bioinformatics. Improvements in each step have extended its applications to new biological research areas. This dissertation mainly focuses on method development in both qualitative and quantitative proteomics. The first part of this dissertation focuses on qualitative analysis of the T. gondii Parasitophorous Vacuole Membrane (PVM) proteins, which are very important for T. gondii's survival. The hypothesis of this study is that proteomic approaches coupled with immunoprecipitation using polyclonal antisera as affinity reagents can successfully characterize the proteome of the T. gondii PVM. The "Three-layer Sandwich Gel Electrophoresis" (TSGE) protocol was developed to enable efficient salt removal and protein concentration from challenging samples. Furthermore, TSGE coupled to 2D-LC-MS/MS was proven to be effective for the proteomic analysis of complex protein mixtures like T. gondii whole cell lysate, allowing for high-throughput protein analysis from complex samples. By using the TSGE-2D-LC-MS/MS methodology, we successfully identified 61 proteins from the PVM samples and constructed the PVM proteome. The second part of this dissertation describes a novel method for selecting an appropriate isocyanate reagent for potential quantitative proteomics applications. Our hypothesis is that altering the isocyanate structure will change the fragmentation pattern and ESI properties of isocyanate-modified peptides. The CID properties of peptides N-terminally modified with phenyl isocyanate (PIC), phenethyl isocyanate (PEIC) and pyridine-3-isocyanate (PyIC) were systematically studied using LC-ESI-MS/MS. We observed that adjusting the isocyanate structure changed both the ESI and fragmentation characteristics of the modified peptides. We rationalized that the decreased protonation of PIC- and PEIC-modified peptides results from the neutral character of both reagents. The electron-withdrawing feature of PyIC leads to a significant reduction of fragments during CID. Therefore, we designed a new isocyanate reagent, 3-(isocyanatomethyl)pyridine (PyMIC). The results revealed that PyMIC-modified peptides had more suitable ESI properties and generated more sequence-useful fragments compared to PIC, PyIC and even unmodified peptides. PyMIC is a more appropriate labeling reagent for quantitative proteomics applications.
20

Soonthornsaratoon, T. "Gradient-based methods for quantitative photoacoustic tomography." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1452208/.

Abstract:
Photoacoustic tomography (PAT) is showing its potential as a non-invasive biomedical imaging modality, and interest in the field is growing rapidly. The images possess excellent contrast, high spatial resolution and good specificity; however, they are largely qualitative and not directly representative of the optically absorbing structures of interest. Quantitative PAT (QPAT) aims to determine quantitatively accurate spatial maps of the underlying tissue chromophores, in order to obtain highly-resolved images of functional information such as blood oxygen saturation and haemoglobin concentration. PAT images are inherently three-dimensional (3D), and their high resolution means that the data sets are of an extremely large scale; a typical problem can easily possess 10⁷ unknowns. Existing methods for QPAT have failed to address their applicability to real, 3D PAT images, either by making restrictive approximations to the light model or by using computationally intensive techniques which are impractical for large-scale data sets. This thesis develops a practical inversion method for the full and general QPAT problem, in which the tissue geometry is arbitrary, the optical coefficients are unknown and the data is large-scale. The accuracy of the inversion method is ensured by use of the radiative transfer equation (RTE), which provides a highly accurate description of the propagation of light within biological tissue. Using the RTE, a thorough investigation into the effects of errors in the scattering coefficient on the reconstructed absorption coefficient is performed. Computational efficiency in the inversion is provided through an adjoint-assisted, gradient-based minimisation scheme, which iteratively adjusts the parameters of interest until the model prediction matches the measured data. Since the RTE proves too computationally intensive for large data sets, an extension to 3D simulated data is facilitated by the incorporation of the δ-Eddington approximation, thereby providing an accurate, efficient inversion method for QPAT that may be readily applied to experimental data.
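For orientation, the gradient-based inversion described above can be pictured as minimising a least-squares data-fit functional of the following general form; this is a sketch in common QPAT notation, and the thesis' exact functional, scaling and regularisation may differ.

```latex
% Generic QPAT objective: fit the modelled absorbed energy density
% \mu_a \Phi to the measured photoacoustic image H^{obs}.
\mathcal{E}(\mu_a,\mu_s)
  = \frac{1}{2}\sum_{j}\Bigl[\mu_a(\mathbf{r}_j)\,\Phi(\mathbf{r}_j;\mu_a,\mu_s)
      - H^{\mathrm{obs}}(\mathbf{r}_j)\Bigr]^{2},
\qquad
\mu_a \leftarrow \mu_a - \alpha\,\frac{\partial\mathcal{E}}{\partial\mu_a}.
```

Here Φ is the fluence predicted by the light model (the RTE, or its δ-Eddington approximation), and the functional gradient is obtained efficiently with the adjoint model, which is what keeps the iterative scheme tractable at large scale.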
21

Widman, Erik. "Ultrasonic Methods for Quantitative Carotid Plaque Characterization." Doctoral thesis, KTH, Medicinsk bildteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192339.

Abstract:
Cardiovascular diseases are the leading causes of death worldwide and improved diagnostic methods are needed for early intervention and to select the most suitable treatment for patients. Currently, carotid artery plaque vulnerability is typically determined by visually assessing ultrasound B-mode images, which is influenced by user-subjectivity. Since plaque vulnerability is correlated to the mechanical properties of the plaque, quantitative techniques are needed to estimate plaque stiffness as a surrogate for plaque vulnerability, which would reduce subjectivity during plaque assessment. The work in this thesis focused on three noninvasive ultrasound-based techniques to quantitatively assess plaque vulnerability and measure arterial stiffness. In Study I, a speckle tracking algorithm was validated in vitro to assess strain in common carotid artery (CCA) phantom plaques and thereafter applied in vivo to carotid atherosclerotic plaques where the strain results were compared to visual assessments by experienced physicians. In Study II, hard and soft CCA phantom plaques were characterized with shear wave elastography (SWE) by using phase and group velocity analysis while being hydrostatically pressurized followed by validating the results with mechanical tensile testing. In Study III, feasibility of assessing the stiffness of simulated plaques and the arterial wall with SWE was demonstrated in an ex vivo setup in small porcine aortas used as a human CCA model. In Study IV, SWE and pulse wave imaging (PWI) were compared when characterizing homogeneous CCA soft phantom plaques. The techniques developed in this thesis have demonstrated potential to characterize carotid artery plaques. The results show that the techniques have the ability to noninvasively evaluate the mechanical properties of carotid artery plaques, provide additional data when visually assessing B-mode images, and potentially provide improved diagnoses for patients suffering from cerebrovascular diseases.
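For context, shear wave elastography converts a measured shear wave speed into a stiffness estimate; the relation below is a commonly used textbook formula, not necessarily the exact model applied in the thesis.

```latex
% Shear modulus and Young's modulus from shear wave speed, assuming a locally
% homogeneous, isotropic, incompressible medium (generic SWE relation).
\mu = \rho\, c_s^{2},
\qquad
E \approx 3\mu = 3\rho\, c_s^{2},
```

where ρ is the tissue density, c_s the shear wave (group or phase) speed, μ the shear modulus and E the Young's modulus.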

Doctoral thesis in medical technology and medical sciences

22

Görtler, Jochen [Verfasser]. "Quantitative Methods for Uncertainty Visualization / Jochen Görtler." Konstanz : KOPS Universität Konstanz, 2021. http://d-nb.info/1238017924/34.

23

Ecke, Andreas. "Quantitative Methods for Similarity in Description Logics." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-223626.

Abstract:
Description Logics (DLs) are a family of logic-based knowledge representation languages used to describe the knowledge of an application domain and reason about it in a formally well-defined way. They allow users to describe the important notions and classes of the knowledge domain as concepts, which formalize the necessary and sufficient conditions for individual objects to belong to that concept. A variety of different DLs exist, differing in the set of properties one can use to express concepts, the so-called concept constructors, as well as the set of axioms available to describe the relations between concepts or individuals. However, all classical DLs have in common that they can only express exact knowledge, and correspondingly only allow exact inferences. Either we can infer that some individual belongs to a concept, or we can't; there is no in-between. In practice, though, knowledge is rarely exact. Many definitions have their exceptions or are vaguely formulated in the first place, and people might not only be interested in exact answers, but also in alternatives that are "close enough". This thesis is aimed at tackling how to express that something is "close enough", and how to integrate this notion into the formalism of Description Logics. To this end, we will use the notion of similarity and dissimilarity measures as a way to quantify exactly how close two concepts are. We will look at how useful measures can be defined in the context of DLs, and how they can be incorporated into the formal framework in order to generalize it. In particular, we will look closer at two applications of such measures to DLs: relaxed instance queries incorporate a similarity measure in order to give not just the exact answers to some query, but all answers that are reasonably similar; prototypical definitions, on the other hand, use a measure of dissimilarity or distance between concepts in order to allow the definition of, and reasoning with, concepts that capture not just those individuals that satisfy exactly the stated properties, but also those that are "close enough".
24

Hochuli, R. "Monte Carlo methods in quantitative photoacoustic tomography." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1507921/.

Abstract:
Quantitative photoacoustic tomography (QPAT) is a hybrid biomedical imaging technique that derives its specificity from the wavelength-dependent absorption of near-infrared/visible laser light, and its sensitivity from ultrasonic waves. This promising technique has the potential to reveal more than just structural information; it can also probe tissue function. Specifically, QPAT has the capability to estimate concentrations of endogenous chromophores, such as the concentrations of oxygenated and deoxygenated haemoglobin (from which blood oxygenation can be calculated), as well as the concentrations of exogenous chromophores, e.g. near-infrared dyes or metallic nanoparticles. This process is complicated by the fact that a photoacoustic image is not directly related to the tissue properties via the absorption coefficient, but is proportional to the wavelength-dependent absorption coefficient times the internal light fluence, which is also wavelength-dependent and is in general unknown. This thesis tackles the issue from two angles. Firstly, the question of whether certain experimental conditions allow the impact of the fluence to be neglected by assuming it is constant with wavelength, a 'linear inversion', is addressed. It is demonstrated that a linear inversion is appropriate only for certain bands of illumination wavelengths and for limited depth. Where this assumption is not accurate, an alternative approach is proposed, whereby the fluence inside the tissue is modelled using a novel Monte Carlo model of light transport. This model calculates the angle-dependent radiance distribution by storing the field in Fourier harmonics, in 2D, or spherical harmonics, in 3D. This thesis demonstrates that a key advantage of computing the radiance in this way is that it simplifies the computation of functional gradients when the estimation of the absorption and scattering coefficients is cast as a nonlinear least-squares problem. Using this approach, it is demonstrated in 2D that the estimation of the absorption coefficient can be performed to a useful level of accuracy, despite the limited accuracy in the reconstruction of the scattering coefficient.
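As a point of reference, storing the radiance in harmonics as described above amounts to an angular expansion of the following generic form (standard notation, not the thesis' own):

```latex
% Angular expansion of the radiance: Fourier harmonics in 2-D,
% spherical harmonics in 3-D.
\phi(\mathbf{r},\theta) = \sum_{n=-\infty}^{\infty} \phi_{n}(\mathbf{r})\, e^{\,i n \theta}
\quad\text{(2-D)},
\qquad
\phi(\mathbf{r},\hat{\mathbf{s}}) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l}
  \phi_{lm}(\mathbf{r})\, Y_{lm}(\hat{\mathbf{s}})
\quad\text{(3-D)}.
```

Truncating the expansion at a finite order turns the angle-dependent field into a small set of spatial coefficient maps, which is what simplifies the assembly of the functional gradients mentioned in the abstract.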
25

Huang, Dashan. "Studies on quantitative finance via operations research methods." 京都大学 (Kyoto University), 2007. http://hdl.handle.net/2433/135989.

26

Revermann, Tobias. "Methods and instrumentation for quantitative microchip capillary electrophoresis." Enschede : University of Twente [Host], 2007. http://doc.utwente.nl/57704.

27

Misra, Jatin 1976. "Quantitative methods for linking transcriptional profiles to physiology." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/29374.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2004.
Includes bibliographical references (leaves 121-126).
Underlying the current drive towards a systemic study of biology is the tacit assumption that a quantitative relationship can be obtained between molecular markers and macro- scale physiological measurements, which can be utilized to construct predictive models of cellular behavior. This thesis explores the evidence for such quantitative relations, and then illustrates one approach for the construction of models linking phenotype and molecular measurements. Specifically, this thesis focuses on the analysis of gene expression data as generated through DNA microarrays. Through the application of dimensional reduction methods such as PCA, and interactive pattern exploration, evidence for the existence of quantitative relations between gene expression signatures and physiological markers is presented. Subsequently, a large-scale experiment is designed and conducted to provide data sufficiently rich to support the construction of predictive models. The specific system probed in this experiment is the development of insulin resistance in mice models. A bootstrap-based regression framework is then developed for the construction and evaluation of predictive models linking age of the mice and serum insulin and leptin levels to transcriptional profiles. A regression framework has the advantage of avoiding complicated and detailed assumptions regarding mechanistic behavior of the genes involved. In addition, the genes identified through the modeling often have important biological significance.
(cont.) Further, the framework is flexible, and can be readily adapted to include different sources of data, such as protein expressions and metabolic fluxes. In summary, this thesis validates the construction of predictive, quantitative models linking physiology and molecular markers, and presents in detail one specific approach for the construction of models based on these relationships.
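As a rough illustration of the bootstrap-based regression framework sketched in this abstract (not the thesis's actual pipeline or data), the following toy example regresses a synthetic phenotype on a few expression features and uses resampling to attach confidence intervals to the coefficients.

```python
# Bootstrap linear regression linking a phenotype to gene-expression features
# (toy synthetic data; illustrative of the general idea only).
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_genes = 60, 5
X = rng.standard_normal((n_samples, n_genes))            # expression matrix
beta_true = np.array([1.5, 0.0, -0.8, 0.0, 0.4])
y = X @ beta_true + 0.5 * rng.standard_normal(n_samples)  # e.g. a serum marker level

def ols(X, y):
    Xd = np.column_stack([np.ones(len(y)), X])            # add intercept
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return coef[1:]                                        # drop intercept

boot = np.array([ols(X[idx], y[idx])
                 for idx in (rng.integers(0, n_samples, n_samples) for _ in range(2000))])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for g, (l, h) in enumerate(zip(lo, hi)):
    print(f"gene {g}: 95% bootstrap CI for coefficient = [{l:+.2f}, {h:+.2f}]")
```

Features whose bootstrap interval excludes zero would be the candidates carried forward, mirroring the model-evaluation role the bootstrap plays in the abstract.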
by Jatin Misra.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
28

Caster, Ola. "Quantitative methods to support drug benefit-risk assessment." Doctoral thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-100286.

Full text
Abstract:
Joint evaluation of drugs’ beneficial and adverse effects is required in many situations, in particular to inform decisions on initial or sustained marketing of drugs, or to guide the treatment of individual patients. This synthesis, known as benefit-risk assessment, is without doubt important: timely decisions supported by transparent and sound assessments can reduce mortality and morbidity in potentially large groups of patients. At the same time, it can be hugely complex: drug effects are generally disparate in nature and likelihood, and the information that needs to be processed is diverse, uncertain, deficient, or even unavailable. Hence there is a clear need for methods that can reliably and efficiently support the benefit-risk assessment process. For already marketed drugs, this process often starts with the detection of previously unknown risks that are subsequently integrated with all other relevant information for joint analysis. In this thesis, quantitative methods are devised to support different aspects of drug benefit-risk assessment, and the practical usefulness of these methods is demonstrated in clinically relevant case studies. Shrinkage regression is adapted and implemented for large-scale screening in collections of individual case reports, leading to the discovery of a link between methylprednisolone and hepatotoxicity. This adverse effect is then considered as part of a complete benefit-risk assessment of methylprednisolone in multiple sclerosis relapses, set in a general framework of probabilistic decision analysis. Two methods devised in the thesis substantively contribute to this assessment: one for efficient generation of utility distributions for the considered clinical outcomes, driven by modelling of qualitative information; and one for computing risk limits for rare and otherwise non-quantifiable adverse effects, based on collections of individual case reports.
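For readers unfamiliar with shrinkage regression as a screening tool, the toy sketch below applies L1-penalised logistic regression to a synthetic drug-report matrix to flag candidate adverse-event signals. The data, penalty strength and threshold are illustrative assumptions, not the method or findings of the thesis.

```python
# Toy sketch of shrinkage-regression screening of case reports:
# rows = individual case reports, columns = drugs (0/1), outcome = adverse event (0/1).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_reports, n_drugs = 5000, 30
X = rng.binomial(1, 0.05, size=(n_reports, n_drugs))   # drugs reported per case
logit = -3.0 + 2.0 * X[:, 0] + 1.5 * X[:, 7]           # drugs 0 and 7 are the true signals
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))      # adverse event flag

screen = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
screen.fit(X, y)
signals = np.flatnonzero(screen.coef_[0] > 0.5)        # drugs with a sizeable positive coefficient
print("drugs flagged as potential signals:", signals)
```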

At the time of the doctoral defence the following papers were unpublished and had a status as follows: Paper 6: Manuscript; Paper 7: Manuscript.

APA, Harvard, Vancouver, ISO, and other styles
29

Häggström, Ida. "Quantitative methods for tumor imaging with dynamic PET." Doctoral thesis, Umeå universitet, Radiofysik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-95126.

Full text
Abstract:
There is always a need and drive to improve modern cancer care. Dynamic positron emission tomography (PET) offers the advantage of in vivo functional imaging, combined with the ability to follow the physiological processes over time. In addition, by applying tracer kinetic modeling to the dynamic PET data, thus estimating pharmacokinetic parameters associated to e.g. glucose metabolism, cell proliferation etc., more information about the tissue's underlying biology and physiology can be determined. This supplementary information can potentially be a considerable aid when it comes to the segmentation, diagnosis, staging, treatment planning, early treatment response monitoring and follow-up of cancerous tumors. We have found it feasible to use kinetic parameters for semi-automatic tumor segmentation, and found parametric images to have higher contrast compared to static PET uptake images. There are however many possible sources of errors and uncertainties in kinetic parameters obtained through compartment modeling of dynamic PET data. The variation in the number of detected photons caused by the random nature of radioactive decay, is of course always a major source. Other sources may include: the choice of an appropriate model that is suitable for the radiotracer in question, camera detectors and electronics, image acquisition protocol, image reconstruction algorithm with corrections (attenuation, random and scattered coincidences, detector uniformity, decay) and so on. We have found the early frame sampling scheme in dynamic PET to affect the bias and uncertainty in calculated kinetic parameters, and that scatter corrections are necessary for most but not all parameter estimates. Furthermore, analytical image reconstruction algorithms seem more suited for compartment modeling applications compared to iterative algorithms. This thesis and included papers show potential applications and tools for quantitative pharmacokinetic parameters in oncology, and help understand errors and uncertainties associated with them. The aim is to contribute to the long-term goal of enabling the use of dynamic PET and pharmacokinetic parameters for improvements of today's cancer care.
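For orientation, the sketch below fits a standard one-tissue compartment model to a synthetic time-activity curve; the input function, noise level and parameter values are made up, and the thesis itself considers richer models and real dynamic PET data.

```python
# One-tissue compartment model for dynamic PET (illustrative sketch):
# dC_t/dt = K1*C_p(t) - k2*C_t(t)  =>  C_t = (K1 * exp(-k2 t)) convolved with C_p
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 121)                  # minutes
dt = t[1] - t[0]
C_p = 10.0 * t * np.exp(-t / 2.0)            # assumed plasma input function (arbitrary shape)

def tissue_curve(t, K1, k2):
    irf = K1 * np.exp(-k2 * t)               # impulse response of the compartment
    return np.convolve(C_p, irf)[: len(t)] * dt

rng = np.random.default_rng(3)
C_meas = tissue_curve(t, K1=0.3, k2=0.1) + 0.3 * rng.standard_normal(len(t))

(K1_hat, k2_hat), _ = curve_fit(tissue_curve, t, C_meas, p0=[0.1, 0.05], bounds=(0, 5))
print(f"estimated K1 = {K1_hat:.3f} /min, k2 = {k2_hat:.3f} /min")
```

The sensitivity of such fits to early-frame sampling and to corrections applied before reconstruction is exactly what the abstract above investigates.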
APA, Harvard, Vancouver, ISO, and other styles
30

Topping, Ryan. "Quantitative methods for reconstructing protein-protein interaction histories." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/14618.

Full text
Abstract:
Protein-protein interactions (PPIs) are vital for the function of a cell, and the evolution of these interactions produces much of the evolution of the phenotype of an organism. However, as the evolutionary process cannot be observed, methods are required to infer evolution from existing data. An understanding of the resulting evolutionary relationships between species can then provide information for PPI prediction and function assignment. This thesis further develops and applies the interaction tree method for modelling PPI evolution within and between protein families. In this approach, a phylogeny of the protein family/ies of interest is used to explicitly construct a history of duplication and speciation events. Given a model relating sequence change in this phylogeny to the probability of a rewiring event occurring, this method can then infer probabilities of interaction between the ancestral proteins described in the phylogeny. It is shown that the method can be adapted to infer the evolution of PPIs within obligate protein complexes, using a large set of such complexes to validate this application. This approach is then applied to reconstruct the history of the proteasome complex, using x-ray crystallography structures of the complex as input, with validation to show its utility in predicting present-day complexes for which we have no structural data. The methodology is then adapted for application to transient PPIs. It is shown that the approach used in the previous chapter is inadequate here and a new scoring system is described based on a likelihood score of interaction. The predictive ability of this score is shown in predicting known two-component systems in bacteria and its use in an interaction tree setting is demonstrated through inference of the interaction history between the histidine kinase and response regulator proteins responsible for sporulation onset in a set of bacteria. This thesis demonstrates that with suitable modifications the interaction tree approach is widely applicable to modelling PPI evolution and also, importantly, predicting existing PPIs. This demonstrates the need to incorporate phylogenetic data into methods of predicting PPIs and gives some measure of the benefit in doing so.
APA, Harvard, Vancouver, ISO, and other styles
31

Wei, Zhen. "Functional learning methods with applications to quantitative finance /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tönsing, Christian [Verfasser], and Jens [Akademischer Betreuer] Timmer. "Quantitative modeling of human diseases - methods and applications." Freiburg : Universität, 2020. http://d-nb.info/1212360559/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Gooley, Theodore Alan. "Quantitative comparisons of statistical methods in image reconstruction." Diss., The University of Arizona, 1990. http://hdl.handle.net/10150/185251.

Full text
Abstract:
Statistical methods for approaching image reconstruction and restoration problems have generated much interest among statisticians in the past decade. In this dissertation, we examine in detail various statistical methods of image reconstruction through the simulation of a multiple-pinhole coded-aperture imaging system for use in emission tomography. We reconstruct each object from a class of 64 total objects, obtaining a reconstruction for each of the 64 originals by several different methods. Among the methods that we use to obtain these reconstructions are maximum likelihood techniques, where we make use of both the popular expectation-maximization (EM) algorithm and a Monte Carlo search routine. We also examine methods that include, in some form, various kinds of prior information. One way of using prior information is through the specification of a prior probability density on the object (or class of objects) to be reconstructed. We investigate the use of Markov random field (MRF) models as a means of specifying the prior densities that will be used to obtain reconstructions. Once given a prior density, these reconstructions are taken to be approximations to the maximum a posteriori (MAP) estimate of the original object. We also investigate reconstructions obtained through other prior densities plus reconstructions obtained by introducing prior information in alternate ways. Once all the reconstructions are obtained, we attempt to answer the important question, "which reconstruction method is 'best'?" We define "best" in this context to be the method that allows a human observer to perform a specified task the most accurately. The task to be performed is to determine whether or not a small protrusion exists on an elliptical object. (This task is motivated by the desire to detect wall-motion abnormalities in the left ventricle of the heart.) We generate 32 objects with protrusions (abnormal objects) and 32 objects without protrusions (normal objects). These objects constitute our class of 64 originals which are reconstructed by the various methods. The reconstruction methods are then analyzed through receiver operating characteristic (ROC) analysis, and a performance index, the area under the curve (AUC), is obtained for each method. Statistical tests are then performed on certain pairs of methods so that the hypothesis that no difference between the AUC's exists can be tested. We found that the reconstruction methods that used the largest amount of (accurate) prior information were generally superior to other methods considered. We also compute calculable figures of merit (FOM) associated with each reconstruction method with the hope that these FOM's will predict the performance of the human observer. Unfortunately, our results indicate that the FOM's that we considered do not correlate well with the performance of the human.
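As a reminder of what the EM algorithm does in this setting, here is a generic maximum-likelihood EM (MLEM) iteration for Poisson emission data on a tiny random system matrix. It is a textbook sketch, not the coded-aperture simulation or the MAP/MRF reconstructions studied in the dissertation.

```python
# Minimal MLEM (maximum-likelihood expectation-maximization) sketch for
# emission tomography: y ~ Poisson(A @ x), with a multiplicative update of x.
import numpy as np

rng = np.random.default_rng(4)
n_detectors, n_pixels = 200, 50
A = rng.random((n_detectors, n_pixels))           # toy system (projection) matrix
x_true = rng.random(n_pixels) * 10.0
y = rng.poisson(A @ x_true)                        # noisy projection data

x = np.ones(n_pixels)                              # non-negative starting image
sensitivity = A.T @ np.ones(n_detectors)           # A^T 1
for _ in range(100):
    ratio = y / np.clip(A @ x, 1e-12, None)        # avoid division by zero
    x *= (A.T @ ratio) / sensitivity               # MLEM multiplicative update

print("relative error after 100 iterations:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

A MAP reconstruction of the kind compared in the abstract would add a prior-dependent term to this update; the ROC/AUC evaluation then judges which variant best supports the observer's detection task.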
APA, Harvard, Vancouver, ISO, and other styles
34

Jackson, Andrew Robert. "Computational and instrumental developments in quantitative Auger electron analysis." Thesis, University of York, 1999. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298540.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Naining. "Quantitative cellular methods in the evaluation of prostate cancer /." Stockholm, 2000. http://diss.kib.ki.se/2000/91-628-3929-2/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Hampton, Jennifer. "The nature of quantitative methods in A level sociology." Thesis, Cardiff University, 2018. http://orca.cf.ac.uk/118019/.

Full text
Abstract:
British sociology has been characterised as suffering from a 'quantitative deficit' originating from a shift towards qualitative methods in the discipline in the 1960s. Over the years, this has inspired a number of initiatives aimed at improving number work within the discipline, of which the Q-step programme is the most recent. These initiatives, and the work that supports them, primarily concern themselves with the curricula, attitudes, and output of students and academics within Higher Education. As such, the role that the substantive A level plays in post-16 quantitative education has been largely ignored. This thesis addresses this apparent gap in the literature, providing a study of the curriculum, with a particular focus on the quantitative method element therein. The thesis takes a mixed-method approach to curriculum research, encompassing the historical as well as the current, and the written as well as the practiced. The analysis is presented in a synoptic manner, interweaving data from across the methods used, in an attempt to provide an integrated and holistic account of A level Sociology. An overarching theme of marginalisation becomes apparent; not least with the subject itself, but also with quantitative methods positioned as problematic within the research methods element of the curriculum, which is itself bound and limited. The high-stakes exam culture is shown to dominate the behaviour of both teachers and students, regardless of their attitudes and understanding of the relevancy and/or importance of quantitative methods in the subject. Taken together, these findings imply a potential problem for recruitment into quantitative sociology, whilst offering an avenue by which this might be addressed. Linked to the high-stakes performativity culture, a novel conceptualisation of teachers' understandings of the relationship between their role, the curriculum, the discipline, and notions of powerful knowledge is offered.
APA, Harvard, Vancouver, ISO, and other styles
37

Varela, Marta. "Quantitative methods for assessing perfusion in the neonatal brain." Thesis, King's College London (University of London), 2011. https://kclpure.kcl.ac.uk/portal/en/theses/quantitative-methods-for-assessing-perfusion-in-the-neonatal-brain(052717e7-c61e-4f79-9137-a4003399b37c).html.

Full text
Abstract:
Cerebral perfusion, or cerebral blood flow, CBF, is a physiological variable that measures the amount of blood delivered to brain tissue per unit time. In neonates, CBF assumes an important role as alterations in CBF are linked to many instances of brain injury, particularly following preterm or complicated births. CBF can be measured using a number of techniques, most of which rely on ionising radiation and require the administration of exogenous substances. This renders them unsuitable for studying CBF in healthy neonates and for repeated studies. In this thesis, Magnetic Resonance Imaging (MRI) techniques capable of measuring CBF non-invasively are optimised for the neonatal population. The longitudinal relaxation time constant, T1, of blood is an important parameter in many MRI techniques, such as perfusion quantification using Arterial Spin Labelling (ASL). In most applications, ex-vivo literature values are used for blood T1. A novel method to measure blood T1 in vivo in a very short time is presented. It is found that blood T1 values in neonates are very variable and strongly correlated to the haematocrit. This method can be used to improve CBF quantification using ASL in neonates. A robust method used to measure mean CBF is also presented. Firstly, the mean flow to the brain in the arteries of the neck was measured using an optimised phase-contrast angiography protocol. This was then divided by the brain volume, computed from an anatomical MR image, to yield mean CBF. This method was applied to a small cohort of infants, where the relationship between CBF and postmenstrual age was investigated. Arterial Spin Labelling was also used in both adults and neonates. In adults, CBF values were compared to those obtained using the PCA method. Techniques to estimate the parameters needed in the ASL model for CBF quantification were also explored. Finally, ASL data acquired in neonates is presented and further improvements to CBF measurements using ASL in this age group are discussed.
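The phase-contrast approach described above reduces to simple arithmetic once the flow and volume measurements are in hand; the numbers below are placeholders rather than values from the thesis.

```python
# Mean CBF from phase-contrast angiography flow and brain volume (illustrative numbers).
total_neck_flow_ml_per_min = 120.0   # summed flow in the carotid and vertebral arteries (assumed)
brain_volume_ml = 350.0              # from segmentation of an anatomical image (assumed)
brain_density_g_per_ml = 1.05        # commonly assumed brain tissue density

cbf_per_100ml = 100.0 * total_neck_flow_ml_per_min / brain_volume_ml
cbf_per_100g = cbf_per_100ml / brain_density_g_per_ml
print(f"mean CBF ~ {cbf_per_100ml:.1f} ml/100ml/min ~ {cbf_per_100g:.1f} ml/100g/min")
```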
APA, Harvard, Vancouver, ISO, and other styles
38

Perry, Amelia Ruth. "Quantitative microscopic methods for crystal growth and dissolution processes." Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/73865/.

Full text
Abstract:
The aim of this thesis was to investigate crystal nucleation, growth and dissolution processes, focussing particularly on the behaviour of the crystal surface. To facilitate this, various methods of microscopy were used, as well as electrochemical techniques, with the goal of separating mass transport towards the crystal surface from the processes which occur close to the crystal surface, and measuring intrinsic growth/dissolution rates. In order to do this, crystal systems were screened for their relevance to applications in industrial processes, and those chosen were related to pharmaceutical crystallization and scale formation in offshore oil wells. For each system, different methods of electrochemical measurement and microscopy were investigated to choose a technique which works best for the problem at hand. The experimental data produced were supported by mass transfer models, with the aim of obtaining more quantitative information about the surface behaviour of the crystal systems observed. Firstly, salicylic acid micro-crystals were observed in aqueous solution by optical microscopy to visualise growth/dissolution rates of individual faces. It was found from finite element method (FEM) simulations that the most active (001) face was strongly mass transport controlled, and that the (110) and (1̄10) faces were closer to the surface-controlled regime. Salicylic acid crystals were further analysed by scanning electrochemical microscopy (SECM) using three-dimensional (3D) scans containing a series of approaches to the surface. By inducing dissolution on the crystal surface, and measuring a change in ultramicroelectrode (UME) current, the dissolution rate constant of the (110) face of salicylic acid was determined for this heterogeneous surface. Barite nucleation and growth were observed by optical microscopy, using a flow cell with hydrodynamic flow. High supersaturations were used and the crystals were deposited onto foreign surfaces with differing surface charge. It was found that the flux of material, once initial nucleation was achieved, closely matched simulated mass transport fluxes. Finally, nanoprecipitation was induced at the opening of a nanopipette (ca. 100 nm diameter) and an ion current was applied to induce the early stages of barite nucleation. It was possible to observe nucleation and blockage of the nanopipette from the current transient produced. This process was used to test the effectiveness of different phosphonate inhibitors.
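To make the distinction between mass-transport control and surface control concrete, the back-of-the-envelope sketch below combines a diffusive mass-transfer coefficient and a first-order surface rate constant in series; the constants are illustrative assumptions, not measurements from this work.

```python
# Series-resistance picture of growth/dissolution: 1/k_eff = 1/k_t + 1/k_s,
# flux J = k_eff * (c_sat - c_bulk).  Illustrative values only.
D = 1.0e-9            # diffusion coefficient [m^2/s]
delta = 20e-6         # diffusion layer thickness [m]
k_t = D / delta       # mass-transfer coefficient [m/s]
k_s = 1.0e-4          # intrinsic (surface) rate constant [m/s], assumed
dc = 10.0             # concentration driving force c_sat - c_bulk [mol/m^3]

k_eff = 1.0 / (1.0 / k_t + 1.0 / k_s)
print(f"k_t = {k_t:.1e} m/s, k_s = {k_s:.1e} m/s, k_eff = {k_eff:.1e} m/s")
print(f"flux = {k_eff * dc:.2e} mol m^-2 s^-1 "
      f"({'mostly transport limited' if k_t < k_s else 'mostly surface limited'})")
```

Whichever coefficient is smaller dominates the overall resistance, which is why separating the two experimentally is needed before an intrinsic rate can be quoted.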
APA, Harvard, Vancouver, ISO, and other styles
39

Braddick, Darren. "Quantitative assay methods and mathematical modelling of peptidoglycan transglycosylation." Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/57211/.

Full text
Abstract:
The proportion of antibiotic-resistant Gram-positive strains in the clinic and community continues to rise, despite the number of new antibiotics continuing to fall with time. At the intersection of this problem is the established challenge of working with what has ultimately been both nature’s and humanity’s favoured and most successful antibiotic target, the biosynthesis of the bacterial cell wall. The challenge lies in the predominantly membrane/lipid-linked habitat that the enzymes and substrates of this complex biosynthetic pathway function within. Membrane protein science remains non-trivial and often difficult, and as such remains undeveloped despite its hugely important role in the medical and biological sciences. As a result, there is a paucity of understanding of this pathway, with limited methods for assaying the activity of the biosynthetic enzymes. These enzymes include the monofunctional transglycosylases, monofunctional transpeptidase penicillin-binding proteins (PBPs) and bifunctional PBPs capable of both transglycosylation and transpeptidation. A number of these enzymes were expressed and purified, with the intention of obtaining novel kinetic and catalytic characterisation of their activities. The more complex of these enzymes could not be proven to be active, and so the comparatively simpler enzyme, an S. aureus monofunctional transglycosylase called MGT, was taken as a model enzyme and used to help design novel assay methods for its transglycosylase activity. The assays developed in this work gave access to novel time-course data and will help reveal further mechanistic/catalytic information about the MGT enzyme and about transglycosylation in general. Mathematical modelling was performed around the experimental work. Novel models were designed to define the mechanism of the MGT and of generic transglycosylation, as this had not been done before. The mathematical concepts of structural identifiability and structural indistinguishability were used to analyse these models. Data from experiments were then used for data fitting with the models, and information about the underlying unknown kinetic parameters was collected. Together, these results provide a new framework for understanding the MGT and transglycosylation, which may be a small step towards answering the challenge now posed by widespread antibiotic resistance.
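The thesis derives bespoke models for transglycosylation, which are not reproduced here; as a generic illustration of fitting kinetic parameters to time-course assay data, the sketch below integrates a simple Michaelis-Menten rate law and recovers its parameters from a noisy progress curve by nonlinear least squares.

```python
# Generic progress-curve fitting sketch (not the thesis's transglycosylation model):
# substrate depletion dS/dt = -Vmax * S / (Km + S), parameters fitted from a noisy time course.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

t_obs = np.linspace(0, 60, 31)                  # minutes
S0 = 100.0                                      # initial substrate [uM]

def progress_curve(t, Vmax, Km):
    sol = solve_ivp(lambda _, S: -Vmax * S / (Km + S), (0, t[-1]), [S0],
                    t_eval=t, rtol=1e-8)
    return sol.y[0]

rng = np.random.default_rng(5)
S_meas = progress_curve(t_obs, Vmax=3.0, Km=25.0) + 1.0 * rng.standard_normal(t_obs.size)

(Vmax_hat, Km_hat), _ = curve_fit(progress_curve, t_obs, S_meas, p0=[1.0, 10.0],
                                  bounds=(0, [20.0, 500.0]))
print(f"fitted Vmax = {Vmax_hat:.2f} uM/min, Km = {Km_hat:.1f} uM")
```

Whether such parameters are uniquely recoverable from a given experiment is precisely the structural identifiability question the abstract raises.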
APA, Harvard, Vancouver, ISO, and other styles
40

Kravchenko, T. "The quantitative methods of estimation of corporate management efficiency." Thesis, Видавництво СумДУ, 2006. http://essuir.sumdu.edu.ua/handle/123456789/21605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Bigdeli, T. Bernard. "Quantitative Genetic Methods to Dissect Heterogeneity in Complex Traits." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/2651.

Full text
Abstract:
Etiological models of complex disease are elusive[46, 33, 9], as are consistently replicable findings for major genetic susceptibility loci[54, 14, 15, 24]. Commonly-cited explanations invoke low-frequency genomic variation[41], allelic heterogeneity at susceptibility loci[33, 30], variable etiological trajectories[18, 17], and epistatic effects between multiple loci; these represent among the most methodologically-challenging issues in molecular genetic studies of complex traits. The response has been consistently reactionary—hypotheses regarding the relative contributions of known functional elements, or emphasizing a greater role of rare variation[46, 33] have undergone periodic revision, driving increasingly collaborative efforts to ascertain greater numbers of participants and which assay a rapidly-expanding catalogue of human genetic variation. Major deep-sequencing initiatives, such as the 1,000 Genomes Project, are currently identifying human polymorphic sites at frequencies previously unassailable and, not ten years after publication of the first major genome-wide association findings, re-sequencing has already begun to displace GWAS as the standard for genetic analysis of complex traits. With studies of complex disease primed for an unprecedented survey of human genetic variation, it is essential that human geneticists address several prominent, problematic aspects of this research. Realizations regarding the boundaries of human traits previously considered to be effectively disparate in presentation[44, 39, 35, 27, 25, 12, 4, 13], as well as profound insight into the extent of human genetic diversity[23, 22] are not without consequence. Whereas the resolution of fine-mapping studies have undergone persistent refinement, recent polygenic findings suggest a less discriminant basis of genetic liability, raising the question of what a given, unitary association finding actually represents. Furthermore, realistic expectations regarding the pattern of findings for a particular genetic factor between or even within populations remain unclear. Of interest herein are methodologies which exploit the finite extent of genomic variability within human populations to distinguish single-point and cumulative group differences in liability to complex traits, the range of allele frequencies for which common association tests are appropriate, and the relevant dimensionality of common genetic variation within ethnically-concordant but differentially ascertained populations. Using high-density SNP genotype data, we consider both hypothesis-driven and agnostic (genome-wide) approaches to association analysis, and address specific issues pertaining to empirical significance and the statistical properties of commonly-applied tests. Lastly, we demonstrate a novel perspective of genome-wide genetic “background” through exhaustive evaluation of fundamental, stochastic genetic processes in a sample of matched affected and unaffected siblings selected from high-density schizophrenia families.
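As a pocket illustration of the single-point association tests discussed above (unrelated to the schizophrenia family data analysed in the thesis), the snippet below runs a basic allelic chi-square test on synthetic case-control genotype counts.

```python
# Basic allelic association test on a synthetic case-control SNP (illustrative only).
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(6)
n_cases, n_controls = 1000, 1000
geno_cases = rng.binomial(2, 0.30, n_cases)      # risk-allele count per case (freq 0.30)
geno_controls = rng.binomial(2, 0.25, n_controls)

# 2x2 table of allele counts: rows = case/control, cols = risk allele / other allele
table = np.array([[geno_cases.sum(),    2 * n_cases    - geno_cases.sum()],
                  [geno_controls.sum(), 2 * n_controls - geno_controls.sum()]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"allelic chi-square = {chi2:.1f} (dof={dof}), p = {p:.2e}")
```

The behaviour of such tests at low allele frequencies and under varying ascertainment is one of the questions the thesis examines empirically.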
APA, Harvard, Vancouver, ISO, and other styles
42

Cummings, Rebecca. "Development and application of label free quantitative proteomic methods." Thesis, University of Liverpool, 2012. http://livrepository.liverpool.ac.uk/8313/.

Full text
Abstract:
The aim of this Ph.D. was to develop advanced methods for quantitative proteomics and use these methods to investigate the presence of protein biomarkers of sperm performance, differential expression of sperm membrane proteins and differential expression of E. coli proteins. Quantitative analysis of E. coli generated analytical samples that were analysed with multiple mass spectrometers and with multiple software packages. Through these samples an optimal label-free quantitative proteomic workflow was generated, and software was thoroughly tested to determine the optimal package for data analysis on varying biological questions. Identification of protein(s) that correlate with increased or decreased fertility would be economically beneficial. Currently, semen samples are subject to quality control where general movement and morphological defects are studied, but this does not always correlate with the ejaculate passing a post-cryopreservation quality control check or with that specific bull generating offspring. Identification of a protein or set of proteins with abundance variation in bulls of known high or low fertility would allow lower-fertility bulls to be removed from the breeding programme at an early age, reducing rearing costs, and would allow longitudinal health monitoring of individual bulls. Discovery of differentially expressed proteins in the membrane of sperm carrying the X or Y chromosome would allow the generation of a method to separate the two sperm populations. This would be beneficial as most livestock farmers would prefer offspring of a specific sex, either to sell or to replenish animal stock. Quantitative analysis of proteins present in bovine seminal plasma led to the identification and quantitative comparison of the seminal plasma proteins present in two breeds of bull, Holstein and Belgian Blue, and a quantitative comparison of the seminal plasma from two domestic farm animal species, bovine and porcine. Intra-species comparisons determined no quantitative variation between the two breeds, while the inter-species comparison determined variation in the proteins present in both species' seminal plasma and in the corresponding amounts of those proteins. A quantitative comparison was performed to determine the expression of proteins from two strains of E. coli, a wild-type strain (MG1655) and a genome-depleted strain (MDS66); this led to the confirmation of gene deletions in the genome-depleted strain through the absence of their protein products in mass spectrometric analysis, and to the identification of proteins that were differentially expressed due to pleiotropic effects of these genome deletions. To investigate the proteins expressed in the sperm membrane, a mass spectrometer-compatible enrichment method was generated and membrane proteins were identified, quantified and compared between sperm carrying X and Y chromosomes. This study did not lead to the determination of any proteins with differential expression in the X- or Y-bearing sperm.
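Label-free comparisons of the kind described above ultimately come down to comparing (log-transformed) protein intensities or spectral counts between groups. The bare-bones sketch below computes a log2 fold change and a Welch t-test for one protein from synthetic replicate intensities; it is not the software workflow developed in the thesis.

```python
# Bare-bones label-free comparison for one protein across two groups
# (synthetic replicate intensities; illustrative only).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
group_a = rng.lognormal(mean=20.0, sigma=0.3, size=4)    # e.g. wild-type replicates
group_b = rng.lognormal(mean=20.7, sigma=0.3, size=4)    # e.g. modified-strain replicates

log_a, log_b = np.log2(group_a), np.log2(group_b)
fold_change = log_b.mean() - log_a.mean()                 # log2 fold change
t_stat, p_val = ttest_ind(log_b, log_a, equal_var=False)  # Welch's t-test
print(f"log2 fold change = {fold_change:+.2f}, p = {p_val:.3f}")
```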
APA, Harvard, Vancouver, ISO, and other styles
43

Ye, Chun. "Statistical methods for the analysis of expression quantitative traits." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3386752.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed February 11, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 156-169).
APA, Harvard, Vancouver, ISO, and other styles
44

Beauchat, Tracey Allen. "Analysis of (iso)surface reconstructions: Quantitative metrics and methods." W&M ScholarWorks, 1996. https://scholarworks.wm.edu/etd/1539623885.

Full text
Abstract:
Due to sampling processes, volumetric data is inherently discrete and most often knowledge of the underlying continuous model is not available. Surface rendering techniques attempt to reconstruct the continuous model, using isosurfaces, from the discrete data. Therefore, it is natural to ask how accurate the reconstructed isosurfaces are with respect to the underlying continuous model. A reconstructed isosurface may look impressive when rendered ("photorealism"), but how well does it reflect reality ("physical realism")? The users of volume visualization packages must be aware of the shortcomings of the algorithms used to produce the images so that they may properly interpret, and interact with, what they see. However, very little work has been done to quantify the accuracy of volumetric data reconstructions. Most analysis to date has been qualitative. Qualitative analysis uses simple visual inspection to determine whether characteristics, known to exist in the real-world object, are present in the rendered image. Our research suggests metrics and methods for quantifying the "physical realism" of reconstructed isosurfaces. Physical realism is a many-faceted notion. In fact, a different metric could be defined for each physical property one wishes to consider. We have defined four metrics--Global Surface Area Preservation (GSAP), Volume Preservation (VP), Point Distance Preservation (PDP), and Isovalue Preservation (IVP). We present experimental results for each of these metrics and discuss their validity with respect to those results. We also present the Reconstruction Quantification (sub)System (RQS). RQS provides a flexible framework for measuring physical realism. This system can be embedded in existing visualization systems with little modification of the system itself. Two types of analysis can be performed: reconstruction analysis and algorithm analysis. Reconstruction analysis allows users to determine the accuracy of individual surface reconstructions. Algorithm analysis, on the other hand, allows developers of visualization systems to determine the efficacy of the visualization system based on several reconstructions.
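In the spirit of the surface-area and volume preservation metrics above (though not their exact definitions), the sketch below extracts an isosurface of a sampled sphere with scikit-image and reports the relative errors of the reconstructed surface area and enclosed volume against the analytic values.

```python
# Relative surface-area and volume errors of an isosurface reconstruction of a sphere
# (in the spirit of surface-area/volume preservation metrics; illustrative only).
import numpy as np
from skimage import measure

# Sample a signed-distance field of a sphere of radius 20 (voxel units) on a 64^3 grid
n, r = 64, 20.0
x, y, z = np.mgrid[:n, :n, :n]
phi = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2) - r

verts, faces, _, _ = measure.marching_cubes(phi, level=0.0)
area = measure.mesh_surface_area(verts, faces)

# Enclosed volume via the divergence theorem (sum of signed tetrahedron volumes)
v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
volume = np.abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0

print(f"surface area error: {100 * (area / (4 * np.pi * r**2) - 1):+.2f}%")
print(f"volume error:       {100 * (volume / (4 / 3 * np.pi * r**3) - 1):+.2f}%")
```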
APA, Harvard, Vancouver, ISO, and other styles
45

Su, Ting. "Quantitative material decomposition methods for X-ray spectral CT." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI056/document.

Full text
Abstract:
X-ray computed tomography (X-ray CT) has played an important part in non-invasive imaging since its introduction. During the past few years, numerous technological advances in X-ray CT have been observed, including spectral CT, which uses photon counting detectors (PCDs) to discriminate transmitted photons corresponding to selected energy bins in order to obtain spectral information with a single acquisition. Spectral CT enables us to overcome many limitations of conventional CT techniques and opens up many new application possibilities, among which quantitative material decomposition is the most studied topic. A number of material decomposition methods have been reported and different experimental systems are under development for spectral CT. According to the type of data on which the decomposition step operates, there are projection domain methods (decomposition before reconstruction) and image domain methods (decomposition after reconstruction). The commonly used decompositions are based on a least-squares criterion, named the proj-LS and ima-LS methods. However, the inverse problem of material decomposition is usually ill-posed and X-ray spectral CT measurements suffer from Poisson photon counting noise. The standard LS criterion can lead to overfitting to the noisy measurement data. In the present work, we have proposed a least log-squares criterion for the projection domain method to minimize the errors on the linear attenuation coefficient: the proj-LLS method. Furthermore, to reduce the effect of noise and enforce smoothness, we have proposed to add a patchwise regularization term that penalizes the sum of the squared variations within each patch, for both projection domain and image domain decomposition, named the proj-PR-LLS and ima-PR-LS methods. The performances of the different methods were evaluated by spectral CT simulation studies with specific phantoms for different applications: (1) Medical application: iodine and calcium identification. The decomposition results of the proposed methods show that calcium and iodine can be well separated and quantified from soft tissues. (2) Industrial application: sorting of ABS plastics with flame retardants (FR). Results show that three kinds of ABS materials with different flame retardants can be separated when the sample thickness is favorable. Meanwhile, we simulated spectral CT imaging with a PMMA phantom filled with Fe, Ca and K solutions. Different acquisition parameters, i.e. exposure factor and number of energy bins, were simulated to investigate their influence on the performance of the proposed methods for iron determination.
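To make the image-domain least-squares idea concrete, the snippet below decomposes a per-pixel attenuation measurement in three energy bins into two basis materials with non-negative least squares. It is a plain LS sketch, not the proj-LLS or patchwise-regularised estimators proposed in the thesis, and the basis attenuation values are made-up placeholders.

```python
# Image-domain material decomposition sketch: mu(E) = sum_m a_m * mu_m(E), solved per pixel
# with non-negative least squares.  Basis attenuation values are placeholders, not real data.
import numpy as np
from scipy.optimize import nnls

# Rows = energy bins, columns = basis materials (e.g. water-like, iodine-like)
M = np.array([[0.25, 4.0],
              [0.20, 2.5],
              [0.18, 1.5]])                      # assumed linear attenuation basis [1/cm]

rng = np.random.default_rng(8)
a_true = np.array([1.0, 0.02])                   # true volume/concentration fractions
mu_meas = M @ a_true + 0.005 * rng.standard_normal(3)   # noisy per-pixel measurement

a_hat, residual = nnls(M, mu_meas)
print("decomposed fractions:", np.round(a_hat, 3), " residual:", round(residual, 4))
```

Running this per pixel is the ima-LS baseline described above; the thesis's contribution lies in replacing the plain LS criterion and adding patchwise regularization to tame the Poisson noise.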
APA, Harvard, Vancouver, ISO, and other styles
46

Wiggins, Bradford J. "The Dilemma of Mixed Methods." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2810.

Full text
Abstract:
The past three decades have seen a proliferation of research methods, both quantitative and qualitative, available to psychologists. Whereas some scholars have claimed that qualitative and quantitative methods are inherently opposed, recently many more researchers have argued in favor of "mixed methods" approaches. In this dissertation I begin with a review of the mixed methods literature regarding how to integrate qualitative and quantitative methodologies. Based on this review, I argue that current mixed methods approaches have fallen short of their goal of integrating qualitative and quantitative methodologies and I argue that this problem may be due to a problematic ontology. In response to this problem I propose and conduct an ontological analysis, which examines the writings of leading mixed methods researchers for evidence of an underlying ontology. This analysis reveals that an abstractionist ontology underlies current mixed methods approaches. I then propose that an alternative relational ontology might better enable mixed methods researchers to meaningfully relate qualitative and quantitative methodologies and I provide an exploration of what assuming a relational ontology would mean for mixed methods research.
APA, Harvard, Vancouver, ISO, and other styles
47

Oest, Rutger Daniel van. "Essays on quantitative marketing models and Monte Carlo integration methods." [Amsterdam : Rotterdam : Thela Thesis] ; Erasmus University [Host], 2005. http://hdl.handle.net/1765/6776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Guo, Zhigang. "Novel methods for increasing efficiency of quantitative trait locus mapping." Diss., Manhattan, Kan. : Kansas State University, 2007. http://hdl.handle.net/2097/374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Veerman, Jacob Lennert. "Quantitative health impact assessment: an exploration of methods and validity." [S.l.] : Rotterdam : [The Author] ; Erasmus University [Host], 2007. http://hdl.handle.net/1765/10490.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Avadhut, Yamini. "Quantitative solid state nuclear magnetic resonance methods for inorganic materials." Diss., lmu, 2012. http://nbn-resolving.de/urn:nbn:de:bvb:19-153598.

Full text
APA, Harvard, Vancouver, ISO, and other styles