Dissertations / Theses on the topic 'Quantitative imaging analysis'


1

Weith-Glushko, Seth A. "Quantitative analysis of infrared contrast enhancement algorithms." Online version of thesis, 2007. http://hdl.handle.net/1850/4208.

2

Li, Chengshuai. "Quantitative Anisotropy Imaging based on Spectral Interferometry." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/99424.

Abstract:
Spectral interferometry, also known as spectral-domain white light or low coherence interferometry, has seen numerous applications in sensing and metrology of physical parameters. It can provide phase or optical path information of interest in single-shot measurements with exquisite sensitivity and large dynamic range. As fast spectrometers became more widely available in the 21st century, spectral interferometric techniques began to dominate over time-domain interferometry, thanks to their speed and sensitivity advantages. In this work, a dual-modality phase/birefringence imaging system is proposed to offer a quantitative approach to characterizing the phase, polarization and spectroscopic properties of a variety of samples. An interferometric spectral multiplexing method is first introduced by generating polarization mixing with a specially aligned polarizer and birefringent crystal. The retardation and orientation of sample birefringence can then be measured simultaneously from a single interference spectrum. Furthermore, with the addition of a Nomarski prism, the same setup can be used for quantitative differential interference contrast (DIC) imaging. The highly integrated system demonstrates its capability for noninvasive, label-free, highly sensitive birefringence, DIC and phase imaging of anisotropic materials and biological specimens, where multiple intrinsic contrasts are desired. Besides using different intrinsic contrast regimes to quantitatively measure different biological samples, the spectral multiplexing interferometry technique also finds an excellent match in imaging single anisotropic nanoparticles, even when their size is well below the diffraction limit. Quantitative birefringence spectroscopy measurements of gold nanorod particles on a glass substrate demonstrate that the proposed system can simultaneously determine the polarizability-induced birefringence orientation, as well as the scattering intensity and the phase difference between the major and minor axes of single nanoparticles. With the anisotropic nanoparticles' spectroscopic polarizability defined prior to the measurement by calculation or simulation, the system can further be used to reveal the size, aspect ratio and orientation of the detected anisotropic nanoparticle. Alongside developing optical anisotropy imaging systems, the other part of this research describes our effort to investigate the sensitivity limit of general spectral-interferometry-based systems. A complete, realistic multi-parameter interference model is proposed, corrupted by a combination of shot noise, dark noise and readout noise. With these multiple noise sources in the detected spectrum following different statistical behaviors, Cramér-Rao bounds are derived for multiple unknown parameters, including optical pathlength, system-specific initial phase, spectrum intensity and fringe visibility. The significance of this work is to establish criteria for evaluating whether an interferometry-based optical measurement system has been optimized to the best potential of its hardware. An algorithm based on maximum likelihood estimation is also developed to achieve absolute optical pathlength demodulation with high sensitivity. In particular, it achieves the Cramér-Rao bound and offers noise resistance that can potentially suppress the occurrence of demodulation jumps. Through simulations and experimental validation, the proposed algorithm demonstrates its capability of achieving the Cramér-Rao bound over a large dynamic range of optical pathlengths, initial phases and signal-to-noise ratios.
Ph.D.
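As a worked illustration of the kind of shot-noise-limited bound discussed above (the thesis treats a multi-parameter model with mixed shot, dark and readout noise, which is not reproduced here), the single-parameter Cramér-Rao bound for the optical pathlength under an assumed Poisson-noise fringe model can be written as:

```latex
% Assumed spectral fringe model sampled at wavenumbers k_i:
%   S_i(L) = A_i [ 1 + V cos(k_i L + phi_0) ]
% With shot-noise-limited (Poisson) detection, the Fisher information for the
% pathlength L and the resulting Cramér-Rao bound are
\[
  I(L) = \sum_i \frac{1}{S_i(L)} \left( \frac{\partial S_i}{\partial L} \right)^{2}
       = \sum_i \frac{\left[ A_i V k_i \sin(k_i L + \phi_0) \right]^{2}}
                      {A_i \left[ 1 + V \cos(k_i L + \phi_0) \right]},
  \qquad
  \operatorname{Var}\!\left(\hat{L}\right) \ge \frac{1}{I(L)} .
\]
```

Here the spectral envelope A_i, visibility V and initial phase phi_0 are treated as known; the thesis derives the joint bound when these are estimated as well.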
3

Gu, Ye. "Quantitative magnetization transfer imaging: validation and analysis tool development." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123021.

Abstract:
An on-resonance balanced steady-state free precession technique for quantitative magnetization transfer (qMT) imaging is examined through an initial validation process against the existing "gold-standard" off-resonance spoiled gradient-echo model. Numerical simulation and sensitivity analysis of the analytical model are performed and confirm the reliability of the analytical model for the normal range of magnetization transfer (MT) parameters. In vivo comparison between the balanced steady-state free precession and spoiled gradient-echo models shows agreement between the two. This new model is shown to be valid and promises to have advantages over the existing methods for its clinical practicality. A user-friendly software package for qMT simulation as well as data analysis and model fitting was also developed as part of this project. The package will be released in the public domain, with the intention of becoming a standard tool for qMT researchers and users.
4

Agrawal, Vishesh. "Quantitative Imaging Analysis of Non-Small Cell Lung Cancer." Thesis, Harvard University, 2016. http://nrs.harvard.edu/urn-3:HUL.InstRepos:27007763.

Abstract:
Quantitative imaging is a rapidly growing area of interest within the field of bioinformatics and biomarker discovery. Due to the routine nature of medical imaging, there is an abundance of high-quality imaging linked to clinical and genetic data. These data are particularly relevant for cancer patients, who receive routine CT imaging for staging and treatment purposes. However, current analysis of tumor imaging is generally limited to two-dimensional diameter measurements and assessment of anatomic disease spread. This conventional tumor-node-metastasis (TNM) staging system stratifies patients into treatment protocols, including decisions regarding adjuvant therapy. Recently, several studies have suggested that these images contain additional unique information regarding tumor phenotype that can further aid clinical decision-making. In this study, I aimed to develop the predictive capability of medical imaging. I employed the principles of quantitative imaging and applied them to patients with non-small cell lung cancer (NSCLC). Quantitative imaging, also termed radiomics, seeks to extract thousands of imaging data points related to tumor shape, size and texture. These data points can potentially be consolidated to develop a tumor signature, in the same way that a tumor might contain a genetic signature corresponding to mutational burden. To accomplish this, I applied radiomics analyses to patients with early- and late-stage NSCLC and tested these for correlation with both histopathological data and clinical outcomes. Patients with both early- and late-stage NSCLC were assessed. For locally advanced NSCLC (LA-NSCLC), I analyzed patients treated with preoperative chemoradiation followed by surgical resection. To assess early-stage NSCLC, I analyzed patients treated with stereotactic body radiation therapy (SBRT). Quantitative imaging features were extracted from CT imaging obtained prior to chemoradiation and post-chemoradiation prior to surgical resection. For patients who underwent SBRT, quantitative features were extracted from cone-beam CTs (CBCT) at multiple time points during therapy. Univariate and multivariate logistic regression were used to determine association with pathologic response. Concordance-index and Kaplan-Meier analyses were applied to the time-dependent endpoints of overall survival (OS), locoregional recurrence-free survival and distant metastasis (DM). In this study, 127 LA-NSCLC patients were identified and treated with preoperative chemoradiation and surgical resection; 99 SBRT patients were identified in a separate aim of this study. Reduction of CT-defined tumor volume (OR 1.06 [1.02-1.09], p=0.002), as a continuous variable per percentage point, was associated with pathologic complete response (pCR) and locoregional recurrence (LRR). Conventional response assessment determined by diameter (p=0.213) was not associated with pCR or any survival endpoints. Seven texture features on pre-treatment tumor imaging were associated with worse pathologic outcome (AUC 0.61-0.66). Quantitative assessment of lymph node burden demonstrated that pre-treatment and post-treatment volumes are significantly associated with both OS and LRR (CI 0.62-0.72). Textural analyses of these lymph nodes further identified 3 unique pre-treatment and 7 unique post-treatment features significantly associated with either LRR, DM or OS. Finally, early volume change was associated with overall survival in CBCT scans of early-stage NSCLC.
Quantitative assessment of NSCLC is thus strongly associated with pathologic response and survival endpoints. In contrast, conventional imaging response assessment was not predictive of pathologic response or survival endpoints. This study demonstrates the novel application of radiomics to lymph node texture, CBCT volume and patients undergoing neoadjuvant therapy for NSCLC. These examples highlight the potential of the rapidly growing field of quantitative imaging to better describe tumor phenotype. These results add to the growing radiomics literature evidence of a significant association between imaging, pathology and clinical outcomes. Further exploration will allow for more complete models linking tumor imaging phenotype to clinical outcomes.
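To make the reported association statistics concrete, the following minimal sketch fits a univariate logistic regression of pathologic complete response on percentage tumor-volume reduction and reports an odds ratio per percentage point, in the spirit of the analysis described above. The data values, variable names and use of statsmodels are illustrative assumptions, not the author's pipeline or results.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical inputs: per-patient percentage reduction in CT-defined tumor volume
# and a binary pathologic complete response (pCR) label.
volume_reduction_pct = np.array([42.0, 10.0, 65.0, 5.0, 45.0, 56.0, 12.0, 28.0])
pcr = np.array([1, 0, 1, 0, 0, 1, 0, 1])

# Univariate logistic regression: pCR ~ volume reduction.
X = sm.add_constant(volume_reduction_pct)
fit = sm.Logit(pcr, X).fit(disp=0)

odds_ratio = np.exp(fit.params[1])          # odds ratio per percentage point of reduction
ci_low, ci_high = np.exp(fit.conf_int()[1])  # 95% confidence interval for the OR
print(f"OR {odds_ratio:.2f} [{ci_low:.2f}-{ci_high:.2f}], p={fit.pvalues[1]:.3f}")
```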
5

Hinsdale, Taylor A. "Laser Speckle Imaging: A Quantitative Tool for Flow Analysis." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1251.

Abstract:
Laser speckle imaging, often referred to as laser speckle contrast analysis (LASCA), has been sought after as a quasi-real-time, full-field flow visualization method. It has been proven to be a valid and reliable qualitative method, but there has yet to be any definitive consensus on its ability to be used as a quantitative tool. The biggest impediment to quantifying speckle measurements is the introduction of additional non-dynamic speckle patterns from the surroundings. The dynamic speckle pattern under investigation is often obscured by noise caused by background static speckle patterns. One proposed solution to this problem is known as dynamic laser speckle imaging (dLSI). dLSI attempts to isolate the dynamic speckle signal from the previously mentioned background and provide a consistent dynamic measurement. This thesis investigates the use of this method over a range of experimental and simulated conditions. While it is plausible that dLSI could be used quantitatively, inconsistencies arose during analysis. Simulated data showed that if the mixed dynamic and static speckle patterns were modeled as the sum of two independent speckle patterns, increasing static contributions led to decreasing dynamic contrast contributions, something not expected from theory. Experimentation also showed that there were scenarios where scattering from the dynamic medium obscured scattering from the static medium, resulting in poor estimates of the velocities causing the dynamic scattering. In light of these observations, steps were proposed and outlined to investigate this method further. With more research it should be possible to establish a set of conditions under which dLSI is known to be accurate and quantitative.
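The quantity at the center of LASCA is the local speckle contrast, K = std/mean over a small sliding window, which decreases as flow blurs the speckle. The sketch below computes this basic quantity with an assumed window size; it illustrates the conventional contrast map on which methods such as dLSI build, not the dLSI background correction itself.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_speckle_contrast(frame, win=7):
    """Local speckle contrast K = std / mean over a win x win sliding window.
    Lower K corresponds to stronger blurring of the speckle, i.e. faster flow."""
    frame = frame.astype(np.float64)
    mean = uniform_filter(frame, size=win)
    mean_sq = uniform_filter(frame * frame, size=win)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)  # guard against small negative values
    return np.sqrt(var) / (mean + 1e-12)

# Usage on a hypothetical raw speckle frame:
# K = local_speckle_contrast(raw_frame, win=7)
```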
6

Elagamy, Samar H. "Advancing ATR-FTIR Imaging into the Realm of Quantitative Analysis." Miami University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=miami1574416533908128.

7

Premraj, Senthil Kumar. "Facilitating four-dimensional quantitative analysis of aortic MRI for clinical use." Thesis, University of Iowa, 2009. https://ir.uiowa.edu/etd/260.

Abstract:
Marfan syndrome leads to weakening of the thoracic aorta and ultimately to rupture, causing death of the patient. The current monitoring method involves measuring the diameter of the aorta near the heart. Our approach is to develop a new technology that will provide clinicians with the ability to evaluate the size, shape and motion of the entire thoracic aorta using four-dimensional cardiac MRI. This project adapts existing research algorithms to provide an integrated application for processing the images and provides novel measurements of the aorta from a data set of 32 normal subjects and 38 patients with serial scans.
8

Snyder, William C. "An in-scene parameter estimation method for quantitative image analysis." Online version of thesis, 1994. http://hdl.handle.net/1850/11061.

9

Biffar, Andreas. "Quantitative Analysis of Diffusion-weighted Magnetic Resonance Imaging in the Spine." Diss., lmu, 2010. http://nbn-resolving.de/urn:nbn:de:bvb:19-126230.

10

Wang, Kun. "Medical imaging of the heart : quantitative analysis of three-dimensional echocardiographic images." Thesis, University of Newcastle Upon Tyne, 2011. http://hdl.handle.net/10443/1392.

Abstract:
Accurate, reproducible determination of cardiac chamber volume, especially left ventricular (LV) volume, is important for clinical assessment, risk stratification, selection of therapy, and serial monitoring of patients with cardiovascular disease. Echocardiography is the most widely used imaging modality in the clinical diagnosis of left ventricular function abnormalities. In the last 15 years, developments in real-time three-dimensional echocardiography (RT3DE) have achieved superior accuracy and reproducibility compared with conventional two-dimensional echocardiography (2DE) for measurement of left ventricular volume and function. However, RT3DE suffers from the limitations inherent to the ultrasonic imaging modality and comes at the cost of increased effort in data handling and image analysis. There were two aims of this research project. Firstly, it aimed to develop new semi-automated algorithms for LV endocardial surface delineation and for quantification of LV volume and ejection fraction (EF) from clinical RT3DE images. Secondly, by assessing and comparing the performance of these algorithms in terms of accuracy and reproducibility, it aimed to investigate which factors in real-time 3D echo images influenced the performance of each algorithm, so that the advantages and drawbacks of 3D echo images could be better understood. The basic structure of this thesis is as follows: Chapter 1 introduces the background and the aims of the project. Chapter 2 describes the development of the new semi-automated algorithms. Chapters 3 to 6 present the four studies designed to assess and compare the accuracy and reproducibility of each algorithm. These studies were the balloon phantom study, the tissue-mimicking phantom study, the clinical cardiac magnetic resonance imaging study and the clinical contrast-enhanced 3D stress echo study. Chapter 7 summarises all these studies, draws conclusions, and describes future work. In conclusion, it has been shown that the semi-automated algorithms can measure LV volume and EF quantitatively in clinical 3D echo images. To achieve better accuracy and reproducibility, 3D echo images should be analysed in all three dimensions.
11

Vogel, Abby Jeanne. "Noninvasive imaging techniques as a quantitative analysis of Kaposi's sarcoma skin lesions." College Park, Md.: University of Maryland, 2007. http://hdl.handle.net/1903/7679.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Fischell Dept. of Bioengineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
12

White, Nathan S. "Quantitative diffusion magnetic resonance imaging of the brain: validation, acquisition, and analysis." Diss., [La Jolla]: University of California, San Diego, 2010. http://wwwlib.umi.com/cr/ucsd/fullcit?p3389027.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2010.
Title from first page of PDF file (viewed Feb. 18, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references.
13

Wong, Wilbur Chun-Kit. "Segmentation algorithms for quantitative analysis of vascular abnormalities on three dimensional angiography." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?COMP%202006%20WONG.

14

Bernhem, Kristoffer. "Quantitative bioimaging in single cell signaling." Doctoral thesis, KTH, Tillämpad fysik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215076.

Abstract:
Imaging of cellular samples has for several hundred years been a way for scientists to investigate biological systems. With the discovery of immunofluorescence labeling in the 1940s and later genetic fluorescent protein labeling in the 1980s, the most important aspects of imaging, contrast and specificity, were drastically improved. Ever since, we have seen an increased use of fluorescence imaging in biological research, and the applications and tools are constantly being developed further. Specific ion imaging has long been a way to discern signaling events in cell systems. Through the use of fluorescent ion reporters, ionic concentrations can be measured in living cells as a result of applied stimuli. Using Ca2+ imaging we have demonstrated that there is an inverse influence of plasma membrane voltage-gated calcium channels on the angiotensin II type 1 receptor (a protein involved in blood pressure regulation). This has direct implications for the treatment of hypertension (high blood pressure), one of the most common serious diseases in the Western world today, with approximately one billion afflicted adults worldwide in 2016. Extending from this lower-resolution live-cell bioimaging, I moved into super-resolution imaging. This thesis includes work on the interpretation of super-resolution imaging data of the neuronal Na+, K+-ATPase α3, an ion pump responsible for maintaining cell homeostasis during brain activity. The imaging data are correlated with electrophysiological measurements and computer models to point towards possible artefacts in super-resolution imaging that need to be taken into account when interpreting imaging data. Moreover, I proceeded to develop software for single-molecule localization microscopy analysis aimed at the wider research community and employed this software to identify expression artifacts in transiently transfected cell systems. In the concluding work, super-resolution imaging was used to map out the early steps of the intrinsic apoptotic signaling cascade in space and time. Using super-resolution imaging, I mapped out in intact cells at which time points and at which locations the various proteins involved in apoptotic regulation are activated and interact.


15

Mehndiratta, Amit. "Quantitative measurements of cerebral hemodynamics using magnetic resonance imaging." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:b9dfb1a4-f297-47b9-a95f-b60750065008.

Abstract:
Cerebral ischemia is a vascular disorder characterized by a reduction of blood supply to the brain, resulting in impaired metabolism and finally death of brain cells. Cerebral ischemia is a major clinical problem, associated with global morbidity and mortality rates of about 30%. Clinical management of cerebral ischemia relies heavily on perfusion analysis using dynamic susceptibility contrast MRI (DSC-MRI). DSC-MRI analysis is performed using mathematical models that simulate the underlying vascular physiology of the brain. Cerebral perfusion is calculated using perfusion imaging and is used as a marker of tissue health status, low perfusion being an indicator of impaired tissue metabolism. In addition to measurement of cerebral perfusion, it is possible to quantify the blood flow variation within the capillary network, referred to as cerebral microvascular hemodynamics. It has been hypothesized that microvascular hemodynamics are closely associated with tissue oxygenation and that hemodynamics might undergo a considerable amount of variation to maintain normal tissue metabolism under conditions of ischemic stress. However, with DSC-MRI perfusion imaging, quantification of cerebral hemodynamics still remains a big challenge. Singular value decomposition (SVD) is currently a standard methodology for estimation of cerebral perfusion with DSC-MRI in both research and clinical settings. It is a robust technique for quantification of cerebral perfusion; however, quantification of hemodynamic information cannot be achieved with SVD methods because of the non-physiological behaviour of SVD in microvascular hemodynamic estimation. SVD is sensitive to noise in the MR signal, which appears in the calculated microvascular hemodynamics, thus making them difficult to interpret for pathophysiological significance. Other methods, including model-based approaches and methods based on likelihood estimation, stochastic modeling and Gaussian processes, have been proposed. However, none of these has become established as a means to study tissue hemodynamics in perfusion imaging, possibly because of the associated constraints in these methodologies that limit their sensitivity to hemodynamic variation in vivo. The objective of the research presented in this thesis is to develop and evaluate a method to perform quantitative estimation of cerebral hemodynamics using DSC-MRI. A new Control Point Interpolation (CPI) method has been developed to perform a non-parametric analysis for DSC-MRI. The CPI method was found to be more accurate in the estimation of cerebral perfusion than the alternative methods. Capillary hemodynamics were calculated by estimating the transit time distribution of the tissue capillary network using the CPI method. The variations in transit time distribution showed quantitative differences between normal tissue and tissue under ischemic stress. The method has been corrected for the effects of macrovascular bolus dispersion and tested on a larger clinical cohort of patients with atherosclerosis. The CPI method is thus a promising method for quantifying cerebral hemodynamics using perfusion imaging. It is an attempt to evaluate the use of quantitative hemodynamic information in diagnostic and prognostic monitoring of patients with ischemia and vascular diseases.
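For context, the sketch below shows the baseline the abstract argues against: truncated-SVD deconvolution of the tissue curve with the arterial input function (AIF) to obtain a flow-scaled residue function and CBF. The Toeplitz formulation and the 20% singular-value threshold are common choices assumed here for illustration; this is not the proposed CPI method.

```python
import numpy as np

def svd_deconvolve(c_tissue, c_aif, dt, sv_thresh=0.2):
    """Truncated-SVD deconvolution for DSC-MRI perfusion.

    c_tissue : tissue concentration-time curve, shape (T,)
    c_aif    : arterial input function sampled at the same times, shape (T,)
    dt       : sampling interval in seconds
    Returns the flow-scaled residue function CBF * R(t) and CBF (its maximum).
    """
    T = len(c_aif)
    # Lower-triangular Toeplitz convolution matrix built from the AIF.
    A = dt * np.array([[c_aif[i - j] if i >= j else 0.0 for j in range(T)]
                       for i in range(T)])
    U, s, Vt = np.linalg.svd(A)
    # Regularize by discarding singular values below a fraction of the largest one.
    s_inv = np.where(s > sv_thresh * s.max(), 1.0 / s, 0.0)
    flow_residue = Vt.T @ (s_inv * (U.T @ c_tissue))
    return flow_residue, flow_residue.max()
```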
16

Uthama, Ashish. "3D spherical harmonic invariant features for sensitive and robust quantitative shape and function analysis in brain MRI." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/438.

Abstract:
A novel framework for quantitative analysis of shape and function in magnetic resonance imaging (MRI) of the brain is proposed. First, an efficient method to compute an invariant spherical harmonics (SPHARM) based feature representation for real-valued 3D functions was developed. This method addressed previous limitations in obtaining unique feature representations using a radial transform. The scale, rotation and translation invariance of these features enables direct comparisons across subjects. This eliminates the need for spatial normalization or manually placed landmarks required in most conventional methods [1-6], thereby simplifying the analysis procedure while avoiding potential errors due to misregistration. The proposed approach was tested on synthetic data to evaluate its improved sensitivity. Application to real clinical data showed that this method was able to detect clinically relevant shape changes in the thalami and brain ventricles of Parkinson's disease patients. This framework was then extended to generate functional features that characterize 3D spatial activation patterns within ROIs in functional magnetic resonance imaging (fMRI). To tackle the issue of intersubject structural variability when performing group studies on functional data, current state-of-the-art methods use spatial normalization techniques to warp the brain to a common atlas, a practice criticized for its accuracy and reliability, especially when pathological or aged brains are involved [7-11]. To circumvent these issues, a novel principal component subspace was developed to reduce the influence of anatomical variations on the functional features. Synthetic data tests demonstrate the improved sensitivity of this approach over the conventional normalization approach in the presence of intersubject variability. Furthermore, application to real fMRI data collected from Parkinson's disease patients revealed significant differences in patterns of activation in regions undetected by conventional means. This heightened sensitivity of the proposed features would be very beneficial when performing group analysis on functional data, since potential false negatives can significantly alter the medical inference. The proposed framework for reducing the effects of intersubject anatomical variations is not limited to functional analysis and can be extended to any quantitative observation in ROIs, such as diffusion anisotropy in diffusion tensor imaging (DTI), thus providing researchers with a robust alternative to the controversial normalization approach.
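The rotation invariance referred to above can be made explicit with a small sketch: once SPHARM coefficients c_{l,m} of a spherical function have been computed (the radial transform and coefficient estimation described in the abstract are assumed to have been done upstream and are not shown), the per-degree energies are unchanged by any 3D rotation of the underlying function.

```python
import numpy as np

def spharm_invariant_features(coeffs):
    """Rotation-invariant per-degree energies from SPHARM coefficients.

    coeffs : complex array of shape (L_max + 1, 2 * L_max + 1), where
             coeffs[l, m + L_max] holds c_{l, m} (entries with |m| > l unused).
    Returns E_l = sum_m |c_{l, m}|^2 for l = 0..L_max.
    """
    l_max = coeffs.shape[0] - 1
    return np.array([np.sum(np.abs(coeffs[l, l_max - l:l_max + l + 1]) ** 2)
                     for l in range(l_max + 1)])
```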
17

Chen, Shichao. "High-sensitivity Full-field Quantitative Phase Imaging Based on Wavelength Shifting Interferometry." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/102502.

Abstract:
Quantitative phase imaging (QPI) is a category of imaging techniques that can retrieve the phase information of a sample quantitatively. QPI features label-free contrast and non-contact detection. It has thus gained rapidly growing attention in biomedical imaging. Capable of resolving biological specimens at the tissue or cell level, QPI has become a powerful tool to reveal their structural, mechanical, physiological and spectroscopic properties. Over the past two decades, QPI has seen a broad spectrum of evolving implementations. However, only a few have seen successful commercialization. The challenges are manifold. A major problem for many QPI techniques is the necessity of a custom-made system that is hard to interface with existing commercial microscopes. For this type of QPI technique, the cost is high and the integration of different imaging modes requires nontrivial hardware modifications. Another limiting factor is insufficient sensitivity. In QPI, sensitivity characterizes the system repeatability and determines the quantification resolution of the system. With more emerging applications in cell imaging, the requirement for sensitivity also becomes more stringent. In this work, a category of highly sensitive full-field QPI techniques based on wavelength shifting interferometry (WSI) is proposed. On one hand, the full-field implementations, compared to point-scanning, spectral-domain QPI techniques, require no mechanical scanning to form a phase image. On the other, WSI has the advantage of preserving the integrity of the interferometer and compatibility with multi-modal imaging requirements. Therefore, the techniques proposed here have the potential to be readily integrated into ubiquitous lab microscopes and equip them with quantitative imaging functionality. In WSI, the shifts in wavelength can be applied in fine steps, termed swept-source digital holographic phase microscopy (SS-DHPM), or in a multi-wavelength-band manner, termed low-coherence wavelength shifting interferometry (LC-WSI). SS-DHPM brings in an additional capability to perform spectroscopy, whilst LC-WSI achieves a faster imaging rate, which has been demonstrated with live sperm cell imaging. In an attempt to integrate WSI with existing commercial microscopes, we also discuss the possibility of demodulation for low-cost sources and a common-path implementation. Besides experimentally demonstrating the high sensitivity (limited only by shot noise) of the proposed techniques, a novel sensitivity evaluation framework is also introduced for the first time in QPI. This framework examines the Cramér-Rao bound (CRB), algorithmic sensitivity and experimental sensitivity, and facilitates the diagnosis of algorithm efficiency and system efficiency. The framework can be applied not only to the WSI techniques we propose, but also to a broad range of QPI techniques. Several popular phase-shifting interferometry techniques as well as off-axis interferometry are studied. The comparisons between them are shown to provide insights into algorithm optimization and the energy efficiency of sensitivity.
Doctor of Philosophy
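One of the popular phase-shifting interferometry techniques whose sensitivity such a framework can evaluate is the classic four-step algorithm; the sketch below is that textbook case, shown only as a concrete example of the class of estimators being compared, not the SS-DHPM or LC-WSI demodulation developed in the thesis.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four frames with pi/2 phase shifts.
    Model: I_k = A + B*cos(phi + k*pi/2), k = 0..3, so that
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)
```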
18

Liang, Xuwei. "MODELING AND QUANTITATIVE ANALYSIS OF WHITE MATTER FIBER TRACTS IN DIFFUSION TENSOR IMAGING." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/818.

Abstract:
Diffusion tensor imaging (DTI) is a structural magnetic resonance imaging (MRI) technique that records the incoherent motion of water molecules and has been used to detect microstructural white matter alterations in clinical studies exploring certain brain disorders. A variety of DTI-based techniques for detecting brain disorders and facilitating clinical group analysis have been developed in the past few years. However, there are two crucial issues that have a great impact on the performance of those algorithms. One is that brain neural pathways are complicated 3D structures that cannot be appropriately or accurately approximated by simple 2D structures, while the other involves the computational efficiency of classifying white matter tracts. The first key area that this dissertation focuses on is the implementation of a novel computing scheme for estimating regional white matter alterations along neural pathways in 3D space. The mechanism of the proposed method relies on white matter tractography and geodesic distance mapping. We propose a mask scheme to overcome the difficulty of reconstructing thin tract bundles. Real DTI data are employed to demonstrate the performance of the proposed technique. Experimental results show that the proposed method bears great potential to provide a sensitive approach for determining white matter integrity in the human brain. Another core objective of this work is to develop a class of new modeling and clustering techniques with improved performance and noise resistance for separating reconstructed white matter tracts to facilitate clinical group analysis. Different strategies are presented to handle different scenarios. For white matter tracts reconstructed by whole-brain tractography, a Fourier descriptor model and a clustering algorithm based on a multivariate Gaussian mixture model and expectation maximization are proposed. Outliers are easily handled in this framework. Experimental results on real DTI data show that the proposed algorithm is relatively effective and may offer an alternative to existing white matter fiber clustering methods. For a small number of white matter fibers, a modeling and clustering algorithm capable of handling white matter fibers of unequal length and sharing no common starting region is also proposed and evaluated on real DTI data.
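A minimal sketch of the kind of pipeline described for whole-brain tractography is given below: each streamline is resampled by arc length, summarized by a few Fourier coefficients per coordinate, and the descriptors are clustered with a Gaussian mixture model fitted by expectation maximization (scikit-learn's implementation). The resampling density, number of coefficients and covariance structure are assumptions for illustration, not the dissertation's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fiber_fourier_descriptor(streamline, n_points=64, n_coeff=6):
    """Low-order Fourier descriptor of one streamline (an (N, 3) array of points)."""
    pts = np.asarray(streamline, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    t = np.linspace(0.0, arc[-1], n_points)             # resample by arc length
    resampled = np.column_stack([np.interp(t, arc, pts[:, d]) for d in range(3)])
    coeffs = np.fft.rfft(resampled, axis=0)[:n_coeff]   # keep the first few harmonics
    return np.concatenate([coeffs.real.ravel(), coeffs.imag.ravel()])

def cluster_fibers(streamlines, n_bundles=10):
    """Cluster streamlines into bundles via a Gaussian mixture model fitted by EM."""
    X = np.stack([fiber_fourier_descriptor(s) for s in streamlines])
    gmm = GaussianMixture(n_components=n_bundles, covariance_type="diag").fit(X)
    return gmm.predict(X)   # bundle label per streamline
```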
19

Rivière, Bathilde. "Objective and quantitative analysis of corneal transparency with clinical (in vivo) imaging technology." Thesis, KTH, Tillämpad fysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-241543.

20

Kreilkamp, Barbara A. K. "Advanced magnetic resonance imaging and quantitative analysis approaches in patients with refractory focal epilepsy." Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3017303/.

Abstract:
Background: Epilepsy has a high prevalence of 1%, which makes it the most common serious neurological disorder. The most difficult-to-treat type of epilepsy is temporal lobe epilepsy (TLE), with its most commonly associated lesion being hippocampal sclerosis (HS). About 30-50% of all patients undergoing resective surgery of epileptogenic tissue continue to have seizures postoperatively. Indication for this type of surgery is only given when lesions are clearly visible on magnetic resonance images (MRI). About 30% of all patients with focal epilepsy do not show an underlying structural lesion upon qualitative neuroradiological MRI assessment (MRI-negative).
Objectives: The work presented in this thesis uses MRI data to quantitatively investigate structural differences between the brains of patients with focal epilepsy and healthy controls using automated image preprocessing and analysis methods.
Methods: All patients studied in this thesis had electrophysiological evidence of focal epilepsy and underwent routine clinical MRI prior to participation in this study. There were two datasets, and both included a cohort of age-matched controls: (i) patients with TLE and associated HS who later underwent selective amygdalohippocampectomy (cohort 1) and (ii) MRI-negative patients with medically refractory focal epilepsy (cohort 2). The participants received high-resolution routine clinical MRI as well as additional sequences for gray and white matter (GM/WM) structural imaging. A neuroradiologist reviewed all images prior to analysis. Hippocampal subfield volume and automated tractography analysis was performed in patients with TLE and HS and related to post-surgical outcomes, while images of MRI-negative patients were analyzed using voxel-based morphometry (VBM) and manual/automated tractography. All studies were designed to detect quantitative differences between patients and controls, except for the hippocampal subfield analysis, as control data were not available and comparisons were limited to patients with persistent postoperative seizures and those without.
Results: 1. Automated hippocampal subfield analysis (cohort 1): The high-resolution hippocampal subfield segmentation technique could not establish a link between hippocampal subfield volume loss and post-surgical outcome. Ipsilateral and contralateral hippocampal subfield volumes did not correlate with clinical variables such as duration of epilepsy and age of onset of epilepsy.
2. Automated WM diffusivity analysis (cohort 1): Along-the-tract analysis showed that ipsilateral tracts of patients with right/left TLE and HS were more extensively affected than contralateral tracts, and the affected regions within tracts could be specified. The extent of hippocampal atrophy (HA) was not related to (i) the diffusion alterations of temporal lobe tracts or (ii) clinical characteristics of patients, whereas diffusion alterations of ipsilateral temporal lobe tracts were significantly related to age at onset of epilepsy, duration of epilepsy and epilepsy burden. Patients without any postoperative seizure symptoms (excellent outcomes) had more ipsilaterally distributed WM tract diffusion alterations than patients with persistent postoperative seizures (poorer outcomes), who were affected bilaterally.
3. Automated epileptogenic lesion detection (cohort 2): Comparison of individual patients against the controls revealed that focal cortical dysplasia (FCD) can be detected automatically using statistical thresholds. All sites of dysplasia reported at the start of the study were detected using this technique. Two additional sites in two different patients, which had previously escaped neuroradiological assessment, could be identified. When taking these statistical results into account during re-assessment of the dedicated epilepsy research MRI, the expert neuroradiologist was able to confirm these as lesions.
4. Manual and automated WM diffusion tensor imaging (DTI) analysis (cohort 2): The analysis of consistency across approaches revealed a moderate to good agreement between extracted tract shape, morphology and space, and a strong correlation between diffusion values extracted with both methods. While whole-tract DTI metrics determined using Automated Fiber Quantification (AFQ) revealed correlations with clinical variables such as age of onset and duration of epilepsy, these correlations were not found using the manual technique. The manual approach revealed more differences than AFQ in group comparisons of whole-tract DTI metrics. The along-the-tract analysis provided within AFQ gave a more detailed description of localized diffusivity changes along tracts, which correlated with clinical variables such as age of onset and epilepsy duration.
Conclusions: While hippocampal subfield volume loss in patients with TLE and HS was not related to any clinical variables or to post-surgical outcomes, WM tract diffusion alterations were more bilaterally distributed in patients with persistent postoperative seizures compared to patients with excellent outcomes. This may indicate that HS as an initial precipitating injury is not affected by clinical features of the disorder, and that automated hippocampal subfield mapping based on MRI is not sufficient to stratify patients according to outcome. The presence of persisting seizures may depend on other pathological processes such as seizure propagation through WM tracts and WM integrity. Automated and time-efficient three-dimensional voxel-based analysis may complement conventional visual assessments in patients with MRI-negative focal epilepsy and help to identify FCDs escaping routine neuroradiological assessment. Furthermore, automated along-the-tract analysis may identify widespread abnormal diffusivity and correlations between WM integrity loss and clinical variables in patients with MRI-negative epilepsy. However, automated WM tract analysis may differ from results obtained with manual methods, and therefore caution should be exercised when using automated techniques.
21

Mani, Meenakshi. "Quantitative Analysis of Open Curves in Brain Imaging: Applications to White Matter Fibers and Sulci." Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00851505.

Abstract:
There are roughly 100 cortical sulci in the human brain, and more than 100 billion white matter fiber tracts. Although the number, configuration and function of these two anatomical structures differ, they share a common geometric property: they are continuous open curves. This thesis studies how the features of open curves can be exploited to quantitatively analyze cortical sulci and white matter fiber bundles. The four features of an open curve, shape, size, orientation and position, have different properties, so the usual approach is to treat each of them separately with an ad hoc metric. We introduce an adapted Riemannian framework in which the feature spaces can be merged in order to analyze several features jointly. This approach makes it possible to match and compare curves using geodesic distances. Correspondences between curves are established automatically using an elastic metric. In this thesis, we validate the metrics introduced and demonstrate their practical applications, notably in the context of several important clinical problems. We first study the fibers of the corpus callosum specifically, in order to show how the choice of metric influences the clustering result. We then propose tools for computing summary statistics on curves, a first step towards their statistical analysis. We represent fiber bundles by the mean and variance of their main features, which reduces the data volume in white matter fiber analysis. Next, we present methods for detecting morphological changes and white matter damage. As for the cortical sulci, we address the problem of their labeling.
22

Mani, Meenakshi. "Quantitative analysis of open curves in brain imaging : applications to white matter fibres and sulci." Rennes 1, 2011. http://www.theses.fr/2011REN1S026.

Abstract:
This thesis is a study of how the physical attributes of open curves can be used to advantage in the many varied quantitative applications of white matter fibers and sulci. Shape, scale, orientation and position, the four physical features associated with open curves, have different properties so the usual approach has been to design different metrics and spaces to treat them individually. We use a comprehensive Riemannian framework where joint feature spaces allow for analysis of combinations of features. This is an alternative approach where we can compare curves using geodesic distances. In this thesis, we validate the metrics we use, demonstrate practical uses and apply the tools to important clinical problems. To begin, specific tract configurations in the corpus callosum are used to showcase clustering results that vary with the different Riemannian distance metrics. This nicely argues for the judicious selection of metrics in various applications, a central premise in our work. The framework also provides tools for computing statistical summaries of curves, a first step in statistical analysis. We represent fiber bundles with a mean and variance which describes their essential characteristics. This is a convenient way to work with the large volume in white matter fiber analysis. Next, we design and implement methods to detect morphological changes in the corpus callosum and to track progressive white matter disease. With sulci, we address the specific problem of labeling. An evaluation of physical features and methods such as clustering leads us to a pattern matching solution in which the sulcal configuration itself is the best feature
23

Vogel, Abby Jeanne. "Non-invasive imaging techniques as a quantitative analysis of skin damage due to ionizing radiation." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1667.

Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2004.
Thesis research directed by: Dept. of Biological Resources Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
24

Yu, Boliang. "3D analysis of bone ultra structure from phase nano-CT imaging." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI016/document.

Abstract:
The objective of this thesis was to quantify the lacuno-canalicular network of bone tissue from 3D images acquired with synchrotron phase nano-CT. This required optimizing the acquisition and phase-retrieval processes, as well as developing efficient image-processing methods for 3D segmentation and analysis. We first studied and evaluated different phase-retrieval algorithms. We extended Paganin's method to multiple propagation distances and evaluated and compared it with other methods, theoretically and then on our experimental data. We developed an analysis pipeline, including image segmentation, that takes into account the large data volumes to be processed. For the segmentation of lacunae, we chose methods such as median filtering, hysteresis thresholding and connected-component analysis. The segmentation of canaliculi relies on a region-growing method after enhancement of tubular structures. We computed porosity parameters, morphological descriptors of the lacunae, and the number of canaliculi per lacuna. We also introduced local parameters computed in the neighbourhood of the lacunae. We obtained results on images acquired at different voxel sizes (120 nm, 50 nm, 30 nm) and also studied the impact of voxel size on the results. Finally, these methods were used to analyse a set of 27 samples acquired at 100 nm within the ANR MULTIPS project. We performed a statistical analysis to study differences related to sex and age. Our work provides new quantitative data on bone tissue that should contribute to research on the mechanisms of bone fragility in relation to diseases such as osteoporosis.
Osteoporosis is a bone fragility disease resulting in abnormalities in bone mass and density. In order to prevent osteoporotic fractures, it is important to have a better understanding of the processes involved in fracture at various scales. As the most abundant bone cells, osteocytes may act as orchestrators of bone remodeling, regulating the activities of both osteoclasts and osteoblasts. The osteocyte system is deeply embedded inside the bone matrix and is also called the lacuno-canalicular network (LCN). Although several imaging techniques have recently been proposed, 3D observation and analysis of the LCN at high spatial resolution remains challenging. The aim of this work was to investigate and analyze the LCN in human cortical bone in three dimensions with an isotropic spatial resolution using magnified X-ray phase nano-CT. We performed image acquisition at voxel sizes of 120 nm, 100 nm, 50 nm and 30 nm at the ID16A and ID16B beamlines of the European Synchrotron Radiation Facility (ESRF, Grenoble). Our first study concerned phase retrieval, which is the first step of data processing and consists in solving a non-linear inverse problem. We proposed an extension of Paganin's method suited to multi-distance acquisitions, which was used to retrieve phase maps in our experiments. The method was compared theoretically and experimentally to the contrast transfer function (CTF) approach for a homogeneous object. The analysis of the reconstructed 3D images first requires segmentation of the LCN, including both the lacunae and the canaliculi. We developed a workflow based on median filtering, hysteresis thresholding and morphological filters to segment the lacunae. Concerning the segmentation of the canaliculi, we made use of vesselness enhancement to improve the visibility of line structures, variational region growing to extract the canaliculi, and connected-component analysis to remove residual noise. For the quantitative assessment of the LCN, we calculated morphological descriptors based on an automatic and efficient 3D analysis method developed in our group. For the lacunae, we calculated parameters such as the number of lacunae, the bone volume, the total volume of all lacunae, the lacunar volume density, the average lacunar volume, the average lacunar surface area, and the average length, width and depth of the lacunae. For the canaliculi, we first computed the total volume of all the canaliculi and the canalicular volume density. Moreover, we counted the number of canaliculi at different distances from the surface of each lacuna using an automatic method, which can be used to evaluate the ramification of the canaliculi. We report the statistical results obtained on the different groups and at different spatial resolutions, providing unique information about the three-dimensional organization of the LCN in human bone.
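A minimal sketch of the lacuna-segmentation steps named above (median filtering, hysteresis thresholding, connected-component analysis) is given below using SciPy and scikit-image; the intensity thresholds, window size and minimum component size are placeholders, not the values used in the thesis.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import apply_hysteresis_threshold
from skimage.measure import label, regionprops

def segment_lacunae(volume, low=0.3, high=0.6, min_voxels=50):
    """Segment candidate lacunae in a normalized 3D phase nano-CT volume
    (values assumed in [0, 1], pores brighter than the bone matrix)."""
    denoised = median_filter(volume, size=3)
    mask = apply_hysteresis_threshold(denoised, low, high)
    labels = label(mask, connectivity=3)                 # 26-connectivity in 3D
    regions = [r for r in regionprops(labels) if r.area >= min_voxels]
    volumes = np.array([r.area for r in regions])        # lacunar volumes in voxels
    centroids = np.array([r.centroid for r in regions])  # (z, y, x) positions
    return labels, volumes, centroids
```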
25

Selva, Luis Enrique. "Quantitative multivariate analysis of human brain structures in vivo using magnetic resonance imaging at 3.0 tesla." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1692357341&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

26

Dwyer, Michael G. "Development and application of novel algorithms for quantitative analysis of magnetic resonance imaging in multiple sclerosis." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6298.

Abstract:
This document is a critical synopsis of prior work by Michael Dwyer submitted in support of a PhD by published work. The selected work is focused on the application of quantitative magnetic resonance imaging (MRI) analysis techniques to the study of multiple sclerosis (MS). MS is a debilitating disease with a multi-factorial pathology, progression, and clinical presentation. Its most salient features are focal inflammatory lesions, but it also involves significant parenchymal atrophy and microstructural damage. As a powerful tool for in vivo investigation of tissue properties, MRI can provide important clinical and scientific information regarding these various aspects of the disease, but precise, accurate quantitative analysis techniques are needed to detect subtle changes and to cope with the vast amount of data produced in an MRI session. To address this, eight new techniques were developed by Michael Dwyer and his co-workers to better elucidate focal, atrophic, and occult/"invisible" pathology. These included: a method to better evaluate errors in lesion identification; a method to quantify differences in lesion distribution between scanner strengths; a method to measure optic nerve atrophy; a more precise method to quantify tissue-specific atrophy; a method sensitive to dynamic myelin changes; and a method to quantify iron in specific brain structures. Taken together, these new techniques are complementary and improve the ability of clinicians and researchers to reliably assess various key elements of MS pathology in vivo.
27

Ronteix, Gustave. "Inferring cell-cell interactions from quantitative analysis of microscopy images." Thesis, Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAX111.

Full text
Abstract:
In his prescient article "More is different", P. W. Anderson counters the reductionist argument by highlighting the crucial role of emergent properties in science. This is particularly true in biology, where complex macroscopic behaviours stem from communication and interaction loops between much simpler elements. As an illustration, I present three different instances in which I developed and used quantitative methods in order to learn new biological processes. For instance, the regulation and eventual rejection of tumours by the immune system is the result of multiple positive and negative regulation networks, influencing the behaviour of both the cancerous and the immune cells. To mimic these complex effects in vitro, I designed a microfluidic assay to challenge melanoma tumour spheroids with multiple T cells and observe the resulting interactions with high spatiotemporal resolution over long (>24 h) periods of time. Using advanced image analysis combined with mathematical modelling, I demonstrate that a positive feedback loop drives T cell accumulation at the tumour site, leading to enhanced spheroid fragmentation. This study sheds light on the initiation of the immune response at the single-cell scale, showing that even the very first contact between a T cell and a tumour spheroid increases the probability that the next T cell will reach the tumour. It also shows that it is possible to recapitulate complex antagonistic behaviours in vitro, which paves the way for more sophisticated protocols involving, for example, a more complex tumour micro-environment. Many biological processes are the result of complex interactions between cell types, particularly during development. The foetal liver is the locus of the maturation and expansion of the hematopoietic system, yet little is known about its structure and organisation. New experimental protocols have recently been developed to image this organ, and I developed tools to interpret and quantify these data, enabling the construction of a "network twin" of each foetal liver. This method makes it possible to combine the single-cell scale and the organ scale in the analysis, revealing the accumulation of myeloid cells around the blood vessels irrigating the foetal liver at the final stages of organ development. In the future, this technique will make it possible to analyse precisely the environmental niches of cell types of interest in a quantitative manner, which could in turn help us understand the developmental steps of crucial cell types such as hematopoietic stem cells. The interactions between bacteria and their environment are key to understanding the emergence of complex collective behaviours such as biofilm formation. One mechanism of interest is rheotaxis, whereby bacterial motion is driven by gradients in the shear stress of the fluid the cells are moving in. I developed a framework to calculate the semi-analytical equations guiding bacterial movement under shear stress. These equations predict behaviours that are not observed experimentally, but the discrepancy is resolved once rotational diffusion is taken into account. Experimental results are well fitted by the theoretical prediction: bacteria in droplets segregate asymmetrically when a shear is generated in the medium. Although relating to very different topics, these three studies highlight the pertinence of quantitative approaches for understanding complex biological phenomena: biological systems are more than the sum of their constituents.
APA, Harvard, Vancouver, ISO, and other styles
28

Verkhedkar, Ketki Dinesh. "Quantitative Analysis of DNA Repair and p53 in Individual Human Cells." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10660.

Full text
Abstract:
The goal of my research was to obtain a quantitative understanding of the mechanisms of DNA double-strand break (DSB) repair, and the activation of the tumor suppressor p53 in response to DSBs in human cells. In Chapter 2, we investigated how the kinetics of repair, and the balance between the alternate DSB repair pathways, nonhomologous end-joining (NHEJ) and homologous recombination (HR), change with cell cycle progression. We developed fluorescent reporters to quantify DSBs, HR and cell cycle phase in individual, living cells. We show that the rates of DSB repair depend on the cell cycle stage at the time of damage. We find that NHEJ is the dominant repair mechanism in G1 and in G2 cells even in the presence of a functional HR pathway. S and G2 cells use both NHEJ and HR, and higher use of HR strongly correlates with slower repair. Further, we demonstrate that the balance between NHEJ and HR changes gradually with cell cycle progression, with a maximal use of HR at the peak of active replication in mid-S. Our results establish that the presence of a sister chromatid does not affect the use of HR in human cells. Chapter 3 examines the sensitivity of the p53 pathway to DNA DSBs. We combined our fluorescent reporter for DSBs with a fluorescent reporter for p53, to quantify the level of damage and p53 activation in single cells. We find that the probability of inducing a p53 pulse increases linearly with the amount of damage. However, cancer cells do not have a distinct threshold of DSBs above which they uniformly induce p53 accumulation. We demonstrate that the decision to activate p53 is potentially controlled by cell-specific factors. Finally, we establish that the rates of DSB repair do not affect the decision to activate p53 or the dynamical properties of the p53 pulse. Collectively, this work emphasizes the importance of collecting quantitative dynamic information in single cells in order to gain a comprehensive understanding of how different DNA damage response pathways function in a coordinated manner to maintain genomic integrity.
APA, Harvard, Vancouver, ISO, and other styles
29

Millichope, Allen John. "Application of a charge coupled device Raman microscope imaging system for quantitative analysis of aqueous surfactant phases." Thesis, Liverpool John Moores University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Stacke, Karin. "Automatic Brain Segmentation into Substructures Using Quantitative MRI." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-128900.

Full text
Abstract:
Segmentation of the brain into sub-volumes has many clinical applications. Many neurological diseases are connected with brain atrophy (tissue loss). By dividing the brain into smaller compartments, volume comparison between the compartments can be made, as well as monitoring of local volume changes over time. The former is especially interesting for the left and right cerebral hemispheres, due to their symmetric appearance. By using automatic segmentation, the time-consuming step of manually labelling the brain is removed, allowing for larger-scale research. In this thesis, three automatic methods for segmenting the brain from magnetic resonance (MR) images are implemented and evaluated. Since none of the evaluated methods resulted in sufficiently good segmentations to be clinically relevant, a novel segmentation method, called SB-GC (shape bottleneck detection incorporated in graph cuts), is also presented. SB-GC uses quantitative MRI data as input, together with shape bottleneck detection and graph cuts, to segment the brain into the left and right cerebral hemispheres, the cerebellum and the brain stem. SB-GC shows promise of highly accurate and repeatable results for both healthy adult brains and more challenging cases such as children and brains containing pathologies.
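Since SB-GC combines shape bottleneck detection with graph cuts, the graph-cut step can be illustrated on its own. The sketch below is a generic binary min-cut segmentation, not SB-GC itself and not tied to the thesis data; it assumes the third-party PyMaxflow library, and the unary costs based on two intensity means are invented for illustration.

```python
import numpy as np
import maxflow  # PyMaxflow (assumed dependency, not from the thesis)

def binary_graph_cut(image, mu_fg, mu_bg, sigma=10.0, smoothness=5.0):
    """Toy binary segmentation of an image/volume by min-cut on a grid graph.

    Unary terms: squared distance to assumed foreground/background means.
    Pairwise terms: a constant smoothness weight between neighbouring voxels.
    """
    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(image.shape)

    # Pairwise (smoothness) edges between grid neighbours
    g.add_grid_edges(nodeids, weights=smoothness)

    # Unary (data) terms: cost of assigning each voxel to one or the other class
    cost_fg = (image - mu_fg) ** 2 / (2 * sigma ** 2)
    cost_bg = (image - mu_bg) ** 2 / (2 * sigma ** 2)
    g.add_grid_tedges(nodeids, cost_bg, cost_fg)

    g.maxflow()
    return g.get_grid_segments(nodeids)  # boolean label map: the two sides of the minimum cut
```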
APA, Harvard, Vancouver, ISO, and other styles
31

Miller, Brandon Lee. "Quantitative, Multiparameter Analysis of Fluorescently Stained, Negatively Enriched, Peripheral Blood from Cancer Patients." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1386005404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Gulley-Stahl, Heather Jane. "An Investigation into Quantitative ATR-FT-IR Imaging and Raman Microspectroscopy of Small Mineral Inclusions in Kidney Biopsies." Miami University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=miami1272042834.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Callen, David James Anthony. "Quantitative analysis of changes in limbic structures in probable Alzheimer's disease using coregistered SPECT and magnetic resonance imaging." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0028/NQ49952.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Jacobs, Emily Jean. "Spatial Resolution of Quantitative Electroencephalography and Functional Magnetic Resonance Imaging During Phoneme Discrimination Tasks: An Abbreviated Meta-Analysis." BYU ScholarsArchive, 2021. https://scholarsarchive.byu.edu/etd/8938.

Full text
Abstract:
Phonological processing, the ability to recognize and manipulate the sounds of one's native language, is an essential linguistic skill. Deficits in this skill may lead to decreased social, educational, and financial success (Kraus & White-Schwoch, 2019). Additionally, phonological disorders have been shown to be highly variable and individualized (Bellon-Harn & Cradeur-Pampolina, 2016) and therefore difficult to treat effectively. A better understanding of the neural underpinnings of phonological processing, including the underlying skill of phonemic discrimination, could lead to the development of more individualized and effective intervention. Several studies, some using quantitative electroencephalography (qEEG) and others using functional magnetic resonance imaging (fMRI), have been conducted to investigate these neural underpinnings. When considering the relative strengths and weaknesses of qEEG and fMRI, the scientific community has traditionally believed qEEG to be excellent at determining when brain activity occurs (temporal resolution), but to have limited abilities in determining where it occurs (spatial resolution). On the other hand, the reverse is believed to be true for fMRI. However, the spatial resolution of qEEG has improved over recent decades and some studies have reached levels of specificity comparable to fMRI. This thesis provides an abbreviated meta-analysis determining the accuracy and consistency of source references, or areas where brain activation is determined to originate from, in qEEG studies evaluating phonemic discrimination. Nineteen experiments were analyzed using the Comprehensive Meta-Analysis software. A study's event rate was defined as the number of times an anatomical area was coded as a source reference, divided by the number of participants in the study. Results show that each of these experiments had relatively low event rates, culminating in a summary event rate of 0.240. This indicates that qEEG does not provide source references that are as accurate or consistent as fMRI. This meta-analysis concludes that although there is research suggesting qEEG may have developed to be comparable to fMRI in spatial resolution, this is not supported in the analysis of qEEG studies focused on phonemic discrimination.
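The event-rate definition above (times an anatomical area is coded as a source reference, divided by the number of participants) and its pooling into a summary value can be written out explicitly. The snippet below is an illustrative calculation with made-up study counts and a simple inverse-variance pooling on the logit scale; it is not the Comprehensive Meta-Analysis software's model, and the numbers are not from the reviewed experiments.

```python
import numpy as np

def pooled_event_rate(events, participants):
    """Fixed-effect pooled event rate via inverse-variance weighting on the logit scale.

    events[i]       : times an anatomical area was coded as a source reference in study i
    participants[i] : number of participants in study i
    (Illustrative only; the pooling model is an assumption, not the thesis analysis.)
    """
    events = np.asarray(events, dtype=float)
    n = np.asarray(participants, dtype=float)
    # Continuity correction keeps rates of 0 or 1 finite on the logit scale
    p = (events + 0.5) / (n + 1.0)
    logit = np.log(p / (1 - p))
    var = 1.0 / (events + 0.5) + 1.0 / (n - events + 0.5)
    w = 1.0 / var
    pooled_logit = np.sum(w * logit) / np.sum(w)
    return 1.0 / (1.0 + np.exp(-pooled_logit))

# Hypothetical example: three studies pooled into one summary event rate
print(pooled_event_rate(events=[3, 5, 2], participants=[12, 20, 10]))
```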
APA, Harvard, Vancouver, ISO, and other styles
35

Ueda, Maho. "Combined multiphoton imaging and biaxial tissue extension for quantitative analysis of geometric fiber organization in human reticular dermis." Kyoto University, 2020. http://hdl.handle.net/2433/253178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Renjie. "Quantitative analysis of chromatin dynamics and nuclear geometry in living yeast cells." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30122/document.

Full text
Abstract:
Chromosome high-order architecture has been increasingly studied over the last decade thanks to technological breakthroughs in imaging and in molecular biology. It is now established that the structural organization of the genome is a key determinant in all aspects of genomic transactions. Although several models have been proposed to describe the folding of chromosomes, the physical principles governing their organization are still largely debated. The nucleus is the cell's compartment in which chromosomal DNA is confined. Geometrical constraints imposed by nuclear confinement are expected to affect high-order chromatin structure. However, the influence of nuclear structure on genome organization has not been quantified, mostly because accurate determination of nuclear shape and size is technically challenging. This thesis was organized along two axes: the first aim of my project was to study the dynamics and physical properties of chromatin in the S. cerevisiae yeast nucleus; the second was to develop techniques to detect and analyze the nuclear 3D geometry with high accuracy. Ribosomal DNA (rDNA) consists of repetitive sequences clustered in the nucleolus of budding yeast cells. First, I studied the dynamics of non-rDNA and rDNA chromatin in exponentially growing yeast cells. The motion of non-rDNA chromatin could be described by a two-regime Rouse model. The dynamics of the rDNA was very different and was well fitted by a power law with scaling exponent ~0.7. Furthermore, we compared the change in non-rDNA dynamics between wild-type strains and temperature-sensitive (TS) strains before and after global transcription by RNA polymerase II was inactivated. The fluctuations of non-rDNA genes after transcriptional inactivation were much higher than in the control strain, yet the motion of the chromatin remained consistent with Rouse dynamics. We therefore propose that chromatin in living cells is best modeled using an alternative Rouse model: the "branched Rouse polymer". Second, we developed "NucQuant", an automated fluorescence localization method which accurately interpolates the nuclear envelope (NE) position in a large cell population. This algorithm includes a post-acquisition correction of the measurement bias due to spherical aberration along the Z-axis. "NucQuant" can be used to determine the nuclear geometry under different conditions. Combined with microfluidic technology, it allowed accurate estimation of the shape and size of the nuclei in 3D along the entire cell cycle. "NucQuant" was also used to detect the distribution of nuclear pore complex (NPC) clusters under different conditions, and revealed their non-homogeneous distribution along the nuclear envelope. Upon reduction of the nucleolar volume, NPCs are concentrated in the NE flanking the nucleolus, suggesting a physical link between NPCs and the nucleolar content. In conclusion, we have further explored the biophysical properties of chromatin and proposed that chromatin in the nucleoplasm can be modeled as a "branched Rouse polymer". Moreover, we have developed "NucQuant", a set of computational tools to facilitate the study of nuclear shape and size. Together, these two contributions will allow the links between nuclear geometry and chromatin dynamics to be investigated.
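The anomalous-diffusion exponents quoted above (Rouse-like scaling for nucleoplasmic loci, ~0.7 for the rDNA) are typically obtained by fitting mean squared displacement (MSD) curves. Below is a minimal, generic MSD-and-exponent fit in NumPy; it is not the NucQuant code, and the single-trajectory, regular-sampling handling is a deliberate simplification.

```python
import numpy as np

def msd(trajectory):
    """MSD(tau) for a single (N, d) trajectory sampled at regular time intervals."""
    traj = np.asarray(trajectory, dtype=float)
    n = len(traj)
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, n)])

def anomalous_exponent(trajectory, max_lag=20):
    """Fit MSD(tau) ~ tau**alpha on a log-log scale and return alpha.

    alpha ~ 0.5 corresponds to Rouse-like motion; ~0.7 would match the rDNA
    behaviour reported in the abstract. max_lag is an arbitrary fitting range.
    """
    curve = msd(trajectory)[:max_lag]
    lags = np.arange(1, len(curve) + 1)
    alpha, _ = np.polyfit(np.log(lags), np.log(curve), 1)
    return alpha
```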
APA, Harvard, Vancouver, ISO, and other styles
37

Lee, Paul Chong Chan. "A QUALITATIVE AND QUANTITATIVE ANALYSIS OF SOFT TISSUE CHANGE EVALUATION BY ORTHODONTISTS IN CLASS II NON EXTRACTION ORTHODONTIC TREATMENT USING THE 3dMD SYSTEM." Master's thesis, Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/217032.

Full text
Abstract:
Oral Biology
M.S.
With the advent of cephalometrics in the 1930s, numerous studies have focused on the profile of a face to achieve a more esthetic orthodontic treatment outcome. With such heavy emphasis on facial esthetics, a shift in focus from the profile view to the oblique view has become necessary as the smile in the oblique view is what the general public evaluates. The purpose of this pilot study was to determine whether the current tools for diagnosis and treatment evaluation are sufficient. Currently, 2-dimensional composite photographs are utilized in evaluating the soft tissue. At Temple University, 3-dimensional images, which show all sides of the patient's face, are used adjunctively to 2-dimensional composite photographs. In this study, faculty members at the Temple University Department of Orthodontics were asked to complete surveys after viewing two different image modalities, 2-dimensional images and a 3-dimensional video of the same patient. They were asked to fill out the soft tissue goals for specific facial landmarks. Patient photos were in the smiling view as current literature lacks studies on this view. Faculty members' responses from analyzing the 2-dimensional images and 3-dimensional video for each patient were compared to determine which areas had frequent discrepancies from using two different image modalities. During the survey, a voice recorder captured any comments regarding the images. The ultimate goal of this qualitative pilot study was to identify when 3-dimensional imaging is necessary in treatment planning and evaluation, with an added hope to further advance research in 3-dimensional imaging and its vast possibilities to advance the field of orthodontics. Based on the data collected, the following conclusions were made: 1. The qualitative data highlighted that 3-dimensional imaging would be necessary in cases with skeletal deformities. 2. In the oblique view, 3-dimensional imaging is superior to 2-dimensional imaging by showing more accurate shadow, contour, and depth of the soft tissue. 3. Further improvement is necessary to create a virtual patient with treatment simulation abilities. 4. The comfort level among orthodontists of 2-dimensional imaging was higher than that of 3-dimensional imaging. With more widespread use of 3-dimensional imaging, more orthodontists may gradually reach a higher comfort level in using this relatively new technology. 5. Faculty members expressed high willingness to use 3-dimensional imaging if improvement in new technology could allow for more manipulation and accurate soft tissue prediction. 6. 3-dimensional imaging is superior in its efficiency, quick capture time, and lack of need for multiple images. Implementation of 3-dimensional imaging could streamline the records process and help with practice efficiency without compromising the image quality. 7. Both patients and orthodontists may benefit from using 3-dimensional imaging. Patients can see an accurate representation of themselves and possibly view their own treatment simulation upon further improvement in current technology. Orthodontists would benefit from much more accurate images that may serve as the virtual patient. 8. Besides the exorbitantly high cost, faculty members thought that more advances were needed and the current benefit was not great enough to justify the investment. The results were consistent with other studies that used the oblique view in that the 2-dimensional oblique view lacks depth and does not provide adequate information.
With further improvement in current 3-dimensional imaging, this technology can benefit orthodontists in visualizing their patients. In addition, patients can benefit by hopefully seeing a live and accurate simulation of themselves instantly as a virtual patient. With these benefits of 3-dimensional imaging, it may one day be the new standard in patient records in the field of orthodontics.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
38

Nordbrøden, Mats. "Optimization of Magnetic Resonance Diffusion Tensor Imaging for Visualization and Quantification of Periprostatic Nerve Fibers." Thesis, KTH, Skolan för teknik och hälsa (STH), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-179658.

Full text
Abstract:
Prostatectomy, the surgical resection of the whole prostate, is a common treatment for high-risk prostate cancer. Common side effects include long-term urinary and/or erectile dysfunction due to damage inflicted to periprostatic nerves. The aim of this study was to identify an optimal magnetic resonance diffusion tensor imaging protocol for visualization and quantification of these nerves, as pre-surgery visualization may help nerve-sparing surgery. Scanner filters, parameters for accelerated scan techniques, diffusion-related acquisition parameters and post-processing tractography parameters were investigated. Seven healthy volunteers were scanned with a state-of-the-art 3 T MRI scanner with varying protocol parameters. Diffusion data were processed and analysed using Matlab and Explore DTI. The resulting protocol recommendation included a normalized scanner filter, a parallel imaging acceleration factor of 2, partial Fourier sampling of 6/8, a right-left phase encoding direction, a b-value of 600 s/mm2, monopolar gradient polarity with applied eddy current correction, four acquisitions of 12 diffusion-sensitizing gradient directions, and a reverse phase encoding approach for correction of geometrical image distortions induced by static field inhomogeneity. For post-processing tractography, the recommended parameters were a lower limit for fractional anisotropy of 0.05, a minimum tract length of 3 centimetres and a maximum turning angle between voxels of 60 degrees. The limited parameter range that was tested and the low number of volunteers can be regarded as limitations of this study. Future work should address these issues. Furthermore, the feasibility of periprostatic nerve tracking with the optimized protocol should be tested in a patient study.
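The recommended tractography cut-offs (fractional anisotropy at least 0.05, minimum tract length 3 cm, maximum inter-voxel turning angle 60 degrees) amount to a simple streamline filter. The sketch below applies such criteria to precomputed streamlines with NumPy; it illustrates the selection rules only, not the tractography software used in the thesis, and the function and argument names are invented.

```python
import numpy as np

def keep_streamline(points, fa_values, min_fa=0.05, min_length_mm=30.0, max_angle_deg=60.0):
    """Return True if a streamline passes FA, length and turning-angle criteria.

    points    : (N, 3) array of streamline coordinates in millimetres
    fa_values : (N,) fractional anisotropy sampled along the streamline
    Thresholds mirror the protocol recommendation; the filter itself is illustrative.
    """
    if np.any(fa_values < min_fa):
        return False

    steps = np.diff(points, axis=0)                 # segment vectors
    length = np.sum(np.linalg.norm(steps, axis=1))  # total streamline length in mm
    if length < min_length_mm:
        return False

    # Turning angle between consecutive segments
    a, b = steps[:-1], steps[1:]
    cos_theta = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12)
    angles = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return bool(np.all(angles <= max_angle_deg))
```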
APA, Harvard, Vancouver, ISO, and other styles
39

Xiong, Fengzhu. "Integrated Analysis of Patterning, Morphogenesis, and Cell Divisions in Embryonic Development by in toto Imaging and Quantitative Cell Tracking." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11160.

Full text
Abstract:
Patterning, morphogenesis, and cell divisions are distinct processes during development yet are concurrent and likely highly integrated. However, it has been challenging to investigate them as a whole. Recent advances in imaging and labeling tools make it possible to observe live tissues with high coverage and resolution. In this dissertation work, we developed a novel imaging platform that allowed us to fully capture the early neural tube formation process in live zebrafish embryos at cellular resolution. Importantly, these datasets allow us to reliably track single neural progenitors. These tracks carry information on the history of cell movement, shape change, division, and gene expression all together. By comparing tracks of different progenitor fates, we found they show a spatially noisy response to Sonic hedgehog (Shh) and become specified in a positionally mixed manner, in surprising contrast to the "French Flag" morphogen patterning model. Both cell movement and division contribute to cell mixing. In addition, we decoupled the temporal and genetic regulatory network (GRN) noises in Shh interpretation using tracks that carry both Shh signaling and cell fate reporters. Our tracks suggest that, after specification, progenitors undergo sorting to self-assemble a sharp pattern. Consistent with this hypothesis, we found ectopically induced progenitors move to correct locations. Furthermore, we show that proper adhesion is required for cell sorting to happen (Chapters 2 and 3). In the cleavage stage embryos, the cells on the surface undergo shape changes followed by lineage separation and differentiation. We quantitatively measured this morphogenesis process and tracked cell divisions. By applying a mathematical model we uncover a predictive, and perhaps general link between cell division orientation, mechanical interaction, and the morphogenetic behavior of the whole surface layer (Chapter 4). Finally, we discuss the concepts and tools of cell tracking including a multi-color cell labeling method we developed by modifying the "Brainbow" system (Chapter 5). Together this dissertation showcases the importance and promise of live observation based, quantitative and integrated analysis in our understanding of complex multi-cellular developmental processes.
APA, Harvard, Vancouver, ISO, and other styles
40

Alizadeh, Mahdi. "Multi Spectral Data Analysis for Diagnostic Enhancement of Pediatric Spinal Cord Injury." Diss., Temple University Libraries, 2017. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/422530.

Full text
Abstract:
Bioengineering
Ph.D.
A key challenge in the imaging of spinal cord injury (SCI) patients is the ability to accurately determine structural or functional abnormality as well as the level and severity of injury. Over the years a substantial number of studies have addressed this issue; however, most of them utilized qualitative analysis of the acquired imaging data. Quantitative analysis of patients with SCI is an important issue in both diagnosis and treatment planning. Hence, in this work new multispectral magnetic resonance (MR) image-based approaches were developed for high-throughput extraction of quantitative features from pediatric spinal cord MR images and subsequent analysis using decision support algorithms. This may potentially improve diagnostic, prognostic, and predictive accuracy between typically developing (TD) pediatric spinal cord subjects and patients with SCI. The technique extracts information from both axial structural MRI images (such as T2-weighted gradient echo images) and functional MRI images (such as diffusion tensor images). The extracted data contain first-order statistics (diffusion tensor tractography and histogram-based texture descriptors), second-order statistics (co-occurrence indices) and higher-order statistics (wavelet primitives). MRI data from a total of 43 subjects were collected: 23 healthy TD subjects aged 6-16 years (11.94±3.26, mean ± standard deviation) who had no evidence of SCI or pathology, and 20 SCI subjects aged 7-16 years (11.28±3.00), all scanned using a 3.0T Siemens Verio MR scanner. Standard 4-channel neck matrix and 8-channel spine array RF coils were used for data collection. After data collection, various post-processing methods were used to improve the data quality. A novel ghost artifact suppression technique was implemented and tested. Initially, 168 quantitative measures of multi-spectral images (functional and structural) were calculated using regions of interest (ROIs) manually drawn on the whole cord along the entire spinal cord, anatomically localized by an independent board-certified neuroradiologist. These measures were then statistically compared between the TD and SCI groups using a standard least-squares linear regression model based on the restricted maximum likelihood (REML) method. Statistically significant changes were shown in 44 features: 30 features obtained from functional images and 14 features selected from structural images. It was also shown that the quantitative measures of the spinal cord in DTI and T2W-GRE images above and below the injury level were altered significantly. Finally, tractography measures were obtained on a subset of the patients to demonstrate quantitative analysis of the extracted white matter structures. Overall, the results show that the proposed techniques may have potential to be used as surrogate biomarkers for detection of the injured spinal cord. These measures enable us to quantify the functional and structural plasticity in chronic SCI and consequently have the potential to improve our understanding of damage and recovery in diseased states of the spinal cord.
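Several of the second-order (co-occurrence) descriptors mentioned above can be computed directly from an axial slice. The following sketch uses scikit-image's grey-level co-occurrence matrix as a generic example; the quantization level, offsets and ROI handling are assumptions and do not reproduce the thesis feature set.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def cooccurrence_features(roi, levels=32):
    """Second-order texture features from a 2D ROI (e.g. a spinal cord cross-section).

    The ROI is rescaled to `levels` grey levels; distances and angles are placeholders.
    """
    roi = np.asarray(roi, dtype=float)
    q = np.clip((roi - roi.min()) / (np.ptp(roi) + 1e-12) * (levels - 1), 0, levels - 1)
    q = q.astype(np.uint8)

    glcm = graycomatrix(q, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    # Average each property over the chosen distances and angles
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "correlation", "energy", "homogeneity")}
```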
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
41

Kawahira, Naofumi. "Quantitative analysis of 3D tissue deformation reveals key cellular mechanism associated with initial heart looping." Kyoto University, 2020. http://hdl.handle.net/2433/254507.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Antunes, Jacob T. "Quantitative Treatment Response Characterization In Vivo: Use Cases in Renal and Rectal Cancers." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1467987922.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Morimoto, Emiko. "Evaluation of Focus Laterality in Temporal Lobe Epilepsy: A Quantitative Study Comparing Double Inversion-Recovery MR Imaging at 3T with FDG-PET." Kyoto University, 2014. http://hdl.handle.net/2433/189344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Negre, Erwan. "Couplage ablation laser et imagerie spectrale rapide pour identification et analyses de plastiques : concept, développement et validation." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1036/document.

Full text
Abstract:
Laser-Induced Breakdown Spectroscopy (LIBS) is an analytical technique based on the emission of a plasma arising from the laser-matter interaction. All the elements of the periodic table can be detected, with a detection limit close to the ppm, regardless of the nature of the sample: solid, liquid or gas. LIBS can perform elemental as well as molecular analysis, which makes it a credible technique for the identification of organic materials, especially with reference to plastic waste sorting, where the established techniques struggle to fulfil all the requirements of this application. Nevertheless, the laser-induced plasma is a transient and inhomogeneous process that is often hard to master in comparison with an inductively coupled plasma. As a consequence, the LIBS technique still remains marginal for applications demanding reliable and frequently quantitative information. This doctoral research, carried out within a CIFRE partnership between the CRITT Matériaux Alsace and the Institut Lumière Matière in Lyon, examines the two issues mentioned above. A new LIBS instrument is first presented. It is organized around several monitoring tools driven by dedicated software, which allowed us to considerably reduce the fluctuations of the LIBS signal caused by the different factors involved in the laser ablation process (laser energy, sample and detection positions, etc.). The efficiency of this new LIBS instrument is then illustrated through the quantification of trace elements in glass matrices.
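Quantification of trace elements with LIBS is commonly carried out through a calibration curve relating an emission-line intensity to known concentrations in reference samples. The snippet below is a generic univariate calibration with NumPy, given purely as an illustration; it does not describe the instrument or the glass-matrix study itself, and the example numbers are made up.

```python
import numpy as np

def calibration_curve(concentrations, intensities):
    """Least-squares fit of intensity = slope * concentration + intercept."""
    slope, intercept = np.polyfit(concentrations, intensities, 1)
    return slope, intercept

def concentration_from_intensity(intensity, slope, intercept):
    """Invert the calibration line to estimate the concentration of an unknown sample."""
    return (intensity - intercept) / slope

# Hypothetical reference standards (ppm) and background-corrected line intensities
slope, intercept = calibration_curve([10, 50, 100, 200], [120, 540, 1060, 2110])
print(concentration_from_intensity(800, slope, intercept))
```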
APA, Harvard, Vancouver, ISO, and other styles
45

Varghese, Bino Abel. "Quantitative Computed-Tomography Based Bone-Strength Indicators for the Identification of Low Bone-Strength Individuals in a Clinical Environment." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1300389623.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Risser, Laurent. "Analyse quantitative de réseaux micro-vasculaires intra-corticaux." Toulouse 3, 2007. http://www.theses.fr/2007TOU30011.

Full text
Abstract:
This work is a quantitative investigation of intra-cortical micro-vascular networks using a new micro-tomography imaging protocol which permits a complete scan of the entire gray matter at micron resolution. The first part of the PhD is devoted to the analysis of very large 3D images of the cortex of healthy rats and marmoset primates, as well as of tumour-implanted rat brains. Classical methods are used for binarisation and skeletonization of the images. The influence of the experimental protocol on the obtained images is evaluated. A fast and original method is proposed to fill the gaps of incompletely injected vessels, and its efficiency is tested and validated. The second part of the PhD is concerned with the statistical analysis of the geometrical, local and topological properties of micro-vascular networks. Geometrical properties relate to the spatial distribution of vessels, studied through the vascular density and the vessel/tissue distance map. We brought to the fore the multi-scale properties of these fields using fractal and spectral analysis, up to a cut-off which defines the typical length-scale of an elementary representative volume. We found that this length-scale differs significantly between normal and tumoral tissues. The local analysis of vessel segment lengths systematically exhibits an exponential distribution, which defines a characteristic segment length. This length differs significantly between adult and newborn primate tissues. This analysis is consistent with the results obtained on the vascular density and leads to the conclusion that developmental angiogenesis occurs mainly at the capillary scale.
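The exponential distribution of vessel segment lengths reported above has a single parameter, so the characteristic length can be estimated directly from the data. The sketch below shows the maximum-likelihood estimate with NumPy on simulated lengths; it is a generic illustration, not the analysis pipeline used in the thesis, and the simulated scale value is arbitrary.

```python
import numpy as np

def characteristic_length(segment_lengths):
    """MLE of the exponential scale parameter: P(L) ~ exp(-L / L_c) / L_c.

    For an exponential distribution the maximum-likelihood estimate of the
    characteristic length L_c is simply the sample mean.
    """
    lengths = np.asarray(segment_lengths, dtype=float)
    return lengths.mean()

# Simulated example: 10,000 segment lengths drawn with L_c = 60 micrometres
rng = np.random.default_rng(0)
print(characteristic_length(rng.exponential(scale=60.0, size=10_000)))
```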
APA, Harvard, Vancouver, ISO, and other styles
47

Deshmane, Anagha Vishwas. "Partial Volume Quantification Using Magnetic Resonance Fingerprinting." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1491572611420032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Moles, Lopez Xavier. "Characterization and Colocalization of Tissue-Based Biomarker Expression by Quantitative Image Analysis: Development and Extraction of Novel Features." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209330.

Full text
Abstract:
Proteins are the actual actors in the (normal or disrupted) physiological processes, and immunohistochemistry (IHC) is a very efficient means of visualizing and locating protein expression in tissue samples. By comparing pathologic and normal tissue, IHC is thus able to evidence protein expression alterations. This is the reason why IHC plays a growing role in evidencing tissue-based biomarkers in clinical pathology for diagnosing various diseases and directing personalized therapy. Therefore, IHC biomarker evaluation significantly impacts the adequacy of the therapeutic choices for patients with serious pathologies, such as cancer. However, this evaluation may be time-consuming and difficult to apply in practice due to the absence of precise positive cut-off values as well as staining (i.e. protein expression) heterogeneity intra- and inter-sample. Quantifying IHC staining patterns has thus become a crucial need in histopathology. For this task, automated image analysis has multiple advantages, such as avoiding the evidenced effects of human subjectivity. The recent introduction of whole-slide scanners has opened a wide range of possibilities for addressing challenging image analysis problems, including the identification of tissue-based biomarkers. Whole-slide scanners are devices that are able to image whole tissue slides at resolutions up to 0.1 micrometers per pixel, often referred to as virtual slides. In addition to the quantification of IHC staining patterns, virtual slides are invaluable tools for the implementation of digital pathology workflows. The present work aims to make several contributions towards this current digital shift in pathology. Our first contribution was to propose an automated virtual slide sharpness assessment tool. Although modern whole-slide scanner devices resolve most image standardization problems, focusing errors are still likely to be observed, requiring a sharpness assessment procedure. Our proposed tool will ensure that images provided to subsequent pathologist examination and image analysis are correctly focused. Virtual slides also enable the characterization of biomarker expression heterogeneity. Our second contribution was to propose a method to characterize the distribution of densely stained regions in the case of nuclear IHC biomarkers, with a focus on the identification of highly proliferative tumor regions by analyzing Ki67-stained tissue slides. Finally, as a third contribution, we propose an efficient means to register virtual slides in order to characterize biomarker colocalization on adjacent tissue slides. This latter contribution opens new prospects for the analysis of more complex questions at the tissue level and for finely characterizing disease processes and/or treatment responses.
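A virtual-slide sharpness check of the kind described above can be prototyped with a simple per-tile focus metric. The sketch below uses the variance of the Laplacian, a common focus measure, on tiles of a greyscale image; it is only a stand-in for the thesis method, and the tile size and threshold are arbitrary assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def blurry_tiles(image, tile=512, threshold=5.0):
    """Flag tiles of a greyscale virtual slide whose Laplacian variance is low (likely out of focus)."""
    image = np.asarray(image, dtype=float)
    flagged = []
    for r in range(0, image.shape[0] - tile + 1, tile):
        for c in range(0, image.shape[1] - tile + 1, tile):
            patch = image[r:r + tile, c:c + tile]
            if ndi.laplace(patch).var() < threshold:
                flagged.append((r, c))  # top-left corner of a suspect tile
    return flagged
```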
Doctorat en Sciences de l'ingénieur
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
49

Li, Yubing. "Analyse de vitesse par migration quantitative dans les domaines images et données pour l’imagerie sismique." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM002/document.

Full text
Abstract:
Active seismic experiments are widely used to characterize the structure of the subsurface. Migration velocity analysis techniques aim at recovering the background velocity model controlling the kinematics of wave propagation. The first step consists of obtaining reflectivity images by migrating the observed data in a given macro velocity model. The estimated model is then updated, assessing the quality of the background velocity model through image coherency or focusing criteria. Classical migration techniques, however, do not provide a sufficiently accurate reflectivity image, leading to incorrect velocity updates. Recent investigations propose to couple asymptotic inversion, which can remove migration artifacts in practice, with velocity analysis in the subsurface-offset domain for better robustness. This approach requires large amounts of memory and cannot currently be extended to 3D. In this thesis, I propose to transpose the strategy to the more conventional common-shot migration-based velocity analysis. I analyze how the approach can deal with complex models, in particular in the presence of low-velocity anomaly zones or discontinuous reflectivities. Additionally, it requires less memory than its counterpart in the subsurface-offset domain. I also propose to extend inversion velocity analysis to the data domain, leading to a more linearized inverse problem than classic waveform inversion. I establish formal links between the data-fitting principle and image coherency criteria by comparing the new approach to other reflection-based waveform inversion techniques. The methodologies are developed and analyzed on 2D synthetic data sets.
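Focusing and coherency criteria of the kind invoked in migration velocity analysis can be made concrete with the classical semblance measure computed across a gather. The following NumPy sketch evaluates windowed semblance on a (time, trace) panel; it is a textbook formula given for illustration and is not the specific image-domain or data-domain functional developed in the thesis.

```python
import numpy as np

def semblance(panel, window=11):
    """Windowed semblance of a gather `panel` with shape (n_time, n_traces).

    S(t) = sum_w (sum_x a)^2 / (n_traces * sum_w sum_x a^2), computed in a
    sliding time window of `window` samples; values near 1 indicate flat,
    coherent events (e.g. a well-focused image gather).
    """
    num = np.sum(panel, axis=1) ** 2                # (stack over traces)^2 per time sample
    den = panel.shape[1] * np.sum(panel ** 2, axis=1)
    kernel = np.ones(window)
    num_w = np.convolve(num, kernel, mode="same")   # accumulate over the time window
    den_w = np.convolve(den, kernel, mode="same")
    return num_w / (den_w + 1e-12)
```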
APA, Harvard, Vancouver, ISO, and other styles
50

Fleckenstein, Florian Nima [Verfasser]. "3D Quantitative tumour burden analysis in patients with hepatocellular carcinoma before TACE : comparing single-lesion vs. multi-lesion imaging biomarkers as predictors of patient survival / Florian Nima Fleckenstein." Berlin : Medizinische Fakultät Charité - Universitätsmedizin Berlin, 2018. http://d-nb.info/1153769026/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
