Dissertations / Theses on the topic 'High Content Imaging Analysis'


Consult the top 50 dissertations / theses for your research on the topic 'High Content Imaging Analysis.'


1

Alibhai, Dominic. "Fluorescence lifetime imaging applied to multiwell plate FRET assays for high content analysis." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/40284.

Full text
Abstract:
The work reported in this thesis aims to develop and apply new assays for high content analysis (HCA) based on novel automated fluorescence lifetime imaging microscopy (FLIM) technology adapted for multiwell plate readers and evaluate their potential for drug discovery. Two such FLIM multiwell plate readers were investigated, one based on a custom-modified commercially available plate reader (GE Healthcare In Cell 1000) and the other based on an Olympus IX-81 wide-field microscope adapted for use as an automated multiwell plate imaging system. To evaluate the potential for drug discovery, an exemplar assay of HIV-1 Gag protein aggregation was developed and used to evaluate the performance of the multiwell plate readers. HIV-1 Gag is the major structural protein within HIV-1 virions and is thought to interact with other viral proteins, the viral genome and with a large number of host cell factors to orchestrate the formation of new virions. HIV-1 Gag protein oligomerisation is a precursor to virion production at the plasma membrane of the target cell during the HIV virus life cycle and so represents a potential readout for testing the efficacy of anti-HIV drugs. The expression of HIV-1 Gag alone within living cells leads to the formation of virus-like particles (VLPs), which provide a convenient and safe means to study this late stage of the HIV life cycle. This exemplar assay is based on Förster Resonance Energy Transfer (FRET) between appropriately (fluorescently) labelled HIV-1 Gag proteins. By tagging HIV-1 Gag proteins with either a donor fluorophore or an acceptor fluorophore, a FRET signal can be utilised to indicate when the oligomerisation brings the donor and acceptor within close (< ~10 nm) proximity, and this can be read out and mapped using FLIM to observe the decrease in donor fluorescence lifetime that is a consequence of FRET.
In the first instance the Gag proteins were stochastically labelled with either CFP or YFP and FRET was mapped by imaging the CFP lifetime. The assay could also be implemented by labelling the Gag protein with CFP only and detecting the small change in lifetime that occurs during homo-FRET of CFP. To evaluate and validate the assay, biological controls were developed using mutants of the Gag protein that lacked the ability to be myristoylated, a pre-requisite for the Gag protein to assemble at the cell plasma membrane where the VLPs are formed. Comparisons were made using both myristoylated (WT) and non-myristoylated (mutated) HIV-1 Gag proteins to demonstrate each plate reader's ability to read out levels of HIV-1 Gag protein aggregation. To further characterise the performance of the assay and the plate readers, a dose response study was undertaken using an inhibitor of the enzyme responsible for myristoylation in eukaryotic cells and the assay was fully characterised following standard pharmaceutical industry guidelines. Accounting for experimental factors such as pipetting errors, plate edge effects, spatial uniformity and drift over time, these characterisations and dose response studies yielded Z' factors to reflect the practical quality of the assays and thereby provided a robust means to compare different approaches, including different labelling strategies (e.g. hetero-FRET v. homo-FRET), imaging strategies (e.g. wide-field v. optically sectioned) and data analysis strategies (e.g. fitting models, image segmentation). To my knowledge this represents the first such robust and systematic evaluation of FLIM assays, e.g. using Z' from dose response curves, and is therefore of value to the pharmaceutical industry and other potential users of FLIM HCA.
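The two quantities at the heart of this abstract have standard textbook forms, summarised here in generic notation (not taken from the thesis itself): the FRET efficiency inferred from the drop in donor lifetime, and the Z' factor computed from positive/negative control wells.

```latex
% FRET efficiency from donor fluorescence lifetimes
% (\tau_{DA}: donor lifetime with acceptor present, \tau_D: donor alone):
E = 1 - \frac{\tau_{DA}}{\tau_{D}}

% Z' screening-quality factor from control means and standard deviations:
Z' = 1 - \frac{3\,(\sigma_{p} + \sigma_{n})}{\left|\mu_{p} - \mu_{n}\right|}
```

A Z' above 0.5 is the conventional pharmaceutical-industry threshold for an excellent assay, which is why Z' is the natural figure of merit for comparing the labelling and imaging strategies listed above.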
2

Mclay, Colin Anthony. "A distributed imaging framework for the analysis and visualization of multi-dimensional bio-image datasets, in high content screening applications." Thesis, Kingston University, 2015. http://eprints.kingston.ac.uk/35863/.

Full text
Abstract:
This research presents the DFrame, a modular and extensible distributed framework that simplifies and thus encourages the use of parallel processing, and that is especially targeted at the analysis and visualization of multi-dimensional bio-image datasets in high content screening applications. These applications typically apply pipelines of complex and time consuming algorithms to multiple bio-image dataset streams and it is highly desirable to use parallel resources to exploit the inherent concurrency, in order to achieve results in much reduced time scales. The DFrame allows pluggable extension and reuse of models implementing parallelizing patterns, and similarly provides for application extensibility. This facilitates the composition of novel parallelized 3D image processing applications. A client server architecture is adopted to support both batch and long running interactive sessions. The DFrame client provides functions to author applications as workflows, and mediates interaction with the server. The DFrame server runs as multiple cooperating distributed instances that together orchestrate the execution of tasks according to a workflow's implied order. An inversion of control paradigm is used to drive the loading and running of the models that themselves then coordinate to load and parallelize the running of each task specified in a workflow. The design opens up the opportunity to incorporate advanced management features, including parallel pattern selection based on application context, dynamic 'in application' resource allocation, and adaptable partitioning and composition strategies. Generic partitioning and composition mechanisms for supporting both task and data parallelism are provided, with specific implementation support applicable to the domain of 3D image processing.
Evaluations of the DFrame are conducted at the component level, where specific parallelizing models are applied to discrete 3D image filtering and segmentation operators and to a ray tracing implementation. A complete integrated case study is then presented that composes component entities into multiple image processing pipelines to more fully demonstrate the power and utility of the DFrame, not only in terms of performance, but also to highlight the extensibility and adaptability that permeates through the design, and its applicability to the domain of multi-dimensional image processing. Results are discussed that evidence the utility of the approach, and avenues of future work are considered.
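The partition/compose form of data parallelism the abstract describes can be sketched with the Python standard library. This is a generic illustration, not the DFrame itself: threads stand in for the framework's distributed workers, and `box_filter_slab` is a hypothetical stand-in for a real 3D filtering operator.

```python
from concurrent.futures import ThreadPoolExecutor

def box_filter_slab(slab):
    """Smooth each row of a 2D slab (list of lists) with a 3-point moving average."""
    out = []
    for row in slab:
        n = len(row)
        out.append([sum(row[max(0, i - 1):i + 2]) / len(row[max(0, i - 1):i + 2])
                    for i in range(n)])
    return out

def parallel_filter(volume, workers=4):
    """Data parallelism: partition a 3D volume into 2D slabs, filter the
    slabs concurrently, then compose the results back in slab order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(box_filter_slab, volume))
```

In a distributed framework like the one described, the same partition/filter/compose roles would be played by cooperating server instances rather than threads, but the ordering guarantee of `map` is the composition step in miniature.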
3

Makovoz, Gennadiy. "Latent Semantic Analysis as a Method of Content-Based Image Retrieval in Medical Applications." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/227.

Full text
Abstract:
The research investigated whether a Latent Semantic Analysis (LSA)-based approach to image retrieval can map pixel intensity into a smaller concept space with good accuracy and reasonable computational cost. From a large set of computed tomography (CT) images, a retrieval query found all images for a particular patient based on semantic similarity. The effectiveness of the LSA retrieval was evaluated based on precision, recall, and F-score. This work extended the application of LSA to high-resolution CT radiology images. The images were chosen for their unique characteristics and their importance in medicine. Because CT images are intensity-only, they carry less information than color images. They typically have greater noise, higher intensity, greater contrast, and fewer colors than a raw RGB image. The study targeted the level of intensity for image feature extraction. The focus of this work was a formal evaluation of the LSA method in the context of a large number of high-resolution radiology images. The study reported on preprocessing and retrieval time and discussed how reduction of the feature set size affected the results. LSA is an information retrieval technique that is based on the vector-space model. It works by reducing the dimensionality of the vector space, bringing similar terms and documents closer together. Matlab software was used to report on retrieval and preprocessing time. In determining the minimum size of concept space, it was found that the best combination of precision, recall, and F-score was achieved with 250 concepts (k = 250). This research reported precision of 100% on 100% of the queries and recall close to 90% on 100% of the queries with k = 250. Selecting a higher number of concepts did not improve recall and resulted in significantly increased computational cost.
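The three evaluation metrics named in this abstract have standard set-based definitions; a minimal sketch follows (the function name and the sample image identifiers are illustrative, not taken from the study).

```python
def precision_recall_f(retrieved, relevant):
    """Standard retrieval metrics: precision, recall and F-score
    of a retrieved set against the ground-truth relevant set."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f

# Example: two of three retrieved images are actually relevant.
p, r, f = precision_recall_f({"img1", "img2", "img3"}, {"img1", "img2", "img4"})
```

With these definitions, the reported "precision of 100% on 100% of the queries" means every query returned only relevant images, while recall near 90% means roughly one in ten relevant images was missed.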
4

Datar, Akshata. "HIGH CONTENT IMAGING ASSAYS ON MICROARRAY CHIP BASED PLATFORM." Cleveland State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=csu1462795576.

Full text
5

Goode, Ashley Harford. "High resolution ultrasonic imaging system." Thesis, University of Portsmouth, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329278.

Full text
6

Jacques, Richard. "Statistical analysis of high content screening data." Thesis, University of Sheffield, 2009. http://etheses.whiterose.ac.uk/2220/.

Full text
Abstract:
High throughput screening experiments are typically used within the pharmaceutical industry for the identification and evaluation of candidate drugs. Using a high throughput screen with an automated imaging platform allows a large number of compounds to be tested in a biological assay in order to identify any activity inhibiting or activating a biological process. High throughput fluorescent images contain information that can be used to define fully the effects of a compound on cells. It is for this reason that fluorescent imaging assays have been termed high content screening (Clemons, 2004). The studies analysed in this thesis involve the use of an automated robotic system to administer compounds to cellular assays and take high content images. These images are then analysed and quantified using imaging algorithms to produce a set of variables. Each high content screen may extend to a million or more individual assays. Supervised classification methods have important applications in high content screening experiments where they are used to predict which compounds have the potential to be developed into new drugs. The use of supervised classification for high content screening data is investigated and a new classification method is proposed for batches of compounds where the rule is updated sequentially using information from the classification of previous batches. This methodology accounts for the possibility that the training data are not a representative sample of the test data and that the underlying group distributions may change as new compounds are analysed. Unsupervised classification methods are used in the analysis of high content screening experiments to evaluate potential new drugs. The study in this thesis considers clustering compounds based on their toxicological effect on the liver.
Drug-induced liver injury is the most common cause of non-approval and withdrawal by the Food and Drug Administration (Ainscow, 2007a) and therefore this is an important stage in drug development.
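The idea of a classification rule that is updated sequentially as batches arrive can be illustrated with a toy nearest-centroid classifier whose class centroids are re-estimated after every batch. This is a generic illustration of the concept, not the method developed in the thesis; the class labels are invented for the example.

```python
class SequentialCentroidClassifier:
    """Nearest-centroid rule whose centroids are updated batch by batch,
    so the rule can track drift in the underlying group distributions."""
    def __init__(self):
        self.sums = {}    # class label -> per-feature running sums
        self.counts = {}  # class label -> number of samples seen

    def update(self, batch, labels):
        """Fold a new batch of feature vectors into the running centroids."""
        for x, y in zip(batch, labels):
            if y not in self.sums:
                self.sums[y] = [0.0] * len(x)
                self.counts[y] = 0
            self.sums[y] = [s + v for s, v in zip(self.sums[y], x)]
            self.counts[y] += 1

    def predict(self, x):
        """Assign x to the class with the nearest (current) centroid."""
        def dist2(y):
            centroid = [s / self.counts[y] for s in self.sums[y]]
            return sum((a - b) ** 2 for a, b in zip(x, centroid))
        return min(self.sums, key=dist2)
```

Because each `update` call shifts the centroids, early batches do not have to be representative of later ones, which is the motivation the abstract gives for a sequentially updated rule.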
7

Chaipraparl, Pornpun. "Thai High School Computer Literacy: A Content Analysis." Thesis, University of North Texas, 1989. https://digital.library.unt.edu/ark:/67531/metadc330995/.

Full text
Abstract:
This study examined the extent to which each computer literacy objective domain, each specific mode of instruction, and each type of question were treated in Thai high school computer literacy text materials. Two textbooks and their accompanying teachers' manuals were examined using three analytical schemes as frameworks for the examinations. The Minnesota Educational Computing Consortium (MECC) computer literacy objectives were used to classify the content in the text materials in order to determine the degree of emphasis on each computer literacy objective domain. The Hawaii State Department of Education (HSDE) instructional modes were used to classify the content in the text materials in order to determine the degree of emphasis on each mode of instruction. Bloom's taxonomy of educational objectives, cognitive domain, was used to classify the review questions and exercises in the text materials in order to determine the degree of emphasis on each cognitive level. Detailed findings are given as numerals, percentages, and decimal values. Perspectives are offered on the need for textbooks which reflect the values and feelings objectives. Conclusions were that (a) text materials focus most on the programming/algorithms objectives and tend to exclude the values and feelings objectives; (b) text materials use only three modes of instruction, focusing first on the topic mode, second on the tutee mode, and last on the tool mode; (c) text material questions focus more on higher cognitive than on lower cognitive levels.
8

Wang, Yalin. "Document analysis : table structure understanding and zone content classification /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/6079.

Full text
9

Wang, Peng. "High resolution imaging and analysis using aberration-corrected STEM." Thesis, University of Liverpool, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433775.

Full text
10

Munoz, Antonio. "High performance platform independent content analysis for network processing." Thesis, Queen's University Belfast, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.602692.

Full text
Abstract:
The Internet is the global infrastructure for communication, education, entertainment and commerce. As network systems increase in connection speeds and data volume, high performance network intrusion detection and prevention systems must evolve to protect users and businesses from organized and opportunistic crimes motivated by financial and political interests. A detailed study of several well-known network intrusion detection and prevention systems (e.g. Snort) revealed the platform dependency of security rule notation. This thesis describes the design and implementation of Snort2regex, an efficient and accurate tool for compiling Snort rules into regular expression syntax. The regular expression syntax provides a platform independent notation that ensures high levels of security in multiple environments. Several alternative parallel architectures are introduced to attempt to improve the performance of network intrusion detection and prevention systems. In order to show the benefits of the Snort2regex compiler, this work also presents SnortEX, a novel software based network intrusion detection and prevention system that benefits from the scalability of the parallel architectures previously introduced. The proposed architecture of SnortEX was evaluated, and several methods of optimization were studied to improve the performance and integration between the Snort2regex compiled rule set and SnortEX. Finally, the system is benchmarked and shows a 3 to 17x improvement in performance against a standard Snort implementation.
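The flavour of translation a Snort-rule-to-regex compiler performs can be shown on one small corner of the Snort rule language: the `content` option, whose patterns are literal text with `|..|`-delimited hex byte sections. This is a toy sketch only; the real Snort2regex handles the full rule syntax, and `content_to_regex` is a hypothetical name.

```python
import re

def content_to_regex(content):
    """Translate a Snort-style content pattern (literal text with |NN NN|
    hex byte sections) into an equivalent regular expression string."""
    # Split on |...| sections; odd-indexed parts are hex byte runs.
    parts = re.split(r"\|([0-9A-Fa-f ]+)\|", content)
    out = []
    for i, part in enumerate(parts):
        if i % 2 == 0:
            out.append(re.escape(part))  # literal text, regex-escaped
        else:
            out.extend(r"\x%02x" % int(b, 16) for b in part.split())
    return "".join(out)

# "|0D 0A|" is the CRLF terminating an HTTP request line.
pattern = content_to_regex("GET /admin|0D 0A|")
```

Emitting plain regex syntax is what makes the notation platform independent: the same compiled pattern can feed any regex engine, software or hardware, rather than being tied to Snort's own matcher.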
11

Wildenhain, Jan. "Application of multivariate statistics and machine learning to phenotypic imaging and chemical high-content data." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/25665.

Full text
Abstract:
Image-based high-content screens (HCS) hold tremendous promise for cell-based phenotypic screens. Challenges related to HCS include not only storage and management of data, but critical analysis of the complex image-based data. I implemented a data storage and screen management framework and developed approaches for data analysis of a number of high-content microscopy screen formats. I visualized and analysed pilot screens to develop a robust multi-parametric assay for the identification of genes involved in DNA damage repair in HeLa cells. Further, I developed and implemented new approaches for image processing and screen data normalization. My analyses revealed that the ubiquitin ligase RNF8 plays a central role in the DNA-damage response and that a related ubiquitin ligase RNF168 causes the cellular and developmental phenotypes characteristic for the RIDDLE syndrome. My approaches also uncovered a role for the MMS22L-TONSL complex in double-strand break (DSB) repair and its role in the recombination-dependent repair of stalled or collapsed replication forks. The discovery of novel bioactive molecules is a challenge because the fraction of active candidate molecules is usually small and confounded by noise in experimental readouts. Cheminformatics can improve robustness of chemical high-throughput screens and functional genomics data sets by taking structure-activity relationships into account. I applied statistics, machine learning and cheminformatics to different data sets to discern novel bioactive compounds. I showed that phenothiazines and apomorphines are regulators for cell differentiation in murine embryonic stem cells. Further, I pioneered computational methods for the identification of structural features that influence the degradation and retention of compounds in the nematode C. elegans. I used cheminformatics to assemble a comprehensive screening library of previously approved drugs for redeployment in new bioassays.
A combination of chemical genetic interactions, cheminformatics and machine learning allowed me to predict novel synergistic antifungal small molecule combinations from sensitized screens with the drug library. In another study on the biological effects of commonly prescribed psychoactive compounds, I discovered a strong link between lipophilicity and bioactivity of compounds in yeast and unexpected off-target effects that could account for unwanted side effects in humans. I also investigated structure-activity relationships and assessed the chemical diversity of a compound collection that was used to probe chemical-genetic interactions in yeast. Finally, I have made these methods and tools available to the scientific community, including an open source software package called MolClass that allows researchers to make predictions about bioactivity of small molecules based on their chemical structure.
12

Dietz, Yasmin [Verfasser]. "Etablierung eines high-content-imaging-basierten in-vitro-Testsystems zur Evaluierung genotoxischer Substanzen / Yasmin Dietz." Mainz : Universitätsbibliothek Mainz, 2014. http://d-nb.info/1050054040/34.

Full text
13

Kelly, Douglas James. "An automated fluorescence lifetime imaging multiwell plate reader : application to high content imaging of protein interactions and label free readouts of cellular metabolism." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/29131.

Full text
Abstract:
This thesis reports on work performed in the development and application of an automated plate reading microscope implementing wide-field time-gated fluorescence lifetime imaging technology. High content analysis (HCA) imaging assays enabled by automated microscopy platforms allow hundreds of conditions to be tested in a single experiment. Though fluorescence lifetime imaging (FLIM) is established in life sciences applications as a method whereby quantitative information may be extracted from time-resolved fluorescence signals, FLIM has not been widely adopted in an HCA context. The FLIM plate reader developed throughout this PhD has been designed to allow HCA-FLIM experiments to be performed and has been demonstrated to be capable of recording multispectral, FLIM and bright field data from 600 fields of view in less than four hours. FLIM is commonly used as a means of reading out Förster resonance energy transfer (FRET) between fluorescent fusion proteins in cells. Using the FLIM plate reader to investigate large populations of cells per experimental condition without significant user input has allowed statistically significant results to be obtained in FRET experiments that present relatively small changes in mean fluorescence lifetime. This capability has been applied to investigations of FOXM1 SUMOylation in response to anthracycline treatment, and to studies of the spatiotemporal activation profiles of small GTPases. Furthermore, the FLIM plate reader allows FLIM-FRET to be applied to protein-protein interaction screening. The application of the instrument to screening RASSF proteins for interaction with MST1 is discussed. The FLIM plate reader was also configured to utilise ultraviolet excitation radiation and optimised for the measurement of autofluorescence lifetime for label-free assays of biological samples.
Experiments investigating the autofluorescence lifetime of live cells under the influence of metabolic modulators are presented alongside the design considerations necessary when using UV excitation for HCA-FLIM.
14

Leuchowius, Karl-Johan. "High Content Analysis of Proteins and Protein Interactions by Proximity Ligation." Doctoral thesis, Uppsala universitet, Molekylära verktyg, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-119530.

Full text
Abstract:
Fundamental to all biological processes is the interplay between biomolecules such as proteins and nucleic acids. Studies of interactions should therefore be more informative than mere detection of expressed proteins. Preferably, such studies should be performed in material that is as biologically and clinically relevant as possible, i.e. in primary cells and tissues. In addition, to be able to take into account the heterogeneity of such samples, the analyses should be performed in situ to retain information on the sub-cellular localization where the interactions occur, enabling determination of the activity status of individual cells and allowing discrimination between e.g. tumor cells and surrounding stroma. This requires assays with an utmost level of sensitivity and selectivity. Taking these issues into consideration, the in situ proximity-ligation assay (in situ PLA) was developed, providing localized detection of proteins, protein-protein interactions and post-translational modifications in fixed cells and tissues. The high sensitivity and selectivity afforded by the assay's requirement for dual target recognition in combination with powerful signal amplification enables visualization of single protein molecules in intact single cells and tissue sections. To further increase the usefulness and application of in situ PLA, the assay was adapted to high content analysis techniques such as flow cytometry and high content screening. The use of in situ PLA in flow cytometry offers the possibility for high-throughput analysis of cells in solution with the unique characteristics offered by the assay. For high content screening, it was demonstrated that in situ PLA can enable cell-based drug screening of compounds affecting post-translational modifications and protein-protein interactions in primary cells, offering superior abilities over current assays. 
The methods presented in this thesis provide powerful new tools to study proteins in genetically unmodified cells and tissues, and should offer exciting new possibilities for molecular biology, diagnostics and drug discovery. 
15

Bergström, Simon, and Oscar Ivarsson. "Automation of a Data Analysis Pipeline for High-content Screening Data." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-122913.

Full text
Abstract:
High-content screening is a part of the drug discovery pipeline dealing with the identification of substances that affect cells in a desired manner. Biological assays covering a large set of compounds are developed and screened, and the output has a multidimensional structure. Data analysis is performed manually by an expert with a set of tools and this is considered to be too time consuming and unmanageable when the amount of data grows large. This thesis therefore investigates and proposes a way of automating the data analysis phase through a set of machine learning algorithms. The resulting implementation is a cloud based application that can support the user in selecting which features are relevant for further analysis. It also provides techniques for automated processing of the dataset and training of classification models which can be utilised for predicting sample labels. An investigation of the workflow for analysing data was conducted before this thesis. It resulted in a pipeline that maps the different tools and software to what goal they fulfil and which purpose they have for the user. This pipeline was then compared with a similar pipeline but with the implemented application included. This comparison demonstrates clear advantages over previous methodologies: the application supports a more automated way of performing data analysis.
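One simple form of the feature-screening support described above is ranking features by variance, so near-constant readouts can be dropped before training classification models. This is a generic illustration of the idea, not the application's actual algorithm; the function name and data are invented.

```python
def rank_features_by_variance(samples):
    """Rank feature indices by sample variance (highest first), a crude
    screen for features that carry enough signal to be worth keeping."""
    n = len(samples)
    dims = len(samples[0])
    means = [sum(s[d] for s in samples) / n for d in range(dims)]
    variances = [sum((s[d] - means[d]) ** 2 for s in samples) / n
                 for d in range(dims)]
    return sorted(range(dims), key=lambda d: variances[d], reverse=True)

# Features 0 and 1 are constant across samples; feature 2 varies.
order = rank_features_by_variance([[1.0, 0.0, 5.0],
                                   [1.0, 0.0, 9.0],
                                   [1.0, 0.0, 1.0]])
```

In a real pipeline this kind of ranking would typically be one of several criteria (alongside correlation with the readout of interest) that the tool surfaces to the user instead of having an expert inspect every feature by hand.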
16

Nyström, Daniel. "High Resolution Analysis of Halftone Prints : A Colorimetric and Multispectral Study." Doctoral thesis, Linköpings universitet, Digitala Medier, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15888.

Full text
Abstract:
To reproduce color images in print, the continuous tone image is first transformed into a binary halftone image, producing various colors by discrete dots with varying area coverage. In halftone prints on paper, physical and optical dot gains generally occur, making the print look darker than expected, and making the modeling of halftone color reproduction a challenge. Most available models are based on macroscopic color measurements, averaging the reflectance over an area that is large in relation to the halftone dots. The aim of this study is to go beyond the macroscopic approach, and study halftone color reproduction on a micro-scale level, using high resolution images of halftone prints. An experimental imaging system, combining the accuracy of color measurement instruments with a high spatial resolution, opens up new possibilities to study and analyze halftone color prints. The experimental image acquisition offers a great flexibility in the image acquisition setup. Besides trichromatic RGB filters, the system is also equipped with a set of 7 narrowband filters, for multi-channel images. A thorough calibration and characterization of all the components in the imaging system is described. The spectral sensitivity of the CCD camera, which cannot be derived by direct measurements, is estimated using least squares regression. To reconstruct spectral reflectance and colorimetric values from the device response, two conceptually different approaches are used. In the model-based characterization, the physical model describing the image acquisition process is inverted, to reconstruct spectral reflectance from the recorded device response. In the empirical characterization, the characteristics of the individual components are ignored, and the functions are derived by relating the device response for a set of test colors to the corresponding colorimetric and spectral measurements, using linear and polynomial least squares regression techniques.
Micro-scale images, referring to images whose resolution is high in relation to the resolution of the halftone, allow for measurements of the individual halftone dots, as well as the paper between them. To capture the characteristics of large populations of halftone dots, reflectance histograms are computed as well as 3D histograms in CIEXYZ color space. The micro-scale measurements reveal that the reflectance for the halftone dots, as well as the paper between the dots, is not constant, but varies with the dot area coverage. By incorporating the varying micro-reflectance in an expanded Murray-Davies model, the nonlinearity caused by optical dot gain can be accounted for without applying the nonphysical exponentiation of the reflectance values, as in the commonly used Yule-Nielsen model. Due to their different intrinsic nature, physical and optical dot gains need to be treated separately when modeling the outcome of halftone prints. However, in measurements of reflection colors, physical and optical dot gains always co-exist, making the separation a difficult task. Different methods to separate the physical and optical dot gain are evaluated, using spectral reflectance measurements, transmission scans and micro-scale images. Further, the relation between the physical dot gain and the halftone dot size is investigated, demonstrated with FM halftones of various print resolutions. The physical dot gain exhibits a clear correlation with the dot size and the dot gain increase is proportional to the increase in print resolution. The experimental observations are followed by discussions and a theoretical explanation.
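The two classical halftone reflectance models contrasted above have compact standard forms, given here in generic notation (a: fractional dot area coverage; R_d, R_p: reflectance of dot and paper; n: the empirical Yule-Nielsen factor):

```latex
% Murray-Davies: linear mixing, no optical dot gain
R = a\,R_d + (1 - a)\,R_p

% Yule-Nielsen: the nonphysical exponent 1/n absorbs optical dot gain
R^{1/n} = a\,R_d^{1/n} + (1 - a)\,R_p^{1/n}
```

The expanded Murray-Davies model described in the abstract keeps the linear form but lets R_d and R_p vary with dot coverage, which is how it accounts for optical dot gain without resorting to the 1/n exponent.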
17

Muckli, Lars. "Emergence of visual content in the human brain: investigations of amblyopia, blindsight and high-level motion perception with fMRI." Aachen : Maastricht : Shaker ; University Library, Maastricht University [Host], 2002. http://arno.unimaas.nl/show.cgi?fid=7138.

Full text
18

Cao, Hongfei. "High-throughput Visual Knowledge Analysis and Retrieval in Big Data Ecosystems." Thesis, University of Missouri - Columbia, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13877134.

Full text
Abstract:

Visual knowledge plays an important role in many highly skilled applications, such as medical diagnosis, geospatial image analysis and pathology diagnosis. Medical practitioners are able to interpret and reason about diagnostic images based on not only primitive-level image features such as color, texture, and spatial distribution but also their experience and tacit knowledge which are seldom articulated explicitly. This reasoning process is dynamic and closely related to real-time human cognition. Due to a lack of visual knowledge management and sharing tools, it is difficult to capture and transfer such tacit and hard-won expertise to novices. Moreover, many mission-critical applications require the ability to process such tacit visual knowledge in real time. Precisely how to index this visual knowledge computationally and systematically still poses a challenge to the computing community.

My dissertation research results in novel computational approaches for high-throughput visual knowledge analysis and retrieval from large-scale databases using the latest technologies in big data ecosystems. To provide a better understanding of visual reasoning, human gaze patterns are qualitatively measured spatially and temporally to model observers’ cognitive process. These gaze patterns are then indexed in a NoSQL distributed database as a visual knowledge repository, which is accessed using various unique retrieval methods developed through this dissertation work. To provide meaningful retrievals in real time, deep-learning methods for automatic annotation of visual activities and streaming similarity comparisons are developed under a gaze-streaming framework using Apache Spark.

This research has several potential applications that offer a broader impact among the scientific community and in the practical world. First, the proposed framework can be adapted for different domains, such as fine arts or the life sciences, with minimal effort to capture human reasoning processes. Second, with its real-time visual knowledge search function, this framework can be used for training novices in the interpretation of domain images, by helping them learn experts’ reasoning processes. Third, by helping researchers to understand human visual reasoning, it may shed light on human semantics modeling. Finally, by integrating the reasoning process with multimedia data, future media retrieval could embed human perceptual reasoning in database search, going beyond traditional content-based media retrieval.
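The gaze-pattern indexing and similarity retrieval described above can be illustrated with a toy sketch. The spatial-histogram index and all names below are invented stand-ins; the actual system described in the abstract uses NoSQL indexing, deep-learning annotation and Apache Spark streaming:

```python
import numpy as np

def gaze_histogram(points, grid=4):
    """Bin (x, y) gaze points on the unit square into a grid-by-grid
    spatial histogram: a simple fixed-length index for a scanpath."""
    h, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=grid, range=[[0, 1], [0, 1]])
    v = h.ravel()
    return v / np.linalg.norm(v)

rng = np.random.default_rng(4)
expert = rng.uniform(0.4, 0.6, size=(200, 2))   # fixations near the center
novice = rng.uniform(0.0, 1.0, size=(200, 2))   # diffuse scanning
similar = rng.uniform(0.4, 0.6, size=(200, 2))  # another focused scanpath

# Cosine similarity between normalized histograms
sim = lambda a, b: float(gaze_histogram(a) @ gaze_histogram(b))
print(sim(expert, similar) > sim(expert, novice))  # True
```

In this simplified setting, a scanpath concentrated on a region of interest scores closer to another focused scanpath than to diffuse scanning.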

APA, Harvard, Vancouver, ISO, and other styles
19

Freeman, Norman A. "Design and analysis of a high-frequency needle-based ultrasound imaging system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0019/MQ54089.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Zhu, Xinghua, and 朱星华. "Multi-compartment model estimation and analysis in high angular resolution diffusion imaging." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206696.

Full text
Abstract:
Diffusion-weighted magnetic resonance images offer unique insights into the neural networks of the in vivo human brain. In this study, we investigate estimation and statistical analysis of multi-compartment models in high angular resolution diffusion imaging (HARDI) involving the Rician noise model. In particular, we address four important issues in multi-compartment diffusion model estimation, namely, the modelling of Rician noise in diffusion-weighted (DW) images, the automatic determination of the number of compartments in the diffusion signal, the application of a spatial prior on multi-compartment models, and the evaluation of parameter indeterminacy in diffusion models. We propose an expectation maximization (EM) algorithm to estimate the parameters of a multi-compartment model by maximizing the Rician likelihood of the diffusion signal. We introduce a novel scheme for automatically selecting the number of compartments, via a sparsity-inducing prior on the compartment weights. A non-local weighted maximum likelihood estimator is proposed to improve estimation accuracy by exploiting repetitive patterns in the image. Experimental results show that the proposed algorithm improves estimation accuracy in low signal-to-noise-ratio scenarios, and it provides better model selection than several alternative strategies. In addition, we derive the Cramér-Rao lower bound (CRLB) of the maximum Rician likelihood estimator for the ball-and-stick model and for general differentiable diffusion models. The CRLB provides a general theoretical tool for comparing diffusion models and examining parameter indeterminacy in the maximum likelihood estimation problem.
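The Rician-likelihood estimation at the heart of this work can be sketched for the simplest possible case, a constant signal amplitude. This is a generic numerical illustration with made-up signal values, not the thesis's EM algorithm or its multi-compartment model:

```python
import numpy as np
from scipy.special import i0e
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
sigma, A_true, n = 1.0, 5.0, 2000
# Rician-distributed magnitudes: |A + complex Gaussian noise|
m = np.abs(A_true + rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n))

def neg_log_lik(A):
    # Rician log-likelihood; i0e (exponentially scaled Bessel I0) is used
    # for numerical stability: log(I0(z)) = log(i0e(z)) + z.
    z = m * A / sigma**2
    ll = (np.log(m / sigma**2) - (m**2 + A**2) / (2 * sigma**2)
          + np.log(i0e(z)) + z)
    return -ll.sum()

res = minimize_scalar(neg_log_lik, bounds=(0.1, 20.0), method="bounded")
print(round(res.x, 2))  # close to A_true = 5.0
```

Maximizing this likelihood directly, rather than assuming Gaussian noise, avoids the bias that Rician noise introduces at low signal-to-noise ratio.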
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
21

Salisbury, Victoria Alice. "High resolution imaging and analysis of endothelial tubulogenesis and blood vessel formation." Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7322/.

Full text
Abstract:
The process of angiogenesis, in which new blood vessels form from pre-existing vessels, can be studied intensively through the use of in vitro and in vivo models. The in vitro co-culture tube formation assay is used to assess the ability of endothelial cells to develop into three-dimensional tubular structures that mimic the growth of capillaries. Different fluorescent labelling techniques were developed and used alongside confocal microscopy to visualise endothelial tubulogenesis and investigate the mechanisms of lumenogenesis. Imaging the actin cytoskeletal organisation by expressing the Lifeact peptide conjugated to fluorescent proteins revealed that F-actin fibres outline lumens within endothelial tubules, and enabled clear visualisation of filopodia formation. Further studies presented in this thesis aimed to develop, test and evaluate computational tools for analysing endothelial sprouting from fluorescently labelled spheroids generated using the in vitro hanging-drop spheroid assay, and for quantifying blood vessel formation in the in vivo zebrafish model. The results confirmed that both analysis tools were able to rapidly quantify a wide range of angiogenic images and generated results comparable to frequently used manual methods. The developed computational analysis tools are user-friendly and can be used to assess the effects of inhibitor compounds and of silencing vascular-related genes.
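Automated quantification of tubule or sprout networks of the kind described above often reduces to measuring a skeletonized binary mask. The following is a generic scikit-image sketch on a toy mask, not the thesis's actual tools:

```python
import numpy as np
from skimage.morphology import skeletonize

# Toy binary "tubule" mask: two crossing 3-pixel-wide strips
mask = np.zeros((50, 50), dtype=bool)
mask[24:27, 5:45] = True
mask[5:45, 24:27] = True

# Skeletonize to a 1-pixel-wide centerline; its pixel count is a
# simple proxy for total tubule length.
skel = skeletonize(mask)
print(int(skel.sum()) < int(mask.sum()))  # True: centerline is thinner
```

Metrics such as total skeleton length, branch-point counts and sprout counts can then be compared against manual scoring.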
APA, Harvard, Vancouver, ISO, and other styles
22

Lum, John William. "High-speed imaging and analysis of the solidification of undercooled alloy melts." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/39762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Gongyin 1968. "Study of high-energy gamma-ray imaging detectors for fast neutron analysis." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/85318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Tulukcuoglu, Güneri Ezgi. "Development of microfluidic device for high content analysis of circulating tumor cells." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066583/document.

Full text
Abstract:
Cancer is one of the leading causes of death worldwide. According to the American Cancer Society, in 2015 a quarter of deaths in the United States were due to lung cancer, ahead even of heart disease. This situation drives us, like many other scientists around the world, to develop more effective means of treating, diagnosing and screening for the disease. Because nearly 90% of cancer deaths are due to metastases, many studies have focused on the mechanism of metastasis and its clinical impact. Circulating tumor cells (CTCs) are cells that escape from primary or metastatic tumors into the peripheral blood; they are a transitional element in the metastatic process and thus carry crucial information about this still poorly understood mechanism. CTCs have already shown their potential as a prognostic biomarker of disease progression and as an indicator of treatment efficacy, according to whether their number increases or decreases. Their molecular characterization can also provide information on possible therapeutic targets and on the mechanisms of disease progression or drug resistance. Counting them over the course of treatment, combined with their molecular characterization, should improve patient management in the context of personalized medicine. However, CTCs are extremely rare, 1 to 10 cells per mL of blood among some 10⁶ white blood cells and 10⁹ red blood cells, so their capture from blood remains an analytical challenge. Over recent decades, a wide variety of enrichment and capture techniques has been developed, and the microfluidic approach is one of the most efficient, flexible and high-throughput methods.
Within our team, a powerful microfluidic device (the Ephesia system) for the capture and analysis of circulating tumor cells had already been developed. Its capture principle is based on the self-assembly of antibody-grafted magnetic beads, through which cells are enriched via the antibody's interaction with the EpCAM surface antigen commonly found in cancer cells of epithelial origin. The system had already been validated with cell lines and patient samples, but it did not allow the isolation/detection of CTC subpopulations or extensive molecular characterization. My thesis project therefore aimed to further improve the system's capabilities in two main respects: targeting CTC subpopulations and studying protein interactions at the surface of CTCs in the Ephesia system.
Metastasis is the advanced stage of cancer progression and causes 90% of deaths in cancer disease. During the metastatic cascade, it is suggested that successful metastatic initiation depends on the survival of circulating tumor cells (CTCs). CTCs are cells shed from primary or secondary tumor sites into the blood circulation. They are now widely recognized as a potential biomarker for companion diagnostics, in which a high number of CTCs in blood can indicate poor survival or a high risk of disease progression. Besides, following the number of CTCs during the course of treatment can help to adapt the selected therapy and predict treatment efficacy. On the other hand, molecular characterization can support patient stratification and the identification of therapeutic targets. However, CTCs are extremely rare in the bloodstream, estimated at 1-10 CTCs among 6×10⁶ leukocytes, 2×10⁸ platelets and 4×10⁹ erythrocytes per mL of blood, which makes their isolation very challenging. A very attractive way of isolating CTCs is to integrate microfluidics. Microfluidics offers great advantages such as low reagent consumption and short, automatable analysis times, and isolation and detection analysis can be integrated, resulting in highly efficient biomedical devices for diagnostics. In line with the state of the art, a powerful microfluidic device for circulating tumor cell capture and analysis had already been developed in our laboratory. The capture principle is based on the self-assembly of antibody-coated (EpCAM) magnetic beads, in which cells are enriched via the EpCAM surface antigen commonly found in cancer cells of epithelial origin. This system was already validated with cell lines and patient samples. However, the system did not allow the isolation/detection of CTC subpopulations or high-content molecular characterization.
Therefore, my PhD project aimed at further improving the capabilities of the system on two main aspects: targeting subpopulations of CTCs and studying the protein interactions of CTCs in the Ephesia system.
APA, Harvard, Vancouver, ISO, and other styles
25

Penney, Kimberley. "Anatomy of junior high science textbooks : a content analysis of textual characteristics /." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0030/MQ62414.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Lo, Ernest. "Statistical analysis of a high-content screening assay of microtubule polymerization status." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=92232.

Full text
Abstract:
The present work describes the analysis of the first high-content, double-immunofluorescence assay of microtubule polymerization status. Two novel features of the work are the extraction of a new class of cell metrics that target fiber-based cell phenotypes (using the Fiberscore algorithm) in a high-content assay, and the development of a non-uniformity correction algorithm that allows the unbiased analysis of dim cells.
Findings relevant to HCS data analysis in general include: (1) Spatial plate biases are significant in HCS data and manifest differently for different cell-level metrics; (2) Individual plates are separate statistical entities; thus cellular data in HCS cannot in general be pooled before proper normalization procedures are applied; (3) Inter-plate variance is significant in HCS data, such that inter-plate replicates are a necessity; HCS data nevertheless appear amenable to empirical Bayes methods for improving sensitivity to 'hit' compounds; (4) Cell populations are observed to respond heterogeneously to treatment compounds. However, initial tests of an alternative cell-population summary statistic (the AUC), thought to be suited to detecting cell subpopulations, did not indicate significant improvements in sensitivity over conventional measures such as the population median.
The correlation texture metric was identified as showing greatly increased sensitivity when used on the Tyr-tubulin-specific channel. The identification of this cell-level metric provides a preliminary demonstration that high-content assays have the potential to outperform conventional whole-well HTS assays. An image-processing issue, termed the 'area-intensity confound', was also identified as a possible major source of variability that limited the performance of the alternative cell-level metrics that were developed. A resolution to this issue is proposed.
Many open questions and avenues of further investigation remain, and the current study represents only a preliminary step in the ongoing analysis of the HCS microtubule polymerization status assay, and the development of pertinent statistical inference methods.
The present study describes the analysis of the first high-content screen (HCS) of a multi-parametric assay using three fluorescent markers; this assay probes the polymerization status of microtubules.
This work is distinguished by two novel features:
1) the extraction of a new class of cell metrics that characterize fibre-type cellular structures (using the Fiberscore algorithm) in a high-content screen;
2) the development of an algorithm to correct image non-uniformity, allowing the unbiased analysis of dim cells.
The findings relevant to the analysis of HCS data include the following: (1) plate-related spatial biases are significant and manifest differently depending on the measured parameter; (2) each plate is a distinct statistical entity, so HCS data cannot be pooled without first applying appropriate normalization methods; (3) inter-plate variance is significant, so replicates are crucial; HCS data are nevertheless amenable to empirical Bayes methods for improving sensitivity to bioactive compounds; (4) cells respond heterogeneously to treatment with small chemical molecules; contrary to our expectation, however, tests using the 'AUC' statistical approach did not indicate an improvement in statistical power.
The 'correlation' texture metric was identified as showing greater statistical power on the Tyr-tubulin channel. The identification of this metric provides a preliminary demonstration that high-content screens could outperform conventional high-throughput screens (HTS). An image-processing problem, related in particular to image segmentation, which we called the 'area-intensity confound', was identified as a major source of variability that limited the performance of the alternative cell-level metrics developed. A solution to this problem is proposed.
Many open questions and avenues of research remain to be explored. This study is only a preliminary step in the ongoing analysis of the high-content screen of microtubule polymerization status and in the development of pertinent statistical methods.
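The plate-normalization finding above (each plate is a separate statistical entity whose data cannot be pooled before normalization) is commonly implemented as a per-plate robust z-score. The following is a generic sketch of that practice with simulated plate values, not the thesis's own procedure:

```python
import numpy as np

def robust_z(plate_values):
    """Per-plate robust z-score: center by the plate median and scale by
    1.4826 * MAD, so values from different plates become comparable."""
    med = np.median(plate_values)
    mad = np.median(np.abs(plate_values - med))
    return (plate_values - med) / (1.4826 * mad)

rng = np.random.default_rng(3)
plate_a = rng.normal(100, 10, 384)   # two plates with different offsets,
plate_b = rng.normal(140, 10, 384)   # e.g. from batch or spatial effects
za, zb = robust_z(plate_a), robust_z(plate_b)
print(round(float(np.median(za)), 3), round(float(np.median(zb)), 3))  # 0.0 0.0
```

After normalization, both plates are centered at zero on a common robust scale, so inter-plate pooling and hit calling become meaningful.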
APA, Harvard, Vancouver, ISO, and other styles
27

Rameseder, Jonathan. "Multivariate methods for the statistical analysis of hyperdimensional high-content screening data." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92957.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references.
In the post-genomic era, greater emphasis has been placed on understanding the function of genes at the systems level. To meet these needs, biologists are creating larger and increasingly complex datasets. In recent years, high-content screening (HCS) using RNA interference (RNAi) or other perturbation techniques in combination with automated microscopy has emerged as a promising investigative tool to explore intricate biological processes. Image-based HC screens produce massive hyperdimensional data sets. To identify novel components of the DNA damage response (DDR) after ionizing radiation, we recently performed an image-based HC RNAi screen in an osteosarcoma cell line. Robust univariate hit identification methods and manual network analysis identified an isoform of BRD4, a bromodomain and extra-terminal domain family member, as an endogenous inhibitor of DDR signaling. However, despite the plethora of data generated from our and other HC screens, little progress has been made in analyzing HC data using multivariate computational methods that exploit the full richness of hyperdimensional data and identify more than just the most salient knockdown phenotypes, to gain a detailed understanding of how gene products cooperate to regulate complex cellular processes. We developed a novel multivariate method using logistic regression models and least absolute shrinkage and selection operator (LASSO) regularization for analyzing hyperdimensional HC data. We applied this method to our HC screen to identify genes that exhibit subtle but consistent phenotypic changes upon knockdown that would have been missed by conventional univariate hit identification approaches. Our method automatically selects the most predictive features at the most predictive time points to facilitate the more efficient design of follow-up experiments, and puts the identified hits in a network context using the Prize-Collecting Steiner Tree algorithm.
This method offers superior performance over the current gold standard for the analysis of HC RNAi screens. A surprising finding from our analysis is that training sets of genes involved in complex biological phenomena used to train predictive models must be broken down into functionally coherent subsets in order to enhance new gene discovery. Additionally, we found that in the case of RNAi screening, statistical cell-to-cell variation in phenotypic responses in a well of cells targeted by a single shRNA is an important predictor of gene-dependent events.
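The core idea of L1-regularized (LASSO) logistic regression for selecting predictive features in hyperdimensional screening data can be sketched on simulated data. The feature counts and effect sizes below are invented for illustration; this is not the author's pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 400, 50                      # samples x morphological features
X = rng.normal(size=(n, p))
# Only the first 3 features carry the (subtle) knockdown phenotype
logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 1.2 * X[:, 2]
y = (logits + rng.normal(0, 1, n) > 0).astype(int)

# L1 penalty drives most coefficients exactly to zero, leaving a
# sparse set of predictive features.
clf = LogisticRegression(penalty="l1", C=0.05, solver="liblinear").fit(X, y)
selected = np.flatnonzero(clf.coef_[0])
print(sorted(selected.tolist()))  # the informative features 0, 1, 2 are among these
```

The surviving nonzero coefficients are the features (and, with time-expanded feature matrices, time points) most predictive of the phenotype.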
by Jonathan Rameseder.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
28

Claveau, Sandra. "Fluorescent nanodiamonds as siRNA vectors : in vitro efficacy evaluation and high-content/high-resolution quantifications of their distribution in vivo." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS119/document.

Full text
Abstract:
Ewing sarcoma is a rare pediatric cancer, mainly driven by expression of the EWS-Fli1 fusion oncogene, whose drug treatments have evolved little over recent decades. We are interested in a new therapeutic approach using siRNAs that specifically target the EWS-Fli1 oncogene and enable inhibition of tumor growth. During my thesis work, I used diamond nanocrystals produced either by detonation (DND) or by high-pressure high-temperature synthesis (NDHPHT) to vectorize siRNAs bound by electrostatic interaction. To this end, the NDs were rendered cationic by different methods: (i) plasma-assisted hydrogenation, (ii) thermal annealing, or (iii) chemical treatment for the DNDs, or (iv) covalent grafting of a cationic polymer onto NDHPHT (COP-NDHPHT). My work comprised two axes: (i) an in vitro study of ND:siRNA complexes (physico-chemical characterization of the NDs and study of the oncogene-inhibition efficacy of the complexes); (ii) the tissue distribution of COP-NDHPHT injected into mice, using fluorescent NDHPHT containing nitrogen-vacancy defects. To detect them individually in organ sections from mice bearing a subcutaneous xenograft tumor, we developed a high-numerical-aperture, time-resolved epifluorescence imaging system that rejects tissue autofluorescence (whose lifetime is shorter than that of the NDs). We quantified the number, aggregation state and cellular localization of these vectors (thanks to simultaneously imaged histopathological staining) 24 h after injection. The NDs were clearly detected in the various organs, including the tumor, paving the way for controlling tumor progression with siRNA.
Ewing sarcoma is a rare pediatric cancer, caused in the majority of cases by expression of the fusion oncogene EWS-Fli1. Current treatments have not evolved much over the past decades. We are investigating a new therapy based on siRNA specifically targeting the oncogene and inhibiting tumor growth. During my PhD thesis, I tested different types of synthetic nanodiamonds (ND) used to vectorize siRNA electrostatically bound at their surface: NDs produced by detonation (DND) or by high-pressure high-temperature synthesis (NDHPHT). Their surfaces were cationized by various processes: (i) plasma-assisted or (ii) thermal hydrogenation, (iii) chemical treatment, or (iv) covalent grafting of a copolymer (COP-NDHPHT). My PhD work included two main axes: (i) an in vitro study of ND:siRNA complexes (physico-chemical characterization of the NDs and oncogene-inhibition efficacy of the complexes); (ii) the tissue distribution of COP-NDHPHT, injected into mice, using fluorescent NDHPHT containing nitrogen-vacancy defects. To detect them individually in sections of organs from mice carrying a subcutaneous xenograft tumor, we developed an epifluorescence imaging system with a large numerical aperture and time resolution, used to reject tissue autofluorescence (which has a shorter lifetime than NDs). We quantified the number, aggregation state and cellular localization (thanks to simultaneous histopathological imaging) of these vectors 24 hours after injection. NDs were clearly detected in different organs, including the tumor, paving the way for tumor progression control with siRNA.
APA, Harvard, Vancouver, ISO, and other styles
29

O'Neil, Alanna R. "Chemiluminescence and High Speed Imaging of Reacting Film Cooling Layers." University of Dayton / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1324042434.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Aboulmagd, Khodier Sarah. "Analysis of Lipids in Kidney Tissue Using High Resolution MALDI Mass Spectrometry Imaging." Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19443.

Full text
Abstract:
Mass spectrometry imaging (MSI) is indispensable for studying the spatial distribution of molecules in a wide variety of biological samples. Since its introduction, MALDI has become a dominant imaging method that has proven useful for resolving the complexity of lipid structures in biological tissues. While the role of cisplatin in the treatment of human malignancies is well established, nephrotoxicity is a limiting side effect that involves alterations of the renal lipid profile. This motivated the investigation of the lipid composition of kidney tissue in cisplatin-treated rats, in order to elucidate the lipid signaling pathways involved. A method was developed for mapping the lipid composition in kidney sections using MALDI MSI. The distribution of renal lipids in cisplatin-treated samples showed clear differences with respect to the control groups. Moreover, assessment of the ion images of lipids in cisplatin-treated kidneys had mostly been restricted to qualitative aspects; relative quantitative comparisons were limited by the variable influence of experimental and instrumental conditions. There was therefore a need to develop a normalization method enabling comparison of lipid intensities across different samples. The method used an inkjet printer to apply a mixture of the MALDI matrix and lipid-metal internal standards. Using ICP-MS, the metal internal standard made it possible to confirm the consistency of the matrix and internal-standard application. Applying the method to normalize the ion intensities of renal lipids showed excellent image correction and enabled relative quantitative comparison of lipid images in cisplatin-treated samples.
Mass spectrometry imaging is indispensable for studying the spatial distribution of molecules within a diverse range of biological samples. Since its introduction, MALDI has become a dominant imaging method, which has proved useful in sorting out the complexity of lipid structures in biological tissues. The role of cisplatin in the treatment of human malignancies is well established. However, nephrotoxicity is a limiting side effect that involves an acute injury of the proximal tubule and alterations in the renal lipid profile. This motivated the study of the spatial distribution of lipids in the kidney tissue of cisplatin-treated rats, to shed light on the lipid signaling pathways involved. A method for mapping lipid distributions in kidney sections using a MALDI-LTQ-Orbitrap was developed, utilizing the high performance of Orbitrap detection. The distribution of kidney lipids in cisplatin-treated samples revealed clear differences with respect to the control group, which could be correlated to the proximal tubule injury. The findings highlight the usefulness of MALDI MSI as a complementary tool for clinical diagnostics. Furthermore, assessment of the ion images of lipids in cisplatin-treated kidney had mostly considered qualitative aspects; relative quantitative comparisons were limited by the variable influence of experimental and instrumental conditions. Hence, the need arose to establish a normalization method allowing comparison of lipid intensities in MALDI imaging measurements of different samples. The method employed an inkjet printer to apply a mixture of the MALDI matrix and dual lipid-metal internal standards. Using ICP-MS, the metal internal standard allowed the consistency of the matrix and internal-standard application to be confirmed. Applying the method to normalize ion intensities of kidney lipids demonstrated excellent image correction and successfully enabled relative quantitative comparison of lipid images in control and cisplatin-treated samples.
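The internal-standard normalization described above can be illustrated with a toy pixel-wise ratio. The numbers are simulated; the actual method uses inkjet-printed lipid-metal standards with ICP-MS verification:

```python
import numpy as np

rng = np.random.default_rng(5)
shape = (40, 40)
# Simulated MSI: the true lipid map is modulated by a spatially varying
# matrix/ionization efficiency; a co-applied internal standard sees the
# same pixel-wise bias.
efficiency = 0.5 + rng.random(shape)
lipid_true = np.full(shape, 10.0)
lipid_img = lipid_true * efficiency
std_img = 4.0 * efficiency          # internal standard, nominally uniform

normalized = lipid_img / std_img    # pixel-wise internal-standard ratio
print(np.allclose(normalized, 2.5))  # True: the spatial bias cancels (10 / 4)
```

Because the standard is applied uniformly, dividing the lipid ion image by the standard's image cancels the shared acquisition bias, enabling relative quantitative comparison across sections.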
APA, Harvard, Vancouver, ISO, and other styles
31

Nketia, Thomas. "Quantitative analysis of cell function and death in label-free high content screening." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:f58ce337-5422-4a29-bc35-5bccbccf9ddc.

Full text
Abstract:
Time-lapse data is increasingly being used to conduct more detailed high-throughput studies. Quantitative information derived from vast and complex image data sets is essential to our understanding of basic cellular processes. Advances in cell culturing methods combined with sophisticated molecular probes allow the monitoring of a broad array of cellular functions in vivo. These experiments provide a wealth of multi-channel time-lapse data that renders traditional manual interpretation infeasible. The work outlined in this thesis is aimed at quantitative analysis of such time-lapse data, resulting in statistical analysis methods and software tools that improve the efficiency and impact of phenotypic screening experiments. The research contributions towards this aim fall into three main areas: cell morphology and lineage, population-based summaries for cell function, and cell state labelling. Firstly, cell morphology and lineage address the two fundamental tasks in time-lapse quantitative biological image analysis: segmentation and tracking. Cell segmentation presents a key challenge in any image-based single-cell analysis, as most further analysis is heavily dependent on this step. Details of a segmentation approach based on an existing light phase retardation model, and results on sample data of phase-contrast HeLa cervical cells, are presented. Using phase retardation feature extraction to precondition images for deep learning is also explored. Cell shape and texture features from segmentation are then used in the tracking to obtain a lineage metric to assess the progression of cell morphology changes and motion over time. Details of a tracking scheme based on a coupled minimum-cost flow network are presented. Secondly, metrics that quantify the function of a cell population, such as proliferation, viability and migration, based on a summary of single-cell measurements are common.
Such metrics, however, generally do not account for segmentation errors resulting from cell crowding and overlapping cell boundaries. Details of an approach that incorporates the confidence in segmentation accuracy for each single cell into the population metric are presented. Analysis on simulated data shows that the proposed method provides a better summary, representative of the cell population, and hence could improve conclusions drawn from quantitative analysis of cell populations. Determining the variability in the mode of death of cells is important in multiple live-cell phenotypic toxicity screens. The mechanism by which a cell progresses towards death can be observed as a sequence of morphological events associated with cell death. As these events are observed morphologically, changes in shape and texture features can be used to model the time-series process. Here, such temporal evolution is modelled as a hierarchical Dirichlet process hidden Markov model (HDP-HMM). The model eliminates the limit on the number of states representing morphological events and hence allows new states to be discovered as more cell data from new screens are added. Details also include sequence analysis methods used to group similar cells based on the temporal changes of their morphological features. Overall, the proposed work provides methods and software tools that enable the efficient interpretation of patterns and groups in large amounts of data obtained from high-throughput phenotypic screens, which would otherwise be infeasible to obtain manually. Hence, the work improves the efficiency of analysis and helps obtain objective and repeatable conclusions from live-cell experiments.
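Frame-to-frame cell linking via a minimum-cost flow network, as named in the tracking scheme above, can be sketched on a toy two-cell example. The detections and costs below are invented; this is a generic illustration of the formulation, not the thesis's coupled version:

```python
import networkx as nx

# Toy frame-to-frame assignment as min-cost flow: detections a1, a2 in
# frame t, b1, b2 in frame t+1; edge weights are (scaled) distances.
G = nx.DiGraph()
cost = {("a1", "b1"): 2, ("a1", "b2"): 9, ("a2", "b1"): 8, ("a2", "b2"): 3}
for a in ("a1", "a2"):
    G.add_edge("S", a, capacity=1, weight=0)   # source feeds frame t
for b in ("b1", "b2"):
    G.add_edge(b, "T", capacity=1, weight=0)   # frame t+1 drains to sink
for (a, b), w in cost.items():
    G.add_edge(a, b, capacity=1, weight=w)
G.add_node("S", demand=-2)   # push 2 units of flow: one track per cell
G.add_node("T", demand=2)

flow = nx.min_cost_flow(G)
links = [(a, b) for a in ("a1", "a2") for b in ("b1", "b2")
         if flow[a].get(b, 0)]
print(links)  # [('a1', 'b1'), ('a2', 'b2')]
```

The cheapest feasible flow selects the globally optimal set of links (cost 2 + 3 = 5), rather than greedy nearest-neighbour matches; extra source/sink edges can model appearing, dividing or dying cells.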
APA, Harvard, Vancouver, ISO, and other styles
32

Sardiello, Ezequiel Josue Miron. "Comprehensive mapping of the 3D epigenome by high-content super-resolution image analysis." Thesis, University of Oxford, 2018. http://ora.ox.ac.uk/objects/uuid:d6c0f208-1acd-459b-94cf-a4e44de9bc43.

Full text
Abstract:
A full understanding of the relationship between the density and packing arrangements of chromatin in interphase mammalian nuclei and epigenetic function at the nanometer size scale remains an elusive goal in the field of chromatin biology. Over recent years, great technical leaps have been made through sequencing-based methods to address this question. Advancements have also allowed research to bypass long-held optical limits and resolve chromatin at a scale where its 3D topology can begin to be analysed. The work in this thesis represents attempts at high-throughput data mining of 3D SIM datasets, encoding rich nanometer-scale spatial information from immunofluorescence detection of histone modifications and key epigenetic markers relative to a chromatin landscape. To this end, an automated and high-throughput image analysis workflow (ChaiN, for ChaiN analysis of the in situ Nucleome) was developed. Novel metrics for the quantitation and correlation of chromatin and the 3D epigenome reveal a chromatin network of filaments at the size scale proposed from sequencing approaches, with segregated regions of genomic activity as a function of chromatin accessibility (its density and depth). Furthermore, ChaiN has allowed characterisation of the local and global rearrangements of chromatin when subjected to replication pressures or exogenous perturbation, showing for the first time how individual genomic markers are affected at different locations throughout the chromatin network. The model hypothesised from these results attempts to reconcile previous data obtained from Hi-C population-ensemble studies and in silico modelling with single-cell observations at super-resolution.
APA, Harvard, Vancouver, ISO, and other styles
33

Gomez, Gonzalez Carlos Alberto, Olivier Wertz, Olivier Absil, Valentin Christiaens, Denis Defrère, Dimitri Mawet, Julien Milli, et al. "VIP: Vortex Image Processing Package for High-contrast Direct Imaging." IOP PUBLISHING LTD, 2017. http://hdl.handle.net/10150/624676.

Full text
Abstract:
We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
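The PCA-based PSF subtraction for ADI sequences described in this abstract can be sketched in a few lines of numpy. The following is a minimal illustrative implementation of the generic full-frame PCA ADI algorithm, not VIP's actual API (function and parameter names here are assumptions for illustration):

```python
import numpy as np
from scipy.ndimage import rotate

def pca_adi(cube, angles, ncomp):
    """Full-frame PCA PSF subtraction for an ADI sequence.

    cube   : (nframes, ny, nx) image cube
    angles : parallactic angles in degrees, one per frame
    ncomp  : number of principal components to model the stellar PSF
    """
    nfr, ny, nx = cube.shape
    X = cube.reshape(nfr, ny * nx)
    X = X - X.mean(axis=0)             # centre each pixel's time series
    # principal components of the quasi-static PSF across the sequence
    _, _, Vh = np.linalg.svd(X, full_matrices=False)
    pcs = Vh[:ncomp]
    residuals = X - (X @ pcs.T) @ pcs  # subtract the low-rank PSF model
    res_cube = residuals.reshape(nfr, ny, nx)
    # derotate residual frames to a common sky orientation and collapse
    derot = np.stack([rotate(f, -a, reshape=False, order=1)
                      for f, a in zip(res_cube, angles)])
    return np.median(derot, axis=0)
```

Real companions rotate with the sky while the stellar PSF stays quasi-static, so the PSF is captured by the leading components and largely removed, while the derotated median combination reinforces the astrophysical signal.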
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Shichao. "High-sensitivity Full-field Quantitative Phase Imaging Based on Wavelength Shifting Interferometry." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/102502.

Full text
Abstract:
Quantitative phase imaging (QPI) is a category of imaging techniques that can retrieve the phase information of the sample quantitatively. QPI features label-free contrast and non-contact detection. It has thus gained rapidly growing attention in biomedical imaging. Capable of resolving biological specimens at tissue or cell level, QPI has become a powerful tool to reveal the structural, mechanical, physiological and spectroscopic properties. Over the past two decades, QPI has seen a broad spectrum of evolving implementations. However, only a few have seen successful commercialization. The challenges are manifold. A major problem for many QPI techniques is the necessity of a custom-made system which is hard to interface with existing commercial microscopes. For this type of QPI techniques, the cost is high and the integration of different imaging modes requires nontrivial hardware modifications. Another limiting factor is insufficient sensitivity. In QPI, sensitivity characterizes the system repeatability and determines the quantification resolution of the system. With more emerging applications in cell imaging, the requirement for sensitivity also becomes more stringent. In this work, a category of highly sensitive full-field QPI techniques based on wavelength shifting interferometry (WSI) is proposed. On one hand, the full-field implementations, compared to point-scanning, spectral domain QPI techniques, require no mechanical scanning to form a phase image. On the other, WSI has the advantage of preserving the integrity of the interferometer and compatibility with multi-modal imaging requirement. Therefore, the techniques proposed here have the potential to be readily integrated into the ubiquitous lab microscopes and equip them with quantitative imaging functionality. 
In WSI, the wavelength shifts can be applied in fine steps, termed swept source digital holographic phase microscopy (SS-DHPM), or in a multi-wavelength-band manner, termed low coherence wavelength shifting interferometry (LC-WSI). SS-DHPM brings an additional capability to perform spectroscopy, whilst LC-WSI achieves a faster imaging rate, which has been demonstrated with live sperm cell imaging. In an attempt to integrate WSI with existing commercial microscopes, we also discuss the possibility of demodulation for low-cost sources and a common path implementation. Besides experimentally demonstrating the high sensitivity (limited only by shot noise) of the proposed techniques, a novel sensitivity evaluation framework is also introduced for the first time in QPI. This framework examines the Cramér-Rao bound (CRB), algorithmic sensitivity and experimental sensitivity, and facilitates the diagnosis of algorithm efficiency and system efficiency. The framework can be applied not only to the WSI techniques we proposed, but also to a broad range of QPI techniques. Several popular phase shifting interferometry techniques as well as off-axis interferometry are studied. The comparisons between them are shown to provide insights into algorithm optimization and the energy efficiency of sensitivity.
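The phase-shifting interferometry techniques this thesis compares share a common reconstruction core. As a hedged illustration (the standard textbook four-step algorithm, not the thesis's own code), phase can be recovered from four interferograms shifted by π/2:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Four-step phase-shifting reconstruction with pi/2 steps.

    Each interferogram is I_k = A + B*cos(phi + k*pi/2), so the
    background A and modulation B cancel in the differences and
    phi = atan2(I3 - I1, I0 - I2).
    """
    return np.arctan2(i3 - i1, i0 - i2)
```

Because the arctangent wraps to (-π, π], a separate unwrapping step is needed for samples with optical thickness exceeding one wavelength.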
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
35

Wheeler-Kingshott, Claudia A. "High speed MRI : analysis of new approaches to fast imaging using Burst-based sequences." Thesis, University of Surrey, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Scharfman, Barry Ethan. "Analysis of multiphase fluid flows via high speed and synthetic aperture three dimensional imaging." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78188.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references.
Spray flows are a difficult problem within the realm of fluid mechanics because of the complicated interfacial physics involved. Complete models of sprays having even the simplest geometries continue to elude researchers and practitioners. From an experimental viewpoint, measurement of dynamic spray characteristics is made difficult by the optically dense nature of many sprays. Flow features like ligaments and droplets break off the bulk liquid volume during the atomization process and often occlude each other in images of sprays. In this thesis, two important types of sprays are analyzed. The first is a round liquid jet in a cross flow of air, which applies, for instance, to fuel injection in jet engines and the aerial spraying of crops. This flow is studied using traditional high-speed imaging in what is known as the bag breakup regime, in which partial bubbles that look like bags are formed along the downstream side of the liquid jet due to the aerodynamic drag exerted on it by the cross flow. Here, a new instability is discovered experimentally involving the presence of multiple bags at the same streamwise position along the jet. The dynamics of bag expansion and upstream column wavelengths are also investigated experimentally and theoretically, with experimental data found to generally follow the scaling arguments predicted by the theory. The second flow that is studied is the atomization of an unsteady turbulent sheet of water in air, a situation encountered in the formation and breakup of ship bow waves. To better understand these complicated flows, the emerging light field imaging (LFI) and synthetic aperture (SA) refocusing techniques are combined to achieve three-dimensional (3D) reconstruction of the unsteady spray flow fields. A multi-camera array is used to capture the light field and raw images are reparameterized to digitally refocus the flow field post-capture into a volumetric image.
These methods allow the camera array to effectively "see through" partial occlusions in the scene. It is demonstrated here that flow features, such as individual droplets and ligaments, can be located in 3D by refocusing throughout the volume and extracting features on each plane.
by Barry Ethan Scharfman.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
37

Walker, Lucy. "High interest, low content : a content analysis of 2004 campaign information found in five leading consumer magazines aimed at young adults /." abstract and full text PDF (free order & download UNR users only), 2005. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433112.

Full text
Abstract:
Thesis (M.A.)--University of Nevada, Reno, 2005.
"August, 2005." Includes bibliographical references (leaves 80-88). Library also has microfilm. Ann Arbor, Mich. : ProQuest Information and Learning Company, [2005]. 1 microfilm reel ; 35 mm. Online version available on the World Wide Web.
APA, Harvard, Vancouver, ISO, and other styles
38

Collinet, Claudio. "System Survey of Endocytosis by Functional Genomics and Quantitative Multi-Parametric Image Analysis." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-38278.

Full text
Abstract:
Endocytosis is an essential cellular process consisting of the internalization of extracellular cargo and its transport towards different intracellular destinations. Multiple endocytic routes are tailored for the internalization and trafficking of different types of cargo and multiple endocytic organelles provide specialized biochemical environments where different molecular events take place. Membrane receptors and cargo molecules are internalized by both Clathrin-dependent and –independent endocytosis into early endosomes. From here two main endocytic routes are followed: 1) the recycling route, mainly followed by membrane receptor and other molecules like Transferrin, brings the cargo back to the plasma membrane and 2) the degradative route, followed by molecules like Epidermal Growth Factor (EGF) and Lipoprotein particles (LDL), leads the cargo to degradation into late endosomes/lysosomes. In addition to the basic function of intracellular cargo transport, the endocytic system fulfils many other cellular and developmental functions such as transmission of proliferative and survival signals and defence against pathogens. In order for cells to properly perform their various and numerous functions in organs and tissues, the activity of the endocytic system needs to be coordinated between cells and, within individual cells, integrated with other cellular functions. Even though molecules orchestrating the endocytic sorting and transport of different types of cargo have long been investigated, our understanding of the molecular machinery underlying endocytosis and its coordination into the cellular systems remains fragmentary. The work presented in this thesis aimed at understanding how this high-order regulation and integration is achieved. This requires not only a comprehensive analysis of molecular constituents of the endocytic system but also an understanding of the general design principles underlying its function. 
To this end, in collaboration with several members of the Zerial group and with the HT-Technology Development Studio (TDS) at MPI-CBG, I developed a new strategy to accurately profile the activity of human genes with respect to Transferrin (Tfn) and Epidermal Growth Factor (EGF) endocytosis by combining genome-wide RNAi with several siRNA/esiRNA per gene, automated high-resolution confocal microscopy, quantitative multi-parametric image analysis and high-performance computing. This provided a rich and complex genomic dataset that was subsequently subjected to analysis with a combination of tools such as multi-parametric correlation of oligo profiles, phenotypic clustering and pathway analysis, and Bayesian network reconstruction of key endocytic features. Altogether, the genomic endeavour and the subsequent analyses provided a number of important results: first, they revealed a much higher extent of off-target effects from RNAi than previously appreciated and provided novel tools to infer the specific effects of gene loss of function; second, they identified a large number of novel molecules exerting a regulatory role on the endocytic system, including uncharacterized genes and genes implicated in human diseases; third, they uncovered the regulatory activity of signalling pathways such as Wnt, Integrin, TGF-β, and Notch, and found new genes regulating the sorting of cargo to a specialized subset of early endosomes that function as intracellular signalling platforms; and fourth, a systems analysis by Bayesian networks revealed that the cell specifically regulates the number, size, concentration of cargo and intracellular position of endosomes, thus uncovering novel properties of the endocytic system. In conclusion, the work presented here not only provides a dataset extremely rich in information, whose potential has just begun to be uncovered, but also shows how genomic datasets can be used to reveal design principles governing the functioning of biological processes.
APA, Harvard, Vancouver, ISO, and other styles
39

Karphammar, Anette, and Maria Behrns. "Advertising in high- and low context cultures : A comparative content analysis between Sweden and Brazil." Thesis, Högskolan i Halmstad, Akademin för ekonomi, teknik och naturvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-37075.

Full text
Abstract:
In today's increasingly globalised world, research into cultural differences is called for, to be able to categorize nations and aid cross-border communications around the world. This thesis is a quantitative study of differences in advertising communication between what are considered high and low context cultures, through a deeper look at Sweden and Brazil. Trade agreements between these two nations are well-established and highly profitable, but the differences in their cultural bases are vast, potentially leading to misunderstandings and improper communication conduct if not taken into consideration. Studies of cultural differences classify Sweden as a low context individualistic nation and Brazil as a high context collectivistic nation, but these classifications were made many years ago, and research on cultural imperialism, globalisation and transnational consumerism states that the world is changing and that further research within the specific communities is needed today. With this problem in mind, this study asks what the differences in advertising context are between Sweden and Brazil, and whether the theories actually match reality as it is today. The purpose of this question is to extend the frame of reference within the theories and to aid cross-border communications, in order to understand the connection and, if needed, re-categorize the nations within the spectrum. The study was made through a comparative content analysis of television advertising in both countries, determining differences in the frequency of context attributes. The results and conclusions of the study show that the theories of high and low context classification do not match reality between these two nations, and that globalism has in fact had an effect on advertising communications.
APA, Harvard, Vancouver, ISO, and other styles
40

Schweller, Ryan. "Development of Dynamic DNA Probes for High-Content in situ Proteomic Analyses." Thesis, 2012. http://hdl.handle.net/1911/64649.

Full text
Abstract:
Dynamic DNA complexes are able to undergo multiple hybridization and dissociation events through a process called strand displacement. This unique property has facilitated the creation of programmable molecular detection systems and chemical logic gates encoded by nucleotide sequence. This work examines whether the ability to selectively exchange oligonucleotides among different thermodynamically stable DNA complexes can be harnessed to create a new class of imaging probes that permit fluorescent reporters to be sequentially activated (“turned on”) and erased (“turned off”). Here, dynamic DNA complexes detect a specific DNA-conjugated antibody and undergo strand displacement to liberate a quencher strand and activate a fluorescent reporter. Subsequently, incubation with an erasing complex allows the fluorophore to be stripped from the target strand, quenched, and washed away. This simple capability therefore allows the same fluorescent dyes to be used multiple times to detect different markers within the same sample via sequential rounds of fluorescence imaging. We evaluated and optimized several DNA complex designs to function efficiently for in situ molecular analyses. We also applied our DNA probes to immunofluorescence imaging using DNA-conjugated antibodies and demonstrated the ability to at least double the number of detectable markers on a single sample. Finally, the probe complexes were reconfigured to act as AND-gates for the detection of co-localized proteins. Given the ability to visualize large numbers of cellular markers using dynamic DNA probe complexes, high-content proteomic analyses can be performed on a single sample, enhancing the power of fluorescence imaging techniques. Furthermore, dynamic DNA complexes offer new avenues to incorporate DNA-based computations and logic for in situ molecular imaging and analyses.
APA, Harvard, Vancouver, ISO, and other styles
41

Ho, Derek. "CMOS Contact Imagers for Spectrally-multiplexed Fluorescence DNA Biosensing." Thesis, 2013. http://hdl.handle.net/1807/35849.

Full text
Abstract:
Within the realm of biosensing, DNA analysis has become an indispensable research tool in medicine, enabling the investigation of relationships among genes, proteins, and drugs. Conventional DNA microarray technology uses multiple lasers and complex optics, resulting in expensive and bulky systems which are not suitable for point-of-care medical diagnostics. The immobilization of DNA probes across the microarray substrate also results in substantial spatial variation. To mitigate the above shortcomings, this thesis presents a set of techniques developed for the CMOS image sensor for point-of-care spectrally-multiplexed fluorescent DNA sensing and other fluorescence biosensing applications. First, a CMOS tunable-wavelength multi-color photogate (CPG) sensor is presented. The CPG exploits the absorption property of a polysilicon gate to form an optical filter, thus the sensor does not require an external color filter. A prototype has been fabricated in a standard 0.35μm digital CMOS technology and demonstrates intensity measurements of blue (450nm), green (520nm), and red (620nm) illumination. Second, a wide dynamic range CMOS multi-color image sensor is presented. An analysis is performed for the wide dynamic-range, asynchronous self-reset with residue readout architecture where photon shot noise is taken into consideration. A prototype was fabricated in a standard 0.35μm CMOS process and is validated in color light sensing. The readout circuit achieves a measured dynamic range of 82dB with a peak SNR of 46.2dB. Third, a low-power CMOS image sensor VLSI architecture for use with comparator based ADCs is presented. By eliminating the in-pixel source follower, power consumption is reduced, compared to the conventional active pixel sensor. A 64×64 prototype with a 10μm pixel pitch has been fabricated in a 0.35μm standard CMOS technology and validated experimentally. Fourth, a spectrally-multiplexed fluorescence contact imaging microsystem for DNA analysis is presented. 
The microsystem has been quantitatively modeled and validated in the detection of marker gene sequences for spinal muscular atrophy disease and the E. coli bacterium. Spectral multiplexing enables the two DNA targets to be simultaneously detected, with a measured detection limit of 240nM and 210nM of target concentration at a sample volume of 10μL for the green and red transduction channels, respectively.
APA, Harvard, Vancouver, ISO, and other styles
42

Yeh, Chih-wen, and 葉志文. "The Content Analysis of Senior High School English Textbooks." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/25731228925478953492.

Full text
Abstract:
Master's thesis
National Kaohsiung Normal University
Department of Education
Academic year 91 (ROC calendar; 2002-03)
The goal of this research is to analyze the content of English textbooks to determine vocabulary frequency, reading difficulty, themes, and paragraph précis. The researcher also discusses the differences between textbooks. Content analysis is adopted in this research. The first step was to review domestic and foreign literature in order to develop the instruments for analyzing the content of senior high school English textbooks. Three versions of English textbooks were analyzed: the Far East version, the Lung-teng version, and the NICT (National Institute for Compilation and Translation) version. The instruments applied in this study included the “frequency band” in the Collins Cobuild English Dictionary (1995), the “Lix formula” by Bjornsson (1968), and a “themes and paragraph précis form”. The results are as follows: 1. Vocabulary frequency is affected by theme and phraseology, where phraseology is the particular way in which words and phrases are arranged when saying or writing something. Vocabulary frequency is similar across the three versions. The NICT version contains more vocabulary than the others, and the Far East version more than the Lung-teng version. 2. Reading difficulty is affected by phraseology. All three versions progress from simple to complicated sentences. Readings in the NICT version are the most difficult, and readings in the Lung-teng version are more difficult than those in the Far East version. In all versions, the readings based on dialogues and stories are the longest, while the readings based on poems are the shortest. 3. The form of the themes differs between the Far East and Lung-teng versions: the Far East version has one theme per lesson, while the Lung-teng version has one theme per two lessons. The paragraph précis in the three versions are different and hard to summarise. 
According to these conclusions, the following suggestions are made: 1. Technical terms should be introduced carefully, or they could lower students' motivation to learn English. 2. The difficulty across books should progress from simple to complicated sentences. Readings in lower grades should contain more simple sentences than those in higher grades, and the differences between books should be large enough to fit the needs of senior high students. 3. The themes in English textbooks should be vivid and relevant to daily life in order to fit students' learning process, and the paragraph précis should be connected with the themes to give students a complete conception of the lesson. 4. English teachers can choose appropriate articles using the Lix formula in their teaching, and the Lix formula could also serve as an indicator for teaching designs and teaching methods.
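The Lix readability index (Björnsson, 1968) applied in the study above is simple to compute: average sentence length plus the percentage of long words. A small illustrative Python version (the tokenisation rules here are an assumption for illustration, not the study's exact procedure):

```python
import re

def lix(text):
    """Bjornsson's Lix readability index:
    (words per sentence) + 100 * (share of words longer than 6 letters).
    Higher scores indicate harder text."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?:]+", text) if s.strip()]
    long_words = [w for w in words if len(w) > 6]
    return len(words) / len(sentences) + 100 * len(long_words) / len(words)
```

For example, `lix("The cat sat. The dog ran.")` gives 3.0 (three words per sentence, no long words), while a single sentence of long words scores above 100.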
APA, Harvard, Vancouver, ISO, and other styles
43

Chang, Shih-Hsin, and 張世欣. "Theoretical Analysis and Applications of the High Dynamic Range Imaging." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/81518880223770095608.

Full text
Abstract:
Doctoral dissertation
National Yunlin University of Science and Technology
Doctoral Program, Graduate School of Engineering Science and Technology
Academic year 99 (ROC calendar; 2010-11)
Recently, studies of high dynamic range imaging technology founded on the camera response function have suggested that an image detector could replace many traditional photo-diode sensors. Until now, measuring an automotive headlamp light distribution pattern with photometers has been a very tedious and time-consuming process. In photoelasticity as well, much research has tried to improve the traditional measurement method, whose cumbersome data-collection process has long troubled practitioners. Many studies on photoelasticity have been conducted, and many equations for photoelastic analysis based on digital images have been proposed. While these equations were all expressed in terms of the light intensity emitted from the analyzer, pixel values of the digital image were actually used in the real calculations, which does not reflect the real situation. In this thesis, high dynamic range imaging technology and the camera response function were applied to these two measurements using a digital camera. Traditional measurement of the headlamp distribution has been based on a point-by-point approach using a goniophotometer. In this thesis, an imaging photometer is developed by combining a regular digital camera with a high dynamic range imaging technique to achieve a faster and more complete measurement of the entire distribution. The experimental results indicate that measurement errors are within 10% of the true values, which is better than the 20% required by the industry. Furthermore, the proposed approach provides a very wide dynamic range limited only by the shutter speed of the camera. We believe this new method would give the headlamp industry a much more friendly environment for evaluating designs of new products in a faster and very economical way. In the second part of this thesis, a proposal to use the relative light intensity obtained from the camera response function to replace the pixel value for photoelastic analysis was investigated. 
Generation of isochromatic images based on relative light intensity and on pixel value was compared to evaluate the effectiveness of the new approach. The results showed that when relative light intensity was used, the quality of the isochromatic image was greatly improved, both visually and quantitatively. The technique proposed in this thesis can also be used to improve the performance of other types of photoelastic analysis using digital images.
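The HDR imaging this thesis builds on merges bracketed exposures through the camera response function into a relative radiance map. A minimal numpy sketch under the simplifying assumption of a *linear* camera response (real cameras require recovering the response function first, e.g. with the Debevec-Malik method; all names here are illustrative, not from the thesis):

```python
import numpy as np

def hdr_merge(exposures, times):
    """Merge bracketed exposures (images normalised to [0, 1]) into a
    relative radiance map, assuming a linear camera response."""
    def weight(img):
        # hat weighting: trust mid-range pixels, distrust clipped ones
        return 1.0 - np.abs(2.0 * img - 1.0)
    num = np.zeros_like(exposures[0], dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(exposures, times):
        w = weight(img)
        num += w * img / t      # each exposure votes for radiance = img / t
        den += w
    return num / np.maximum(den, 1e-6)
```

Dividing each pixel by its exposure time maps every bracket onto the same radiance scale, so the merged map spans a dynamic range limited only by the shortest and longest usable shutter speeds, which matches the abstract's claim about the camera-limited dynamic range.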
APA, Harvard, Vancouver, ISO, and other styles
44

Yao, Xiaohui. "Mining high-level brain imaging genetic associations." Diss., 2018. http://hdl.handle.net/1805/15831.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Imaging genetics is an emerging research field in neurodegenerative diseases. It studies the influence of genetic variants on brain structure and function. Genome-wide association studies (GWAS) of brain imaging has identified a few independent risk loci for individual imaging quantitative trait (iQT), which however display only modest effect size and explain limited heritability. This thesis focuses on mining high-level imaging genetic associations and their applications on neurodegenerative diseases. This thesis first presents a novel network-based GWAS framework for identifying functional modules, by employing a two-step strategy in a top-down manner. It first integrates tissue-specific network with GWAS of corresponding phenotype in regression models in addition to classification, to re-prioritize genome-wide associations. Then it detects densely connected and disease-relevant modules based on interactions among top reprioritizations. The discovered modules hold both phenotypical specificity and densely interaction. We applied it to an amygdala imaging genetics analysis in the study of Alzheimer's disease (AD). The proposed framework effectively detects densely interacted modules; and the reprioritizations achieve highest concordance with AD genes. We then present an extension of the above framework, named GWAS top-neighbor-based (tnGWAS); and compare it with previous approaches. This tnGWAS extracts densely connected modules from top GWAS findings, based on the hypothesis that relevant modules consist of top GWAS findings and their close neighbors. It is applied to a hippocampus imaging genetics analysis in AD research, and yields the densest interactions among top candidate genes. Experimental results demonstrate that precise context does help explore collective effects of genes with functional interactions specific to the studied phenotype. 
In the second part, a novel imaging genetic enrichment analysis (IGEA) paradigm is proposed for discovering complex associations among genetic modules and brain circuits. In addition to genetic modules, brain regions of interest are also grouped to play a role. We expand the scope of one-dimensional enrichment analysis into imaging genetics. This framework jointly considers meaningful gene sets (GS) and brain circuits (BC), and examines whether a given GS-BC module is enriched in gene-iQT findings. We conduct a proof-of-concept study and demonstrate its performance by applying it to a brain-wide imaging genetics study of AD.
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Yu-Shen, and 李育慎. "Accessing the aquiferous content using high frequency eletrical impedance analysis." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/20925104657276069173.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Institute of Biomedical Engineering
Academic year 98 (ROC calendar; 2009-10)
Beneath human skin lies a complex tissue consisting of muscle, fat, and other components in different physiological states. The muscle and fat under the skin do not change over a short period of time, but the aquiferous (water) content in and under the skin does. Therefore, the metabolic state of a human being can be observed through short-term changes in the aquiferous content of the superficial tissues under the skin. An FPGA programmable system chip was specifically designed to stimulate the body with a constant-current, high-frequency signal. In this way, the system can monitor and record the bio-impedance signals of the body. The prototype system also included data display on an LCD, data storage in FLASH memory, and data communication over USB. An off-line PC program was also provided for reviewing the recorded impedance and analyzing the high-frequency impedance. The system was used to assess the psychophysical effect of aquiferous content monitoring under four circumstances: before drinking water, after drinking water, before a meal, and after a meal. The aim was to examine the impact of aquiferous content under different physiological states. Since skin impedance changes with stimulation frequency, a high-frequency scan was used to find the appropriate frequency range. We also observed changes in the aquiferous content of the shallow tissue under the skin in patients with diabetes, and the relationship between changes in the aquiferous content of the skin and changes in blood glucose is discussed.
APA, Harvard, Vancouver, ISO, and other styles
46

Meng, Yi-Ting, and 孟依亭. "Design and Analysis of High Moisture Content Paddy Drying Simulator." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/18460924848117562626.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Bio-Industrial Mechatronics Engineering
Academic year 102 (ROC calendar; 2013-14)
High-moisture-content paddy harvested urgently during the rainy season is a serious problem in Taiwan: it prolongs drying time, congests the operating schedules of paddy drying centers, and increases drying costs. Commercial circulating dryers are mature products, but different designs and operating conditions can lead to different drying efficiencies and energy costs. The purpose of this study is to devise a drying simulator based on circulating-drying principles in order to investigate dryer design and operation for high-moisture-content paddy. Pre-drying tests were conducted in the simulator to dry high-moisture-content paddy down to its critical moisture content. A Box-Behnken Design (BBD) with three levels per factor was adopted: paddy layer thickness of 6 cm, 9 cm, and 12 cm; specific air flow rate of 0.3 CMM/kg, 0.5 CMM/kg, and 0.7 CMM/kg; and drying time of 10 min, 20 min, and 30 min. Because internal changes in the paddy layer are difficult to measure during drying, a heat and mass transfer model was built and simulated with FEM software (COMSOL Multiphysics 4.3a) to predict them. Experimental results showed that 98% of the total energy consumed in the drying process was used for heating and desorption of water in the grain, with the remainder used to move air carrying mass and energy. The most efficient pre-drying strategy was to dry wet grain to the critical moisture content in a single pass: multi-pass drying consumed extra energy reheating the grain to drying temperature in each pass, and the benefit of tempering was insignificant at the pre-drying stage. Within the operating range tested, the combinations of specific air flow rate and drying time of 0.3 CMM/kg for 30 min, 0.5 CMM/kg for 20 min, and 0.7 CMM/kg for 20 min gave high energy efficiency and, with their high air flow rates, also high drying rates.
Because specific air flow rate was used in the experimental design, the influence of paddy layer thickness was not significant. However, simulation results showed that thick-layer drying can produce large moisture-content variation along the air flow direction; since this phenomenon may affect subsequent drying operations, it is suggested as a topic for further research.
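The three-factor, three-level Box-Behnken design described in the abstract can be sketched in a few lines of Python. This is an illustrative reconstruction under stated assumptions, not the study's actual run sheet: the factor names and the choice of three center runs are assumptions.

```python
from itertools import combinations

# Factor levels taken from the abstract (assumed coding: low, mid, high)
factors = {
    "layer_thickness_cm": [6, 9, 12],
    "airflow_CMM_per_kg": [0.3, 0.5, 0.7],
    "drying_time_min": [10, 20, 30],
}

def box_behnken(factors, center_runs=3):
    """Enumerate Box-Behnken runs: for each pair of factors, all four
    low/high combinations with every other factor held at its mid level,
    plus replicated center runs."""
    names = list(factors)
    runs = []
    for a, b in combinations(range(len(names)), 2):
        for la in (0, 2):          # low / high index for factor a
            for lb in (0, 2):      # low / high index for factor b
                levels = [1] * len(names)  # all others at mid level
                levels[a], levels[b] = la, lb
                runs.append({n: factors[n][i] for n, i in zip(names, levels)})
    center = {n: factors[n][1] for n in names}
    runs.extend([center] * center_runs)
    return runs

design = box_behnken(factors)
print(len(design))  # 12 edge runs + 3 center runs = 15
```

For three factors this yields the classic 15-run BBD (12 edge-midpoint runs plus center replicates), which matches the factor structure the abstract reports.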
APA, Harvard, Vancouver, ISO, and other styles
47

XU, ZHE-WEI, and 許哲維. "Content Analysis and Research on Geography Textbooks of Junior High School." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/5zfd4v.

Full text
Abstract:
Master's thesis
National Taichung University of Education
Master's Program, Department of Regional and Social Development
Academic year 100 (2011/12)
This research examines the distribution of geographic concepts in junior high school geography textbooks at the fourth learning stage of Taiwan's Grade 1-9 Curriculum and compiles how the competence indicators for each grade are addressed across the textbooks' nine subjects. The researcher reviews both foreign and domestic literature and constructs a category table of the geographic concepts found in junior high school geography textbooks. The content analysis method is applied to the contents of three editions of Taiwan's junior high school geography textbooks and their supplementary materials, and the competence indicators for each grade intended to be achieved by the three editions are compiled. The major questions under discussion are: (1) How are geographic concepts distributed in the different editions of junior high school geography textbooks? (2) Is there any difference in the distribution of geographic concepts among the editions? (3) Which competence indicators for each grade are intended by the different editions? (4) Is there any difference in these competence indicators among the editions? The research findings are as follows: (1) Across the sub-fields, the distributions of geographic concepts in the three editions are the same: geographic concepts appear most frequently in the sub-field of "Human Geography" and least frequently in the sub-field of "Geographical Skills." (2) Within each individual sub-field, although geographic concepts appear in the four sub-fields to varying degrees, the chi-square test shows no significant difference among the three editions in the distribution of geographic concepts within the same sub-field.
(3) Among the sub-fields in the three editions, a difference in the distribution of main categories appears only in the sub-field of Physical Geography; there is no significant difference in the other three sub-fields. (4) Among the main categories, a difference in the distribution of sub-categories appears only in the two main categories of "Economy" and "Culture and Humanity"; there is no significant difference in the other main categories. (5) The distribution of the competence indicators for each grade intended to be achieved in the three editions is highly uneven, with 90% concentrated in the subject of "People and Space." (6) Among the three editions, the competence indicators for each grade appear clearly more often in the Nan-Yi and Han-Lin editions than in the Kang Hsuan edition at all learning stages, with the sole exception of the fourth learning stage (junior high school). Key words: junior high school geography textbook, geographic concept, competence indicators for each grade, content analysis
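A chi-square test of homogeneity of the kind this study applies to concept counts across editions can be sketched as follows. The counts below are hypothetical placeholders, not the thesis's data; the three rows stand for the three editions and the four columns for the four sub-fields.

```python
# Illustrative chi-square test comparing the distribution of geographic
# concepts across three textbook editions. Counts are invented for the
# example only.
from scipy.stats import chi2_contingency

# rows: editions (e.g. Nan-Yi, Han-Lin, Kang Hsuan)
# cols: sub-fields (e.g. Physical Geography, Human Geography,
#       Regional Geography, Geographical Skills)
counts = [
    [40, 85, 20, 15],
    [38, 90, 22, 12],
    [42, 80, 18, 16],
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
# A p-value above 0.05 would indicate no significant difference
# between editions, matching the pattern the study reports.
```

With a 3x4 table the test has (3-1)(4-1) = 6 degrees of freedom; near-proportional rows like these give a small chi-square statistic and a large p-value.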
APA, Harvard, Vancouver, ISO, and other styles
48

CHEN, YI-CHUN, and 陳怡君. "The Content Analysis of Science Textbooks for Junior High School." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/vem33r.

Full text
Abstract:
Master's thesis
MingDao University
Graduate Institute of Curriculum and Instruction
Academic year 106 (2017/18)
The purpose of this study was to analyze differences among the illustrations in the physics and chemistry content of junior high school Science textbooks. The study investigated the space occupied by illustrations, the ways illustrations are presented, and the functions illustrations serve in textbooks, with the aim of identifying differences among versions of the Science textbooks. The content analysis method was adopted, using Book 6 of the physics and chemistry content from three versions of Science textbooks (Kangxuan, Hanlin, and Nanyi) as the research sample. The study found that: (1) Illustrations accounted for about 29.94% of the K version, 27.96% of the H version, and 24.55% of the N version. (2) In all three versions, graphic illustrations make up the largest share of the total illustration area: about 64.30% in the K version, 67.38% in the H version, and 66.58% in the N version. (3) With respect to functional illustrations, organizational-flow illustrations rank highest and decorative illustrations rank lowest in every version. Organizational-flow illustrations account for about 32.40% of the total illustration area in the K version, 25.39% in the H version, and 38.02% in the N version.
Decorative illustrations account for about 0.68% of the total illustration area in the K version, 0.48% in the H version, and 0.33% in the N version. (4) All three versions focus on organizational-flow and simple-performance illustrations. However, the illustration types ranking third and fourth by total area are structured and situation-directed illustrations in the K version, situation-directed and symbolic illustrations in the H version, and structured and statistical illustrations in the N version. Based on the research results, the study provides teachers with a reference for selecting textbooks. Keywords: junior high school, science, textbook, content analysis, illustration
APA, Harvard, Vancouver, ISO, and other styles
49

Patel, Rita R. "High speed digital imaging and kymographic analysis of vocal fold vibrations." 2006. http://www.library.wisc.edu/databases/connect/dissertations.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Hou, Pei-Chun, and 侯佩君. "A Content Analysis on Biodiversity in Junior-High-School Mandarin Textbooks." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/7jgsyq.

Full text
Abstract:
Master's thesis
Ming Chuan University
Master's Program, Graduate Institute of Education
Academic year 93 (2004/05)
Biodiversity is one of the core topics of environmental education. It is also an important topic in Taiwan's 9-Year Continuous Curriculum, which emphasizes that environmental education should be integrated into every learning field. The purpose of the study is to explore the distribution and presentation of biodiversity in junior high school Mandarin textbooks. The researcher offers suggestions for teaching and for future research, in the hope that the results will facilitate the development and revision of Mandarin textbooks. The analytic categories of biodiversity, based on the "Convention on Biological Diversity" and the "Taiwan National Biodiversity Report," cover four aspects: the levels of biodiversity, the values of biodiversity, the loss of biodiversity, and the conservation of biodiversity. The study has the following findings: 1. The central themes and sub-themes in junior high school Mandarin textbooks mainly concern the values of biodiversity. 2. Among the values of biodiversity, aesthetic value is the major topic discussed. 3. Among the levels of biodiversity, species diversity is the major topic discussed. 4. Regarding the loss of biodiversity, over-consumption of natural resources is the major cause discussed. 5. Regarding the conservation of biodiversity, restoration-oriented conservation is the only topic that appears. 6. Among the five goals of environmental education, Mandarin teaching in junior high school emphasizes the importance of environmental ethics and values.
APA, Harvard, Vancouver, ISO, and other styles